AI-powered literature discovery and review engine for medical/scientific papers

Overview


paperai is an AI-powered literature discovery and review engine for medical/scientific papers. paperai helps automate tedious literature reviews, allowing researchers to focus on their core work. Queries are run to filter papers with specified criteria. Reports powered by extractive question-answering identify answers to key questions within sets of medical/scientific papers.

paperai was used to analyze the COVID-19 Open Research Dataset (CORD-19), winning multiple awards in the CORD-19 Kaggle challenge.

paperai and NeuML have been recognized in a number of external articles.

Installation

The easiest way to install is via pip and PyPI:

pip install paperai

You can also install paperai directly from GitHub. Using a Python Virtual Environment is recommended.

pip install git+https://github.com/neuml/paperai

Python 3.7+ is supported

See this link to help resolve environment-specific install issues.

Docker

A Dockerfile with commands to install paperai, all dependencies and scripts is available in this repository.

Clone this git repository and run the following to build and run the Docker image.

docker build -t paperai -f docker/Dockerfile .
docker run --name paperai --rm -it paperai

This will bring up a paperai command shell. Standard Docker commands can be used to copy files over or commands can be run directly in the shell to retrieve input content. All scripts in the following examples are available in this environment.

paperetl's Dockerfile can be combined with this Dockerfile to build a single image that can both index and query content. The files from the paperetl project's scripts directory need to be placed in paperai's scripts directory. The paperetl Dockerfile also needs to be copied over (it's referenced as paperetl.Dockerfile here).

docker build -t base -f docker/Dockerfile .
docker build -t paperai --build-arg BASE_IMAGE=base -f docker/paperetl.Dockerfile .
docker run --name paperai --rm -it paperai

Examples

The following notebooks and applications demonstrate the capabilities provided by paperai.

Notebooks

• CORD-19 Analysis with Sentence Embeddings - Builds paperai-based submissions for the CORD-19 Challenge
• CORD-19 Report Builder - Template for building new reports

Applications

• Search - Search a paperai index. Set query parameters, execute searches and display results.

Building a model

paperai indexes databases previously built with paperetl. paperai currently supports querying SQLite databases.

The following sections show how to build an index for a SQLite articles database.

This example assumes the database and model path is cord19/models. Substitute as appropriate.

  1. Download CORD-19 fastText vectors

    scripts/getvectors.sh cord19/vectors

    A full vector model build can optionally be run with the following command.

    python -m paperai.vectors cord19/models

    CORD-19 fastText vectors are also available on Kaggle.

  2. Build embeddings index

    python -m paperai.index cord19/models cord19/vectors/cord19-300d.magnitude

The paperai.index process takes two optional arguments: the model path and the vector file path. The default model location is ~/.cord19 if no parameters are passed in.
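Several of the issues reported in the comments below stem from missing input files at this step. As a quick sanity check before indexing, the following minimal Python sketch verifies the expected inputs exist. It assumes the paths from the examples above and that the model directory contains a paperetl-built articles.sqlite database; substitute your own locations as appropriate.

import os

# Paths from the examples above - substitute as appropriate
model_path = "cord19/models"
vector_path = "cord19/vectors/cord19-300d.magnitude"

# Assumption: the model directory holds a paperetl-built SQLite database
database = os.path.join(model_path, "articles.sqlite")

for path in (database, vector_path):
    status = "found" if os.path.exists(path) else "MISSING"
    print(f"{status}: {path}")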

Building a report file

Reports support generating output in multiple formats. An example report call, where tasks/risks.yml is the report definition file, 50 is the number of top query results to process per task query, md is the output format and cord19/models is the model path:

python -m paperai.report tasks/risks.yml 50 md cord19/models

The following report formats are supported:

  • Markdown (Default) - Renders a Markdown report. Columns and answers are extracted from articles with the results stored in a Markdown file.
  • CSV - Renders a CSV report. Columns and answers are extracted from articles with the results stored in a CSV file.
  • Annotation - Columns and answers are extracted from articles with the results annotated over the original PDF files. Requires passing in a path with the original PDF files.

In the example above, a file named risks.md will be created. Example report configuration files can be found here.

Running queries

The fastest way to run queries is to start a paperai shell:

paperai cord19/models

A prompt will come up. Queries can be typed directly into the console.
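Queries can also be run programmatically. The sketch below uses txtai (the project paperai's embeddings index logic migrated to, per the v1.2.0 release notes below) directly; it assumes the index built above loads as a standard txtai embeddings index and that search returns (id, score) pairs, so treat it as illustrative rather than the official paperai API.

from txtai.embeddings import Embeddings

# Assumption: the paperai index loads as a txtai embeddings index
embeddings = Embeddings()
embeddings.load("cord19/models")

# Run a natural language query, returning the top 5 (id, score) matches
for uid, score in embeddings.search("risk factors for severe covid", 5):
    print(uid, score)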

Tech Overview

The tech stack is built on Python and creates a sentence embeddings index with FastText + BM25. Background on this method can be found in this Medium article.

The model is a combination of a sentence embeddings index and a SQLite database with the articles. Each article is parsed into sentences and stored in SQLite along with the article metadata. FastText vectors are built over the full corpus. The sentence embeddings index only uses tagged articles, which helps produce the most relevant results.
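To make the SQLite side concrete, the hedged sketch below reads sentence text back out of the sections table (a table name confirmed by one of the issues below). The database filename and column names are assumptions for illustration; adjust them to the actual paperetl schema.

import sqlite3

# Assumption: the model directory contains a paperetl-built articles.sqlite
connection = sqlite3.connect("cord19/models/articles.sqlite")
cursor = connection.cursor()

# Assumed column names (Id, Text) - adjust to the actual paperetl schema
cursor.execute("SELECT Id, Text FROM sections LIMIT 5")
for uid, text in cursor.fetchall():
    print(uid, text[:80])

connection.close()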

Multiple entry points exist to interact with the model.

  • paperai.report - Builds a Markdown report for a series of queries. For each query, the best articles are shown, along with top matches from those articles and a highlights section showing the most relevant sections from the embeddings search for the query.
  • paperai.query - Runs a single query from the terminal
  • paperai.shell - Allows running multiple queries from the terminal
Comments
  • Vector model file not found (cord19-300d.magnitude)

    • Issue moved from the wrong project to here.

    Hi,

    I get the following error when running python -m paperai.index

    raise IOError(ENOENT, "Vector model file not found", path)
    FileNotFoundError: [Errno 2] Vector model file not found: 'C:\Users\x\.cord19\vectors\cord19-300d.magnitude'

    PS. I am quite new to all this; so, apologies if the mistake is on my end.

    When trying to download cord19-300d.magnitude from https://www.kaggle.com/davidmezzetti/cord19-fasttext-vectors#cord19-300d.magnitude, I get the error: "Too many requests"

    opened by fomar1994 30
  • Installation issues

    The system reports "UnicodeDecodeError: 'gbk' codec can't decode byte 0x82 in position 12007: illegal multibyte sequence" when I execute the command "pip install paperai". I wonder if Windows cannot decompress tar.gz-type packages.

    opened by albertY-C 16
  • I'm not sure I have correctly followed the procedure for running paperai with pre-trained vectors

    After successfully installing paperai in Linux (Ubuntu 20.04.1 LTS), I tried to run it by using the pre-trained vectors option to build the model, as follows:

    (1) I downloaded the vectors from https://www.kaggle.com/davidmezzetti/cord19-fasttext-vectors#cord19-300d.magnitude
    (2) My Downloads folder ended up with a Zip file containing the vectors.
    (3) I created a directory ~/.cord19/vectors/ and moved the downloaded Zip file into it.
    (4) I extracted the Zip file, which produced a folder containing the file cord19-300d.magnitude.
    (5) I moved the cord19-300d.magnitude file out of that folder and into the ~/.cord19/vectors/ directory.

    (6) I executed the following command to build the embeddings index with the above pre-trained vectors:

    python -m paperai.index

    Upon performing the above, I got an error message (screenshot no longer available).

    Am I getting this error because the above steps are not the correct ones? If so, what would be the correct steps? Otherwise, what other things should I try to eliminate the issue?

    opened by DavidRivasPhD 10
  • Windows install issue

    It was reported that paperai can't be installed in a Windows environment due to the following error:

    ValueError: path 'src/python/' cannot end with '/'

    bug 
    opened by davidmezzetti 5
  • Added pdf output build option

    Modified export.py to create a PDF output option. This is done by the new method in export, streampdf.

    This edit was done for educational purposes as a participant in York University's software design course.

    Thank you for your time

    opened by will0710 3
  • Processing custom sqlite file

    I want to create an index and vector file over a custom SQLite articles database. I have created an articles.sqlite database of medical papers using paperetl, but I did not find any instructions on how to process it. Can you please give instructions on this?

    opened by choudharya3 3
  • risk-factors.yml issues

    When I run the command "python -m paperai.report tasks/risk-factors.yml 50 md cord19/models", I can't find the file risk-factors.yml, and I don't understand the argument "50".

    opened by Zhip-S 2
  • Integration: DeepSource

    I ran DeepSource analysis on my fork of this repository and found some code quality issues. Have a look at the issues caught in this repository by DeepSource here.

    DeepSource is a code review automation tool that detects code quality issues and helps you automatically fix some of them. You can use DeepSource to track test coverage, detect problems in Dockerfiles, etc., in addition to detecting issues in code.

    The PR #24 fixed some of the issues caught by DeepSource.

    All the features of DeepSource are mentioned here. I'd suggest you integrate DeepSource since it is free for Open Source projects forever.

    Integrating DeepSource to continuously analyze your repository:

    • Install DeepSource on your repository here.
    • Create .deepsource.toml configuration specific to this repo or use the configuration mentioned below which I used to run the analysis on the fork of this repo.
    • Activate analysis here.
    version = 1
    
    test_patterns = ["/test/python/*.py"]
    
    [[analyzers]]
    name = "python"
    enabled = true
    
      [analyzers.meta]
      runtime_version = "3.x.x"
    
    opened by withshubh 2
  • RuntimeError: CUDA error: out of memory (NVidia V100, 32 GB DDRAM)

    What are the minimum memory requirements for paperai? When running on an Nvidia V100 with 32 GB of memory I got: RuntimeError: CUDA error: out of memory. GPU memory seems to be completely free.

    Is there a way to run it on the GPU, or can I run it exclusively on TPUs?

    from txtai.embeddings import Embeddings
    import torch
    
    torch.cuda.empty_cache()
    
    # MEMORY: report memory stats for GPU device 1 (renamed id -> device to avoid shadowing the builtin)
    device = 1
    t = torch.cuda.get_device_properties(device).total_memory
    c = torch.cuda.memory_cached(device)
    a = torch.cuda.memory_allocated(device)
    f = c - a  # free inside cache
    
    print("TOTAL", t / 1024/1024/1024," GB")
    print("ALLOCATED", a)
    
    # Create embeddings model, backed by sentence-transformers & transformers
    embeddings = Embeddings({"method": "transformers", "path": "sentence-transformers/bert-base-nli-mean-tokens"})
    
    import numpy as np
    
    sections = ["US tops 5 million confirmed virus cases",
                "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
                "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
                "The National Park Service warns against sacrificing slower friends in a bear attack",
                "Maine man wins $1M from $25 lottery ticket",
                "Make huge profits without work, earn up to $100,000 a day"]
    
    
    query = "health"
    uid = np.argmax(embeddings.similarity(query, sections))
    print("%-20s %s" % (query, sections[uid]))
    

    TOTAL 31.74853515625 GB
    ALLOCATED 0
    Traceback (most recent call last):
      File "pokus2.py", line 32, in <module>
        uid = np.argmax(embeddings.similarity(query, sections))
      File "/home/user/.local/lib/python3.8/site-packages/txtai/embeddings.py", line 228, in similarity
        query = self.transform((None, query, None)).reshape(1, -1)
      File "/home/user/.local/lib/python3.8/site-packages/txtai/embeddings.py", line 179, in transform
        embedding = self.model.transform(document)
      File "/home/user/.local/lib/python3.8/site-packages/txtai/vectors.py", line 264, in transform
        return self.model.encode([" ".join(document[1])], show_progress_bar=False)[0]

    opened by burgetrm 2
  • Wrong annotation places

    A fix is needed to correctly annotate PDF text when the matched query text spans different pages, columns, or other positions in the PDF. In the screenshots, the annotator tries to locate text per page only, rather than considering all possible placements. This causes it to annotate text that should not be annotated, because it only searches within its current scope, and annotations covering the wrong text lead to confusing indicators.

    Columns problem: query and annotation screenshots (no longer available).

    Pages problem: query and annotation screenshots (no longer available).
    opened by muazhari 1
  • sqlite3.OperationalError: no such table: sections

    When I run python -m paperai.vectors cord19/models in Docker, the output error is "sqlite3.OperationalError: no such table: sections".

    opened by wspspring 1
  • paperai for beginners

    First and foremost, thank you for offering such a great library. Nonetheless, I was wondering if you could provide a simple guideline on using this awesome library for a new research project, from loading PDF files to querying topics. I went through the examples but could not grasp the overall idea. I believe a small effort on your part would really help beginners like me use this library in research work.

    opened by satishchaudhary382 1
Releases(v2.0.0)
  • v2.0.0(Mar 12, 2022)

    This release adds the following enhancements and bug fixes:

    • Allow setting report options within task yml files (#42)
    • Allow running reports against full databases (#43)
    • Batch extractor queries (#44)
    • Remove study design columns (#46)
    • Add option to specify extraction column context (#47)
    • Add report reference column (#48)
    • Add report column format parameter (#49)
    • Add pre-commit checks (#50)
    • Add check to report sections query to ensure text has tokens (#51)
    • Remove default home directory cord19 path defaults (#52)
    • Require Python 3.7+ (#54)
    • Update txtai to 4.3.1 (#56)
    Source code(tar.gz)
    Source code(zip)
  • v1.10.0(Sep 10, 2021)

  • v1.9.0(Aug 18, 2021)

  • v1.8.0(Apr 23, 2021)

    This release adds the following enhancements and bug fixes:

    • Add ability to read index yml (#18)
    • Switch from mdv to mdv3 to support Python 3.9 (#21)
    • Add enhanced API for paperai (#30)
    • Add configurable query threshold (#31)
    • Support query negation (#32)
    • Add search application (#33)
    Source code(tar.gz)
    Source code(zip)
  • v1.7.0(Feb 24, 2021)

  • v1.6.0(Jan 13, 2021)

  • v1.5.0(Dec 11, 2020)

  • v1.4.0(Nov 6, 2020)

    This release adds the following enhancements and bug fixes:

    • Allow specifying vector output file (#10, #11, #13)
    • Build test suite (#12)
    • Add additional column parameters (#14)
    • Allow indexing partial datasources (#15)
    • Add GitHub actions build script (#16)
    Source code(tar.gz)
    Source code(zip)
  • v1.3.0(Aug 18, 2020)

  • v1.2.1(Aug 12, 2020)

  • v1.2.0(Aug 11, 2020)

    Release addresses the following:

    • Allow customizing the QA model used for QA extraction (#5)
    • Migrated embeddings index logic to txtai project (#7)
    Source code(tar.gz)
    Source code(zip)
  • v1.1.0(Aug 5, 2020)

    Release addresses the following:

    • Add wildcard report queries (#1) - Add ability to run report against entire database. This is only practical for smaller datasets.
    • Fix Windows install issues (#2)
    • Embeddings index memory improvements (#3) - Various improvements to limit memory usage when building an embeddings index
    • Support must clauses for custom query columns (#4) - Add same logic already present in general queries to require a term to be present when deriving report query columns
    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Jul 21, 2020)

Owner
NeuML
Applying machine learning to solve everyday problems