STonKGs is a Sophisticated Transformer that can be jointly trained on biomedical text and knowledge graphs

Overview

STonKGs is a Sophisticated Transformer that can be jointly trained on biomedical text and knowledge graphs. This multimodal Transformer combines structured information from KGs with unstructured text data to learn joint representations. While we demonstrated STonKGs on a biomedical knowledge graph (i.e., from INDRA), the model can be applied to other domains. The following sections describe the scripts required to train the model on any given dataset.

💪 Getting Started

Data Format

Since STonKGs operates on both text and KG data, the respective data files are expected to include columns for both modalities. More specifically, the expected data format is a pandas dataframe (or a pickled pandas dataframe for the pre-training script) in which each row contains one text-triple pair. The following columns are expected:

  • source: Source node in the triple of a given text-triple pair
  • target: Target node in the triple of a given text-triple pair
  • evidence: Text of a given text-triple pair
  • (optional) class: Class label for a given text-triple pair in fine-tuning tasks (does not apply to the pre-training procedure)

Note that both source and target nodes are required to be in the Biological Expression Language (BEL) format; more specifically, they need to be contained in the INDRA KG. For more details on the BEL format, see, for example, the INDRA documentation for the BEL Processor and PyBEL.
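
For illustration, such a dataframe could be assembled as follows (a minimal sketch; the BEL strings reuse the format from the embedding example below, and the class column is only needed for fine-tuning):

import pandas as pd

# A minimal sketch of the expected data format; the class column only
# applies to fine-tuning tasks and is omitted for pre-training
rows = [
    [
        "p(HGNC:1748 ! CDH1)",    # source node in BEL format
        "p(HGNC:2515 ! CTNND1)",  # target node in BEL format
        "Some example sentence about CDH1 and CTNND1.",  # evidence text
        1,  # (optional) class label for fine-tuning
    ],
]
df = pd.DataFrame(rows, columns=["source", "target", "evidence", "class"])

# The pre-training script expects a pickled dataframe (without class labels)
df[["source", "target", "evidence"]].to_pickle("pretraining_df.pkl")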

Pre-training STonKGs

Once you have installed STonKGs as a Python package (see below), you can start pre-training STonKGs on your dataset by running:

$ python3 -m stonkgs.models.stonkgs_pretraining

The configuration of the model can be easily modified by altering the parameters of the pretrain_stonkgs method. The only required argument to be changed is PRETRAINING_PREPROCESSED_POSITIVE_DF_PATH, which should point to your dataset.
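
As a hedged sketch, the method could also be invoked directly from Python; note that the exact signature of pretrain_stonkgs and the spelling of its path argument are assumptions here and may differ in the actual script:

from stonkgs.models.stonkgs_pretraining import pretrain_stonkgs

# Assumption: the path constant PRETRAINING_PREPROCESSED_POSITIVE_DF_PATH is
# exposed as a keyword argument; check the script for the actual signature
pretrain_stonkgs(
    pretraining_preprocessed_positive_df_path="path/to/your/preprocessed_df.pkl",
)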

Downloading the pre-trained STonKGs model on the INDRA KG

We released the pre-trained STonKGs models on the INDRA KG for possible future adaptations, such as further pre-training on other KGs. Both STonKGs150k and STonKGs300k are accessible through Hugging Face's model hub.

The easiest way to download and initialize the pre-trained STonKGs model is to use the from_default_pretrained() class method (with STonKGs150k being the default):

from stonkgs import STonKGsForPreTraining

# Download the model from the model hub and initialize it for pre-training 
# using from_default_pretrained
stonkgs_pretraining = STonKGsForPreTraining.from_default_pretrained()

Alternatively, since our code is based on Hugging Face's transformers package, the pre-trained model can be easily downloaded and initialized using the .from_pretrained() function:

from stonkgs import STonKGsForPreTraining

# Download the model from the model hub and initialize it for pre-training 
# using from_pretrained
stonkgs_pretraining = STonKGsForPreTraining.from_pretrained(
    'stonkgs/stonkgs-150k',
)

Extracting Embeddings

The learned embeddings of the pre-trained STonKGs models (or your own STonKGs variants) can be extracted in two simple steps. First, a given dataset with text-triple pairs (a pandas DataFrame, see Data Format) needs to be preprocessed using the preprocess_df_for_embeddings function. Then, one can obtain the learned embeddings from the preprocessed data using the get_stonkgs_embeddings function:

import pandas as pd

from stonkgs import get_stonkgs_embeddings, preprocess_df_for_embeddings

# Generate some example data
# Note that the evidence sentences are typically longer than in this example data
rows = [
    [
        "p(HGNC:1748 ! CDH1)",
        "p(HGNC:2515 ! CTNND1)",
        "Some example sentence about CDH1 and CTNND1.",
    ],
    [
        "p(HGNC:6871 ! MAPK1)",
        "p(HGNC:6018 ! IL6)",
        "Another example about some interaction between MAPK and IL6.",
    ],
    [
        "p(HGNC:3229 ! EGF)",
        "p(HGNC:4066 ! GAB1)",
        "One last example in which Gab1 and EGF are mentioned.",
    ],
]
example_df = pd.DataFrame(rows, columns=["source", "target", "evidence"])

# 1. Preprocess the text-triple data for embedding extraction
preprocessed_df_for_embeddings = preprocess_df_for_embeddings(example_df)

# 2. Extract the embeddings 
embedding_df = get_stonkgs_embeddings(preprocessed_df_for_embeddings)
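
The resulting embeddings can then be used for downstream analyses; for example, here is a short sketch under the assumption that each row of embedding_df holds the numeric embedding vector of one text-triple pair:

import numpy as np

# Assumption: each row of embedding_df is one numeric embedding vector
embeddings = embedding_df.to_numpy(dtype=float)

# Cosine similarity between the first two text-triple pairs
a, b = embeddings[0], embeddings[1]
print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))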

Fine-tuning STonKGs

The most straightforward way of fine-tuning STonKGs on the original six classification tasks is to run the fine-tuning script (note that this script assumes that you have an mlflow logger specified, e.g., using the --logging_dir argument):

$ python3 -m stonkgs.models.stonkgs_finetuning

Moreover, using STonKGs for your own fine-tuning tasks (i.e., sequence classification tasks) in your own code is just as easy as initializing the pre-trained model:

from transformers import Trainer

from stonkgs import STonKGsForSequenceClassification

# Download the model from the model hub and initialize it for fine-tuning
stonkgs_model_finetuning = STonKGsForSequenceClassification.from_default_pretrained(
    num_labels=number_of_labels_in_your_task,
)

# Initialize a Trainer based on the training dataset
trainer = Trainer(
    model=stonkgs_model_finetuning,
    args=some_previously_defined_training_args,
    train_dataset=some_previously_defined_finetuning_data,
)

# Fine-tune the model to the moon 
trainer.train()
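
The placeholder training arguments above can be defined, for instance, with the standard transformers TrainingArguments class (the values below are purely illustrative, not recommendations):

from transformers import TrainingArguments

# Illustrative values only; tune these for your own task
some_previously_defined_training_args = TrainingArguments(
    output_dir="finetuning_output",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    logging_dir="logs",  # e.g., for the mlflow logger mentioned above
)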

Using STonKGs for Inference

You can generate new predictions for previously unseen text-triple pairs (as long as the nodes are contained in the INDRA KG) based on either 1) the fine-tuned models used for the benchmark or 2) your own fine-tuned models. To do so, you first need to load/initialize the fine-tuned model:

from stonkgs.api import get_species_model, infer

model = get_species_model()

Next, you can use that model on your dataframe (consisting of at least source, target, and evidence columns, see Data Format) to generate the class probabilities for each text-triple pair belonging to each of the specified classes in the respective fine-tuning task:

# See Extracting Embeddings for the initialization of the example data
example_data = ...

# This returns both the raw (transformers) PredictionOutput as well as the class probabilities 
# for each text-triple pair
raw_results, probabilities = infer(model, example_data)
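
For example, reusing the example_df from the Extracting Embeddings section (a sketch assuming that probabilities align row-wise with the input dataframe):

# Reuse the example dataframe from the Extracting Embeddings section
raw_results, probabilities = infer(model, example_df)

# Assumption: probabilities align row-wise with the input, so they can be
# stored alongside the original text-triple pairs
results_df = example_df.assign(class_probabilities=list(probabilities))
results_df.to_csv("inference_results.csv", index=False)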

ProtSTonKGs

It is possible to download the extension of STonKGs, the pre-trained ProtSTonKGs model, and initialize it for further pre-training on text, KG and amino acid sequence data:

from stonkgs import ProtSTonKGsForPreTraining

# Download the model from the model hub and initialize it for pre-training 
# using from_pretrained
protstonkgs_pretraining = ProtSTonKGsForPreTraining.from_pretrained(
    'stonkgs/protstonkgs',
)

Moreover, analogous to STonKGs, ProtSTonKGs can be fine-tuned on sequence classification tasks as well:

from transformers import Trainer

from stonkgs import ProtSTonKGsForSequenceClassification

# Download the model from the model hub and initialize it for fine-tuning
protstonkgs_model_finetuning = ProtSTonKGsForSequenceClassification.from_default_pretrained(
    num_labels=number_of_labels_in_your_task,
)

# Initialize a Trainer based on the training dataset
trainer = Trainer(
    model=protstonkgs_model_finetuning,
    args=some_previously_defined_training_args,
    train_dataset=some_previously_defined_finetuning_data,
)

# Fine-tune the model to the moon 
trainer.train()

⬇️ Installation

The most recent release can be installed from PyPI with:

$ pip install stonkgs

The most recent code and data can be installed directly from GitHub with:

$ pip install git+https://github.com/stonkgs/stonkgs.git

To install in development mode, use the following:

$ git clone https://github.com/stonkgs/stonkgs.git
$ cd stonkgs
$ pip install -e .

Warning: Because stellargraph doesn't currently work on Python 3.9, this software can only be installed on Python 3.8.
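
One way to ensure a compatible interpreter, assuming Python 3.8 is available on your system, is to install into a dedicated virtual environment:

$ python3.8 -m venv stonkgs-env
$ source stonkgs-env/bin/activate
$ pip install stonkgs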

Artifacts

The pre-trained models are hosted on HuggingFace. The fine-tuned models are hosted on the STonKGs community page on Zenodo, along with the other artifacts (node2vec embeddings, random walks, etc.).

Acknowledgements

⚖️ License

The code in this package is licensed under the MIT License.

📖 Citation

Balabin H., Hoyt C.T., Birkenbihl C., Gyori B.M., Bachman J.A., Kodamullil A.T., Plöger P.G., Hofmann-Apitius M., Domingo-Fernández D. STonKGs: A Sophisticated Transformer Trained on Biomedical Text and Knowledge Graphs (2021), bioRxiv, 2021.08.17.456616v1.

🎁 Support

This project has been supported by several organizations (in alphabetical order):

💰 Funding

This project has been funded by the following grants:

Funding Body | Program | Grant
DARPA | Automating Scientific Knowledge Extraction (ASKE) | HR00111990009

🍪 Cookiecutter

This package was created with @audreyfeldroy's cookiecutter package using @cthoyt's cookiecutter-snekpack template.

🛠️ Development

The final section of the README is for those who want to get involved by making a code contribution.

Testing

After cloning the repository and installing tox with pip install tox, the unit tests in the tests/ folder can be run reproducibly with:

$ tox

Additionally, these tests are automatically re-run with each commit in a GitHub Action.

📦 Making a Release

After installing the package in development mode and installing tox with pip install tox, the commands for making a new release are contained within the finish environment in tox.ini. Run the following from the shell:

$ tox -e finish

This script does the following:

  1. Uses BumpVersion to switch the version number in setup.cfg and src/stonkgs/version.py to remove the -dev suffix
  2. Packages the code in both a tar archive and a wheel
  3. Uploads to PyPI using twine. Be sure to have a .pypirc file configured to avoid the need for manual input at this step
  4. Pushes to GitHub. You'll need to make a release going with the commit where the version was bumped.
  5. Bumps the version to the next patch. If you made big changes and want to bump the version by minor, you can use tox -e bumpversion minor after.