A package and script to perform imaging transcriptomics on a neuroimaging scan.

Overview

Imaging Transcriptomics

(Figure: imaging transcriptomics overview)

Imaging transcriptomics is a methodology that allows the identification of patterns of correlation between gene expression and some property of brain structure or function as measured by neuroimaging (e.g., MRI, fMRI, PET).


The imaging-transcriptomics package allows you to perform imaging transcriptomics analysis on a neuroimaging scan (e.g., PET, MRI, fMRI).

The software is implemented in Python 3 (v3.7). Its source code is available on GitHub, it can be installed via PyPI, and it is released under the GPL v3 license.

NOTE Versions from v1.0.0 onwards are, or will be, maintained. The original script linked in the bioRxiv preprint (v0.0) is still available on GitHub, but no changes will be made to that code. If you have downloaded or used that script, please update by installing the newer version.

Installation

NOTE We recommend installing the package in a dedicated environment of your choice (e.g., venv or anaconda). Once you have created and activated your environment, you can follow the guide below to install the package and its dependencies. This avoids clashes between conflicting packages during or after the installation.
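For example, with venv (the environment name imt-env below is just a placeholder):

python3 -m venv imt-env
source imt-env/bin/activate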

To install the imaging-transcriptomics Python package, you first need to install a package that can't be installed directly from PyPI and must instead be downloaded from GitHub: pypyls. To install this package you can follow the installation instructions in its documentation or simply run the command

pip install -e git+https://github.com/netneurolab/pypyls.git/#egg=pyls

to download the package and its dependencies directly from GitHub using pip.

Once this package is installed you can install the imaging-transcriptomics package by running

pip install imaging-transcriptomics
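After both installs complete, a quick sanity check (a minimal sketch that only verifies the package can be imported) is:

python -c "import imaging_transcriptomics"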

Usage

Once installed, the software can be used in two ways:

  • as a standalone script
  • as part of a Python script

WARNING Before running the script make sure the Python environment where you have installed the package is activated.

Standalone script


To run the standalone script from the terminal use the command:

imagingtranscriptomics options

The options available are:

  • -i (--input): Path to the imaging file to analyse. The path should be given to the program as an absolute path (e.g., /Users/myusername/Documents/my_scan.nii), since a relative path could raise permission errors and crashes. The script only accepts imaging files in the NIfTI format (.nii, .nii.gz).
  • -v (--variance): Amount of variance that the PLS components must explain. This MUST be in the range 0-100.

    NOTE: if the variance given as input is in the range 0-1 the script will treat it as a fraction (e.g., the inputs -v 30 and -v 0.3 are treated in exactly the same way, and in both cases the resulting components will explain 30% of the variance).

  • -n (--ncomp): Number of components to be used in the PLS regression. The number MUST be in the range 1-15.
  • --corr: Run the analysis using Spearman correlation instead of PLS.

    NOTE: if you run with the --corr flag no other input is required, apart from the input scan (-i).

  • -o (--output) (optional): Path where to save the results. If none is provided the results will be saved in the same directory as the input scan.

WARNING: The -i flag is MANDATORY to run the script, and so is exactly one of the -n or -v flags. These last two are mutually exclusive: ONLY one of them can be given as input.
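For example, a typical PLS run with one component, or one requesting 30% explained variance, could look like this (the scan path is purely illustrative):

imagingtranscriptomics -i /Users/myusername/Documents/my_scan.nii -n 1
imagingtranscriptomics -i /Users/myusername/Documents/my_scan.nii -v 30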

Part of Python script


When used as part of a Python script the library can be imported as:

import imaging_transcriptomics as imt

The core class of the package is the ImagingTranscriptomics class which gives access to the methods used in the standalone script. To use the analysis in your scripts you can initialise the class and then simply call the ImagingTranscriptomics().run() method.

import numpy as np
import imaging_transcriptomics as imt

my_data = np.ones(41)  # MUST be of size 41
                       # (corresponds to the regions in the left hemisphere of the DK atlas)

analysis = imt.ImagingTranscriptomics(my_data, n_components=1)
analysis.run()
# If, instead of running PLS, you want to analyse the data with correlation, run:
analysis.run(method="corr")

Once completed, the results will be part of the analysis object and can be accessed with analysis.gene_results.
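For example (continuing the snippet above; only the gene_results attribute mentioned here is assumed):

results = analysis.gene_results  # available once analysis.run() has completed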

Importing the imaging_transcriptomics package will also import other helpful functions for input and reporting. For a complete explanation of these, please refer to the official documentation of the package.

Documentation

The documentation of the script is available at imaging-transcriptomics.rtfd.io/.

Troubleshooting

For any problems with the software you can open an issue on GitHub or contact the maintainer of the package.

Citing

If you publish work using imaging-transcriptomics as part of your analysis please cite:

Imaging transcriptomics: Convergent cellular, transcriptomic, and molecular neuroimaging signatures in the healthy adult human brain. Daniel Martins, Alessio Giacomel, Steven CR Williams, Federico Turkheimer, Ottavia Dipasquale, Mattia Veronese, PET templates working group. bioRxiv 2021.06.18.448872; doi: https://doi.org/10.1101/2021.06.18.448872

Imaging-transcriptomics: Second release update (v1.0.2). Alessio Giacomel & Daniel Martins. (2021). Zenodo. https://doi.org/10.5281/zenodo.5726839

Comments
  • pip installation can not resolve enigmatoolbox dependencies


    After running pip install -e git+https://github.com/netneurolab/pypyls.git/#egg=pyls and pip install imaging-transcriptomics in a new conda environment with Python 3.8, an error occurred when importing the imaging-transcriptomics package: it can't find the module named enigmatoolbox. I figured out that the enigmatoolbox package cannot be resolved automatically by pip, so I had to install enigmatoolbox from GitHub manually, with the code below, according to the enigmatoolbox documentation:

    git clone https://github.com/MICA-MNI/ENIGMA.git
    cd ENIGMA
    python setup.py install
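
    As an alternative sketch (assuming the repository builds directly with pip, which is not verified here), the same installation can be done in a single step:

    pip install git+https://github.com/MICA-MNI/ENIGMA.git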
    
    opened by YCHuang0610 4
  • DK atlas regions


    Dear alegiac95,

    thanks for providing the scripts! I have just gone through the paper and the description of this GitHub repo and I want to adapt your software to my project. However, I use the typical implementation of the DK atlas from FreeSurfer with 34 cortical DK ROIs instead of the 41 ROIs that you have used and, if I'm not mistaken, 41 ROIs are required to run the script as it is. Is it possible to change the input to other cortical parcellations as well (i.e., DK-34)?

    Cheers, Melissa

    enhancement 
    opened by Melissa1909 3
  • Script not calling the correct python version


    The script in version v1.0.0 invokes the #!/usr/bin/env python interpreter, which could generate issues if your default python is Python 2 (e.g., on older macOS versions).
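
    A minimal sketch of the likely fix (assumption: pointing the shebang explicitly at Python 3):

    #!/usr/bin/env python3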

    bug 
    opened by alegiac95 1
  • Version 1.1.0


    Updated the scripts with:

    • support for both full brain analysis and cortical regions only
    • GSEA analysis (both during the analysis and as a separate script)
    • pdf report of the analysis
    opened by alegiac95 0
  • clean code and fix test


    This commit does an extensive code cleaning following the PEP8 standard. It also fixes a test that was most probably intended for previous unstable versions of the software.

    Still to do:

    • Remove logging
    opened by matteofrigo 0
  • Add mathematical background on PLS


    A more detailed explanation of the PLS model and regression is required in the docs.

    • [ ] Add a general mathematical formulation of PLS
    • [ ] Use of PLS in neuroimaging applications
    • [ ] Description of the SIMPLS algorithm used by pypls

    In addition, provide some background on correlation, since it has now been added to the methods available in the Python package/script.
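
    A minimal sketch of the kind of formulation requested (standard PLS notation; not taken from the package documentation):

    X = T P^T + E,   Y = U Q^T + F

    where T and U are the X- and Y-scores, P and Q the loadings, and each successive pair of weight vectors is chosen to maximise the covariance between the corresponding X- and Y-scores; SIMPLS computes these weights by working on the cross-product matrix X^T Y.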

    documentation 
    opened by alegiac95 0