Machine learning for NeuroImaging in Python

Overview

Nilearn enables approachable and versatile analyses of brain volumes. It provides statistical and machine-learning tools, with instructive documentation and a friendly community.

It supports general linear model (GLM) based analysis and leverages the scikit-learn Python toolbox for multivariate statistics with applications such as predictive modelling, classification, decoding, or connectivity analysis.

Dependencies

The required dependencies to use the software are:

  • Python >= 3.5
  • setuptools
  • Numpy >= 1.11
  • SciPy >= 0.19
  • Scikit-learn >= 0.19
  • Joblib >= 0.12
  • Nibabel >= 2.0.2

If you are using nilearn's plotting functionality or running the examples, matplotlib >= 1.5.1 is required.

If you want to run the tests, you need pytest >= 3.9 and pytest-cov for coverage reporting.

Install

First make sure you have installed all the dependencies listed above. Then you can install nilearn by running the following command in a command prompt:

pip install -U --user nilearn
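
To check that the installation worked, you can import nilearn and print its version:

import nilearn
print(nilearn.__version__)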

More detailed instructions are available at http://nilearn.github.io/introduction.html#installation.

Development

Detailed instructions on how to contribute are available at http://nilearn.github.io/development.html

Comments
  • [ENH] Initial visual reports

    Closes #2022.

    An initial implementation of visual reports for Nilearn. Adds:

    • [x] The templating library tempita as an external dependency
    • [x] A reorganization of HTMLDocument into a new reporting module
    • [x] A new reporting HTML template
    • [x] A superclass Report to populate the report HTML template with tempita-populated text
    • [x] Relevant CSS styling to improve report UX, using pure-css
    • [x] An ability to display reports directly in Jupyter Notebooks, without iframe rendering, thanks to @GaelVaroquaux
    • [x] Documentation of this functionality, with examples
    • [x] A new Sphinx Gallery image scraper to embed these example HTML reports

    For a current rendering of reports see: https://github.com/emdupre/nilearn/pull/4#issuecomment-527984327 and the plot_mask_computation example.
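
    As the feature later surfaced in nilearn, reports can be generated from a fitted masker. A minimal sketch (generate_report is the eventual public entry point; func_img is a placeholder):

    from nilearn.maskers import NiftiMasker

    masker = NiftiMasker()
    masker.fit(func_img)                 # func_img: a 4D Nifti (placeholder)
    report = masker.generate_report()    # displays inline in Jupyter notebooks
    report.save_as_html("report.html")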

    opened by emdupre 172
  • switch from papaya to brainsprite in plotting.view_stat_map

    I really love the new 3D interactive viewer (plotting.view_stat_map), but the notebooks it is producing are huge. In this PR, I am proposing to switch from papaya to brainsprite, which is a js library I developed for the exact purpose of embedding lightweight 3D viewers in html pages (http://github.com/simexp/brainsprite.js).

    The first difference with papaya is that brainsprite uses a jpg or png containing all sagittal slices of a volume, plus json metadata, to store the brain images. That tends to be much smaller than a nifti (depending on the nifti's numerical precision). It also means that brainsprite can render brains with core html5 features and no dependencies, so the brainsprite library weighs 15kb (500 lines...), as opposed to 2Mb for the current papaya html template. I have attached two brain viewers embedded in jupyter notebooks: the papaya-based notebook is 12Mb, while the brainsprite-based notebook is 500kb. Again, this reflects a core difference in design: papaya is a full brain viewer app, featuring nifti reading as well as a colorbar etc. Brainsprite is a minimal, fast brain viewer working from a pre-generated sprite.
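
    To make the sprite idea concrete, here is a rough sketch (not the PR's actual save_sprite; the input path, tiling layout, and metadata keys are illustrative) of tiling all sagittal slices of a volume into one image plus json metadata:

    import json
    import math

    import matplotlib.pyplot as plt
    import nibabel as nib
    import numpy as np

    img = nib.load("stat_map.nii.gz")   # hypothetical input volume
    data = img.get_fdata()
    n_x = data.shape[0]                 # one sagittal slice per x index
    n_cols = math.ceil(math.sqrt(n_x))
    n_rows = math.ceil(n_x / n_cols)

    # Tile all sagittal slices into a single mosaic ("sprite") image.
    h, w = data.shape[1], data.shape[2]
    sprite = np.zeros((n_rows * h, n_cols * w))
    for i in range(n_x):
        r, c = divmod(i, n_cols)
        sprite[r * h:(r + 1) * h, c * w:(c + 1) * w] = data[i]

    plt.imsave("sprite.png", sprite, cmap="gray")
    # Minimal metadata a viewer needs to cut the mosaic back into slices.
    with open("sprite.json", "w") as f:
        json.dump({"n_slices": n_x, "rows": n_rows, "cols": n_cols}, f)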

    Which brings us to the second point: all the action for the generation of the brain volume happens in python. There is a new function called save_sprite that generates the brain sprite as well as the json metadata. It relies on matplotlib, as well as nilearn's own functions. In particular, thresholding and colormap generation are all done with nilearn's code, and resampling as well. This means that it will be easier for nilearn's developers to maintain and evolve. The current version replicates all the arguments of plot_stat_map, including draw_cross, annotate, cut_coords and a few others (with some extras, such as opacity).

    This PR is far from polished; there are a few outstanding issues here. I also need to look into the docs and testing. Finally, I dumped some functions in html_stat_map.py which should probably live elsewhere. But I think it is time to get feedback, and in particular I'd like to know if there is any interest in merging this PR at all...

    opened by pbellec 166
  • [MRG] Cortex surface projections

    Hello @GaelVaroquaux , @mrahim , @agramfort, @juhuntenburg and others, this is a PR about surface plotting.

    nilearn has some awesome functions to plot surface data in nilearn.plotting.surf_plotting. However, it doesn't offer a conversion from volumetric to surface data.

    It would be great to add a function to sample or project volumetric data on the nodes of a cortical mesh; this would allow users to look at surface plots of their 3d images (e.g. statistical maps).

    In this PR we will try to add this to nilearn.

    Most tools which offer this functionality (e.g. caret, freesurfer, pycortex) usually propose several projection and sampling strategies, offering different quality / speed tradeoffs. However, it seems to me that naive strategies are not so far behind more elaborate ones - see for example [Operto, Grégory, et al. "Projection of fMRI data onto the cortical surface using anatomically-informed convolution kernels." Neuroimage 39.1 (2008): 127-135]. For plotting and visualisation, the results of a simple strategy are probably accurate enough for most users.

    I therefore suggest starting by including a very simple and fast projection scheme; we can add more elaborate ones later if we want. I'm just getting started, but I think we can already start a discussion.

    The proposed strategy is simply to draw samples from a 3 mm sphere around each mesh node and average the measures.

    The image below illustrates that strategy: each red circle is a mesh node. Samples are drawn at the blue crosses attached to it that fall inside the image, and are then averaged to compute the color inside the circle. (This image is produced by the show_sampling.py example, which is only there to clarify the strategy implemented in this PR and will be removed.)

    [Image: illustration_2d — 2D illustration of the sampling strategy]

    Here is an example surface plot for a brainpedia image (id 32015 on Neurovault, https://neurovault.org/media/images/1952/task007_face_vs_baseline_pycortex/index.html), produced by brainpedia_surface.py:

    [Image: brainpedia_inflated]

    And here is the plot produced by pycortex for the same image, as shown on Neurovault:

    [Image: brainpedia_pycortex]

    Note about performance: in order to choose the positions of the samples to draw from a unit ball, for now we cluster points drawn from a uniform distribution on the ball and keep the centroids (we can think of something better). This takes a few seconds, and the results are cached with joblib for the time being; since it only needs to be done once, the positions will be hardcoded once and for all when we have decided how many samples we want (no computing, no caching). With 100 samples per ball, projecting a full-brain stat map with 2mm voxels onto an fsaverage hemisphere mesh takes around 60 ms.
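
    Below is a rough sketch of the ball-sampling idea described above (illustrative only, not this PR's implementation; out-of-image samples are clipped to the volume rather than excluded):

    import nibabel as nib
    import numpy as np

    def project_to_mesh(img_path, node_coords, radius=3.0, n_samples=100, seed=0):
        # node_coords: (n_nodes, 3) node positions in world (mm) coordinates.
        img = nib.load(img_path)
        data = img.get_fdata()
        inv_affine = np.linalg.inv(img.affine)

        # Draw sample offsets uniformly from a ball of the given radius.
        rng = np.random.default_rng(seed)
        offsets = rng.normal(size=(n_samples, 3))
        offsets /= np.linalg.norm(offsets, axis=1, keepdims=True)
        offsets *= radius * rng.uniform(size=(n_samples, 1)) ** (1 / 3)

        values = np.empty(len(node_coords))
        for i, node in enumerate(node_coords):
            pts = node + offsets                             # world space
            vox = nib.affines.apply_affine(inv_affine, pts)  # to voxel space
            vox = np.clip(np.round(vox).astype(int), 0,
                          np.array(data.shape[:3]) - 1)
            values[i] = data[vox[:, 0], vox[:, 1], vox[:, 2]].mean()
        return values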

    opened by jeromedockes 79
  • (WIP) Sparse models: S-LASSO and TV-l1

    • Supports TV-l1 and S-LASSO priors
    • Supports logistic and squared losses
    • Has cross validation
    • Can automatically select alpha by CV (+ automatic computation of useful alpha ranges for the CV)
    • Warning: the user must supply l1_ratio (see the sketch below)
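
    A hedged sketch of how these priors are exposed through the SpaceNet estimators they grew into (penalty names as in later nilearn releases; func_imgs and labels are placeholders):

    from nilearn.decoding import SpaceNetClassifier

    # func_imgs: 4D Nifti image of samples; labels: one target per volume.
    clf = SpaceNetClassifier(penalty="tv-l1", l1_ratios=0.5, cv=3)
    clf.fit(func_imgs, labels)
    predictions = clf.predict(func_imgs)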
    opened by dohmatob 78
  • Add a Neurovault fetcher.

    This is based on PR #832 opened by @bcipolli in answer to issue #640. The contribution is to add a fetcher for downloading selected images from http://neurovault.org/. I have tried to address some of the remarks that were made in the discussion about #832. I have included the examples plot_ica_neurovault.py and plot_neurovault_meta_analysis.py from the previous PR; they remain almost identical.

    The interface to the fetcher is similar to that proposed in #832. I have kept the possibility to filter collections and images either with a function or with a dictionary of {field: desired_value} pairs (or both), but they are now separate arguments called respectively collection_filter (image_filter for images) and collection_terms (image_terms for images). Also, now the filters passed in a dictionary are only inserted in the query URL if they are actually available on the server (which is the case for the collection owner, the collection DOI, and the collection name); otherwise they are applied to the metadata once it has been downloaded.
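
    For illustration, a call combining dictionary terms with a filter function might look like this (argument names as described in this PR; the field values are examples):

    from nilearn.datasets import fetch_neurovault

    # Keep only unthresholded images that are not ROI/mask maps.
    data = fetch_neurovault(
        image_terms={'is_thresholded': False},
        image_filter=lambda meta: meta.get('map_type') != 'ROI/mask',
    )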

    For users who want a specific set of image or collection ids, downloading all the Neurovault metadata and filtering on the ids is inefficient; so they can use the image_ids or collection_ids parameters to pass the lists of ids they want to download; in this case any filter is ignored and the server is queried directly for the required image and collection ids. Note: this is also done under the hood if the collection id is used as a filter term - either by specifying the collection 'id' field or the image 'collection_id' field - except that in this case the other filters are still applied.

    I have included a ResultFilter class and a bunch of special values such as NotNull which can be used to specify filters more easily. In some neurovault metadata jsons, some values are strings such as "", "null", "None"... instead of an actual null; these (more precisely, strings that match ($|n/?a$|none|null) in a case-insensitive way) are replaced by a true null value (null in the json, converted to None when loaded into a python dict) upon download, so that comparing to None or testing for truth should yield the expected results. In particular, dict.get(field) will give the same value whether the original value was null, "null", ..., or plain missing.

    This should make using fetch_neurovault somewhat easier. However, for users who are interested in a large subset of the neurovault data and are not too short on disk space, I would recommend using only very simple filters (e.g. the defaults) when calling fetch_neurovault to download (almost all) the data, and, once it is on disk, only accessing it through read_sql_query, local_database_connection or local_database_cursor. The metadata is stored in an sqlite database, so instead of having to read the docstring for fetch_neurovault, write their own filters, etc., most users will probably prefer to download it all and then simply use SQL syntax to select the subset they're interested in. read_sql_query queries the database and returns the result as an OrderedDict of columns (the default) or as a list of rows. local_database_connection gives a connection to the sqlite file, so that pandas users can load, e.g., all the images' metadata by typing:

    import pandas

    images = pandas.read_sql_query(
        "SELECT * FROM images", neurovault.local_database_connection())

    Of course if they prefer manipulating sqlite3 objects directly they can use the connection given by local_database_connection or the cursor given by local_database_cursor.

    @bcipolli, @chrisfilo, @GaelVaroquaux, or anyone else, please let me know what modifications need to be made! In particular:

    • What should be the default filters? The current behaviour is to exclude:
      • Collections from a list of known bad collections (found in PR #832, with one addition).
      • Empty collections.
      • Images from a list of known bad images (found in PR #832).
      • Images that are not in MNI space.
      • Images for which the metadata field 'is_valid' is cleared.
      • Images for which the metadata field 'is_thresholded' is set.
      • Images for which 'map_type' is 'ROI/mask', 'anatomical' or 'parcellation'.
      • Images for which 'image_type' is 'atlas'.
    • Should the fetcher, by default, download all the images matching the filters, or only a limited number (the current behaviour)?
    opened by jeromedockes 67
  • [MRG] Glass brain visualisation

    This pull request is WIP for now and was only opened to get some feedback, both visualisation-wise and code-wise.

    Here is what it looks like at the moment: [Image: glass_brain]

    The code to generate the plots is there.

    New feature Plotting 
    opened by lesteve 64
  • Surface plots

    Hi again,

    This PR replaces #2454, which is a continuation of #1730, submitted by @dangom to resolve #1722. The intention is to check that everything other than the symmetric_cbar is working properly and merge the surface montages into master. I'll open an issue for the symmetric_cbar and fix that separately.

    Also, thank you so much @GaelVaroquaux and @effigies for your help creating the PR properly.

    opened by ZviBaratz 62
  • Brainomics/Localizer

    opened by DimitriPapadopoulos 60
  • SpaceNet (this PR succeeds PR #219)

    This PR succeeds PR #219 (aka unicorn factory). All discussions should be done here henceforth. #219 is now classified, and should be referred to solely for histological purposes.

    opened by dohmatob 54
  • [ENH, MRG] fREM

    Following the merge of #2000, here is the introduction of fREMClassifier and fREMRegressor objects which run pipelines with clustering, feature selection and ensembling of best models across a grid of parameters.

    • The file changes are minor.
    • The current implementation yields the expected results on an example (see the screenshot below, on plot_haxby_tutorial).
    • But the fREM accuracy tests on small datasets in test_decoder.py keep failing simply because fREM's accuracy is not good enough in this setting; after trying various parameters, I don't know which tests would be better suited to this use case. Anybody?
    • If I remember correctly you wanted to replace Decoder with fREM in many examples to alleviate their computational cost @GaelVaroquaux. Which ones are the main targets? I will benchmark how much time we could gain, but on ROIs clustering doesn't seem very useful (e.g. it slows things down on the Haxby ROI example).
    [Screenshot: results on the plot_haxby_tutorial example]
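
    For reference, a hedged sketch of how the classifier might be used (the class landed as FREMClassifier in nilearn.decoding; imgs and conditions are placeholders):

    from nilearn.decoding import FREMClassifier

    # imgs: 4D Nifti of samples; conditions: one label per volume.
    frem = FREMClassifier(estimator="svc", clustering_percentile=10, cv=20)
    frem.fit(imgs, conditions)   # clustering + feature screening + ensembling
    accuracy = (frem.predict(imgs) == conditions).mean()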
    opened by thomasbazeille 52
  • [MRG] Dictionary learning + nilearn.decomposition refactoring

    Decomposition estimators (DictLearning / MultiPCA) now inherit from a DecompositionEstimator.

    Loading of data is done through a PCAMultiNiftiMasker, which loads data from files and compresses it.

    Potentially, the function check_masker could solve issue #688, as it factorizes the input checking of estimators that accept either a masker or parameters for a mask. It is tuned to be able to use PCAMultiNiftiMasker.
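
    For context, a hedged sketch of the resulting decomposition API (attribute and argument names as they later stabilized in nilearn; func_filenames is a placeholder):

    from nilearn.decomposition import DictLearning

    # func_filenames: list of 4D functional images (placeholder).
    dict_learning = DictLearning(n_components=20, smoothing_fwhm=6.0,
                                 random_state=0)
    dict_learning.fit(func_filenames)
    components_img = dict_learning.components_img_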

    opened by arthurmensch 51
  • Collaboration with NiiVue and possibilities for integration

    The NiiVue project uses WebGL 2.0 to provide interactive web-based visualization capabilities for viewing medical images. We have started to meet with NiiVue developers to discuss possible avenues for integrating NiiVue into Nilearn. They have started working on a Python interface, so one path would be to build this with them. We can also start by providing examples of how Nilearn users can make use of NiiVue functionality. Some relevant repositories include https://github.com/niivue/niivue and https://github.com/niivue/ipyniivue. NiiVue demos can be found here: https://niivue.github.io/niivue/. We can keep track of progress and decisions made to advance this collaboration here, as well as have a general discussion of the benefits of integration.

    Enhancement Discussion 
    opened by ymzayek 1
  • Remove ci-skip action from GitHub Actions workflows

    GitHub Actions now supports skipping workflows out of the box (https://docs.github.com/en/actions/managing-workflow-runs/skipping-workflow-runs), so the mstachniuk/ci-skip action is deprecated and should be removed from all workflows before it starts failing.

    Infrastructure 
    opened by ymzayek 4
  • [ENH] Flat maps for all fsaverage resolutions

    Closes #3171 .

    If we're happy with the flat maps generated in #3171 for all fsaverage resolutions, I suggest the following roadmap to integrate them into nilearn:

    • [x] clean up my current script so that it generates flat maps for both hemispheres and all fsaverage resolutions
    • [ ] publish it somewhere? (as a gist maybe? I'm happy to hear suggestions here)
    • [x] run the script I used in #2815 to generate our fsaverage tarballs, and add flat maps to it
    • [ ] update our OSF datasets for fsaverage 3-7
    • [ ] have at least one example's thumbnail display this feature (which is why this PR leverages unmerged changes from #3173, as I want this thumbnail to show curvature sign as well)

    As a reminder, here is what the current flat maps look like for fsaverage 3 to 7:

    [Images: current flat maps for fsaverage3 through fsaverage7]
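
    If the flat meshes end up shipping with the fsaverage fetcher, usage might look like the following sketch (the flat_left key is an assumption based on this PR's direction):

    from nilearn import datasets, plotting

    fsaverage = datasets.fetch_surf_fsaverage(mesh="fsaverage5")
    plotting.plot_surf(fsaverage["flat_left"], hemi="left")  # assumed key
    plotting.show()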

    opened by alexisthual 7
  • `filter` option in `signal.clean` is not exposed to `nilearn.maskers.NiftiMasker` and potentially other masker objects

    However, I don't think a filter option is provided for nilearn.maskers.NiftiMasker, so it seems to be using the default, i.e. butterworth.

    Originally posted by @DasDominus in https://github.com/nilearn/nilearn/issues/3434#issuecomment-1333064573
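
    A minimal sketch of the gap, assuming current argument names: signal.clean exposes filter, while NiftiMasker only takes the band-pass parameters:

    import numpy as np
    from nilearn import signal
    from nilearn.maskers import NiftiMasker

    sigs = np.random.randn(200, 10)   # 200 timepoints, 10 signals
    cleaned = signal.clean(sigs, filter="cosine", t_r=2.0, high_pass=0.008)

    # No `filter` argument here: the masker falls back to butterworth.
    masker = NiftiMasker(high_pass=0.008, t_r=2.0)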

    Good first issue effort: low 
    opened by htwangtw 6
  • All CI failing because flake8 --diff option has been removed

    All CI jobs are currently failing because flake8==6.0.0 (released on Nov 23) no longer supports the --diff flag.

    The reason has been explained in this issue.

    A solution, adapted from this comment, is to replace the flake8 --diff call in build_tools/flake8_diff.sh#L79 with the following command:

    git diff --name-only $COMMIT | grep '\.py$' | xargs --delimiter='\n' --no-run-if-empty flake8 --show-source
    

    Opened PR #3432

    Bug 
    opened by RaphaelMeudec 2