Real-time analysis of intracranial neurophysiology recordings.

Overview

py_neuromodulation

The py_neuromodulation toolbox allows for real-time-capable processing of multimodal electrophysiological data. Its primary use case is movement prediction for adaptive deep brain stimulation.

The documentation, including example usage and parametrization, can be found at https://neuromodulation.github.io/py_neuromodulation/.

Setup

To run this toolbox, first create a new virtual conda environment:

conda env create --file=env.yml

The main modules support real-time-capable feature preprocessing based on iEEG BIDS data.

Different features can be enabled, disabled, and parametrized in nm_settings.json (https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/nm_settings.json).

The current implementation mainly focuses on band-power and sharp-wave feature estimation.

An example folder with a mock subject and a derived feature set is provided.

To run feature estimation on the example BIDS data, execute the following in the root directory:

python main.py

This will write a feature_arr.csv file in the 'examples/data/derivatives' folder.

For further documentation, see ParametrizationDefinition for a description of the required parametrization files. FeatureEstimationDemo walks through an example feature estimation and explains sharp-wave estimation.

Comments
  • Adopt the usage of an argument parser (argparse)

    argparse is a Python standard-library module that allows for command-line options, arguments, and sub-commands.

    documentation: https://docs.python.org/3/library/argparse.html
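
    As a rough illustration, the main.py entry point could expose its inputs via argparse; the flag names below are hypothetical, not the current interface:

        import argparse

        # hypothetical CLI for main.py; flag names are illustrative only
        parser = argparse.ArgumentParser(description="Run feature estimation.")
        parser.add_argument("--bids-path", default="examples/data",
                            help="Path to the BIDS input data.")
        parser.add_argument("--settings", default="nm_settings.json",
                            help="Path to the settings file.")
        args = parser.parse_args()
        print(args.bids_path, args.settings)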

    enhancement 
    opened by mousa-saeed 7
  • Conda environment + setup

    @timonmerk I tried to install pyneuromodulation as a package, but it wasn't easy to make it work. I did the following:

    • I modified env.yml (previously, setting up the environment didn't work; additionally, env.yml was configured to install all packages via pip, whereas now all packages except pybids are installed via conda).
    • I installed the conda environment via

    conda env create -f env.yml

    • I activated the conda environment via

    conda activate pyneuromodulation_test

    • I installed pyneuromodulation in editable mode

    pip install -r requirements.txt --editable .

    • In python, I ran

    import py_neuromodulation
    print(dir(py_neuromodulation))

    • Which told me that the only items in this package are

    ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']

    • Only once I had added the line "from . import nm_BidsStream" to __init__.py did I get the following output for dir():

    ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'nm_BidsStream', 'nm_IO', 'nm_bandpower', 'nm_coherence', 'nm_define_nmchannels', 'nm_eval_timing', 'nm_features', 'nm_fft', 'nm_filter', 'nm_generator', 'nm_hjorth_raw', 'nm_kalmanfilter', 'nm_normalization', 'nm_notch_filter', 'nm_plots', 'nm_projection', 'nm_rereference', 'nm_resample', 'nm_run_analysis', 'nm_sharpwaves', 'nm_stft', 'nm_stream', 'nm_test_settings']

    Questions:

    • Was this the intended way of setting up the environment and the package?
    • What could the problem with __init__.py be related to?
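
    For reference, a minimal sketch of an explicit __init__.py that re-exports submodules so they show up in dir(); the module names are taken from the output above, and whether this matches the intended package layout is exactly the open question here:

        # __init__.py — re-export submodules explicitly
        from . import nm_BidsStream
        from . import nm_IO
        from . import nm_features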
    opened by richardkoehler 4
  • Add Coherence and temporarily remove PDC and DTC

    @timonmerk We should maybe think about removing PDC and DTC from main again (and moving them to a branch) until we make sure that they are implemented correctly. Instead, we could consider adding simple coherence, which might be less specific but is easier to implement and less computationally expensive.
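
    As a point of reference, magnitude-squared coherence between two channels is essentially a one-liner with SciPy; the window length below is an arbitrary choice:

        import numpy as np
        from scipy.signal import coherence

        fs = 1000  # sampling frequency in Hz
        rng = np.random.default_rng(0)
        x = rng.standard_normal(fs * 10)      # e.g. an ECOG channel
        y = x + rng.standard_normal(fs * 10)  # a correlated LFP channel
        f, cxy = coherence(x, y, fs=fs, nperseg=fs)  # cxy in [0, 1] per frequency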

    enhancement 
    opened by richardkoehler 4
  • Specifying segment_lengths for frequency_ranges explicitly

    @timonmerk

    Currently the specification of segment_lengths is not very intuitive and is done implicitly, e.g. 10 == 1/10 of the given segment. Maybe we could write the segment_lengths explicitly in milliseconds, so that they are independent of parameters such as the length of the data batch that is passed. So whether the data batch is 4096, 8000, or 700 samples long, the segment_length would always remain the same (e.g. 100 ms). Instead of this:

        "frequency_ranges": [[4, 8], [8, 12], [13, 20], [20, 35], [13, 35], [60, 80], [90, 200], [60, 200], [200, 300]],
        "segment_lengths": [1, 2, 2, 3, 3, 3, 10, 10, 10, 10],

    I would imagine something like this:

        "frequency_ranges": [[4, 8], [8, 12], [13, 20], [20, 35], [13, 35], [60, 80], [90, 200], [60, 200], [200, 300]],
        "segment_lengths": [1000, 500, 333, 333, 333, 100, 100, 100, 100],

    Or, even better in my opinion (because it is less error-prone: the user is forced to write the segment length right after the frequency range). To underline this argument, I noticed that there were too many segment_lengths in the settings.json file: 10 segment_lengths but only 9 frequency_ranges. So my suggestion:

        "frequency_ranges": [[[4, 8], 1000], [[8, 12], 500], [[13, 20], 333], [[20, 35], 333], [[13, 35], 333], [[60, 80], 100], [[90, 200], 100], [[60, 200], 100], [[200, 300], 100]],
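
    Converting an explicit millisecond segment length into samples would then be independent of the batch size; a minimal sketch:

        def segment_samples(segment_length_ms: float, sfreq: float) -> int:
            """Convert a segment length in milliseconds to a number of samples."""
            return int(round(segment_length_ms / 1000 * sfreq))

        # the same 100 ms segment regardless of sampling rate or batch size
        segment_samples(100, 1000)  # -> 100 samples
        segment_samples(100, 4096)  # -> 410 samples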

    opened by richardkoehler 4
  • Add cortical-subcortical feature estimation

    Given two channels, calculate:

    • partial directed coherence
    • phase amplitude coupling

    Add as a separate "feature" file. This needs to implement writing to the output dataframe.
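
    For the phase-amplitude coupling part, one widely used measure is the Canolty-style mean vector length; a rough sketch between a subcortical phase channel and a cortical amplitude channel (filter bands and orders are illustrative assumptions):

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(x, low, high, fs, order=4):
            """Zero-phase band-pass filter."""
            b, a = butter(order, [low, high], btype="bandpass", fs=fs)
            return filtfilt(b, a, x)

        def pac_mvl(phase_ch, amp_ch, fs, phase_band=(13, 35), amp_band=(60, 200)):
            """Mean-vector-length PAC between two channels."""
            phase = np.angle(hilbert(bandpass(phase_ch, *phase_band, fs)))
            amp = np.abs(hilbert(bandpass(amp_ch, *amp_band, fs)))
            return np.abs(np.mean(amp * np.exp(1j * phase)))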

    opened by timonmerk 4
  • Optimize speed of feature normalization

    • Computation time of feature normalization was improved
    • Memory usage of feature normalization was improved
    • nm_eval_timing was repaired, as these changes broke the API

    closes #136
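
    As background, this kind of feature normalization vectorizes cleanly with NumPy; a rough sketch of z-scoring a new feature vector against a buffer of past samples (illustrative, not the PR's actual code):

        import numpy as np

        def normalize_features(past: np.ndarray, new: np.ndarray) -> np.ndarray:
            """Z-score a new feature vector against past samples (rows = samples)."""
            mean = past.mean(axis=0)
            std = past.std(axis=0)
            std[std == 0] = 1.0  # avoid division by zero for constant features
            return (new - mean) / std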

    opened by richardkoehler 3
  • Rework "nm_settings.json"

    There are some minor issues in the nm_settings.json that we could improve:

    1. Add settings for STFT. Right now, STFT requires the sampling frequency to be 1000 Hz.
    2. Re-think frequency bands: either we define them separately from FFT, STFT, and bandpass, in which case they can't be specifically adapted to each method, or we define them inside each method (FFT, STFT, and bandpass separately). Right now, FFT and STFT "take" their info from the bandpass settings. Also, the fband_names are currently hard-coded in nm_features.py (not sure why this is the case?).
    3. Make the notch filter settings more flexible (i.e. let the user define how many harmonics they would like to remove); see the sketch below.
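
    A hypothetical version of a user-configurable notch filter with SciPy; the function and parameter names are assumptions, not current settings keys:

        from scipy.signal import iirnotch, filtfilt

        def notch_harmonics(data, fs, line_freq=50.0, n_harmonics=3, q=30.0):
            """Notch-filter the line frequency and its first n_harmonics multiples."""
            for k in range(1, n_harmonics + 1):
                f0 = line_freq * k
                if f0 >= fs / 2:  # skip harmonics at or above Nyquist
                    break
                b, a = iirnotch(f0, q, fs=fs)
                data = filtfilt(b, a, data)
            return data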
    enhancement 
    opened by richardkoehler 3
  • Add possibility to "create" new channels

    Add the option in "settings.json" to "create" new channels by summation (e.g. new_channel: LFP_R_234, sum_channels: [LFP_R_2, LFP_R_3, LFP_R_4]). Not an imminent priority, but it would be nice to have.
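
    Implementation-wise, this amounts to an indexed sum over the data array; a hypothetical sketch (the names mirror the example above):

        import numpy as np

        def create_summed_channel(data, ch_names, sum_channels):
            """Sum existing channels into a new virtual channel.

            data: (n_channels, n_samples) array; ch_names: names in row order.
            """
            idx = [ch_names.index(name) for name in sum_channels]
            return data[idx, :].sum(axis=0)

        # e.g. new channel "LFP_R_234" from three LFP channels
        data = np.random.randn(3, 1000)
        names = ["LFP_R_2", "LFP_R_3", "LFP_R_4"]
        lfp_r_234 = create_summed_channel(data, names, names)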

    opened by richardkoehler 3
  • Add column "status" to M1.tsv

    @timonmerk I suggest that we add the column "status" (as in BIDS channels.tsv files) to the M1.tsv file; its values would be "good" or "bad". I recently encountered the problem that I wanted to exclude an ECOG channel from the ECOG average reference because it was too noisy, but there is no way to do this except to hard-code it. Alternatively, we could in theory implement excluding channels by setting the type of the bad channel to "bad". This is not 100% logical, but it would avoid changing the current API. However, the more sustainable and "BIDS-friendly" solution would be to implement the "status" column. I could easily imagine a scenario like this:

    • We want to make real-time predictions from an ECOG grid (e.g. 6 × 5 electrodes), but to reduce computational cost we only want to select the best-performing ECOG channel. However, we would like to re-reference this ECOG channel to an average ECOG reference. So the solution would be to mark all ECOG channels as type "ecog", mark the bad ECOG channels as "bad" and the others as "good", and only set "used" to 1 for the best-performing ECOG channel.
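
    With a "status" column in place, selecting the reference channels becomes a one-line pandas filter; a sketch assuming the proposed column (it does not exist in M1.tsv yet):

        import pandas as pd

        df_m1 = pd.read_csv("M1.tsv", sep="\t")
        # hypothetical "status" column: average-reference only the good ECOG channels
        ref_names = df_m1[(df_m1["type"] == "ecog")
                          & (df_m1["status"] == "good")]["name"].tolist()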
    opened by richardkoehler 3
  • Check in settings.py sampling frequency filter relation

    For too low sampling frequencies, the current notch filter will fail. Should the filter be designed in settings.py?

    settings.py should be written as a class, so that settings.json, df_M1, and similar code are no longer part of start_BIDS / start_LSL:

        import os

        import numpy as np
        import pandas as pd

        # define_M1 and rereference are project modules; raw_arr (an MNE-Raw-like
        # object) and PATH_M1 (path to the M1.tsv file) are defined by the caller
        fs = int(np.ceil(raw_arr.info["sfreq"]))     # sampling frequency in Hz
        line_noise = int(raw_arr.info["line_freq"])  # e.g. 50 or 60 Hz

        # read df_M1 / create M1 if None specified
        df_M1 = pd.read_csv(PATH_M1, sep="\t") \
            if PATH_M1 is not None and os.path.isfile(PATH_M1) \
            else define_M1.set_M1(raw_arr.ch_names, raw_arr.get_channel_types())
        ch_names = list(df_M1['name'])
        refs = df_M1['rereference']
        to_ref_idx = np.array(df_M1[(df_M1['used'] == 1)].index)
        cortex_idx = np.where(df_M1.type == 'ecog')[0]
        subcortex_idx = np.array(df_M1[(df_M1["type"] == 'seeg') | (df_M1['type'] == 'dbs')
                                       | (df_M1['type'] == 'lfp')].index)
        ref_here = rereference.RT_rereference(ch_names, refs, to_ref_idx,
                                              cortex_idx, subcortex_idx,
                                              split_data=False)
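
    A hypothetical check that settings.py could perform before designing the filter (the function and parameter names are assumptions):

        def validate_line_noise(fs: float, line_noise: float, n_harmonics: int = 3) -> None:
            """Raise if the requested notch harmonics reach the Nyquist frequency."""
            if line_noise * n_harmonics >= fs / 2:
                raise ValueError(
                    f"Notch harmonic {line_noise * n_harmonics} Hz is at or above "
                    f"Nyquist ({fs / 2} Hz); lower n_harmonics or resample first."
                )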
    
    opened by timonmerk 3
  • Update rereference and test_features

    @timonmerk So I was pulling all the information extraction from df_M1, like cortex_idx and subcortex_idx etc., into the rereference module, to make the actual analysis script (e.g. test_features) a bit cleaner. This got me thinking that we should maybe initialize the settings as a class into which we load settings.json and M1.tsv. We could then get the relevant information via functions, e.g. ch_names(), used_chs(), cortex_idx(), out_path(), etc. (see the sketch below). What do you think? The pull request doesn't implement the class yet; I only pulled the cortex_idx etc. into the rereference initialization.
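
    A minimal sketch of the proposed class; the accessor names come from the suggestion above, everything else is an assumption:

        import json

        import pandas as pd

        class Settings:
            """Bundle settings.json and M1.tsv behind simple accessors."""

            def __init__(self, settings_path: str, m1_path: str) -> None:
                with open(settings_path) as f:
                    self.settings = json.load(f)
                self.df_m1 = pd.read_csv(m1_path, sep="\t")

            def ch_names(self) -> list:
                return list(self.df_m1["name"])

            def used_chs(self) -> list:
                return list(self.df_m1[self.df_m1["used"] == 1]["name"])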

    opened by richardkoehler 3
  • Restructure nm_run_analysis.py

    This PR is aimed at restructuring run_analysis.py so that the data processing can now be performed independently of a py_neuromodulation data stream. This means in detail that:

    • I renamed the class Run to DataProcessor to make it clearer what it does: it processes a single batch of data. This is only my personal preference, so feel free to revert this change or give it any other name.
    • It is now DataProcessor and not the Stream that instantiates the Preprocessor and the Features classes from the given settings. This way DataProcessor can be used independently of Stream (e.g. in the case of timeflux, where timeflux handles the stream, hence the name of this branch).
    • I noticed a major bug in nm_run_analysis.py, where the settings specified "raw_resampling" but nm_run_analysis checked for "raw_resample", and this went unnoticed because the matching statement didn't check for invalid specifications. This means that in example_BIDS.py "raw_resampling" was activated, but the data was not actually resampled. It took me a couple of hours to find this bug. I fixed it and also added handling for invalid strings, but the decoding performance has now changed slightly. Note that in this example case, if one deactivates "raw_resampling", one might potentially observe improved performance.
    • After this restructuring, it is no longer necessary to specify both the "preprocessing_order" and the individual preprocessing methods as True or False. The keyword "preprocessing" now takes a list (equivalent to the previous preprocessing_order) and infers from it which preprocessing methods to use; to deactivate a method, simply remove it from the "preprocessing" list (see the sketch after this list). I would consider this change optional, but it makes the code much easier to read and helps us avoid errors in the settings specifications.
    • I removed the use of the test_settings function, and I think that we should deprecate it. It was a nice idea originally, but I am now of the opinion that each method or class (e.g. NotchFilter) must check whether the passed arguments are valid anyway. An additional test_settings function would duplicate that check, and each time we change a function we would have to change test_settings too, so it also duplicates our own work.
    • Fixed typos, like LineLenth, and added and fixed some type hints.
    • I might have forgotten about some additional changes I made ...
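
    To illustrate the settings change from the fourth point above (the keys are paraphrased from this description, not copied from the actual settings file):

        # before: an order list plus one boolean flag per preprocessing method
        settings_old = {
            "preprocessing_order": ["raw_resampling", "notch_filter", "re_referencing"],
            "raw_resampling": True,
            "notch_filter": True,
            "re_referencing": False,
        }

        # after: a single list that both selects and orders the methods
        settings_new = {"preprocessing": ["raw_resampling", "notch_filter"]}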

    Let me know what you think!

    opened by richardkoehler 0
Releases (v0.02)
Owner
Interventional Cognitive Neuromodulation - Neumann Lab Berlin
Interventional and Cognitive Neuromodulation Group