Deep Markov Factor Analysis (NeurIPS 2021)

Overview

Code and experiments for the deep Markov factor analysis (DMFA) model, accepted for publication at NeurIPS 2021:

A. Farnoosh and S. Ostadabbas, “Deep Markov Factor Analysis: Towards concurrent temporal and spatial analysis of fMRI data,” in Thirty-fifth Annual Conference on Neural Information Processing Systems (NeurIPS), 2021.

Dependencies:

NumPy, SciPy, PyTorch, NiBabel, tqdm, Matplotlib, scikit-learn, json, pandas
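These can typically be installed with pip. The line below is a sketch assuming the standard PyPI package names (PyTorch installs as torch, scikit-learn provides sklearn); json ships with the Python standard library and needs no installation:

pip install numpy scipy torch nibabel tqdm matplotlib scikit-learn pandas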

Autism Dataset:

Run the following snippet to restore results from pre-trained checkpoints for the Autism dataset into the ./fMRI_results folder. A few instances from each dataset are included so that the code runs without errors. You may replace {site} with Caltec, Leuven, MaxMun, NYU_00, SBL_00, Stanfo, Yale_0, USM_00, DSU_0, UM_1_0, or set -exp autism for the full dataset. Only the checkpoint files for Caltec, SBL_00, and Stanfo are included here, due to storage limitations.

python dmfa_fMRI.py -t 75 -exp autism_{site} -dir ./data_autism/ -smod ./ckpt_fMRI/ -dpath ./fMRI_results/ -restore
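To restore results for all three included sites in one pass, the same command can be wrapped in a shell loop (a sketch, assuming a bash-compatible shell):

# restore results for each site whose checkpoint is included in this repository
for site in Caltec SBL_00 Stanfo; do
    python dmfa_fMRI.py -t 75 -exp autism_${site} -dir ./data_autism/ -smod ./ckpt_fMRI/ -dpath ./fMRI_results/ -restore
done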

or run the following snippet to train with a batch size of 10 (the full dataset must be downloaded and preprocessed/formatted beforehand):

python dmfa_fMRI.py -t 75 -exp autism_{site} -dir ./data_autism/ -smod ./ckpt_fMRI/ -dpath ./fMRI_results/ -bs 10

After downloading the full Autism dataset, run the following snippet to preprocess/format the data:

python generate_fMRI_patches.py -T 75 -dir ./path_to_data/ -ext /*.gz -spath ./data_autism/

Depression Dataset:

Run the following snippet to restore results from pre-trained checkpoints for the Depression dataset into the ./fMRI_results folder. A few instances from the dataset are included so that the code runs without errors. You may replace {ID} with 1, 2, 3, or 4. ID 4 corresponds to the first experiment on the Depression dataset in the paper; IDs 2 and 3 correspond to the second experiment.

python dmfa_fMRI.py -exp depression_{ID} -dir ./data_depression/ -smod ./ckpt_fMRI/ -dpath ./fMRI_results/ -restore
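To restore results for all four experiment IDs in one pass, the same command can be wrapped in a shell loop (a sketch, assuming a bash-compatible shell):

# restore results for each Depression experiment ID described above
for id in 1 2 3 4; do
    python dmfa_fMRI.py -exp depression_${id} -dir ./data_depression/ -smod ./ckpt_fMRI/ -dpath ./fMRI_results/ -restore
done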

or run the following snippet to train with a batch size of 10 (the full dataset must be downloaded and preprocessed/formatted beforehand):

python dmfa_fMRI.py -exp depression_{ID} -dir ./data_depression/ -smod ./ckpt_fMRI/ -dpath ./fMRI_results/ -bs 10

After downloading the full Depression dataset, run the following snippet to preprocess/format the data:

python generate_fMRI_patches_depression.py -T 6 -dir ./path_to_data/ -spath ./data_depression/

Synthetic fMRI data:

Run the following snippet to restore results from the pre-trained checkpoint for the synthetic experiment into the ./synthetic_results folder (the synthetic fMRI data is not included due to storage limitations).

python dmfa_synthetic.py
