Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers

Overview

Official TensorFlow implementation of SLATER, an unsupervised MRI reconstruction model based on zero-Shot Learned Adversarial TransformERs (https://arxiv.org/abs/2105.08059).

Korkmaz, Y., Dar, S. U., Yurt, M., Ozbey, M., & Cukur, T. (2021). Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers. arXiv preprint arXiv:2105.08059.


Demo

The following commands are used to train and test SLATER to reconstruct undersampled MR acquisitions from single- and multi-coil datasets. You can download pretrained network snapshots and sample datasets from the links given below.

The MRI prior is trained on fully-sampled images; at test time, undersampling is performed at the selected acceleration rate. Training uses AdamOptimizer, while testing/inference uses RMSPropOptimizer with a momentum parameter of 0.9. The current settings use AdamOptimizer; you can change the underlying optimizer class in the dnnlib/tflib/optimizer.py file, and additional parameters such as momentum can be inserted at line 87 of that file.
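
For illustration, a minimal sketch of this swap in TensorFlow 1.x (the learning rate and variable names below are placeholders; the actual structure of optimizer.py may differ):

import tensorflow as tf

# Default optimizer used when training the MRI prior: Adam.
train_opt = tf.train.AdamOptimizer(learning_rate=1e-3)

# Optimizer used for testing/inference in the paper: RMSProp with momentum 0.9.
# Extra arguments such as momentum are the kind of parameters you would
# insert around line 87 of dnnlib/tflib/optimizer.py.
test_opt = tf.train.RMSPropOptimizer(learning_rate=1e-3, momentum=0.9)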

Sample training command for multi-coil (fastMRI) dataset:

python run_network.py --train --gpus=0 --expname=fastmri_t1_train --dataset=fastmri-t1 --data-dir=datasets/multi-coil-datasets/train

Sample reconstruction/test command for fastMRI dataset:

python run_recon_multi_coil.py reconstruct-complex-images --network=pretrained_snapshots/fastmri-t1/network-snapshot-001282.pkl --dataset=fastmri-t1 --acc-rate=4 --contrast=t1 --data-dir=datasets/multi-coil-datasets/test

Sample training command for single-coil (IXI) dataset:

python run_network.py --train --gpus=0 --expname=ixi_t1_train --dataset=ixi_t1 --data-dir=datasets/single-coil-datasets/train

Sample reconstruction/test command for IXI dataset:

python run_recon_single_coil.py reconstruct-magnitude-images --network=pretrained_snapshots/ixi-t1/network-snapshot-001282.pkl --dataset=ixi_t1_test --acc-rate=4 --contrast=t1 --data-dir=datasets/single-coil-datasets/test

Datasets

For the IXI dataset, image dimensions are 256x256. For the fastMRI dataset, image dimensions vary by contrast (T1: 256x320, T2: 288x384, FLAIR: 256x320).

SLATER requires datasets in the tfrecords format. To create a tfrecords file from a new dataset, use dataset_tool.py:

To create single-coil datasets, pass magnitude images to the create_from_images function of dataset_tool.py by providing a directory containing images in .png format. Undersampling masks are included under datasets/single-coil-datasets/test.
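
A hypothetical invocation, assuming the StyleGAN2-style dataset_tool.py interface (both paths are placeholders):

python dataset_tool.py create_from_images datasets/single-coil-datasets/train/ixi_t1 /path/to/magnitude_pngs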

To create multi-coil datasets, provide hdf5 files containing fully-sampled, coil-combined complex images in a variable named 'images_fs' with shape [num_of_images, x, y] (this can be modified as needed). Use the create_from_hdf5 function in dataset_tool.py for this.
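
A minimal sketch of preparing such an hdf5 file with h5py (the array contents, dimensions, and output filename below are placeholders):

import h5py
import numpy as np

num_of_images, x, y = 100, 256, 320  # e.g., fastMRI T1 dimensions
images_fs = np.zeros((num_of_images, x, y), dtype=np.complex64)  # fill with coil-combined complex images

with h5py.File('fastmri_t1_train.h5', 'w') as f:
    f.create_dataset('images_fs', data=images_fs)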

The MRI priors are trained on coil-combined datasets saved in tfrecords files with a 3-channel order of [real, imaginary, dummy]. For testing, we include sample coil-sensitivity maps (a 4-dimensional complex variable [x, y, num_of_image, num_of_coils] named 'coil_maps') and undersampling masks (a 3-dimensional variable [x, y, num_of_image] named 'map') in the datasets/multi-coil-datasets/test folder in hdf5 format.
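
As an illustration of the layouts described above (the file names below are assumptions; only the variable names 'coil_maps' and 'map' come from the repo):

import h5py
import numpy as np

# Complex image -> the 3-channel [real, imaginary, dummy] order used in the tfrecords.
img = np.zeros((256, 320), dtype=np.complex64)  # placeholder coil-combined image
three_channel = np.stack([img.real, img.imag, np.zeros_like(img.real)], axis=0)

# Reading the sample test-time inputs:
with h5py.File('datasets/multi-coil-datasets/test/coil_maps.h5', 'r') as f:
    coil_maps = f['coil_maps'][()]  # complex, shape [x, y, num_of_image, num_of_coils]
with h5py.File('datasets/multi-coil-datasets/test/us_masks.h5', 'r') as f:
    mask = f['map'][()]  # shape [x, y, num_of_image]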

Coil-sensitivity maps are estimated using ESPIRiT (http://people.eecs.berkeley.edu/~mlustig/Software.html). Network implementations use libraries from the Gansformer (https://github.com/dorarad/gansformer) and StyleGAN-2 (https://github.com/NVlabs/stylegan2) repositories.


Pretrained networks

You can download pretrained network snapshots and datasets from the links below. Place the downloaded folders (datasets and pretrained_snapshots) under the main repo directory to run the sample test commands given above.

Pretrained network snapshots for IXI-T1 and fastMRI-T1 can be downloaded from Google Drive: https://drive.google.com/drive/folders/1_69T1KUeSZCpKD3G37qgDyAilWynKhEc?usp=sharing

Sample training and test datasets for IXI-T1 and fastMRI-T1 can be downloaded from Google Drive: https://drive.google.com/drive/folders/1hLC8Pv7EzAH03tpHquDUuP-lLBasQ23Z?usp=sharing


Notice for training with multi-coil datasets

To train on multi-coil (complex) datasets, you need to remove/add some lines in training_loop.py:

  • Comment out line 8.
  • Uncomment line 9.
  • Comment out line 23.

Citation

You are encouraged to modify/distribute this code. However, please acknowledge this code and cite the paper appropriately.

@article{korkmaz2021unsupervised,
  title={Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers},
  author={Korkmaz, Yilmaz and Dar, Salman UH and Yurt, Mahmut and {\"O}zbey, Muzaffer and {\c{C}}ukur, Tolga},
  journal={arXiv preprint arXiv:2105.08059},
  year={2021}
}

(c) ICON Lab 2021


Prerequisites

  • Python 3.6
  • CuDNN 10.1
  • TensorFlow 1.14 or 1.15

Acknowledgements

This code uses libraries from the StyleGAN-2 (https://github.com/NVlabs/stylegan2) and Gansformer (https://github.com/dorarad/gansformer) repositories.

For questions/comments please send me an email: [email protected]

