Hierarchical probabilistic 3D U-Net, with attention mechanisms (Attention U-Net, SEResNet) and a nested decoder structure with deep supervision (UNet++).

Overview

Clinically Significant Prostate Cancer Detection in bpMRI

Note: This repo will be continually updated with future advancements, and we welcome open-source contributions! Currently, it shares the TensorFlow 2.5 version of the Hierarchical Probabilistic 3D U-Net (with attention mechanisms, a nested decoder structure and deep supervision), titled M1, as explored in the publications listed below. The source code used to train this model, as per our original setup, carries a large number of dependencies on internal datasets, tooling, infrastructure and hardware, so its release is currently not feasible. However, an equivalent minimal adaptation has been made available. We encourage users to test out M1, identify potential areas for significant improvement and propose PRs for inclusion in this repo.

Pre-Trained Model using 1950 bpMRI with PI-RADS v2 Annotations [Training:Validation Ratio - 80:20]:
To infer lesion predictions on testing samples using the pre-trained variant (architecture in commit 58b784f) of this algorithm, please visit https://grand-challenge.org/algorithms/prostate-mri-cad-cspca/

Main Scripts
โ— Preprocessing Functions: tf2.5/scripts/preprocess.py
โ— Tensor-Based Augmentations: tf2.5/scripts/model/augmentations.py
โ— Training Script Template: tf2.5/scripts/train_model.py
โ— Basic Callbacks (e.g. LR Schedules): tf2.5/scripts/callbacks.py
โ— Loss Functions: tf2.5/scripts/model/losses.py
โ— Network Architecture: tf2.5/scripts/model/unets/networks.py

Requirements
โ— Complete Docker Container: anindox8/m1:latest
โ— Key Python Packages: tf2.5/requirements.txt

Figure: Train-time schematic for the Bayesian/hierarchical probabilistic configuration of M1. L_S denotes the segmentation loss between the prediction p and the ground truth Y. Additionally, L_KL, the Kullback–Leibler divergence loss between the prior distribution P and the posterior distribution Q, is used at train-time (refer to arXiv:1905.13077). For each execution of the model, latent samples z_i ∈ Q (train-time) or z_i ∈ P (test-time) are successively drawn at increasing scales of the model to predict one segmentation mask p.
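
In code, the objective amounts to L = L_S + beta * L_KL summed over the latent scales. Below is a minimal sketch, assuming the per-scale priors/posteriors are available as tensorflow_probability distributions; seg_loss, beta, priors and posteriors are illustrative placeholders, not the repo's API (when probabilistic=True, the repo computes L_KL internally):

import tensorflow as tf
import tensorflow_probability as tfp

def total_loss(y_true, y_pred, posteriors, priors, seg_loss, beta=1.0):
    # L_S: segmentation loss between prediction p and ground-truth Y
    l_s  = seg_loss(y_true, y_pred)
    # L_KL: KL divergence between posterior Q and prior P, summed over scales
    l_kl = tf.add_n([tfp.distributions.kl_divergence(q, p)
                     for q, p in zip(posteriors, priors)])
    return l_s + beta * tf.reduce_mean(l_kl)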

Figure: Architecture schematic of M1, with attention mechanisms and a nested decoder structure with deep supervision.
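
For intuition, here is a minimal sketch of the additive attention gate popularized by Attention U-Net (Oktay et al., 2018); the function and its layers are illustrative, not the repo's internal API, and it assumes x and g share spatial dimensions (consistent with the unit att_sub_samp factors in the example further down):

import tensorflow as tf

def attention_gate(x, g, inter_channels):
    # x: encoder skip features; g: gating signal from the coarser decoder level
    theta_x = tf.keras.layers.Conv3D(inter_channels, kernel_size=1)(x)
    phi_g   = tf.keras.layers.Conv3D(inter_channels, kernel_size=1)(g)
    f       = tf.keras.layers.Activation('relu')(theta_x + phi_g)
    psi     = tf.keras.layers.Conv3D(1, kernel_size=1, activation='sigmoid')(f)
    return x * psi  # re-weight skip features, suppressing irrelevant regions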

Minimal Example of Model Setup in TensorFlow 2.5:
(More Details: Training CNNs in TF2: Walkthrough; TF2 Datasets: Best Practices; TensorFlow Probability)

# Imports (assuming the repo's tf2.5/scripts directory is on the PYTHONPATH)
import numpy as np
import tensorflow as tf
from model import losses, unets
import model.unets.networks          # makes unets.networks available below

# U-Net Definition (Note: Hyperparameters are Data-Centric -> Require Adequate Tuning for Optimal Performance)
unet_model = unets.networks.M1(
                        input_spatial_dims =  (20,160,160),
                        input_channels     =   3,
                        num_classes        =   2,
                        filters            =  (32,64,128,256,512),
                        strides            = ((1,1,1),(1,2,2),(1,2,2),(2,2,2),(2,2,2)),
                        kernel_sizes       = ((1,3,3),(1,3,3),(3,3,3),(3,3,3),(3,3,3)),
                        prob_latent_dims   =  (3,2,1,0),
                        dropout_rate       =   0.50,
                        dropout_mode       =  'monte-carlo',
                        se_reduction       =  (8,8,8,8,8),
                        att_sub_samp       = ((1,1,1),(1,1,1),(1,1,1),(1,1,1)),
                        kernel_initializer =   tf.keras.initializers.Orthogonal(gain=1),
                        bias_initializer   =   tf.keras.initializers.TruncatedNormal(mean=0, stddev=1e-3),
                        kernel_regularizer =   tf.keras.regularizers.l2(1e-4),
                        bias_regularizer   =   tf.keras.regularizers.l2(1e-4),
                        cascaded           =   False,
                        probabilistic      =   True,
                        deep_supervision   =   True,
                        summary            =   True)
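
# Hypothetical Input Pipeline (zero-filled arrays stand in for preprocessed bpMRI
# volumes and one-hot masks; shapes follow the model definition above -> swap in
# your own data, BATCH_SIZE and TRAIN_SAMPLES)
BATCH_SIZE    = 2
images        = np.zeros((4,20,160,160,3), dtype=np.float32)  # N x D x H x W x C
labels        = np.zeros((4,20,160,160,2), dtype=np.float32)  # one-hot ground-truth
TRAIN_SAMPLES = images.shape[0]
train_ds      = tf.data.Dataset.from_tensor_slices((images, labels))\
                               .shuffle(TRAIN_SAMPLES).batch(BATCH_SIZE)\
                               .prefetch(tf.data.AUTOTUNE)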

# Schedule Cosine Annealing Learning Rate with Warm Restarts
# (first restart after 10 epochs' worth of steps, each cycle doubling via t_mul=2.00)
LR_SCHEDULE = tf.keras.optimizers.schedules.CosineDecayRestarts(
                        initial_learning_rate=1e-3, t_mul=2.00, m_mul=1.00, alpha=1e-3,
                        first_decay_steps=int(np.ceil(TRAIN_SAMPLES / BATCH_SIZE)) * 10)
                                                  
# Compile Model w/ Optimizer and Loss Function(s)
# (focal loss: alpha weights the classes, gamma focuses training on hard voxels)
unet_model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate=LR_SCHEDULE, amsgrad=True),
                   loss      = losses.Focal(alpha=[0.75, 0.25], gamma=2.00).loss)

# Train Model
unet_model.fit(...)
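
# Hedged Inference Sketch (not the repo's official API): with probabilistic=True
# and dropout_mode='monte-carlo', each forward pass draws fresh latent samples
# z_i and dropout masks, so repeated passes approximate a predictive distribution;
# with deep_supervision=True the model may return one output per decoder scale,
# in which case keep only the final full-resolution output before averaging
x_test    = np.zeros((1,20,160,160,3), dtype=np.float32)   # placeholder volume
samples   = np.stack([unet_model(x_test, training=False).numpy()
                      for _ in range(8)])                  # 8 stochastic passes
mean_pred = samples.mean(axis=0)                           # averaged segmentation
uncertmap = samples.var(axis=0)                            # per-voxel uncertainty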

If you use this repo or some part of its codebase, please cite the following articles (see bibtex):

โ— A. Saha, J. Bosma, J. Linmans, M. Hosseinzadeh, H. Huisman (2021), "Anatomical and Diagnostic Bayesian Segmentation in Prostate MRI โˆ’Should Different Clinical Objectives Mandate Different Loss Functions?", Medical Imaging Meets NeurIPS Workshop โ€“ 35th Conference on Neural Information Processing Systems (NeurIPS), Sydney, Australia. (architecture in commit 914ec9d)

โ— A. Saha, M. Hosseinzadeh, H. Huisman (2021), "End-to-End Prostate Cancer Detection in bpMRI via 3D CNNs: Effect of Attention Mechanisms, Clinical Priori and Decoupled False Positive Reduction", Medical Image Analysis:102155. (architecture in commit 58b784f)

โ— A. Saha, M. Hosseinzadeh, H. Huisman (2020), "Encoding Clinical Priori in 3D Convolutional Neural Networks for Prostate Cancer Detection in bpMRI", Medical Imaging Meets NeurIPS Workshop โ€“ 34th Conference on Neural Information Processing Systems (NeurIPS), Vancouver, Canada. (architecture in commit 58b784f)

Contact: [email protected]; [email protected]

Related U-Net Architectures:
โ— nnU-Net: https://github.com/MIC-DKFZ/nnUNet
โ— Attention U-Net: https://github.com/ozan-oktay/Attention-Gated-Networks
โ— UNet++: https://github.com/MrGiovanni/UNetPlusPlus
โ— Hierarchical Probabilistic U-Net: https://github.com/deepmind/deepmind-research/tree/master/hierarchical_probabilistic_unet

Owner
Diagnostic Image Analysis Group