
Overview

Detection-aided liver lesion segmentation

Here we present the implementation in TensorFlow of our work about liver lesion segmentation accepted in the Machine Learning 4 Health Workshop of NIPS 2017. Check our project page for more information.

In order to develop this code, we used OSVOS and adapted it to the liver lesion segmentation task.

Architecture of the network

In this work we propose a method to segment the liver and its lesions from Computed Tomography (CT) scans using Convolutional Neural Networks (CNNs), which have proven effective in a variety of computer vision tasks, including medical imaging. The network that segments the lesions uses a cascaded architecture: it first focuses on the liver region and then segments the lesions within it. Moreover, we train a detector to localize the lesions and mask the output of the segmentation network with the positive detections. The segmentation architecture is based on DRIU (Maninis et al., 2016), a Fully Convolutional Network (FCN) with side outputs that operate on feature maps of different resolutions, so that the final prediction benefits from the multi-scale information learned by different stages of the network. The main contribution of this work is the use of a detector to localize the lesions, which we show to be beneficial for removing false positives triggered by the segmentation network.
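
As a rough illustration of the side-output idea used by DRIU-style architectures (a minimal sketch under our own naming assumptions, not the code of this repository), feature maps taken from several stages of the base network are reduced to single-channel score maps, upsampled to the slice resolution, and fused with a learned 1x1 convolution:

import tensorflow as tf  # TensorFlow r1.x

def side_output_fusion(feature_maps, image_size):
    """Hypothetical helper: feature_maps is a list of 4-D tensors taken from
    different stages of the base network; image_size is (height, width) of the
    input CT slice, e.g. (512, 512)."""
    side_outputs = []
    for i, fmap in enumerate(feature_maps):
        # A 1x1 convolution collapses each stage to a single-channel score map.
        score = tf.layers.conv2d(fmap, filters=1, kernel_size=1,
                                 name='side_score_%d' % i)
        # Bilinear upsampling brings every score map back to the slice resolution.
        score = tf.image.resize_bilinear(score, image_size)
        side_outputs.append(score)
    # The fused prediction is a learned linear combination of the side outputs.
    fused = tf.layers.conv2d(tf.concat(side_outputs, axis=3), filters=1,
                             kernel_size=1, name='fused_score')
    return side_outputs, fused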

Our workshop paper is available on arXiv, and related slides here.

If you find this code useful, please cite with the following Bibtex code:

@misc{1711.11069,
Author = {Miriam Bellver and Kevis-Kokitsi Maninis and Jordi Pont-Tuset and Xavier Giro-i-Nieto and Jordi Torres and Luc Van Gool},
Title = {Detection-aided liver lesion segmentation using deep learning},
Year = {2017},
Eprint = {arXiv:1711.11069},
}

Code Instructions

Installation

  1. Clone this repository:
git clone https://github.com/imatge-upc/liverseg-2017-nipsws.git
  2. Install the required dependencies if necessary:
  • Python 2.7
  • TensorFlow r1.0 or higher
  • Python dependencies: PIL, numpy, scipy

If you want to test our models, download the different weights. Extract the contents of this folder into the root of the repository, so that there is a train_files folder with the following checkpoints:

  • Liver segmentation checkpoint
  • Lesion segmentation checkpoint
  • Lesion detection checkpoint

If you want to train the models yourself, we also provide the following pretrained models:

  • VGG-16 weights
  • ResNet-50 weights

Data

This code was developed to participate in the Liver lesion segmentation challenge (LiTS), but it can also be used for other segmentation tasks. The LiTS database consists of 130 CT scans for training and 70 CT scans for testing, provided in NIfTI format. We made our own partition of the training set: we used volumes 0-104 to train and 105-130 to test. This code is prepared to run experiments with our partition.

The code expects that the database is inside the LiTS_database folder. Inside there should be the following folders:

  • images_volumes: contains a folder for each CT volume. Inside each of these folders, there is a .mat file for each CT slice of the volume. The required preprocessing consists of clipping the values outside the range (-150, 250) and applying max-min normalization (a sketch of this preprocessing is shown after the example below).
  • liver_seg: the same structure as the previous folder, but with a .png file for each CT slice containing the liver mask.
  • item_seg: the same structure as the previous folder, but with a .png file for each CT slice containing the lesion mask.

An example of the structure for a single slice of a CT volume is the following:

LiTS_database/images_volumes/31/100.mat
LiTS_database/liver_seg/31/100.png
LiTS_database/item_seg/31/100.png
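
As a reference, below is a minimal NumPy sketch of the per-slice preprocessing described above (clipping to (-150, 250) followed by max-min normalization). It mirrors what the provided MATLAB script does; the volume is assumed to be already loaded as a NumPy array with a NIfTI reader of your choice, and the variable name stored in each .mat file ('section' below) is an assumption to be checked against the provided scripts.

import os
import numpy as np
import scipy.io

def preprocess_and_save(volume, volume_id, out_root='LiTS_database/images_volumes'):
    # volume: 3-D NumPy array (height x width x slices) of one CT scan.
    out_dir = os.path.join(out_root, str(volume_id))
    if not os.path.exists(out_dir):
        os.makedirs(out_dir)
    for i in range(volume.shape[2]):
        slice_ = np.clip(volume[:, :, i].astype(np.float32), -150.0, 250.0)
        # Max-min normalization to the [0, 1] range.
        slice_ = (slice_ - slice_.min()) / (slice_.max() - slice_.min() + 1e-8)
        # One .mat file per slice, matching the expected folder layout.
        scipy.io.savemat(os.path.join(out_dir, '%d.mat' % (i + 1)),
                         {'section': slice_})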

We provide a MATLAB script to convert the NIfTI files into this structure. In our case we used this MATLAB library. You can use whichever library you prefer, as long as the file structure and the preprocessing are the same.

cd utils/matlab_utils
matlab process_database_liver.m

Liver segmentation

1. Train the liver model

In seg_liver_train.py you should indicate a dataset list file. An example is inside seg_DatasetList (training_volume_3.txt). Each line has the format:

img1 seg_lesion1 seg_liver1 img2 seg_lesion2 seg_liver2 img3 seg_lesion3 seg_liver3

If you only have liver segmentations, set seg_lesionX = seg_liverX. If you used the folder structure explained in the previous section, you can use the provided training_volume_3.txt and testing_volume_3.txt files.
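
If you need to build such a list yourself, the sketch below shows one possible way to generate it from the folder structure of the Data section. The sliding window of three consecutive slices and the file extensions in the paths are assumptions; compare the output with the provided training_volume_3.txt before using it.

import os

def write_seg_list(volume_ids, out_file, root='LiTS_database'):
    with open(out_file, 'w') as f:
        for vol in volume_ids:
            img_dir = os.path.join(root, 'images_volumes', str(vol))
            names = sorted((s.split('.')[0] for s in os.listdir(img_dir)), key=int)
            # Each line groups three consecutive slices: img seg_lesion seg_liver, three times.
            for a, b, c in zip(names, names[1:], names[2:]):
                cols = []
                for n in (a, b, c):
                    cols += ['images_volumes/%s/%s.mat' % (vol, n),
                             'item_seg/%s/%s.png' % (vol, n),
                             'liver_seg/%s/%s.png' % (vol, n)]
                f.write(' '.join(cols) + '\n')

# Example: volumes 0-104 for training, as in our partition.
# write_seg_list(range(0, 105), 'seg_DatasetList/my_training_volume_3.txt')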

python seg_liver_train.py

2. Test the liver model

A dataset list with the same format, but containing the test images, is required here. If you don't have annotations, simply point to a dummy annotation X.png. There is also an example in seg_DatasetList/testing_volume_3.txt.

python seg_liver_test.py

Lesion detection

This network samples locations around the liver and detects whether or not they contain a lesion.

1. Crop slices around the liver

In order to train the lesion detector and the lesion segmentation network, we need to crop the CT scans around the liver region. First, obtain liver predictions for the whole dataset and copy them to the LiTS_database folder:

cp -rf ./results/seg_liver_ck ./LiTS_database/seg_liver_ck

The following lines crop the database images, the ground truth, and the liver predictions:

cd utils/crops_methods
python compute_3D_bbs_from_gt_liver.py

This will generate the following folders:

LiTS_database/bb_liver_seg_alldatabase3_gt_nozoom_common_bb
LiTS_database/bb_liver_lesion_seg_alldatabase3_gt_nozoom_common_bb
LiTS_database/bb_images_volumes_alldatabase3_gt_nozoom_common_bb
LiTS_database/liver_results

as well as a ./utils/crops_list/crops_LiTS_gt.txt file with the coordinates of each crop.

By default, the script crops the images, ground truth, and liver predictions using the liver ground truth masks, rather than the predictions, to define the crops. You can change this option in the same script.
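
For reference, this is a hedged sketch of the underlying idea: a single bounding box per CT volume is computed from the liver masks and then used to crop every slice of that volume. Margins, zoom handling, and folder naming in compute_3D_bbs_from_gt_liver.py may differ.

import numpy as np

def common_liver_bb(liver_masks):
    # liver_masks: list of binary 2-D arrays, one per slice of a volume.
    # Returns (min_row, max_row, min_col, max_col) covering the liver in all slices.
    rows, cols = [], []
    for mask in liver_masks:
        r, c = np.where(mask > 0)
        if r.size:
            rows.extend([r.min(), r.max()])
            cols.extend([c.min(), c.max()])
    return min(rows), max(rows), min(cols), max(cols)

def crop_slice(slice_, bb):
    min_r, max_r, min_c, max_c = bb
    return slice_[min_r:max_r + 1, min_c:max_c + 1]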

2. Sample locations around liver

Now we need to sample locations around the liver region in order to train and test the lesion detector. We need a .txt file with the following format:

img1 x1 x2 id

Example:

images_volumes/97/444 385.0 277.0 1

where x1 and x2 are the coordinates of the upper-left vertex of the bounding box and id is the data augmentation option. The script offers two options: sampling locations for slices with ground truth or for slices without it. In the first case, two separate lists are generated, one for positive locations (with lesion) and another for negative locations (without lesion), in order to train the detector with balanced batches. These lists are already generated, so you can use them directly; they are inside det_DatasetList (for instance, training_positive_det_patches_data_aug.txt for the positive patches of the training set).
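
To make the format concrete, here is a hedged sketch of how one line of such a list maps to an image patch. The patch size (80x80 below), the row/column order of x1 and x2, and the variable name inside each .mat file are assumptions; check utils/sampling/sample_bbs.py and the detector code for the exact conventions.

import os
import scipy.io

def load_patch(line, root='LiTS_database', patch_size=80):
    img, x1, x2, aug_id = line.split()
    slice_ = scipy.io.loadmat(os.path.join(root, img + '.mat'))['section']
    r, c = int(float(x1)), int(float(x2))
    # Crop the patch whose upper-left vertex is (x1, x2).
    return slice_[r:r + patch_size, c:c + patch_size], int(aug_id)

# Example usage:
# patch, aug_id = load_patch('images_volumes/97/444 385.0 277.0 1')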

In case you want to generate other lists, use the following script:

cd utils/sampling
python sample_bbs.py

3. Train lesion detector

Once you have sampled the positive and negative locations, or if you decide to use the default lists, you can train the detector with the following command:

python det_lesion_train.py

4. Test lesion detector

In order to test the detector, you can use the following command:

python det_lesion_test.py

This will create a folder inside detection_results named after the task_name given to the experiment. It contains two .txt files: one with the hard results (using a threshold of 0.5) and another with the soft results, i.e., the probability predicted by the detector that a location contains a lesion.
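
If you want to re-threshold the detections offline, a sketch along the following lines can be used. The file layout assumed here (one "patch score" pair per line) is an assumption; inspect the generated soft-results file for the exact format.

def threshold_detections(soft_file, hard_file, th=0.5):
    with open(soft_file) as fin, open(hard_file, 'w') as fout:
        for line in fin:
            fields = line.split()
            patch_id, prob = fields[0], float(fields[-1])
            # A location is marked as lesion (1) when its probability exceeds th.
            fout.write('%s %d\n' % (patch_id, int(prob > th)))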

Lesion segmentation

This is the network that segments the lesions. It is trained by backpropagating gradients only through the liver region.

1. Train the lesion model

In order to train the network so that it does not backpropagate through pixels outside the liver, each line of the .txt list file should have the following format:

img1 seg_lesion1 seg_liver1 result_liver1 img2 seg_lesion2 seg_liver2 result_liver2 img3 seg_lesion3 seg_liver3 result_liver3

An example list file is seg_DatasetList/training_lesion_commonbb_nobackprop_3.txt. If you used the folder structure proposed in the Data section, and you named the folders of the cropped slices as proposed in compute_3D_bbs_from_gt_liver.py, you can use these files to train and test the algorithm with the following command:

python seg_lesion_train.py

2. Test the lesion model

The command to test the network is the following:

python seg_lesion_test.py

In this case, observe that the script performs 4 different steps (a sketch of the masking steps follows the list):

  1. Runs inference with the lesion segmentation network
  2. Resizes the results back to the original size (from cropped slices to 512x512)
  3. Masks the results with the liver segmentation masks
  4. Checks the positive lesion detections in the liver and removes the false positives of the segmentation network using the detection results
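
Below is a minimal sketch of steps 3 and 4 (assumptions, not the script's exact code): the resized lesion prediction is masked with the liver segmentation, and lesion pixels that fall outside positively detected regions are removed. The detection_mask argument is a hypothetical 512x512 binary map built from the positive detections.

import numpy as np

def mask_lesion_prediction(lesion_prob, liver_mask, detection_mask, th=0.5):
    # lesion_prob: 512x512 array of lesion probabilities in [0, 1].
    # liver_mask, detection_mask: 512x512 binary arrays.
    lesion = (lesion_prob > th).astype(np.uint8)
    lesion *= liver_mask.astype(np.uint8)       # step 3: keep only pixels inside the liver
    lesion *= detection_mask.astype(np.uint8)   # step 4: keep only positively detected regions
    return lesion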

Contact

If you have any general questions about our work or code that may be of interest to other researchers, please use the public issues section of this GitHub repository. Alternatively, drop us an e-mail at [email protected].

Owner
Image Processing Group - BarcelonaTECH - UPC