A medical imaging framework for PyTorch

Comments
  • Adding support for multiple image formats (.dcm etc) in dataloaders

    I was looking into another popular TensorFlow-based medical library, NiftyNet, specifically its dataloaders, and I really liked the idea of multiple image loaders. Any plans on implementing the same?

    As an initial guess, we could dynamically pass the various image-format loading functions into input_handle and gt_handle:

    self.input_handle = nib.load(self.input_filename)
    self.gt_handle = nib.load(self.gt_filename)

    We may need to make some changes, as I saw that the slicing functionality depends on the nibabel/NIfTI format. I can start off the implementation if you are fine with that, and maybe later we can review it.
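
    A rough sketch of what such a dispatch could look like (a hypothetical helper, not part of medicaltorch; it assumes nibabel for NIfTI and pydicom for DICOM):

    import os
    import numpy as np
    import nibabel as nib
    import pydicom  # assumed extra dependency for DICOM support

    def load_image(filename):
        # Hypothetical loader dispatch based on the file extension.
        ext = os.path.splitext(filename)[1].lower()
        if ext in ('.nii', '.gz'):
            return nib.load(filename).get_fdata()
        if ext == '.dcm':
            return pydicom.dcmread(filename).pixel_array.astype(np.float32)
        raise ValueError("Unsupported image format: {}".format(ext))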

    If you have any further ideas, I can help with those as well.

    Thanks, Mohit

    opened by MohitTare 4
  • Notebook in "Getting started" page does not open

    Hello, I tried to open the notebook linked on the Getting Started page, but I get the following error on the Colab website: Notebook loading error

    There was an error loading this notebook. Ensure that the file is accessible and try again. Failed to execute 'json' on 'Response': body stream is locked

    I'm using the Brave browser under Linux, if that helps. Thanks!

    opened by omendezmorales 3
  • Released new transforms

    Changelog

    1. Locally tested the implementations of the Clahe and Histogram Clipping transforms against the ACDC 2017 dataset.
    2. Refactored the Clahe and Histogram Clipping transforms from previous commits.
    3. Implemented the Square Padding Transform (more details below).
    4. Implemented the RangeMappingMRI2D Transform (more details below; suggestions for a better name are welcome).
    5. Added the possibility of applying the Clahe and Histogram Clipping transforms to labeled samples, although we understand that the default behavior should be False.

    Square Padding Transform

    Given an output size N, it pads the matrix along its shorter 2D axis to make it square and then resizes it to N x N. For example:

    With an output size of 8:

    3 x 5 -> 5 x 5 (with padding) -> 8 x 8 using np.resize.
    4 x 3 -> 4 x 4 (with padding) -> 8 x 8 using np.resize.

    It is very important that the output size be larger than either input dimension. We used it for the ACDC 2017 2D MRI dataset, where each patient has a variable number of slices and the slices vary in height and width.
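
    A minimal NumPy sketch of the idea (the function name is an assumption; this is not the PR's exact code):

    import numpy as np

    def square_pad_then_resize(img, output_size):
        # Pad the shorter 2D axis with zeros so the matrix becomes square,
        # then resize to (output_size, output_size).
        h, w = img.shape
        side = max(h, w)
        padded = np.pad(img, ((0, side - h), (0, side - w)), mode='constant')
        # The changelog mentions np.resize for the final step; note that
        # np.resize repeats/truncates data rather than interpolating.
        return np.resize(padded, (output_size, output_size))

    # e.g. a 3 x 5 input with output_size=8 -> 5 x 5 (padded) -> 8 x 8
    out = square_pad_then_resize(np.zeros((3, 5)), 8)
    assert out.shape == (8, 8)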

    Range Mapping MRI 2D Transform

    This maps the 2D MRI intensity values to a new max_value. For example:

    We have slices whose values range from 0 to 16384; using this transform we can easily rescale them to range from 0 to 1. This is useful when using the scikit-image implementation of the Clahe transform, which implicitly expects images in the [-1, 1] range.
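
    A minimal sketch of that mapping (the function name is an assumption; the actual transform may handle edge cases differently):

    import numpy as np

    def range_map(slice_2d, max_value=1.0):
        # Rescale a 2D slice so its maximum intensity equals max_value
        # (assumes a non-negative input with a non-zero maximum).
        slice_2d = np.asarray(slice_2d, dtype=np.float32)
        return slice_2d * (max_value / slice_2d.max())

    # e.g. values in [0, 16384] -> values in [0, 1]
    mapped = range_map(np.array([[0.0, 8192.0], [16384.0, 4096.0]]))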

    opened by asciidiego 2
  • Bugfix: Clahe and HistogramClipping refactor.

    1. Added the labeled keyword argument to the new transforms.
    2. Refactored the main functionality of each transform into a class method.
    3. Since the input can be a NumPy array or a PIL image, calling np.asarray makes the transforms robust to PIL inputs (see the snippet below).
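
    As a small illustration of that point (illustrative only, not the library's code):

    import numpy as np
    from PIL import Image

    pil_img = Image.new('F', (4, 4))         # a PIL image input
    arr_from_pil = np.asarray(pil_img)       # becomes a (4, 4) float32 ndarray
    arr_from_arr = np.asarray(arr_from_pil)  # an ndarray passes through unchanged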
    opened by asciidiego 2
  • Adding 3D specifics Dataloaders, transforms and model

    This version contains some necessary functions to make a simple pipeline to train and use a 3D U-Net model.

    This version contains :

    • 2 data loaders (MRI3DSegmentationDataset, MRI3DSubVolumeSegmentationDataset): the first one simply returns the images/gt as tensors (the whole volume), while the second one splits the volume into several sub-volumes, which is necessary to run a 3D U-Net without using dozens of GB of VRAM (see the sketch after this list).
    • New transforms (NormalizeInstance3D, RandomRotation3D, RandomReverse3D).
    • A 3D U-Net model
    • Also updated the usage of some deprecated functions in the U-Net model.
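
    Illustrative only (not the dataset's actual code): the sub-volume idea amounts to chunking the volume along one axis so each training sample fits in memory.

    import numpy as np

    def split_into_subvolumes(volume, depth):
        # Split a 3D volume into chunks of `depth` slices along the first axis
        # (the dataset class likely also handles overlap/padding).
        return [volume[i:i + depth] for i in range(0, volume.shape[0], depth)]

    subvols = split_into_subvolumes(np.zeros((60, 128, 128)), depth=16)
    # -> chunks of shape (16, 128, 128), plus a smaller final chunk of (12, 128, 128)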
    opened by morvan-s 2
  • Medical image type

    Hello,

    Thanks for your work! I have a question about medical image formats: they are generally collected in DICOM format (.dcm). Do you provide any dataloaders for DICOM inputs? As far as I know, there is a Python library for that format which can convert .dcm files to NumPy arrays; do you use it?

    question 
    opened by Yifeifr 2
  • digital-copyright

    Hi perone!👋 I added this optional feature to digitally sign your source code and track it on a blockchain node, should you ever be audited or experience a software supply-chain attack. Simply compare the byte-encrypted signature on your .git binary with the hash written to your immutable blockchain node. If they ever differ, you should escalate. See the perone-digital-copyright for complete instructions on accessing your hash. Feel free to contact me directly to review any questions before accepting. ~~Best: [email protected]

    opened by JudeSafo 1
  • Implement undo_transform for RandomRotation and RandomRotation3D

    Implement undo_transform for RandomRotation and RandomRotation3D.

    1. Save the angle in the metadata
    2. undo_transform performs a rotation with the opposite angle (sketched below).
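
    A minimal sketch of the idea for the 2D case (not the exact PR code; the sample dict keys and a PIL image input are assumptions):

    import random
    import torchvision.transforms.functional as F

    class RandomRotationSketch(object):
        def __init__(self, degrees):
            self.degrees = degrees

        def __call__(self, sample):
            angle = random.uniform(-self.degrees, self.degrees)
            sample['input'] = F.rotate(sample['input'], angle)
            sample['metadata'] = {'rotation_angle': angle}  # 1. save the angle
            return sample

        def undo_transform(self, sample):
            angle = sample['metadata']['rotation_angle']
            sample['input'] = F.rotate(sample['input'], -angle)  # 2. opposite angle
            return sample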
    opened by charleygros 1
  • 3D Transformations?

    Are 3D transformations supported? It is not clear to me from the documentation and examples, and from looking at the code I'd guess that's not the case. If they are supported, could you update the docs? If not, is anyone working on it? (Maybe I'll add some basic transformations.)

    opened by aydindemircioglu 1
  • import re for line 480

    flake8 testing of https://github.com/perone/medicaltorch on Python 3.7.0

    $ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

    ./examples/gmchallenge_unet.py:107:42: E999 SyntaxError: invalid syntax
                var_gt = gt_samples.cuda(async=True)
                                             ^
    ./medicaltorch/datasets.py:479:16: F821 undefined name 're'
                if re.search('[SaUO]', elem.dtype.str) is not None:
                   ^
    ./medicaltorch/transforms.py:26:36: F821 undefined name 'img'
                img = t.undo_transform(img)
                                       ^
    1     E999 SyntaxError: invalid syntax
    2     F821 undefined name 're'
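
    For the undefined name flagged in datasets.py, the straightforward fix is to add the missing standard-library import at the top of the module:

    # medicaltorch/datasets.py -- add the missing standard-library import
    import re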
    
    opened by cclauss 1
  • ‘async’ is a reserved word in Python >= 3.7

    flake8 testing of https://github.com/perone/medicaltorch on Python 3.7.0

    $ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

    ./examples/gmchallenge_unet.py:107:42: E999 SyntaxError: invalid syntax
                var_gt = gt_samples.cuda(async=True)
                                             ^
    ./medicaltorch/datasets.py:479:16: F821 undefined name 're'
                if re.search('[SaUO]', elem.dtype.str) is not None:
                   ^
    ./medicaltorch/transforms.py:26:36: F821 undefined name 'img'
                img = t.undo_transform(img)
                                       ^
    1     E999 SyntaxError: invalid syntax
    2     F821 undefined name 're'
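
    Since Python 3.7, async is a reserved keyword, so the flagged call no longer parses; in PyTorch 0.4+ the argument was renamed to non_blocking:

    # examples/gmchallenge_unet.py -- 'async' was renamed to 'non_blocking'
    var_gt = gt_samples.cuda(non_blocking=True)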
    
    enhancement 
    opened by cclauss 1
  • Import Errors in Datasets Class

    Hi,

    When using the latest version of medicaltorch (or at least, the one installed by pip), importing the datasets class into the program raises the following error:

    from torch._six import string_classes, int_classes                                   
    ImportError: cannot import name 'int_classes' from 'torch._six'
    

    I've found that this can be fixed by removing int_classes in the following line in datasets.py:

    from torch._six import string_classes, int_classes
    

    and, instead, declaring int_classes = int.
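
    A sketch of the resulting patch (assuming string_classes is still importable from torch._six in the installed PyTorch version):

    # medicaltorch/datasets.py -- replacement for the failing import
    from torch._six import string_classes
    int_classes = int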

    opened by Birb12 0
  • Project dependencies may have API risk issues

    Hi. In medicaltorch, inappropriate dependency version constraints can introduce risks.

    Below are the dependencies and version constraints that the project is using

    nibabel>=2.2.1
    scipy>=1.0.0
    numpy>=1.14.1
    torch>=0.4.0
    torchvision>=0.2.1
    tqdm>=4.23.0
    scikit-image==0.15.0
    

    The == constraint introduces a risk of dependency conflicts because the dependency scope is too strict. Constraints with no upper bound (or *) introduce a risk of missing-API errors, because the latest version of a dependency may remove some APIs.

    After further analysis of this project, the version constraint of the scipy dependency can be changed to >=0.19.0,<=1.7.3, and the constraint of the tqdm dependency can be changed to >=4.36.0,<=4.64.0.

    The above modifications reduce dependency conflicts as much as possible while allowing versions as recent as possible without triggering API errors in the project.

    The current project invokes all of the following methods.

    The calling methods from the scipy
    scipy.spatial.distance.directed_hausdorff
    scipy.ndimage.filters.gaussian_filter
    scipy.ndimage.interpolation.map_coordinates
    scipy.spatial.distance.dice
    scipy.spatial.distance.jaccard
    
    The calling methods from the tqdm
    tqdm.tqdm.set_postfix
    tqdm.tqdm
    
    The calling methods from the all methods
    self.up3
    self.mp3
    self.conv1a
    f.read
    re.search
    self.branch4a_bn
    DownConv
    isinstance
    numpy.arange
    ValueError
    scipy.spatial.distance.directed_hausdorff
    self.conv3
    self.dc5
    torch.LongTensor
    numpy.any
    numpy.copy
    range
    numpy.allclose
    torch.from_numpy
    self.branch4a_drop
    self.ec2
    self.mp1
    index.self.handlers.get_pair_data
    torch.nn.BatchNorm2d
    numpy.sqrt
    self.branch5b_bn
    self.metadata.keys
    training_mean.input_data.pow.sum
    torch.stack
    torch.nn.LeakyReLU
    self.input_handle.header.get_zooms
    self.conv2_bn
    torchvision.transforms.functional.pad
    numpy.float32
    input.view
    self.conv1b_bn
    numpy.zeros
    input_data.np.flip.copy
    torchvision.transforms.functional.rotate
    self.sample_transform
    type
    self.slice_filter_fn
    numpy.random.uniform
    len
    tflat.iflat.sum
    medicaltorch.transforms.ToTensor
    self.conv9
    self.up_conv
    self.branch1a
    SegmentationPair2D.get_pair_slice
    prediction.flatten
    self.dc4
    self.branch2a
    self.branch4b_bn
    noise.astype.astype
    self.result_dict.items
    target.index_select
    self.threshold.target.torch.gt.float.view
    f.read.splitlines
    mt_collate
    self.branch3b_bn
    self.branch1a_bn
    numpy.random.random
    self.branch1b_drop
    self.branch3a
    self.branch3b_drop
    self.input_handle.header.get_data_shape
    self._build_train_input_filename
    self.gt_handle.header.get_data_shape
    self.conv2a_bn
    PIL.Image.fromarray.resize
    torch.nn.functional.avg_pool2d
    self.ec0
    sample_data.numpy
    self.branch3b
    self.amort
    self.conv2b_drop
    self.branch1a_drop
    error_msg.format
    os.path.dirname
    self.up1
    torchvision.transforms.functional.center_crop
    self.input_handle.get_fdata
    target.index_select.view
    numpy.squeeze
    self.branch4b_drop
    int
    self.ec3
    Mock
    nibabel.as_closest_canonical
    self.branch3a_bn
    os.path.exists
    self.branch1b
    SegmentationPair2D
    UpConv
    numpy.divide
    target.view
    self.input_handle.get_fdata.numel
    torch.nn.Conv2d
    PIL.Image.fromarray.mean
    self.propagate_params
    self.Unet.super.__init__
    self.batch.items
    self.branch2a_bn
    collections.defaultdict
    self.input_handle.get_fdata.sum
    self.down_conv
    torch.gt
    sys.path.insert
    numeric_score
    input.size
    masking.squeeze.sum
    self.branch2b_drop
    i.self.handlers.get_pair_data
    self.up2
    self.branch4a
    coord.self.handlers.get_pair_data
    tqdm.tqdm
    NotImplementedError
    self.indexes.append
    self.mp2
    self.dc3
    torch.nn.functional.relu
    indices.image.map_coordinates.reshape
    self.conv4
    self._prepare_indexes
    self.get_pair_data
    DatasetManager
    self.branch2b
    self.branch5b
    torchvision.transforms.functional.to_tensor
    self.conv2b_bn
    self.dc1
    SampleMetadata
    self.gt_handle.header.get_zooms
    labeled_target.view.sum
    self.dc8
    skimage.exposure.equalize_adapthist
    torch.is_tensor
    self.UNet3D.super.__init__
    torch.cat
    format
    numpy.random.randint
    self.transform
    PIL.Image.fromarray.std
    self.ec7
    self.branch3a_drop
    setuptools.setup
    self.downconv.size
    setuptools.find_packages
    elem.dtype.name.startswith
    scipy.ndimage.filters.gaussian_filter
    torch.nn.Dropout2d
    masking.sum.sum
    self.conv1b_drop
    self.conv2b
    scipy.spatial.distance.dice
    numpy.isnan
    elem.dtype.name.__numpy_type_map
    self.conv2a_drop
    self.conv1a_bn
    torch.DoubleTensor
    numpy.reshape
    torch.nn.ConvTranspose3d
    codecs.open
    self.branch5a
    torch.nn.Conv3d
    torch.nn.MaxPool3d
    RuntimeError
    masking.squeeze.nonzero
    list
    self.prediction
    self.conv2_drop
    os.path.join
    groundtruth.flatten
    numpy.meshgrid
    self.amort_bn
    numpy.random.rand
    torchvision.transforms.functional.affine
    numpy.round
    input.index_select
    self.dc2
    self.sample_augment.append
    self.dc0
    scipy.ndimage.interpolation.map_coordinates
    masking.nonzero.squeeze
    self.conv2a
    self.ec5
    map
    TypeError
    tqdm.tqdm.set_postfix
    self.sample_augment
    self.branch1b_bn
    self.transform.undo_transform
    self._load_filenames
    torch.nn.Sequential
    self.label_augment
    self.get_params
    input.index_select.view
    scipy.spatial.distance.jaccard
    self.conv1a_drop
    self.DownConv.super.__init__
    round
    self.handlers.append
    self.UpConv.super.__init__
    self.dc9
    SegmentationPair2D.get_pair_shapes
    numpy.transpose
    self.downconv
    os.path.abspath
    numpy.percentile
    self.gt_handle.get_fdata
    numpy.array
    self.conv2
    self.pool0
    numpy.flip
    self.conv1_drop
    self.ec1
    self.filename_pairs.append
    torchvision.transforms.functional.normalize
    self.branch5a_bn
    self.branch5b_drop
    self.ec4
    self.elastic_transform
    numpy.sum
    self.branch2b_bn
    super.__init__
    self.concat_bn
    torch.sigmoid
    diff_conf.mean
    self.ec6
    global_pool.expand.expand
    t.undo_transform
    self.threshold.target.torch.gt.float
    self.branch2a_drop
    numpy.random.normal
    self.branch4b
    labeled_input.view.sum
    self.conv1
    self.get_pair_shapes
    self.dc6
    PIL.Image.fromarray
    self.branch5a_drop
    self.amort_drop
    nibabel.load
    numpy.sqrt.item
    self.conv1_bn
    torch.nn.MaxPool2d
    sample.update
    self.dc7
    self.pool2
    self.concat_drop
    training_mean.input_data.pow
    metric_fn
    self.conv1b
    self.pool1
    training_mean.item
    zip
    unittest.mock.MagicMock
    super
    numpy.asarray
    masking.squeeze.squeeze
    gt_data.np.flip.copy
    

    @developer Could you please help me check this issue? May I submit a pull request to fix it? Thank you very much.

    opened by PyDeps 0
  • dice score greater than 100

    I have been trying to run the example code on the SCGMChallenge dataset. I see that the dice score is computed using scipy. Since preds and gt_npy are not boolean arrays, the outcome of the dice dissimilarity is sometimes negative, e.g. d = -0.1138425519461516, and then the dice score (1 - d) is greater than one, e.g. 1.1138425519461517.

    The result is that the reported dice score is more than 100%.
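
    A sketch of a workaround consistent with that observation (the 0.5 threshold and the placeholder arrays are assumptions; scipy's dice expects boolean vectors):

    import numpy as np
    from scipy.spatial.distance import dice

    # Placeholders standing in for the example's predictions and ground truth.
    preds = np.random.rand(4, 4)
    gt_npy = (np.random.rand(4, 4) > 0.5).astype(np.float32)

    # Binarizing first keeps the dissimilarity d in [0, 1],
    # so the score (1 - d) never exceeds 1 (i.e. 100%).
    d = dice((preds > 0.5).ravel(), gt_npy.astype(bool).ravel())
    score = (1.0 - d) * 100.0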

    opened by kumartr 0
  • Issues and any examples for using 3D MRI Datasets and Transformation?

    Hello all.

    May I know how to use the functions mentioned in the title, which were recently added?

    I could not find any examples or guides to follow. Any help would be very much appreciated!

    Here is my code:

    filenames = namedtuple('filenames', 'input_filename gt_filename')
    filenametuple = filenames(mri_input_filename, mri_gt_filename)

    pair = mt_datasets.MRI3DSegmentationDataset(filenametuple)

    and it gives out the following output:

        338
        339     def _load_filenames(self):
    --> 340         for input_filename, gt_filename in self.filename_pairs:
        341             segpair = SegmentationPair2D(input_filename, gt_filename,
        342                                          self.cache, self.canonical)

    ValueError: too many values to unpack (expected 2)
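
    The traceback suggests filename_pairs is iterated as (input_filename, gt_filename) tuples, so passing a single namedtuple makes the dataset unpack its fields one by one. A sketch of a call that matches that expectation (assuming the first argument is the list of pairs; the paths are placeholders):

    from medicaltorch import datasets as mt_datasets

    mri_input_filename = 'patient001_image.nii.gz'  # placeholder path
    mri_gt_filename = 'patient001_gt.nii.gz'        # placeholder path

    # Wrap the input/ground-truth paths in a list of pairs.
    filename_pairs = [(mri_input_filename, mri_gt_filename)]
    dataset = mt_datasets.MRI3DSegmentationDataset(filename_pairs)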

    opened by arvinhui 0
Releases(v0.2)
Owner
Christian S. Perone
Machine Learning Engineering / Research