Effect of Deep Transfer and Multi task Learning on Sperm Abnormality Detection


License: GPL v3

Introduction

This repository contains the code and models of the paper "Effect of Deep Transfer and Multi task Learning on Sperm Abnormality Detection": https://doi.org/10.1016/j.compbiomed.2020.104121

Dataset

First, download the MHSMA dataset:

git clone https://github.com/soroushj/mhsma-dataset.git
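The dataset is distributed as NumPy (.npy) arrays. As a quick sanity check you can load and inspect them, for example as in the sketch below; the file names are assumptions based on the mhsma-dataset documentation, so check that repository's README for the exact names and splits.

    # Quick sanity check of the downloaded MHSMA arrays.
    # NOTE: the file names below are illustrative assumptions; consult the
    # mhsma-dataset README for the exact .npy file names and splits.
    import numpy as np

    x_train = np.load("mhsma-dataset/mhsma/x_128_train.npy")       # sperm-head image crops
    y_train = np.load("mhsma-dataset/mhsma/y_acrosome_train.npy")  # binary acrosome labels

    print("images:", x_train.shape, "labels:", y_train.shape)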

Usage

First, set up the configuration file: open dtl.txt or dmtl.txt and adjust the settings you want. These files contain the parameters of the model you are going to train.

  • dtl.txt has a single line containing the parameters used to train a DTL model.

  • dmtl.txt contains two lines: the parameters of stage 1 are kept in the first line of the file and the parameters of stage 2 in the second line.
    Some parameters are arrays of three values, one per label; they follow this order: [Acrosome, Vacuole, Head].

  • To train a DTL model, use the following command and arguments:

python train.py -t dtl [-e epochs] [-label label] [-model model] [-w file]

Arguments:

Argument        Description
-t              type of network (dtl or dmtl)
-e              number of epochs
-label          label (a, v, or h)
-model          pre-trained model
-w              name of the file in which the best weights are saved
--phase         stage of DMTL training to run (1 or 2)
--second_model  base model for the second stage of DMTL
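For example, a DTL training run could look like the following; the epoch count, label, model, and weight-file name are illustrative placeholders, not values from the paper:

python train.py -t dtl -e 100 -label v -model vgg_16 -w best_dtl.h5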

1. Train

  • To choose a pre-trained model, pass one of the following values to -model:

    model argument  Description
    vgg_19          VGG 19
    vgg_16          VGG 16
    resnet_50       ResNet 50
    resnet_101      ResNet 101
    resnet_502      ResNet 502

  • To train a DMTL model, use the following command and arguments:

python train.py -t dmtl [--phase phase] [-e epochs] [-label label] [-model model] [-w file]

You can also use your own pre-trained model by passing the path to your model file instead of one of the names listed in the table above.

Example:
python train.py -t dmtl --phase 1 -e 100 -label a -model C:\model.h5 -w w.h5

2. K-Fold

  • To perform k-fold cross-validation on a model, use the "-k_fold True" argument:
python train.py -k_fold True [-t type] [-e epochs] [-label label] [-model model] [-w file]
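For example, with illustrative argument values:

python train.py -k_fold True -t dtl -e 100 -label h -model resnet_50 -w kfold_best.h5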

3. Threshold Search

  • To find a good decision threshold for your model, run:
python threshold.py [-t type] [-addr model address] [-l label]
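The general idea behind such a threshold search is to sweep candidate cutoffs over the model's validation predictions and keep the one with the best score. The sketch below is only an illustration of that idea using scikit-learn's f1_score, not the repository's actual threshold.py implementation:

    # Illustrative sketch of a decision-threshold search on validation predictions.
    # This is NOT the repository's threshold.py; it only shows the general idea:
    # sweep candidate thresholds and keep the one with the best validation F1.
    import numpy as np
    from sklearn.metrics import f1_score

    def best_threshold(y_true, y_prob, candidates=np.linspace(0.05, 0.95, 19)):
        scores = [f1_score(y_true, (y_prob >= t).astype(int)) for t in candidates]
        best = int(np.argmax(scores))
        return candidates[best], scores[best]

    # Example usage (hypothetical variables):
    # threshold, score = best_threshold(y_val, model.predict(x_val).ravel())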

Models

The CNN models that were introduced and evaluated in our research paper can be found in the v1.0 release of this repository.

Comments
  • A possible typo (bug)

    Very interesting idea, and compliments!

    In LoadData.py, starting from line 150:

        if phase == 'search':
            return {
                "x_train": x_train_128,
                "y_train": y_train,
                "x_train_128": x_train_128,
                'x_val_128': x_valid_128,
                "x_val": x_valid_128,
                "y_val": y_valid,
                "x_test": x_test_128,
                "y_test": y_test
            }

    Here, I think the first key-value pair should probably be "x_train": x_train instead of "x_train": x_train_128, which causes a shape-mismatch error during fit.

    Opened by captainst