Effect of Deep Transfer and Multi task Learning on Sperm Abnormality Detection

License: GPL v3

Introduction

This repository contains the code and models for the paper "Effect of Deep Transfer and Multi task Learning on Sperm Abnormality Detection": https://doi.org/10.1016/j.compbiomed.2020.104121

Dataset

First, download the MHSMA dataset:

git clone https://github.com/soroushj/mhsma-dataset.git
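
After cloning, the dataset is distributed as NumPy (.npy) arrays. The snippet below is a minimal sketch for inspecting it; the file names are assumptions, so check the mhsma-dataset repository's README for the exact layout.

    # Minimal sketch for inspecting the MHSMA data after cloning.
    # The file names below are assumptions; see the mhsma-dataset README for the exact layout.
    import numpy as np

    x_train = np.load("mhsma-dataset/mhsma/x_128_train.npy")          # assumed name: 128x128 sperm image crops
    y_acrosome = np.load("mhsma-dataset/mhsma/y_acrosome_train.npy")  # assumed name: acrosome labels
    print(x_train.shape, y_acrosome.shape)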

Usage

First of all, set up the configuration file: open dtl.txt or dmtl.txt and adjust the settings you want. These files contain the parameters of the model you are going to train.

  • dtl.txt has a single line containing the parameters used to train a DTL model.

  • dmtl.txt contains two lines: the first line holds the parameters of stage 1 and the second line holds the parameters of stage 2.
    Some parameters are arrays of three values, one per label, ordered as [Acrosome, Vacuole, Head].

  • To train a DTL model, use the following command and arguments (an example invocation follows the argument table below):

python train.py -t dtl [-e epochs] [-label label] [-model model] [-w file]

Arguments:

Argument         Description
-t               network type (dtl or dmtl)
-e               number of epochs
-label           label (a, v, or h)
-model           pre-trained model
-w               name of the best-weights file
--phase          stage of DMTL to train (1 or 2)
--second_model   base model for the second stage of DMTL
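
For instance, an illustrative DTL run might look like the following (the values are placeholders; pick the epochs, label, weight-file name, and any of the pre-trained models listed in the next section, or a path to your own model):

python train.py -t dtl -e 100 -label v -model vgg_19 -w best_dtl.h5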

1. Train

  • To choose a pre-trained model, you can use one of the following values for the -model argument:

    -model argument   Description
    vgg_19            VGG 19
    vgg_16            VGG 16
    resnet_50         ResNet 50
    resnet_101        ResNet 101
    resnet_502        ResNet 502
  • To train a DMTL model, use the following command and arguments:
python train.py -t dmtl [--phase phase] [-e epochs] [-label label] [-model model] [-w file]

You can also use your own pre-trained model by passing its file path to -model instead of one of the values listed in the table above.

Example:
python train.py -t dmtl --phase 1 -e 100 -label a -model C:\model.h5 -w w.h5
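
Stage 2 then builds on the model produced in stage 1. As an illustrative (not prescriptive) follow-up to the example above, assuming the stage-1 best weights saved in w.h5 are passed via --second_model (check train.py for the exact arguments stage 2 expects):

python train.py -t dmtl --phase 2 -e 100 -label a --second_model w.h5 -w w2.h5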

2. K-Fold

  • To perform k-fold cross-validation on a model, use the "-k_fold True" argument:
python train.py -k_fold True [-t type] [-e epochs] [-label label] [-model model] [-w file]
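
For example, an illustrative k-fold run on a DTL model (placeholder values; adjust to your experiment):

python train.py -k_fold True -t dtl -e 50 -label h -model resnet_50 -w kfold_best.h5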

3. Threshold Search

  • To find a good decision threshold for your model, use the following command:
python threshold.py [-t type] [-addr model address] [-l label]
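
For example, using the weights file produced by the stage-1 example above (w.h5 is just an illustrative name; point -addr at the model file you actually want to evaluate):

python threshold.py -t dmtl -addr w.h5 -l a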

Models

The CNN models that were introduced and evaluated in our research paper can be found in the v1.0 release of this repository.
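
If you want to load one of the released models for inference, a minimal sketch is shown below. It assumes the models are saved in Keras .h5 format (as the .h5 weight files used in the examples above suggest); the file name is a placeholder.

    # Minimal sketch, assuming the released models are Keras .h5 files.
    # "dtl_model.h5" is a placeholder; use the actual file from the v1.0 release.
    from tensorflow.keras.models import load_model

    model = load_model("dtl_model.h5")
    model.summary()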

Comments

  • A possible typo (bug), opened by captainst:

    Very interesting idea, and compliments!

    In LoadData.py, starting from line 150:

        if phase == 'search':
            return {
                "x_train": x_train_128,
                "y_train": y_train,
                "x_train_128": x_train_128,
                'x_val_128': x_valid_128,
                "x_val": x_valid_128,
                "y_val": y_valid,
                "x_test": x_test_128,
                "y_test": y_test
                }

    Here, the first key-value pair should probably be "x_train": x_train instead of "x_train": x_train_128, which causes a shape-mismatch error during fit.