Implementation of our paper "DMT: Dynamic Mutual Training for Semi-Supervised Learning"

Overview

DMT: Dynamic Mutual Training for Semi-Supervised Learning

This repository contains the code for our paper DMT: Dynamic Mutual Training for Semi-Supervised Learning, a concise and effective method for semi-supervised semantic segmentation & image classification.

Some might know it by its previous version, DST-CBC (Semi-Supervised Semantic Segmentation via Dynamic Self-Training and Class-Balanced Curriculum). If you want the old code, check out the dst-cbc branch.

Also, if you use an older PyTorch version (<1.6.0), or want the exact environment that produced the paper's results, refer to 53853f6.

News

2021.6.7

Multi-GPU training support (based on Accelerate) has been added, and the whole project has been upgraded to PyTorch 1.6. Thanks to the code & testing by @jinhuan-hit, and discussions with @lorenmt and @TiankaiHang.
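
For reference, the snippet below is a minimal sketch (not code from this repository) of how a PyTorch training loop is typically wrapped with Accelerate so the same script runs on one or several GPUs; the model, data and hyperparameters are placeholders:

import torch
from accelerate import Accelerator

# Minimal Accelerate sketch with placeholder model/data (not DMT code).
accelerator = Accelerator()
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
dataset = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(8)]
loader = torch.utils.data.DataLoader(dataset, batch_size=None)

# prepare() moves everything to the right device(s) and wraps the model for
# distributed data parallel when launched via `accelerate launch`.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for inputs, labels in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()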

2021.2.10

A slight backbone architecture difference in the segmentation task has been identified and is described in Acknowledgements.

2021.1.1

DMT is released. Happy new year! 😉

2020.12.7

The bug fix for DST-CBC (not fully tested) is released on the scale branch.

2020.11.9

Stay tuned for Dynamic Mutual Training (DMT), an updated version of DST-CBC, which has overall better and stabler performance and will be released later.

Also, thanks to @lorenmt, a data augmentation bug fix will be released along with the next version; it boosts overall PASCAL VOC performance by 1~2%, and Cityscapes could also improve. However, the gap to the Oracle will probably remain similar.

Setup

First, you'll need a CUDA 10, Python 3 environment (preferably on Linux).

1. Set up PyTorch & TorchVision:

pip install torch==1.6.0 torchvision==0.7.0

2. Install the other Python packages you may require:

pip install packaging accelerate future matplotlib tensorboard tqdm
pip install git+https://github.com/ildoonet/pytorch-randaugment

3. Download the code and prepare the scripts:

git clone https://github.com/voldemortX/DST-CBC.git
cd DST-CBC
chmod 777 segmentation/*.sh
chmod 777 classification/*.sh
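
Optionally, you can sanity-check the environment before running anything (this one-liner is not part of the repository's scripts):

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"

It should print 1.6.0, 0.7.0 and True on a correctly configured machine.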

Getting started

Get started with SEGMENTATION.md for semantic segmentation.

Get started with CLASSIFICATION.md for image classification.

Understand the code

We refer interested readers to this repository's wiki. It is not updated for DMT yet.

Notes

It's best to use a GPU with tensor cores when running our code, since computation is much faster with mixed precision. For instance, the RTX 2080 Ti (which is what we used), Tesla V100, or other RTX 20/30 series cards.
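
For reference, the snippet below is a minimal sketch of the native PyTorch (>= 1.6) automatic mixed precision pattern that benefits from tensor cores; it is not code from this repository, and the model and tensors are placeholders:

import torch

# Hypothetical AMP example with a placeholder model and batch (not DMT code).
model = torch.nn.Conv2d(3, 19, 3, padding=1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 underflow

images = torch.randn(2, 3, 256, 256, device='cuda')
targets = torch.randint(0, 19, (2, 256, 256), device='cuda')

optimizer.zero_grad()
with torch.cuda.amp.autocast():  # forward pass runs in mixed precision
    loss = torch.nn.functional.cross_entropy(model(images), targets)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()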

Our implementation is fast and memory efficient. A whole run (training 2 models with DMT on PASCAL VOC 2012) takes about 8 hours on a single RTX 2080 Ti using up to 6GB of graphics memory, including on-the-fly evaluations and training baselines. The Cityscapes experiments are even faster.

Contact

Issues and PRs are most welcome.

If you have any questions that are not answerable with Google, feel free to contact us through [email protected].

Citation

@article{feng2020dmt,
  title={DMT: Dynamic Mutual Training for Semi-Supervised Learning},
  author={Feng, Zhengyang and Zhou, Qianyu and Gu, Qiqi and Tan, Xin and Cheng, Guangliang and Lu, Xuequan and Shi, Jianping and Ma, Lizhuang},
  journal={arXiv preprint arXiv:2004.08514},
  year={2020}
}

Acknowledgements

The DeepLabV2 network architecture and COCO pre-trained weights are faithfully re-implemented from AdvSemiSeg. The only difference is that we use the so-called ResNetV1.5 implementation for the ResNet-101 backbone (same as torchvision); for the difference between ResNetV1 and V1.5, refer to this issue. However, that difference is reported to bring only a 0-0.5% gain on ImageNet, and considering we use V1 COCO pre-trained weights that mismatch with V1.5, the overall performance should remain similar to V1. The better fully-supervised performance mainly comes from a better training schedule. Besides, we base comparisons on relative performance to the Oracle, not absolute performance.
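
For illustration only (this inspects torchvision internals, not this repository's code), the V1/V1.5 difference is where the stride-2 downsampling sits in each bottleneck block; torchvision's V1.5 strides in the 3x3 convolution instead of the first 1x1 convolution:

import torchvision

# Check where the stride-2 convolution sits in torchvision's ResNet-101 (the V1.5 variant).
resnet = torchvision.models.resnet101(pretrained=False)
block = resnet.layer2[0]  # first (downsampling) bottleneck of stage 2
print(block.conv1.stride, block.conv2.stride)  # V1.5 prints (1, 1) (2, 2); V1 would stride in conv1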

The CBC part of the older version DST-CBC is adapted from CRST.

The overall implementation is based on TorchVision and PyTorch.

The people who've helped to make the method & code better: lorenmt, jinhuan-hit, TiankaiHang, etc.

Comments
  • miou problem in segmentation

    Thanks for sharing this great work! I have a question. When I train on Cityscapes using 1/8 labeled data, the two models (initialized from COCO and ImageNet) reach nearly 59 mIoU on the val set, close to the 59.65 presented in the paper. However, after 5 iterations, the metric drops to 53 (COCO) and 22 (ImageNet). I checked the pseudo labels from the 59-mIoU model and they are not particularly good. I don't know if that affected the results.

    question possible bug 
    opened by jinhuan-hit 26
  • Visualize the final experimental results

    Hello, your paper and code are excellent; thank you for your efforts. I have a question: I ran experiments on my own data and have already obtained results. How can I use these weights to test on a test set? In addition, I used the dmt-voc-20-1__p5--i weights and tested with the trained model, but the results are very poor, and I am not sure whether my testing procedure is correct.

    question 
    opened by JayeShen1996 16
  • Nan values in confusion matrix

    Hello, I'm using a segmentation dataset with two classes and grayscale images. I'm duplicating the image channels with: elif pic.mode == 'L': img = torch.from_numpy(np.array(pic, np.uint8, copy=False)).expand([3, 224, 224]).reshape(-1). While training the baseline without DMT, I only get accuracy values for one of the classes, with nan values for the other: average row correct: ['99.52', 'nan'].

    Do you have any idea what i'd might have done wrong/missed? Thanks in advance!

    question 
    opened by dervirvel 10
  • Question about label mapping for cityscapes dataset

    When I was using part of your code about cityscapes benchmark, I met up with the error that

    IndexError: Caught IndexError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
        data = fetcher.fetch(index)
      File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "../utils/datasets.py", line 155, in __getitem__
        img1, target1 = self.transforms(img, target)
      File "../utils/transforms.py", line 27, in __call__
        image, target = t(image, target)
      File "../utils/transforms.py", line 216, in __call__
        target = target if type(target) == str else self.label_id_map[target]
    IndexError: index 255 is out of bounds for dimension 0 with size 34

    It seems that LabelMap(label_id_map_city) didn't work correctly. It's my first time using this benchmark, so I don't know how to deal with this problem. Could you please give me some hints?

    question 
    opened by revaeb 6
  • How

    First of all, thank you very much for your previous help. I can now train on my own dataset, but I have another problem. I want to convert the output of the network into a mask file like the given labels, and I'd like to know how to do this; can you help me? Can I apply softmax to the output and then set a threshold to generate the final mask?

    question 
    opened by userhr2333 4
  • About using a better model

    I would like to ask whether you have experimented with a better model, such as DeepLabV3+. Would using a better model bring better accuracy?

    opened by wing212 3
  • What's the meaning of splits?

    Thanks for your hard work!

    I am new to this area. Can you explain the meaning of the splits in generate_splits.py, like setting [2, 4, 8, 20, 29.75] for Cityscapes? I only know that it relates to the ratio of labeled to unlabeled data, and I really don't know why you chose those values. Furthermore, if I want to train on my own data, how can I set this variable according to the ratio of my labeled and unlabeled data?

    Thank you for your help.

    question 
    opened by czb2133 3
  • Sudden drop in accuracy

    Hello, I want to ask why the accuracy suddenly drops; the accuracy I reproduce is much lower than that reported in the paper. I use a single RTX 3090 Ti graphics card for training.

    question 
    opened by wing212 18
  • When I run segmentation code with my own dataset, it occurs the error...

    Hello! When I adapt my dataset to the Cityscapes format, it fails in the model initialization phase. RuntimeError: Error(s) in loading state_dict for DeepLab: size mismatch for classifier.0.convs.0.weight: copying a param with shape torch.Size([19, 2048, 3, 3]) from checkpoint, the shape in current model is torch.Size([4, 2048, 3, 3]).

    My dataset contains only 5% labeled images. The image size is 2048x1024, which is the same as Cityscapes. Could you help me find the problem?

    Thank you very much!

    question 
    opened by grbcwq123 4
  • A warning appears during the running of the program, will this affect the accuracy?

    Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'",)

    The version of PyTorch I installed is 1.2.0, the version of torchvision is 0.4.0, and the version of apex is 0.1.

    question 
    opened by userhr2333 2
  • [Kept for Feedback] Multi-GPU & New models

    Thanks for your nice work and congratulations on your good results!

    I have several questions.

    • Will your model be extended to parallel (distributed data-parallel) training in the future?
    • Why don't you try DeepLabV3+? Would it lead to a better result?

    Best.

    question fixed 
    opened by TiankaiHang 21