CVPR2022 paper "Dense Learning based Semi-Supervised Object Detection"

Overview

Python >=3.8 PyTorch >=1.8.0 mmcv-full >=1.3.10

[CVPR2022] DSL: Dense Learning based Semi-Supervised Object Detection

DSL is the first work to tackle Semi-Supervised Object Detection (SSOD) with an anchor-free detector.

This code is built on mmdetection and is intended for research use only.

Instructions

Install dependencies

pytorch>=1.8.0
cuda 10.2
python>=3.8
mmcv-full 1.3.10

Download ImageNet pre-trained models

Download resnet50_rla_2283.pth from either the Google link or the Baidu link (extraction code: 5lf1) for later DSL training.
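As an optional sanity check (a minimal sketch, not part of the official pipeline), you can verify that the downloaded checkpoint loads correctly before starting training:

import torch

# load the downloaded ImageNet pre-trained backbone on CPU
ckpt = torch.load("resnet50_rla_2283.pth", map_location="cpu")
# some checkpoints wrap the weights inside a 'state_dict' entry
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print("parameter tensors in checkpoint:", len(state_dict))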

Training

For dynamically labeling the unlabeled images, the original COCO and VOC datasets are converted to DSL-style datasets, in which annotations are stored in separate json files so that each image has its own annotation file. In addition, this implementation differs slightly from the original paper: we cleaned the code, merged some data flows to speed up training, applied PatchShuffle to the labeled images as well, and removed MetaNet (also for speed). The final performance is similar to that reported in the paper.

Clone this project & Create data root dir

cd ${project_root_dir}
git clone https://github.com/chenbinghui1/DSL.git
mkdir data
mkdir ori_data

#resulting format
#${project_root_dir}
#      - ori_data
#      - data
#      - DSL
#        - configs
#        - ...

For COCO Partially Labeled Data protocol

1. Download coco dataset and unzip it

mkdir ori_data/coco
cd ori_data/coco

wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/zips/unlabeled2017.zip

unzip annotations_trainval2017.zip -d .
unzip -q train2017.zip -d .
unzip -q val2017.zip -d .
unzip -q unlabeled2017.zip -d .

# resulting format
# ori_data/coco
#   - train2017
#     - xxx.jpg
#   - val2017
#     - xxx.jpg
#   - unlabeled2017
#     - xxx.jpg
#   - annotations
#     - xxx.json
#     - ...

2. Convert coco to semicoco dataset

Use (tools/coco_convert2_semicoco_json.py) to generate the DSL-style COCO data dir, i.e., semicoco/, which matches the code for unlabeled training and pseudo-label updating:

cd ${project_root_dir}/DSL
python3 tools/coco_convert2_semicoco_json.py --input ${project_root_dir}/ori_data/coco --output ${project_root_dir}/data/semicoco

You will obtain the ${project_root_dir}/data/semicoco/ dir.
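As a rough sanity check of the conversion (a sketch; images/full and mmdet_category_info.json are the paths referenced later in this guide, other sub-directory names may differ), you can count the converted images and peek at the generated category file:

import glob
import json
import os

root = "data/semicoco"  # i.e. ${project_root_dir}/data/semicoco
print("converted images:", len(glob.glob(os.path.join(root, "images/full/*.jpg"))))
with open(os.path.join(root, "mmdet_category_info.json")) as f:
    cat_info = json.load(f)
print("category info keys:", list(cat_info)[:5])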

3. Prepare partially labeled data

Use (data_list/coco_semi/prepare_dta.py) to generate the partially labeled data list files. Here we take 10% labeled data as an example:

cd data_list/coco_semi/
python3 prepare_dta.py --percent 10 --root ${project_root_dir}/ori_data/coco --seed 2

You will obtain:
(data_list/coco_semi/semi_supervised/instances_train2017.${seed}@${percent}.json)
(data_list/coco_semi/semi_supervised/instances_train2017.${seed}@${percent}-unlabel.json)
(data_list/coco_semi/semi_supervised/instances_train2017.json)
(data_list/coco_semi/semi_supervised/instances_val2017.json)

The files above are only used as image lists.
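If you want to double-check the split sizes, the list files can be inspected as below (a sketch run from data_list/coco_semi/, assuming the files keep the COCO json layout with an images field):

import json

seed, percent = 2, 10  # the values used in the example above
with open(f"semi_supervised/instances_train2017.{seed}@{percent}.json") as f:
    labeled = json.load(f)
with open(f"semi_supervised/instances_train2017.{seed}@{percent}-unlabel.json") as f:
    unlabeled = json.load(f)
print("labeled images  :", len(labeled["images"]))
print("unlabeled images:", len(unlabeled["images"]))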

4. Train supervised baseline model

Train the base model via (demo/model_train/baseline_coco.sh); the configs are in (configs/fcos_semi/). Before running this script, please change the corresponding file paths in both the script and the config files.

cd ${project_root_dir}/DSL
./demo/model_train/baseline_coco.sh

5. Generate initial pseudo-labels for unlabeled images (1/2)

Generate the initial pseudo-labels for the unlabeled images via (tools/inference_unlabeled_coco_data.sh). Before running, please change the list file path of the unlabeled data in the config file, and the model path in tools/inference_unlabeled_coco_data.sh:

./tools/inference_unlabeled_coco_data.sh

Then you will obtain (workdir_coco/xx/epoch_xxx.pth-unlabeled.bbox.json) which contains the pseudo-labels.
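This bbox.json is a standard COCO-style detection-result file, i.e. a list of per-box records with image_id, bbox, score and category_id. A small sketch (placeholder paths as above) to see how many pseudo-boxes would survive the 0.1 score threshold applied in the next step:

import json

# replace the placeholder path with your actual work dir / epoch
with open("workdir_coco/xx/epoch_xxx.pth-unlabeled.bbox.json") as f:
    dets = json.load(f)
kept = [d for d in dets if d["score"] >= 0.1]
print(f"{len(kept)} / {len(dets)} pseudo-boxes have score >= 0.1")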

6. Generate initial pseudo-labels for unlabeled images (2/2)

Use (tools/generate_unlabel_annos_coco.py) to convert the (epoch_xxx.pth-unlabeled.bbox.json) produced above into DSL-style annotations:

python3 tools/generate_unlabel_annos_coco.py \
          --input_path workdir_coco/xx/epoch_xxx.pth-unlabeled.bbox.json \
          --input_list data_list/coco_semi/semi_supervised/instances_train2017.${seed}@${percent}-unlabeled.json \
          --cat_info ${project_root_dir}/data/semicoco/mmdet_category_info.json \
          --thres 0.1

You will obtain (workdir_coco/xx/epoch_xxx.pth-unlabeled.bbox.json_thres0.1_annos/) dir which contains the DSL-style annotations.
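Since DSL-style datasets keep one annotation file per image, a quick check (a sketch with the same placeholder paths) is to compare the number of generated annotation files with the number of unlabeled images in the list from step 3:

import glob
import json

anno_dir = "workdir_coco/xx/epoch_xxx.pth-unlabeled.bbox.json_thres0.1_annos"  # placeholder path
with open("data_list/coco_semi/semi_supervised/instances_train2017.2@10-unlabel.json") as f:
    unlabeled = json.load(f)
print("annotation files:", len(glob.glob(anno_dir + "/*")))
print("unlabeled images:", len(unlabeled["images"]))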

7. DSL Training

Use (demo/model_train/unlabel_train.sh) to train our semi-supervised algorithm. Before training, please change the corresponding paths in the config file and the shell script:

./demo/model_train/unlabel_train.sh

For COCO Fully Labeled Data protocol

The overall steps are similar to those in the Partially Labeled Data guideline above. The additional step is to download and organize the extra unlabeled data.

1. Organize the new images

Put all the jpg images into the generated DSL-style semicoco data dir, e.g., semicoco/unlabel_images/full/xx.jpg:

cd ${project_root_dir}
cp ori_data/coco/unlabeled2017/* data/semicoco/unlabel_images/full/

2. Download the corresponding files

Download (STAC_JSON.tar) and extract it; then copy (coco/annotations/instances_unlabeled2017.json) to the (data_list/coco_semi/semi_supervised/) dir:

cd ${project_root_dir}/ori_data
wget https://storage.cloud.google.com/gresearch/ssl_detection/STAC_JSON.tar
tar -xf STAC_JSON.tar

# resulting files
# coco/annotations/instances_unlabeled2017.json
# coco/annotations/semi_supervised/instances_unlabeledtrainval20class.json
# voc/VOCdevkit/VOC2007/instances_diff_test.json
# voc/VOCdevkit/VOC2007/instances_diff_trainval.json
# voc/VOCdevkit/VOC2007/instances_test.json
# voc/VOCdevkit/VOC2007/instances_trainval.json
# voc/VOCdevkit/VOC2012/instances_diff_trainval.json
# voc/VOCdevkit/VOC2012/instances_trainval.json

cp coco/annotations/instances_unlabeled2017.json ${project_root_dir}/DSL/data_list/coco_semi/semi_supervised/

3. Train following steps 4-7 of the Partially Labeled Data protocol

Change the corresponding paths before training.

For VOC dataset

1. Download VOC data

Download the VOC datasets into ori_data/ and extract them; you will get (VOCdevkit/):

cd ${project_root_dir}/ori_data
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
tar -xf VOCtrainval_06-Nov-2007.tar
tar -xf VOCtest_06-Nov-2007.tar
tar -xf VOCtrainval_11-May-2012.tar

# resulting format
# ori_data/
#   - VOCdevkit
#     - VOC2007
#       - Annotations
#       - JPEGImages
#       - ...
#     - VOC2012
#       - Annotations
#       - JPEGImages
#       - ...

2. Convert voc to semivoc dataset

Use (tools/voc_convert2_semivoc_json.py) to generate the DSL-style VOC data dir, i.e., semivoc/, which matches the code for unlabeled training and pseudo-label updating:

cd ${project_root_dir}/DSL
python3 tools/voc_convert2_semivoc_json.py --input ${project_root_dir}/ori_data/VOCdevkit --output ${project_root_dir}/data/semivoc

Then use (tools/dataset_converters/pascal_voc.py) to convert the original VOC list files to COCO-style files, for evaluating VOC performance under the COCO 'bbox' metric:

python3 tools/dataset_converters/pascal_voc.py ${project_root_dir}/ori_data/VOCdevkit -o data_list/voc_semi/ --out-format coco

You will obtain COCO-style list files in data_list/voc_semi/. These files are only used as val files; please refer to (configs/fcos_semi/voc/xx.py).
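A quick way to verify the converted files (a sketch assuming the standard COCO json layout written by the converter):

import glob
import json

for path in sorted(glob.glob("data_list/voc_semi/*.json")):
    with open(path) as f:
        d = json.load(f)
    print(path, "-", len(d.get("images", [])), "images,", len(d.get("categories", [])), "categories")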

3. Combine with coco20class images

Copy (instances_unlabeledtrainval20class.json) to the (data_list/voc_semi/) dir, then run (data_list/voc_semi/combine_coco20class_voc12.py) to produce the additional unlabeled set with the 20 COCO classes:

cp ${project_root_dir}/ori_data/coco/annotations/semi_supervised/instances_unlabeledtrainval20class.json data_list/voc_semi/
cd data_list/voc_semi
python3 combine_coco20class_voc12.py \
                --cocojson instances_unlabeledtrainval20class.json \
                --vocjson voc12_trainval.json \
                --cocoimage_path ${project_root_dir}/data/semicoco/images/full \
                --outtxt_path ${project_root_dir}/data/semivoc/unlabel_prepared_annos/Industry/ \
                --outimage_path ${project_root_dir}/data/semivoc/unlabel_images/full
cd ../..

You will obtain the corresponding list file (voc12_trainval_coco20class.json); the corresponding COCO 20-class images will be copied to (${project_root_dir}/data/semivoc/unlabel_images/full/), and the list file (.txt) will be generated at (${project_root_dir}/data/semivoc/unlabel_prepared_annos/Industry/voc12_trainval_coco20class.txt).
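To confirm the combined unlabeled set was written (a sketch using the output paths listed above):

import glob
import os

root = "/path/to/project_root_dir"  # set this to your ${project_root_dir}
txt = os.path.join(root, "data/semivoc/unlabel_prepared_annos/Industry/voc12_trainval_coco20class.txt")
with open(txt) as f:
    print("list entries    :", sum(1 for _ in f))
print("unlabeled images:", len(glob.glob(os.path.join(root, "data/semivoc/unlabel_images/full/*.jpg"))))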

4. Train following steps 4-7 of the Partially Labeled Data protocol

Please change the corresponding paths before training, and refer to configs/fcos_semi/voc/xx.py.

Testing

Please refer to (tools/semi_dist_test.sh).

./tools/semi_dist_test.sh

Acknowledgement
