Robust and Accurate Object Detection via Self-Knowledge Distillation

Overview

Code for the paper "Robust and Accurate Object Detection via Self-Knowledge Distillation"; the proposed self-knowledge-distillation approach is referred to as udfa throughout this repository.

Paper: https://arxiv.org/abs/2111.07239

Environments

  • Python 3.7
  • CUDA 10.1
  • Dependencies installed via prepare_env.sh (see below)

Note: We modify MMCV to support the adversarial training algorithms, so we recommend setting up the environment exactly as follows:

conda create -n udfa python=3.7
conda activate udfa
sh prepare_env.sh
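
A quick sanity check after installation (not part of the original instructions; the exact version strings depend on what prepare_env.sh installs):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import mmcv, mmdet; print(mmcv.__version__, mmdet.__version__)"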

Prepare datasets

  • VOC0712: download it from http://host.robots.ox.ac.uk/pascal/VOC/ and place it under the data directory.

  • COCO2017: download it from https://cocodataset.org/#download and place it under the data directory.

  • The expected dataset structure is shown in the repository's directory-structure figure; a sketch is given below.
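
A minimal sketch of the expected layout, assuming the standard MMDetection conventions for VOC and COCO (defer to the repository's figure if it differs):

    data
    ├── VOCdevkit
    │   ├── VOC2007        # Annotations, ImageSets, JPEGImages
    │   └── VOC2012        # Annotations, ImageSets, JPEGImages
    └── coco
        ├── annotations    # instances_train2017.json, instances_val2017.json
        ├── train2017
        └── val2017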

Train

VOC dataset

  • Train a GFLV2-R34 detector (used as the teacher) on PASCAL VOC 0712:

    python -m torch.distributed.launch --nproc_per_node=4  train.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_voc_std.py 
    cd work_dirs/gflv2_r34_fpn_voc_std
    cp epoch_12.pth ../../weights/gflv2_r34_voc_pre.pth
    
  • Train GFLV2-R34 with udfa on PASCAL VOC 0712 (the resulting checkpoint can be evaluated as sketched after this list):

    python -m torch.distributed.launch --nproc_per_node=4  train.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_voc_kdss.py --load-from weights/gflv2_r34_voc_pre.pth
    
  • Train GFLV2-R34 with udfa and AdvProp on PASCAL VOC 0712:

    python -m torch.distributed.launch --nproc_per_node=4  train.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_voc_kdss1.py --load-from weights/gflv2_r34_voc_pre.pth
    
  • Train GFLV2-R34 with Det-AdvProp on PASCAL VOC 0712:

    python -m torch.distributed.launch --nproc_per_node=4  train.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_voc_mixbn.py --load-from weights/gflv2_r34_voc_pre.pth
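
The checkpoints produced by the runs above can be evaluated in the same way as the pretrained teacher (see the Test section). A hedged example for the udfa-trained model, assuming the final checkpoint is saved as epoch_12.pth under work_dirs/gflv2_r34_fpn_voc_kdss (mirroring the teacher step) and that the training config can also be used for testing:

    python -m torch.distributed.launch --nproc_per_node=4 test.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_voc_kdss.py --checkpoint work_dirs/gflv2_r34_fpn_voc_kdss/epoch_12.pth --num_steps 0 --step_size 2 --eval mAP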
    

COCO dataset

  • Train a GFLV2-R34 detector (used as the teacher) on COCO:

    python -m torch.distributed.launch --nproc_per_node=4  train.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_coco_std.py 
    cd work_dirs/gflv2_r34_fpn_coco_std
    cp epoch_12.pth ../../weights/gflv2_r34_coco_pre.pth
    
  • Train GFLV2-R34 with udfa on COCO (see the note on adjusting the GPU count after this list):

    python -m torch.distributed.launch --nproc_per_node=4  train.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_coco_kdss.py --load-from weights/gflv2_r34_coco_pre.pth
    
  • Train GFLV2-R34 with Det-AdvProp on COCO:

    python -m torch.distributed.launch --nproc_per_node=4  train.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_coco_mixbn.py --load-from weights/gflv2_r34_coco_pre.pth
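
All of the commands above assume 4 GPUs; adjust --nproc_per_node to match your hardware. Note that MMDetection-style configs usually fix the learning rate for a particular total batch size, so you may also need to scale the learning rate when changing the GPU count (an assumption based on MMDetection conventions, not something stated by this repository). For example, with 2 GPUs:

    python -m torch.distributed.launch --nproc_per_node=2 train.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_coco_kdss.py --load-from weights/gflv2_r34_coco_pre.pth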
    

Test

  • Evaluate clean AP or adversarial robustness on the PASCAL VOC 2007 test set (a note on --num_steps follows these commands):

    python -m torch.distributed.launch --nproc_per_node=4 test.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_voc_std.py --checkpoint weights/gflv2_r34_voc_pre.pth --num_steps 0 --step_size 2 --eval mAP
    
  • Evaluate clean AP or adversarial robustness on the COCO 2017 val set:

    python -m torch.distributed.launch --nproc_per_node=4 test.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_coco_std.py --checkpoint weights/gflv2_r34_coco_pre.pth --num_steps 0 --step_size 2 --eval bbox
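
The --num_steps and --step_size flags appear to control the attack used for robustness evaluation, with --num_steps 0 corresponding to clean evaluation (an assumption based on the flag names; check test.py for the exact semantics). A hedged example of an adversarial evaluation on COCO with a 10-step attack:

    python -m torch.distributed.launch --nproc_per_node=4 test.py --launcher pytorch --config configs/gflv2/gflv2_r34_fpn_coco_std.py --checkpoint weights/gflv2_r34_coco_pre.pth --num_steps 10 --step_size 2 --eval bbox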
    

Acknowledgement

Our project is based on ImageCorruptions, MMDetection and MMCV.

Owner
Weipeng Xu