Code for "Domain Adaptive Video Segmentation via Temporal Consistency Regularization" (ICCV 2021)

Overview

Domain Adaptive Video Segmentation via Temporal Consistency Regularization

Paper

Domain Adaptive Video Segmentation via Temporal Consistency Regularization

Dayan Guan, Jiaxing Huang, Aoran Xiao, Shijian Lu
School of Computer Science and Engineering, Nanyang Technological University, Singapore
IEEE/CVF International Conference on Computer Vision (ICCV), 2021.

If you find this code useful for your research, please cite our paper:

@inproceedings{guan2021domain,
  title={Domain adaptive video segmentation via temporal consistency regularization},
  author={Guan, Dayan and Huang, Jiaxing and Xiao, Aoran and Lu, Shijian},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={8053--8064},
  year={2021}
}

Abstract

Video semantic segmentation is an essential task for the analysis and understanding of videos. Recent efforts largely focus on supervised video segmentation by learning from fully annotated data, but the learnt models often experience a clear performance drop when applied to videos of a different domain. This paper presents DA-VSN, a domain adaptive video segmentation network that addresses domain gaps in videos by temporal consistency regularization (TCR) over consecutive frames of target-domain videos. DA-VSN consists of two novel and complementary designs. The first is cross-domain TCR, which guides the predictions of target frames to have similar temporal consistency to that of source frames (learnt from annotated source data) via adversarial learning. The second is intra-domain TCR, which guides unconfident predictions of target frames to have similar temporal consistency to confident predictions of target frames. Extensive experiments demonstrate the superiority of the proposed network, which outperforms multiple baselines consistently by large margins.

Installation

  1. Conda environment:
conda create -n DA-VSN python=3.6
conda activate DA-VSN
conda install -c menpo opencv
pip install torch==1.2.0 torchvision==0.4.0
  2. Clone ADVENT:
git clone https://github.com/valeoai/ADVENT.git
pip install -e ./ADVENT
  3. Clone this repo (a quick environment check follows these steps):
git clone https://github.com/Dayan-Guan/DA-VSN.git
pip install -e ./DA-VSN
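
A quick environment check (an illustrative sketch, not part of the repo; the expected versions follow the pinned installs above):

import torch
import torchvision

print(torch.__version__)          # expected: 1.2.0
print(torchvision.__version__)    # expected: 0.4.0
print(torch.cuda.is_available())  # True if a usable GPU is visible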

Preparation

  1. Datasets (a quick layout check is sketched after this list):
DA-VSN/data/Cityscapes/                       % Cityscapes dataset root
DA-VSN/data/Cityscapes/leftImg8bit_sequence   % leftImg8bit_sequence_trainvaltest
DA-VSN/data/Cityscapes/gtFine                 % gtFine_trainvaltest
DA-VSN/data/Viper/                            % VIPER dataset root
DA-VSN/data/Viper/train/img                   % Modality: Images; Frames: *[0-9]; Sequences: 00-77; Format: jpg
DA-VSN/data/Viper/train/cls                   % Modality: Semantic class labels; Frames: *0; Sequences: 00-77; Format: png
DA-VSN/data/SynthiaSeq/                       % SYNTHIA-Seq dataset root
DA-VSN/data/SynthiaSeq/SEQS-04-DAWN           % SYNTHIA-SEQS-04-DAWN
  2. Pre-trained models: download the pre-trained models and put them in DA-VSN/pretrained_models
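
A small helper (illustrative, not part of the repo) to confirm the layout above before training:

import os

expected = [
    "DA-VSN/data/Cityscapes/leftImg8bit_sequence",
    "DA-VSN/data/Cityscapes/gtFine",
    "DA-VSN/data/Viper/train/img",
    "DA-VSN/data/Viper/train/cls",
    "DA-VSN/data/SynthiaSeq/SEQS-04-DAWN",
    "DA-VSN/pretrained_models",
]
for path in expected:
    print(("ok      " if os.path.isdir(path) else "MISSING ") + path)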

Optical Flow Estimation

  • For quick preparation: download the optical flow estimated from the Cityscapes-Seq validation set here and unzip it in DA-VSN/data:
DA-VSN/data/Cityscapes_val_optical_flow_scale512/  % unzip Cityscapes_val_optical_flow_scale512.zip
  • For full preparation (a minimal inference sketch follows these steps):
  1. Clone flownet2-pytorch:
git clone https://github.com/NVIDIA/flownet2-pytorch.git
  2. Download the pre-trained FlowNet2 weights and put them in flownet2-pytorch/pretrained_models
  3. Use flownet2-pytorch to estimate optical flow on consecutive frames
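
A minimal FlowNet2 inference sketch, adapted from common flownet2-pytorch usage (run from inside the flownet2-pytorch repo after building its custom CUDA ops with install.sh); the frame paths and checkpoint name are assumptions, not the authors' exact pipeline:

import argparse
import cv2
import numpy as np
import torch

from models import FlowNet2              # defined in flownet2-pytorch
from utils.flow_utils import writeFlow   # flownet2-pytorch flow I/O helper

args = argparse.Namespace(fp16=False, rgb_max=255.0)
net = FlowNet2(args).cuda().eval()
ckpt = torch.load("pretrained_models/FlowNet2_checkpoint.pth.tar")
net.load_state_dict(ckpt["state_dict"])

# Two consecutive frames of one sequence (BGR -> RGB); FlowNet2 expects
# spatial dimensions divisible by 64, so resize or crop beforehand if needed.
img1 = cv2.imread("frame_000001.png")[:, :, ::-1]
img2 = cv2.imread("frame_000000.png")[:, :, ::-1]
ims = np.array([img1, img2]).transpose(3, 0, 1, 2)  # (3, 2, H, W)
ims = torch.from_numpy(ims.astype(np.float32)).unsqueeze(0).cuda()

with torch.no_grad():
    flow = net(ims)[0].permute(1, 2, 0).cpu().numpy()  # (H, W, 2)
writeFlow("frame_000001.flo", flow)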

Evaluation on Pretrained Models

  • VIPER → Cityscapes-Seq:
cd DA-VSN/davsn/scripts
python test.py --cfg configs/davsn_viper2city_pretrained.yml
  • SYNTHIA-Seq → Cityscapes-Seq:
python test.py --cfg configs/davsn_syn2city_pretrained.yml

Training and Testing

  • VIPER → Cityscapes-Seq:
cd DA-VSN/davsn/scripts
python train.py --cfg configs/davsn_viper2city.yml
python test.py --cfg configs/davsn_viper2city.yml
  • SYNTHIA-Seq → Cityscapes-Seq:
python train.py --cfg configs/davsn_syn2city.yml
python test.py --cfg configs/davsn_syn2city.yml

Acknowledgements

This codebase borrows heavily from ADVENT and flownet2-pytorch.

Contact

If you have any questions, please contact: [email protected]

Comments
  • Optical flow is not used for propagating

    Hi, author. I have two questions. The first is that I find you didn't use the flow to propagate the previous frame (kf) to the current frame (cf); you just use it as a constraint so that only pixels appearing in both cf and kf are retained, which seems unreasonable. I refined the code using resample2D to warp kf to cf, but the result only improved a little.

    The second question is that I trained DA-VSN 3 times on a 1080Ti and a 2080Ti following the settings you gave, but I only get 46 mIoU, which is 2 points lower than yours.

    opened by EDENpraseHAZARD 5
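
    For reference, the flow-based warping described above (warping kf to cf, as resample2D does) can be sketched with grid_sample; this is a minimal, illustrative version, and the tensor names are not from the DA-VSN codebase:

    import torch
    import torch.nn.functional as F

    def flow_warp(kf, flow):
        # kf: key/previous frame, (N, C, H, W); flow: (N, 2, H, W) in pixels,
        # channel 0 = horizontal (x) displacement, channel 1 = vertical (y).
        n, _, h, w = kf.shape
        xs = torch.arange(w, dtype=kf.dtype, device=kf.device).view(1, 1, w).expand(n, h, w)
        ys = torch.arange(h, dtype=kf.dtype, device=kf.device).view(1, h, 1).expand(n, h, w)
        x = 2.0 * (xs + flow[:, 0]) / max(w - 1, 1) - 1.0  # normalize to [-1, 1]
        y = 2.0 * (ys + flow[:, 1]) / max(h - 1, 1) - 1.0
        grid = torch.stack((x, y), dim=3)  # (N, H, W, 2), as grid_sample expects
        # On torch >= 1.3, pass align_corners=True to match this normalization.
        return F.grid_sample(kf, grid)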
  • Question on Synthia-seq dataset

    Dear authors,

    Thank you for your great work. I have several questions about the SYNTHIA-Seq -> Cityscapes-Seq adaptation. The first is about the scale of the training data: it seems that, compared with the VIPER dataset, SYNTHIA-Seq contains only one labeled video with 850 frames in total. Is that true? The second is that 11 classes are reported in Table 4, but the SYNTHIA-Seq dataloader uses 12 classes, so I'm not sure whether the fence class is considered during adaptation or not. https://github.com/Dayan-Guan/DA-VSN/blob/d110ff70dacec4156a3787eb49e7f2448dfb91a5/davsn/dataset/SynthiaSeq.py#L11

    Thanks in advance for your help!

    opened by xyIsHere 3
  • Details of SYNTHIA-Seq dataset

    Hi author, I have downloaded SYNTHIA-Seq, but I found there are 'Stereo_Left' and 'Stereo_Right' folders, and each contains 'Omni_B', 'Omni_F', 'Omni_L' and 'Omni_R'. I wonder which one is used for training.

    opened by EDENpraseHAZARD 2
  • Could you please provide 'estimated_optical_flow' for training DA-VSN

    Hi @Dayan-Guan , thank you for open-sourcing your work!

    I am trying to follow this work. For training DA-VSN from scratch, the optical flows (for the 3 datasets used in your paper) estimated by FlowNet2 are needed. However, the instructions in your README only cover the evaluation part. I also see from recent issues that you have provided code and more instructions for the training part, but the code does not seem to be complete, so I cannot generate the optical flows with it.

    Could you please provide your generated optical flows for all 3 datasets used in your paper? It would save us time. Or could you please have another look at the provided 'Code_for_optical_flow_estimation', so that it is runnable for generating optical flows on our own?

    Thanks in advance!

    Regards

    opened by ldkong1205 1
  • In train_video_UDA.py, line 251, trg_prob_warp = warp_bilinear(trg_prob, trg_flow_warp): the image flips, but the optical flow does not flip

    Hello! I really enjoy reading your work! At the same time, I encountered a problem when running train_video_UDA.py.

    In line 251, trg_prob_warp = warp_bilinear(trg_prob, trg_flow_warp), the variable trg_prob is the prediction for trg_img_b_wk, and trg_img_b_wk is obtained from trg_img_b by flipping with a certain probability, but trg_flow_warp does not seem to be flipped. Consider this situation: if trg_img_b_wk is flipped while trg_flow_warp is not, then trg_prob_warp and trg_img_d_st do not seem to be consistent, because the image is flipped but the optical flow is not, even though trg_pl in lines 256-258 is flipped.

    opened by zhe-juanz 0
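
    For reference, keeping a horizontal flip consistent between an image and its optical flow requires both mirroring the flow spatially and negating its x-component; a minimal, illustrative sketch (not code from the repo):

    import torch

    def flip_flow_horizontal(flow):
        # flow: (N, 2, H, W); channel 0 = x displacement, channel 1 = y.
        flow = torch.flip(flow, dims=[3]).clone()  # mirror the flow field
        flow[:, 0] = -flow[:, 0]                   # reverse horizontal displacement
        return flow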
  • Some questions about data loading

    Hi, this is a very enlightening work! @xing0047 @Dayan-Guan I want to ask a question.

    When I use ./TPS/tps/scripts/train.py to read SynthiaSeq or ViperSeq data and debug the code, I find the following phenomena:

    I tried to print some variables of __getitem__().

    When shuffle of source_loader = data.DataLoader() is set to False, and batch_size=cfg.TRAIN.BATCH_SIZE_SOURCE is set to 1:

    1. Although batch_size=1, 4 images and their corresponding key frames are loaded at once, instead of 1 image and its previous frame.

    2. At the same time, the 4 loaded images are out of order, e.g. 2-1-3-4 rather than 1-2-3-4, which seems to violate the shuffle setting.

    Could you please kindly clarify this? Thank you very much!

    The print code was attached as a screenshot (not reproduced here).

    The print results are as follows (the order differs on each run):

    ---index--- 1
    ---index--- 0
    ---index--- 2
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000002.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000002.png
    ---index--- 3
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000001.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000001.png
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000003.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000003.png
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000004.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000004.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000003.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000002.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000001.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000000.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000003.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000002.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000001.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000000.png

    opened by zhe-juanz 0
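
    A likely explanation (our assumption, not confirmed in the thread): the loader is created with num_workers > 0, so several worker processes call __getitem__ concurrently and prefetch samples ahead of the batches being consumed; with shuffle=False the batches themselves still arrive in order, only the prints interleave. A minimal reproduction:

    import torch
    from torch.utils.data import Dataset, DataLoader

    class ToyDataset(Dataset):
        def __len__(self):
            return 8

        def __getitem__(self, index):
            print("---index---", index)  # interleaves across worker processes
            return torch.tensor(index)

    if __name__ == "__main__":
        # 4 workers call __getitem__ in parallel and prefetch ahead, so the
        # prints are out of order even though the batches below still arrive
        # as 0, 1, 2, ... when shuffle=False; num_workers=0 restores
        # strictly sequential prints.
        loader = DataLoader(ToyDataset(), batch_size=1, shuffle=False, num_workers=4)
        for batch in loader:
            pass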
  • Regarding Synthia-Seq Dataset

    I really enjoyed reading your work. I have a question regarding the SYNTHIA-Seq dataset. In the paper you mention that you used 8000 synthesized video frames, but on GitHub, SYNTHIA-Seq DAWN contains only 850 images. Can you please clarify this ambiguity? Thank you.

    opened by Ihsan149 0
  • Optical flow for training

    Thanks for your great work! I want to train DA-VSN, but I don't know how to get Estimated_optical_flow_Viper_train or Estimated_optical_flow_Cityscapes-Seq_train. I didn't find details about the optical flow in the README or the paper.

    opened by EDENpraseHAZARD 11