[CVPR 2021] MiVOS - Mask Propagation module. Reproduces STM (and improves on it) with training code :star2:. Semi-supervised video object segmentation evaluation.

Overview

MiVOS (CVPR 2021) - Mask Propagation

Ho Kei Cheng, Yu-Wing Tai, Chi-Keung Tang

[arXiv] [Paper PDF] [Project Page] [Papers with Code]

[Demo GIFs: parkour and bike sequences]

This repo implements an improved version of the Space-Time Memory Network (STM) and is part of the accompanying code of Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion (MiVOS). It can be used as:

  1. A tool for propagating masks across video frames. Results
  2. An integral component for reproducing and/or improving the performance in MiVOS.
  3. A tool that can compute dense correspondences between two frames. Tutorial

Overall structure and capabilities

The MiVOS project is split across three repositories: MiVOS (the main repo), Mask-Propagation (this repo), and Scribble-to-Mask. Together they cover:

  • DAVIS/YouTube semi-supervised evaluation ✔️
  • DAVIS interactive evaluation ✔️
  • User interaction GUI tool ✔️
  • Dense correspondences ✔️
  • Train propagation module ✔️
  • Train S2M (interaction) module ✔️
  • Train fusion module ✔️
  • Generate more synthetic data ✔️

Framework

[Framework overview figure]

Requirements

We used these package versions during development; newer versions of the same packages are likely to work as well. This is not an exhaustive list -- other common Python packages (e.g. Pillow) are expected and not listed.

  • PyTorch 1.7.1
  • torchvision 0.8.2
  • OpenCV 4.2.0
  • progressbar
  • thinspline for training (pip install git+https://github.com/cheind/py-thin-plate-spline)
  • gitpython for training
  • gdown for downloading pretrained models

Refer to the official PyTorch guide for installing PyTorch/torchvision. The rest (except thin spline) can be installed by:

pip install progressbar2 opencv-python gitpython gdown

Main Results

Semi-supervised VOS

FPS is amortized: it is computed as total number of frames / total processing time, irrespective of the number of objects (i.e., multi-object FPS). All times are measured on an RTX 2080 Ti with IO time excluded. Pre-computed results and evaluation outputs (from either local evaluation or the CodaLab output log) are also provided. All evaluations are done at 480p resolution.

(Note: This implementation is not optimal in speed. There are ways to speed it up but we wanted to keep it in its simplest PyTorch form.)
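As a concrete, made-up example of how the amortized metric is computed:

```python
# Hypothetical numbers: amortized multi-object FPS is all frames processed
# divided by total processing time, regardless of how many objects there are.
total_frames = 3455        # frame count across all sequences (illustrative)
total_seconds = 222.9      # total forward time, IO excluded (illustrative)
fps = total_frames / total_seconds   # ~15.5 amortized FPS
```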

Find all the precomputed results here.

DAVIS 2016 val:

Produced using eval_davis_2016.py

| Model | Top-k? | J | F | J&F | FPS | Pre-computed results |
| --- | --- | --- | --- | --- | --- | --- |
| Without BL pretraining | | 87.0 | 89.0 | 88.0 | 15.5 | D16_s02_notop |
| Without BL pretraining | ✔️ | 89.7 | 92.1 | 90.9 | 16.9 | D16_s02 |
| With BL pretraining | | 87.8 | 90.0 | 88.9 | 15.5 | D16_s012_notop |
| With BL pretraining | ✔️ | 89.7 | 92.4 | 91.0 | 16.9 | D16_s012 |

DAVIS 2017 val:

Produced using eval_davis.py

| Model | Top-k? | J | F | J&F | FPS | Pre-computed results |
| --- | --- | --- | --- | --- | --- | --- |
| Without BL pretraining | | 78.8 | 84.2 | 81.5 | 9.75 | D17_s02_notop |
| Without BL pretraining | ✔️ | 80.5 | 85.8 | 83.1 | 11.2 | D17_s02 |
| With BL pretraining | | 81.1 | 86.5 | 83.8 | 9.75 | D17_s012_notop |
| With BL pretraining | ✔️ | 81.7 | 87.4 | 84.5 | 11.2 | D17_s012 |

For YouTubeVOS val and DAVIS test-dev, we also tried the kernelized memory technique (called KM in our code) described in Kernelized Memory Network for Video Object Segmentation. It works nicely with our top-k filtering.
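For intuition, here is a heavily simplified sketch of the kernelized-memory idea: a Gaussian locality prior centred on each memory cell's best-matching query position, which suppresses spatially implausible matches. It is an illustration under assumed shapes, not the exact formulation in this repo or in the KMN paper (kernelized_readout, sim, h and w are illustrative names):

```python
import torch

def kernelized_readout(sim, h, w, sigma=7.0):
    # sim: (M, h*w) float similarities between M memory cells and h*w query pixels.
    best = sim.argmax(dim=1)                           # best query index per memory cell
    by, bx = best // w, best % w                       # (M,) kernel centre coordinates
    qy = torch.arange(h).repeat_interleave(w)          # (h*w,) query y coordinates
    qx = torch.arange(w).repeat(h)                     # (h*w,) query x coordinates
    d2 = (qy[None] - by[:, None]) ** 2 + (qx[None] - bx[:, None]) ** 2
    gauss = torch.exp(-d2.float() / (2 * sigma ** 2))  # (M, h*w) locality prior
    weights = torch.softmax(sim, dim=0) * gauss        # re-weight the memory readout
    return weights / weights.sum(dim=0, keepdim=True).clamp(min=1e-7)
```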

YouTubeVOS val:

Produced using eval_youtube.py

| Model | Kernelized Memory (KM)? | J-Seen | J-Unseen | F-Seen | F-Unseen | Overall Score | Pre-computed results |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Full model with top-k | | 80.6 | 77.3 | 84.7 | 85.5 | 82.0 | YV_val_s012 |
| Full model with top-k | ✔️ | 81.6 | 77.7 | 85.8 | 85.9 | 82.8 | YV_val_s012_km |

DAVIS 2017 test-dev:

Produced using eval_davis.py

| Model | Kernelized Memory (KM)? | J | F | J&F | Pre-computed results |
| --- | --- | --- | --- | --- | --- |
| Full model with top-k | | 72.7 | 80.2 | 76.5 | D17_testdev_s012 |
| Full model with top-k | ✔️ | 74.9 | 82.2 | 78.6 | D17_testdev_s012_km |

Running the evaluations yourself

You can look at the corresponding scripts (eval_davis.py, eval_youtube.py, etc.); their argument descriptions should give you a rough idea of how to use them. For example, if you have downloaded the datasets and pretrained models using our scripts, you only need to specify the output path: python eval_davis.py --output [somewhere] evaluates on the DAVIS 2017 validation set.

Correspondences

The W matrix can be viewed as a dense correspondence (affinity) matrix; this is in fact how we use it in the fusion module. See try_correspondence.py for details. We have included a small GUI there to visualize the correspondences (a point source is used here, but a mask/tensor can be used in general).

Try it yourself: python try_correspondence.py.

[Example correspondence visualizations: three source/target image pairs]
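For intuition, the snippet below sketches how a single point correspondence could be read off such an affinity: compare one source pixel's key against all target keys and softmax over target locations. This is a hypothetical illustration (point_correspondence, key_src and key_tgt are made-up names, not this repo's API):

```python
import torch

def point_correspondence(key_src, key_tgt, y, x, temperature=1.0):
    # key_src, key_tgt: (C, H, W) key features of the source/target frames.
    C, H, W = key_src.shape
    src_vec = key_src[:, y, x]                             # (C,) source point feature
    sim = (key_tgt.flatten(1).T @ src_vec) / temperature   # (H*W,) similarities
    w = torch.softmax(sim, dim=0).view(H, W)               # correspondence heat map
    best = torch.nonzero(w == w.max())[0]                  # most likely (y, x) match
    return best, w
```

Replacing the single source point with a mask (or any per-pixel tensor) turns the same affinity into a dense propagation operator.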

Pretrained models

Here we provide two pretrained models. One is pretrained on static images and then transferred to main training (we call it s02: stage 0 -> stage 2); the other is pretrained on both static images and BL30K before main training (s012). For the s02 model, we train for 300K (instead of 150K) iterations in the main training stage to offset the extra BL30K training that s012 receives; further iterations help very little, if at all. The script download_model.py automatically downloads the s012 model. Put all pretrained models in Mask-Propagation/saves/.

| Model | Google Drive | OneDrive |
| --- | --- | --- |
| s02 | link | link |
| s012 | link | link |

Training

Data preparation

I recommend either softlinking (ln -s) existing data or using the provided download_datasets.py to arrange the datasets in our format (a sketch of the softlinking route follows the layout below). download_datasets.py might download more than you need -- just comment out the parts you don't want. The script does not download BL30K because it is huge (>600GB) and we don't want to fill up your hard disk. See below.

├── BL30K
├── DAVIS
│   ├── 2016
│   │   ├── Annotations
│   │   └── ...
│   └── 2017
│       ├── test-dev
│       │   ├── Annotations
│       │   └── ...
│       └── trainval
│           ├── Annotations
│           └── ...
├── Mask-Propagation
├── static
│   ├── BIG_small
│   └── ...
└── YouTube
    ├── all_frames
    │   └── valid_all_frames
    ├── train
    ├── train_480p
    └── valid
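As mentioned above, here is a minimal sketch of the softlinking route, assuming the datasets already live elsewhere on disk (all source paths below are hypothetical):

```python
import os

# Softlink existing dataset folders into the layout above instead of
# re-downloading; equivalent to `ln -s <src> <dst>` in the shell.
links = {
    '/data/DAVIS': 'DAVIS',
    '/data/YouTubeVOS': 'YouTube',
    '/data/BL30K': 'BL30K',
}
for src, dst in links.items():
    if not os.path.exists(dst):
        os.symlink(src, dst)
```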

BL30K

BL30K is a synthetic dataset rendered using ShapeNet data and Blender. For details, see MiVOS.

You can either use the automatic script download_bl30k.py or download it manually below. Note that each segment is about 115GB in size -- ~700GB in total. You will need ~1TB of free disk space to run the script (including the extraction buffer).

Google Drive is much faster in my experience. Your mileage might vary.

Manual download: [Google Drive] [OneDrive]

Training commands

CUDA_VISIBLE_DEVICES=[a,b] OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port [cccc] --nproc_per_node=2 train.py --id [defg] --stage [h]

Training is implemented with Distributed Data Parallel (DDP) on two 11GB GPUs. Replace a, b with the GPU ids, cccc with an unused port number, defg with a unique experiment identifier, and h with the training stage (0/1/2).
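For reference, here is a rough sketch of the DDP wiring that such a launch command implies inside the training script. This is generic PyTorch boilerplate with a placeholder model, not this repo's actual train.py:

```python
import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# torch.distributed.launch starts one process per GPU and passes --local_rank;
# the master address/port are provided through environment variables.
parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)
args = parser.parse_args()

dist.init_process_group(backend='nccl')        # rendezvous via the launcher's env vars
torch.cuda.set_device(args.local_rank)

net = torch.nn.Conv2d(3, 16, 3).cuda()         # placeholder for the propagation network
model = DDP(net, device_ids=[args.local_rank]) # gradients sync across processes
```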

The model is trained progressively in stages (0: static images; 1: BL30K; 2: YouTubeVOS+DAVIS). After each stage finishes, we start the next one by loading the trained weights.

One concrete example is:

Pre-training on static images: CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 9842 --nproc_per_node=2 train.py --id retrain_s0 --stage 0

Pre-training on the BL30K dataset: CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 9842 --nproc_per_node=2 train.py --id retrain_s01 --load_network [path_to_trained_s0.pth] --stage 1

Main training: CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 9842 --nproc_per_node=2 train.py --id retrain_s012 --load_network [path_to_trained_s01.pth] --stage 2

Details

Files to look at

  • model/network.py - Defines the core network.
  • model/model.py - Training procedure.
  • util/hyper_para.py - Hyperparameters that you can provide by specifying command line arguments.

What are the differences?

While I did start from STM's official evaluation code, the official training code is not available, so many details are missing. I filled the gaps with my own engineering judgment.

  • We both use the ResNet-50 backbone up to layer3, but there are a few minor architectural differences elsewhere (e.g. the decoder and the mask generation in the last layer).
  • This repo does not use the COCO dataset; it uses other static image datasets instead.
  • This repo picks two objects, instead of three, for each training sample.
  • Top-k filtering (proposed by us) is included here; see the sketch after this list.
  • Our raw performance (without BL30K or top-k) is slightly below the original STM model's, but I believe we train with fewer resources.
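As a rough illustration of the top-k idea, here is a minimal, hypothetical sketch (not the repo's exact implementation): only the k largest memory similarities per query position survive the softmax, so noisy low-affinity matches contribute exactly zero weight.

```python
import torch

def topk_softmax(sim, k=50):
    # sim: (M, HW) raw memory-to-query similarities; requires k <= M.
    values, indices = torch.topk(sim, k=k, dim=0)        # top-k along the memory axis
    x_exp = torch.exp(values - values.max(dim=0, keepdim=True).values)
    weights = x_exp / x_exp.sum(dim=0, keepdim=True)     # softmax over the k survivors
    out = torch.zeros_like(sim)
    out.scatter_(0, indices, weights)                    # zero weight everywhere else
    return out
```

This is consistent with the tables above, where enabling top-k improves both J&F and FPS.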

Citation

Please cite our paper if you find this repo useful!

@inproceedings{MiVOS_2021,
  title={Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion},
  author={Cheng, Ho Kei and Tai, Yu-Wing and Tang, Chi-Keung},
  booktitle={CVPR},
  year={2021}
}

Contact: [email protected]

Comments
  • About BL30K

    Hello, I downloaded all six BL30K archives and extracted them all into one directory, but when running the second pre-training stage I get an error that data/dangjisheng/BL30K/a/BL30K/Annotations/kea03423/00020.png cannot be found. Why is a file reported missing? Looking forward to your reply.

    opened by longmalongma 31
  • The server remained unresponsive for a long time when I tried to train your model.

    When I ran this line of code on our server, the server did not respond for a long time. Do you know why?

    CUDA_VISIBLE_DEVICES=0 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 9842 --nproc_per_node=1 train.py --id retrain_s0 --stage 0 --batch_size 4

    opened by longmalongma 20
  • subprocess.CalledProcessError

    Hi, thanks for your great work! When I try to run CUDA_VISIBLE_DEVICES=0 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 9842 --nproc_per_node=2 train.py --id retrain_s0 --stage 0, I get this error; can you help me?

    File "/home/longma/anaconda2/envs/p3torchstm/lib/python3.6/site-packages/torch/distributed/launch.py", line 242, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/longma/anaconda2/envs/p3torchstm/bin/python', '-u', 'train.py', '--local_rank=1', '--id', 'retrain_s0', '--stage', '0']' returned non-zero exit status 1.

    opened by longmalongma 19
  • How to save the feature maps of many memory frames?

    There is a part of your code that I don't understand. Should each memory frame be stored separately, or should the key feature map and the content (value) feature map of the memory frames be concatenated together for saving? Which line saves the memory frames?

    opened by longmalongma 10
  • Pre-training on the BL30K dataset after pre-training on static images

    In the static-image pre-training stage, "single_object" in PropagationNetwork is True, so MaskRGBEncoderSO is used. When I try to load the weights trained in that stage for BL30K pre-training or main training, "single_object" is now False and the model uses MaskRGBEncoder instead, so the weights cannot be loaded. Here is the error:

    Traceback (most recent call last):
      File "train.py", line 68, in <module>
        total_iter = model.load_model(para['load_model'])
      File "/content/Mask-Propagation/model/model.py", line 180, in load_model
        self.PNet.module.load_state_dict(network)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1224, in load_state_dict
        self.__class__.__name__, "\n\t".join(error_msgs)))
    RuntimeError: Error(s) in loading state_dict for PropagationNetwork: size mismatch for mask_rgb_encoder.conv1.weight: copying a param with shape torch.Size([64, 4, 7, 7]) from checkpoint, the shape in current model is torch.Size([64, 5, 7, 7]).

    So can you explain how we can fix it? Thank you so much.

    opened by nero1342 9
  • RuntimeError: Error(s) in loading state_dict for PropagationNetwork

    Hello! I want to train the PropagationNetwork on my personal image dataset, so I use the training command CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 9842 --nproc_per_node=2 train.py --id retrain_s01 --load_network ./saves/propagation_model.pth --stage 0 (based on the pretrained s012 model). It threw a runtime error.

    The training command works fine without the --load_network parameter. Could you give me some suggestions?

    opened by xwhkkk 5
  • Metric results on the test dataset

    After I run eval_davis_2016.py, I only get mask files in the output folder. How can I get metric values such as J and J&F? And how can we evaluate the model on personal datasets to get those metrics after using interactive_gui.py? Thanks for your suggestions.

    opened by xwhkkk 4
  • J&F performance on BL30K

    Hi, I am doing BL30K training for DAVIS 2017 val (including stage 0 and stage 1). I just want to know what J&F I should achieve on DAVIS 2017 val after finishing BL30K training, so that I can check whether my training is correct. I don't think it is included in the README.

    opened by vateye 4
  • How to run two copies of your code at the same time?

    I have made two copies of your code and made small changes in each. When one is being trained, the other cannot be trained. If the two are to be trained at the same time, what parameters need to be changed? One of my computers has four 2080 Tis, so memory is sufficient.

    opened by longmalongma 4
  • Why don't you use top_k and km during the training phase?

    Looking at your code, I was a little confused about why you didn't use top_k and km during the training phase, while they are used in the evaluation phase. Is it bad to use top_k and km in training?

    opened by longmalongma 4
  • RuntimeError: CUDA error: out of memory

    How many GPUs do you need to evaluate on DAVIS and YouTube? I keep getting out-of-memory errors during my tests. I used the model trained on static images directly for VOS training, skipping the BL30K pre-training. Is that OK?

    opened by longmalongma 4