[CVPR 2021] Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion

Overview

Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion (MiVOS)

Ho Kei Cheng, Yu-Wing Tai, Chi-Keung Tang

CVPR 2021

[arXiv] [Paper PDF] [Project Page] [Demo] [Papers with Code]

demo1 demo2 demo3

Credit (left to right): DAVIS 2017, Academy of Historical Fencing, Modern History TV

We manage the project using three repositories, one for each component in the paper title. This is the main repo; see also Mask-Propagation and Scribble-to-Mask.

Overall structure and capabilities

| | MiVOS (this repo) | Mask-Propagation | Scribble-to-Mask |
| --- | :---: | :---: | :---: |
| DAVIS/YouTube semi-supervised evaluation | | ✔️ | |
| DAVIS interactive evaluation | ✔️ | | |
| User interaction GUI tool | ✔️ | | |
| Dense correspondences | | ✔️ | |
| Train propagation module | | ✔️ | |
| Train S2M (interaction) module | | | ✔️ |
| Train fusion module | ✔️ | | |
| Generate more synthetic data | ✔️ | | |

Framework

framework

Requirements

We used these packages/versions in the development of this project. Higher versions of the same packages will likely also work. This is not an exhaustive list -- other common Python packages (e.g. pillow) are expected but not listed.

Refer to the official PyTorch guide for installing PyTorch/torchvision. The rest can be installed by:

pip install PyQt5 davisinteractive progressbar2 opencv-python networkx gitpython gdown Cython

Quick start

  1. Run python download_model.py to get all the required models.
  2. Run python interactive_gui.py --video [path to video] or python interactive_gui.py --images [path to a folder of images]. A video has been prepared for you at example/example.mp4.
  3. If you need to label more than one object, additionally specify --num_objects (see the example below this list).
  4. There are instructions in the GUI. You can also watch the demo videos for some ideas.
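A complete invocation combining the flags above (the clip path and object count are only examples):

python interactive_gui.py --video example/example.mp4 --num_objects 2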

Main Results

DAVIS/YouTube semi-supervised results: see the Mask-Propagation repo.

DAVIS Interactive Track

All results are generated using the unmodified official DAVIS interactive bot without saving masks (--save_mask not specified) and with an RTX 2080Ti. We follow the official protocol.

Precomputed results, with the JSON summary: [Google Drive] [OneDrive]

These results can be reproduced with eval_interactive_davis.py:

| Model | AUC-J&F | J&F @ 60s |
| --- | :---: | :---: |
| Baseline | 86.0 | 86.6 |
| (+) Top-k | 87.2 | 87.8 |
| (+) BL30K pretraining | 87.4 | 88.0 |
| (+) Learnable fusion | 87.6 | 88.2 |
| (+) Difference-aware fusion (full model) | 87.9 | 88.5 |
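For orientation, below is a minimal sketch of the official davisinteractive evaluation loop that eval_interactive_davis.py plugs into. The session arguments mirror those visible in the traceback quoted in the comments section; compute_masks is a hypothetical stand-in for the MiVOS interaction-propagation-fusion pipeline, not a function from this repo.

```python
# A minimal sketch of the official DAVIS interactive protocol loop.
# `compute_masks` is a hypothetical stand-in for the MiVOS pipeline.
from davisinteractive.session import DavisInteractiveSession

with DavisInteractiveSession(davis_root='path/to/DAVIS/2017/trainval',
                             report_save_dir='output',
                             max_nb_interactions=8,
                             max_time=8 * 30) as sess:
    while sess.next():  # one interaction round per iteration
        sequence, scribbles, first_scribble = sess.get_scribbles()
        pred_masks = compute_masks(sequence, scribbles)  # hypothetical
        sess.submit_masks(pred_masks)  # the bot scores and replies with new scribbles
```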

Pretrained models

python download_model.py should get you all the models that you need. (pip install gdown required.)

[OneDrive Mirror]

Training

Data preparation

Datasets should be arranged in the following layout. You can use download_datasets.py (same as the one in Mask-Propagation) to get the DAVIS dataset; manually download and extract fusion_data ([OneDrive]) and BL30K.

├── BL30K
├── DAVIS
│   └── 2017
│       ├── test-dev
│       │   ├── Annotations
│       │   └── ...
│       └── trainval
│           ├── Annotations
│           └── ...
├── fusion_data
└── MiVOS
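A quick sanity check for the layout above (a minimal sketch; adjust the root to wherever you placed the data):

```python
# Verify the expected dataset folders exist before training.
import os

for p in ['BL30K',
          'DAVIS/2017/trainval/Annotations',
          'DAVIS/2017/test-dev/Annotations',
          'fusion_data']:
    print(p, 'OK' if os.path.isdir(p) else 'MISSING')
```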

BL30K

BL30K is a synthetic dataset rendered using Blender with ShapeNet's data. We break the dataset into six segments, each with approximately 5K videos. The videos are organized in a similar format to DAVIS and YouTubeVOS, so dataloaders for those datasets can be used directly. Each video is 160 frames long, and each frame has a resolution of 768×512. There are 3-5 objects per video, and each object follows a random smooth trajectory -- we greedily optimized the trajectories to reduce object intersections (not guaranteed), and occlusions still occur frequently. See generation/blender/generate_yaml.py for details.
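As a concrete illustration (a minimal sketch, assuming BL30K mirrors the standard DAVIS directory structure with per-video JPEGImages/ and Annotations/ subfolders; paths are illustrative):

```python
# A minimal sketch: iterate a DAVIS/YouTubeVOS-style layout such as BL30K.
# Assumes the standard JPEGImages/ and Annotations/ subfolders per video.
import os

root = 'BL30K'
for video in sorted(os.listdir(os.path.join(root, 'JPEGImages'))):
    frames = sorted(os.listdir(os.path.join(root, 'JPEGImages', video)))
    masks = sorted(os.listdir(os.path.join(root, 'Annotations', video)))
    # each BL30K video has 160 frames at 768x512 with 3-5 objects
```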

We found that using roughly half of the data is sufficient to reach full performance (though we still used all of it), while using less than one-sixth of it (~5K videos) is insufficient.

Download

You can either use the automatic script download_bl30k.py or download it manually below. Note that each segment is about 115GB in size -- about 700GB in total. You will need roughly 1TB of free disk space to run the script (including extraction buffer).

Google Drive is much faster in my experience. Your mileage might vary.

Manual download: [Google Drive] [OneDrive]
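If disk space is tight, one workable pattern is to fetch and extract one segment at a time. The sketch below is hedged: it is not the repo's download_bl30k.py, FILE_ID is a placeholder (the real Google Drive IDs live in the official script), and the segments are assumed to be tar archives.

```python
# A hedged sketch: download and extract one BL30K segment at a time to
# limit peak disk usage. FILE_ID is a placeholder, not a real Drive ID.
import os
import tarfile
import gdown

segment = 'BL30K_a.tar'
gdown.download(url='https://drive.google.com/uc?id=FILE_ID', output=segment)
with tarfile.open(segment) as tar:
    tar.extractall('.')   # extracts into ./BL30K/...
os.remove(segment)        # free the ~115GB archive before the next segment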

Generation

  1. Download ShapeNet.
  2. Install Blender (we used 2.82).
  3. Download a set of background and texture images. We used this repo (we specified "non-commercial reuse" in the script); the lists of keywords are provided in generation/blender/*.json.
  4. Generate a list of configuration files (generation/blender/generate_yaml.py).
  5. Run rendering on the configurations (this part is not documented in detail; ask if you have questions).

Fusion data

We use the propagation module to run through some data and obtain real outputs to train the fusion module. See the script generate_fusion.py.
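Conceptually, the generation step looks like the following (a minimal sketch only; PropagationModel-style names, load-sequence helpers, and the propagate API are hypothetical stand-ins, not names from generate_fusion.py):

```python
# A conceptual sketch of fusion-data generation: run the pretrained
# propagation model over sequences and save its raw output masks, so the
# fusion module can later be trained on real propagation outputs.
# All names below are hypothetical stand-ins for the repo's actual code.
import os
import numpy as np

def generate_fusion_data(model, sequences, out_root='fusion_data'):
    for seq in sequences:
        out_dir = os.path.join(out_root, seq.name)
        os.makedirs(out_dir, exist_ok=True)
        # propagate the first-frame mask through the whole sequence
        masks = model.propagate(seq.frames, seq.first_mask)  # hypothetical API
        for i, mask in enumerate(masks):
            np.save(os.path.join(out_dir, f'{i:05d}.npy'), mask)
```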

Alternatively, you can download the pre-generated fusion data linked in the data preparation section above.

Training commands

These commands are to train the fusion module only.

CUDA_VISIBLE_DEVICES=[a,b] OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port [cccc] --nproc_per_node=2 train.py --id [defg] --stage [h]

We implemented training with Distributed Data Parallel (DDP) on two 11GB GPUs. Replace [a,b] with the GPU ids, [cccc] with an unused port number, [defg] with a unique experiment identifier, and [h] with the training stage (0/1).

The model is trained progressively with different stages (0: BL30K; 1: DAVIS). After each stage finishes, we start the next stage by loading the trained weight. A pretrained propagation model is required to train the fusion module.

One concrete example is:

Pre-training on the BL30K dataset: CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 7550 --nproc_per_node=2 train.py --load_prop saves/propagation_model.pth --stage 0 --id retrain_s0

Main training: CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 7550 --nproc_per_node=2 train.py --load_prop saves/propagation_model.pth --stage 1 --id retrain_s012 --load_network [path_to_trained_s0.pth]

Credit

f-BRS: https://github.com/saic-vul/fbrs_interactive_segmentation

ivs-demo: https://github.com/seoungwugoh/ivs-demo

deeplab: https://github.com/VainF/DeepLabV3Plus-Pytorch

STM: https://github.com/seoungwugoh/STM

BlenderProc: https://github.com/DLR-RM/BlenderProc

Citation

Please cite our paper if you find this repo useful!

@inproceedings{MiVOS_2021,
  title={Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion},
  author={Cheng, Ho Kei and Tai, Yu-Wing and Tang, Chi-Keung},
  booktitle={CVPR},
  year={2021}
}

Contact: [email protected]

Comments
  • Some problems when training Fusion

    Hello, I encountered some problems when retraining the fusion model. Some key parameter guidelines for training fusion are not given in the code repository. Can you provide them? Specifically: (1) generate_fusion.py: the parameter "separation" is not documented.

    Can you provide the relevant parameter descriptions for fusion training and the commands to run, so that I can reproduce the results of your paper?

    Also, when I try to train (python train.py), I hit a code error in fusion_dataset.py: (1) is there a mistake in how self.vid_to_instance is assigned? It raises an error at: self.videos = [v for v in self.videos if v in self.vid_to_instance] (line 60 in fusion_dataset.py)

    opened by nazimii 12
  • Process killed

    I tried MiVOS + STCN on a 1.5-minute 4K video that was downsampled to 480p, and the program crashed.

    What are the steps to reformat/resample a 4K video to make it work for this tool?

    Also, can this tool run on multiple GPUs?

    opened by zdhernandez 11
  • Fine-tune guidance

    Hi, I really loved the work. I'm trying to fine-tune the downloaded models (from download_model.py) on another domain. I was wondering if you could advise where to put the data and which command to run for training.

    Thank you

    opened by be-redAsmara 8
  • RuntimeError: "slow_conv_dilated<>" not implemented for 'BFloat16' (example.mp4)

    Hello! I followed the Quick start instructions with these settings: python interactive_gui.py --video .\example\example.mp4. As I don't have a GPU, I changed the map location to 'cpu'. When I select the "click" radio button and click on the object to create the mask, a runtime error is thrown. Could you give me some suggestions? Looking forward to your reply.

    opened by xwhkkk 6
  • --images mem_profile 2 | RuntimeError: All input tensors must be on the same device. Received cpu and cuda:0

    To replicate:

    • create a folder with only one image
    • run with these settings: python interactive_gui.py --mem_profile 2 --images ./example/test_folder/
    • select the "click" radio button
    • click on the image to create a mask
    • select the "scribble" radio button
    • "scribble" an area in the picture
    • a runtime error is thrown
    opened by zdhernandez 5
  • Overlay and mask files not equal in size to the original input image

    @hkchengrex With one image larger than 1K resolution in a folder, using the command: python interactive_gui.py --mem_profile 2 --images ./example/test_folder/

    • clicking on an object to produce the mask
    • clicking "save" to save the overlay and masks
    • both overlay and mask files are reduced to a fixed resolution of width 480px, height 640px

    Q. Can we keep the size of the output files equal to the input size of the original image? Q. Can we add a flag to choose between the current behavior and preserving the resolution of the input image?

    opened by zdhernandez 4
  • Getting "ValueError: Davis root folder must be named "DAVIS"" when I try to run eval_interactive_davis.py

    Traceback (most recent call last):
      File "/home/bereket/Desktop/IRCAD-Data/MiVOS/MiVOS-MiVOS-STCN/eval_interactive_davis.py", line 76, in <module>
        with DavisInteractiveSession(davis_root=davis_path+'/trainval', report_save_dir='../output', max_nb_interactions=8, max_time=8*30) as sess:
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/session/session.py", line 89, in __enter__
        samples, max_t, max_i = self.connector.start_session(
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/connector/local.py", line 29, in start_session
        self.service = EvaluationService(davis_root=davis_root)
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/evaluation/service.py", line 27, in __init__
        self.davis = Davis(davis_root=davis_root)
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/dataset/davis.py", line 93, in __init__
        raise ValueError('Davis root folder must be named "DAVIS"')
    ValueError: Davis root folder must be named "DAVIS"

    opened by be-redAsmara 4
  • Processing long videos with high resolution

    Hello! Thank you for the amazing framework!

    I have an issue when processing a long video at high resolution: I run out of GPU memory. As I understand it, MiVOS tries to load all images directly onto the GPU, so it cannot handle videos that are too long or too high-resolution. Is there a way to fix this issue? Maybe modify the code to work with data chunks?

    Thank you in advance!

    opened by devidlatkin 4
  • Has anyone met the following problem when running "interactive_gui.py"?

    Traceback (most recent call last):
      File "interactive_gui.py", line 23, in <module>
        from PyQt5.QtWidgets import (QWidget, QApplication, QMainWindow, QComboBox, QGridLayout,
    ImportError: /usr/lib/x86_64-linux-gnu/libQt5Core.so.5: version `Qt_5.15' not found (required by /home/fg/anaconda3/envs/MiVOS/lib/python3.7/site-packages/PyQt5/QtWidgets.abi3.so)

    opened by Starboy-at-earth 4
  • Static dataset in download_dataset.py

    I note that there is a static dataset in download_dataset.py. Where is this static dataset used?

    Also, the readme says you use BL30K to train the fusion model, and BL30K is very large (600GB). Do you really use the full dataset to pretrain the fusion model?

    opened by nazimii 3
  • Temporal Information

    Hi, I am interested in your project and would like to go into detail on an aspect related to temporal information. Are you training your model on video datasets? Are you getting temporal information from the dataset, or has your model been trained on single images considering only spatial information?

    Thank you so much. Best, Francesca

    opened by FrancescaCi 3
  • CPU profile 2 throws CUDA out of memory for one image with multiple objects when the propagate button is clicked

    @hkchengrex To replicate:

    • load only one image (3024×4032) in the folder ./example/test_folder/
    • run command: python interactive_gui.py --mem_profile 2 --images ./example/test_folder/ --resolution -1 --num_objects 4
    • click on one object to create the overlay of the first object (red)
    • press num key 2 and click a different object (to produce an overlay of a different color)
    • press num key 3 and click a different object (to produce an overlay of a different color)
    • press num key 4 and click a different object (to produce an overlay of a different color)
    • click "propagate"; an error is thrown

    Even though I am only using one image, clicking "Save" does what it is supposed to do (save overlay and mask). But clicking "Propagate" should not throw a CUDA error when --mem_profile is set to 2, right? It should not have used the GPU.

    opened by zdhernandez 7