Background Matting: The World is Your Green Screen

Overview

By Soumyadip Sengupta, Vivek Jayaram, Brian Curless, Steve Seitz, and Ira Kemelmacher-Shlizerman

This paper will be presented at IEEE CVPR 2020.

Project Page

Go to the project page for additional details and results.

Paper (Arxiv)

Blog Post

Background Matting v2.0

We recently released a brand new background matting project with much better quality and real-time performance (30fps at 4K and 60fps at FHD)! You can now use it with Zoom. We tested it on a Linux machine with a GPU.

Check out the code!

Project members

Acknowledgement: Andrey Ryabtsev, University of Washington

License

This work is licensed under the Creative Commons Attribution NonCommercial ShareAlike 4.0 License.

Updates

April 21, 2020

April 20, 2020

April 9, 2020

  • Issues:
    • Updated the alignment function in the pre-processing code. The Python version uses AKAZE features (SIFT and SURF are not available with OpenCV 3); the MATLAB version, also provided, uses SURF features.
  • New features:

April 8, 2020

  • Issues:
    • Turned off adjustExposure() for bias-gain correction in test_pre_process.py. (A bug was found and needs to be fixed.)
    • Incorporated an 'uncropping' operation in test_background-matting_image.py. (The output will have the same resolution and aspect ratio as the input.)

Getting Started

Clone repository:

git clone https://github.com/senguptaumd/Background-Matting.git

Please use Python 3. Create an Anaconda environment and install the dependencies. Our code is tested with PyTorch 1.1.0 and TensorFlow 1.14 with CUDA 10.0.

conda create --name back-matting python=3.6
conda activate back-matting

Make sure CUDA 10.0 is your default CUDA. If CUDA 10.0 is installed in /usr/local/cuda-10.0, run:

export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64
export PATH=$PATH:/usr/local/cuda-10.0/bin

Install PyTorch, TensorFlow (needed for segmentation), and the other dependencies:

conda install pytorch=1.1.0 torchvision cudatoolkit=10.0 -c pytorch
pip install tensorflow-gpu==1.14.0
pip install -r requirements.txt

Note: The code is likely to work with other PyTorch and TensorFlow versions compatible with your system CUDA. If you already have a working environment with PyTorch and TensorFlow, only install the dependencies with pip install -r requirements.txt. If our code fails due to version differences, install the specific CUDA, PyTorch, and TensorFlow versions listed above.
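
Optionally, you can sanity-check the environment before moving on. The short snippet below is not part of the repository; it just confirms that both frameworks see the GPU.

# Optional sanity check (not part of the repo): confirm that PyTorch and
# TensorFlow can see the GPU with the versions installed above.
import torch
import tensorflow as tf

print('PyTorch', torch.__version__, 'CUDA available:', torch.cuda.is_available())
print('TensorFlow', tf.__version__, 'GPU available:', tf.test.is_gpu_available())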

Run the inference code on sample images

Data

To perform Background Matting based green-screening, you need to capture:

  • (a) Image with the subject (use _img.png extension)
  • (b) Image of the background without the subject (use _back.png extension)
  • (c) Target background to insert the subject (place in data/background)

Use the sample_data/ folder for testing and prepare your own data based on it. This data was collected with a hand-held camera.

Pre-trained model

Please download the pre-trained models from Google Drive and place the Models/ folder inside Background-Matting/.

Note: the syn-comp-adobe-trainset model was trained on the training set of the Adobe dataset. This is the model used for numerical evaluation on the Adobe dataset.

Pre-processing

  1. Segmentation

Background Matting needs a segmentation mask for the subject. We use the TensorFlow version of DeepLabv3+.

cd Background-Matting/
git clone https://github.com/tensorflow/models.git
cd models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd ../..
python test_segmentation_deeplab.py -i sample_data/input

You can replace DeepLabv3+ with any segmentation network of your choice. Save the segmentation results with the extension _masksDL.png (see the sketch below).
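
If you plug in your own network, this is a minimal sketch of writing its output under the expected file name; my_person_segmenter is a hypothetical placeholder, not a function in this repository.

# Sketch: save a mask from any segmentation network of your choice under the
# _masksDL.png name that the matting code expects. `my_person_segmenter` is a
# placeholder for your own model.
import cv2
import numpy as np

def save_mask(image_path, my_person_segmenter):
    img = cv2.imread(image_path)                    # e.g. an *_img.png file in sample_data/input
    mask = my_person_segmenter(img)                 # HxW array, person=1, background=0
    mask = (mask > 0).astype(np.uint8) * 255        # store as a 0/255 PNG
    cv2.imwrite(image_path.replace('_img.png', '_masksDL.png'), mask)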

  2. Alignment

Skip this step if your data was captured with a fixed camera.

  • For a hand-held camera, we need to align the background with the input image as part of pre-processing. We apply simple homography-based alignment.
  • We ask users to disable the auto-focus and auto-exposure of the camera while capturing the pair of images. This can easily be done on iPhone cameras (tap and hold for a while).

Run python test_pre_process.py -i sample_data/input for pre-processing. It aligns the background image _back.png and adjusts its bias-gain to match the input image _img.png.

We use AKAZE features in the Python code (since SURF and SIFT are unavailable in OpenCV 3) for alignment. We also provide an alternative MATLAB script (test_pre_process.m), which uses SURF features and also provides a way to visualize feature matching and alignment. Bad alignment will produce bad matting output. Bias-gain adjustment is turned off in the Python code due to a bug, but it is present in the MATLAB code. If there are significant exposure changes between the captured image and the captured background, use bias-gain adjustment to account for them.

Feel free to write your own alignment code with your favorite feature detector, feature matcher, and alignment method; a minimal sketch is shown below.
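
For example, here is a minimal sketch of AKAZE + RANSAC homography alignment with OpenCV, along the lines of what test_pre_process.py does (not the exact implementation; the sample file names are illustrative).

# Sketch: homography-based alignment of the captured background to the input
# image using AKAZE features, roughly mirroring the provided pre-processing.
import cv2
import numpy as np

def align_background(back, img):
    # Warp `back` onto the coordinate frame of `img` with a RANSAC homography.
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(cv2.cvtColor(back, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = akaze.detectAndCompute(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(back, H, (img.shape[1], img.shape[0]))

# Illustrative usage (file names are placeholders):
back_aligned = align_background(cv2.imread('sample_data/input/0001_back.png'),
                                cv2.imread('sample_data/input/0001_img.png'))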

Background Matting

python test_background-matting_image.py -m real-hand-held -i sample_data/input/ -o sample_data/output/ -tb sample_data/background/0001.png

For images taken with a fixed camera (on a tripod), choose -m real-fixed-cam for best results. -m syn-comp-adobe uses the model trained only on the synthetic-composite Adobe dataset, without real data (worse performance).

Run the inference code on sample videos

This is almost the same as the image pipeline, with a few small changes.

Data

To perform Background Matting based green-screening, you need to capture:

  • (a) Video with the subject (teaser.mov)
  • (b) Image of the background without the subject (teaser_back.png)
  • (c) Target background to insert the subject (target_back.mov)

We provide sample_video/, captured with a hand-held camera, and sample_video_fixed/, captured with a fixed camera, for testing. Please download the data and place both folders under Background-Matting. Prepare your own data based on them.

Pre-processing

  1. Frame extraction:
cd Background-Matting/sample_video
mkdir input background
ffmpeg -i teaser.mov input/%04d_img.png -hide_banner
ffmpeg -i target_back.mov background/%04d.png -hide_banner

Repeat the same for sample_video_fixed

  2. Segmentation
cd Background-Matting/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd ../..
python test_segmentation_deeplab.py -i sample_video/input

Repeat the same for sample_video_fixed

  3. Alignment

No need to run alignment for sample_video_fixed or for videos captured with a fixed camera.

Run python test_pre_process_video.py -i sample_video/input -v_name sample_video/teaser_back.png for pre-processing. Alternatively, you can use test_pre_process_video.m in MATLAB.

Background Matting

For hand-held videos, like sample_video:

python test_background-matting_image.py -m real-hand-held -i sample_video/input/ -o sample_video/output/ -tb sample_video/background/

For fixed-camera videos, like sample_video_fixed:

python test_background-matting_image.py -m real-fixed-cam -i sample_video_fixed/input/ -o sample_video_fixed/output/ -tb sample_video_fixed/background/ -b sample_video_fixed/teaser_back.png

To obtain the video from the output frames, run:

cd Background-Matting/sample_video
ffmpeg -r 60 -f image2 -i output/%04d_matte.png -vcodec libx264 -crf 15 -s 1280x720 -pix_fmt yuv420p teaser_matte.mp4
ffmpeg -r 60 -f image2 -i output/%04d_compose.png -vcodec libx264 -crf 15 -s 1280x720 -pix_fmt yuv420p teaser_compose.mp4

Repeat the same for sample_video_fixed

Notes on capturing images

For best results capture images following these guidelines:

  • Choose a background that is mostly static; it can be either indoor or outdoor.
  • Avoid casting any shadows of the subject on the background.
    • Place the subject at least a few feet away from the background.
    • If possible, adjust the lighting to avoid strong shadows on the background.
  • Avoid large color coincidences between the subject and the background (e.g., do not wear a white shirt in front of a white wall).
  • Lock AE/AF (Auto-exposure and Auto-focus) of the camera.
  • For hand-held capture, you need to:
    • Allow only small camera motion; keep holding the camera as the subject exits the scene.
    • Avoid backgrounds that contain two perpendicular planes (homography-based alignment will fail), or use a background that is very far away.
    • These restrictions do not apply to images captured with a fixed camera (on a tripod).

Training on synthetic-composite Adobe dataset

Data

  • Download the original Adobe matting dataset: follow the instructions.
  • Separate human images: use test_data_list.txt and train_data_list.txt in Data_adobe to copy only the human subjects from the Adobe dataset. Create the folders fg_train, fg_test, mask_train, and mask_test to hold the foregrounds and alpha mattes for the train and test data separately. (The train/test split is the same as in the original dataset.) You can run the following to accomplish this:
cd Data_adobe
./prepare.sh /path/to/adobe/Combined_Dataset
  • Download background images: download MS-COCO images and place them in bg_train and bg_test.
  • Compose Adobe foregrounds onto COCO for the train and test sets. This saves the composed result as _comp and the background as _back under merged_train and merged_test. It will also create a CSV to be used by the training dataloader. You can pass --workers 8 to use e.g. 8 threads, though it will use only one by default.
python compose.py --fg_path fg_train --mask_path mask_train --bg_path bg_train --out_path merged_train --out_csv Adobe_train_data.csv
python compose.py --fg_path fg_test --mask_path mask_test --bg_path bg_test --out_path merged_test

Training

Change the number of GPUs and the batch size depending on your platform. We trained the model with 512x512 inputs (-res flag).

CUDA_VISIBLE_DEVICES=0,1 python train_adobe.py -n Adobe_train -bs 4 -res 512

Notes:

  • 512x512 is the maximum input resolution we recommend for training.
  • If you decrease the training resolution to 256x256, change to -res 256; we also recommend using fewer residual blocks: -n_blocks1 5 -n_blocks2 2 (see the example below).
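
For example, a 256x256 training run might look like the following (the run name is a placeholder; adjust the batch size to your GPUs):

CUDA_VISIBLE_DEVICES=0,1 python train_adobe.py -n Adobe_train_256 -bs 4 -res 256 -n_blocks1 5 -n_blocks2 2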

Cheers to the unofficial Deep Image Matting repo.

Training on unlabeled real videos

Data

Please download our captured videos. Next, we show how to fine-tune your model on videos captured with a fixed camera. The process is similar for hand-held cameras, except that you will need to align the captured background image to each frame of the video separately (take a hint from test_pre_process.py and use alignImages()).

Data Pre-processing:

  • Extract frames for each video: ffmpeg -i $NAME.mp4 $NAME/%04d_img.png -hide_banner
  • Run segmentation (follow the DeepLabv3+ instructions above): python test_segmentation_deeplab.py -i $NAME
  • Target background for composition: for self-supervised learning we need target backgrounds that have roughly similar lighting to the original videos. Either capture a few videos of indoor/outdoor scenes without humans or use our captured backgrounds in the background folder.
  • Create a .csv file Video_data_train.csv with each row as: $image;$captured_back;$segmentation;$image+20frames;$image+2*20frames;$image+3*20frames;$image+4*20frames;$target_back. The process is automated by prepare_real.py -- take a look inside and change background_path and path before running (a minimal sketch of the row format is shown after this list).
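
The following is a minimal sketch of building such rows; the directory names and frame numbering are illustrative placeholders, and prepare_real.py is the script that actually automates this.

# Sketch: write Video_data_train.csv rows in the format described above.
# Paths here are placeholders; adapt them to your captured data.
import os

frames_dir = 'real_video/0001'                     # frames named 0001_img.png, 0002_img.png, ...
captured_back = 'real_video/0001_back.png'         # aligned captured background
target_backs = sorted(os.listdir('background'))    # target backgrounds for composition

frame_ids = sorted(int(f.split('_')[0]) for f in os.listdir(frames_dir) if f.endswith('_img.png'))
rows = []
for i, fid in enumerate(frame_ids):
    if fid + 80 > frame_ids[-1]:                   # need the +20/+40/+60/+80 neighbours
        break
    rows.append(';'.join([
        f'{frames_dir}/{fid:04d}_img.png',
        captured_back,
        f'{frames_dir}/{fid:04d}_masksDL.png',
        f'{frames_dir}/{fid + 20:04d}_img.png',
        f'{frames_dir}/{fid + 40:04d}_img.png',
        f'{frames_dir}/{fid + 60:04d}_img.png',
        f'{frames_dir}/{fid + 80:04d}_img.png',
        'background/' + target_backs[i % len(target_backs)],
    ]))

with open('Video_data_train.csv', 'w') as f:
    f.write('\n'.join(rows) + '\n')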

Training

Change the number of GPUs and the batch size depending on your platform. We trained the model with 512x512 inputs (-res flag).

CUDA_VISIBLE_DEVICES=0,1 python train_real_fixed.py -n Real_fixed -bs 4 -res 512 -init_model Models/syn-comp-adobe-trainset/net_epoch_64.pth

Dataset

We captured videos with both fixed and hand-held cameras in indoor and outdoor settings. We release this data to encourage future research on improving background matting. The data is released for research purposes only.

Download data

Google Colab

Thanks to Andrey Ryabtsev for creating the Google Colab version for easy inference on images and videos of your choice.

Google Colab

Notes

We are eager to hear how our algorithm works on your images/videos. If the algorithm fails on your data, please feel free to share it with us at [email protected]. This will help us improve our algorithm in future research. Also, feel free to share any cool results.

Citation

If you use this code for your research, please consider citing:

@InProceedings{BMSengupta20,
  title={Background Matting: The World is Your Green Screen},
  author = {Soumyadip Sengupta and Vivek Jayaram and Brian Curless and Steve Seitz and Ira Kemelmacher-Shlizerman},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}

Related Implementations

Microsoft Virtual Stage: Using our background matting technology along with Kinect depth sensing, Microsoft open-sourced this code for virtual staging. Follow this link for details of their technique.

Weights & Biases: A great presentation with detailed discussion and insights on pre-processing and training our model. Also check out Two Minute Papers' take on our work.
