Official code release for "Learned Spatial Representations for Few-shot Talking-Head Synthesis" ICCV 2021


LSR: Learned Spatial Representations for Few-shot Talking-Head Synthesis

Official code release for LSR. For technical details, please refer to:

Learned Spatial Representations for Few-shot Talking-Head Synthesis.
Moustafa Meshry, Saksham Suri, Larry S. Davis, Abhinav Shrivastava
In International Conference on Computer Vision (ICCV), 2021.

Paper | Project page | Video

If you find this code useful, please consider citing:

@inproceedings{meshry2021step,
  title = {Learned Spatial Representations for Few-shot Talking-Head Synthesis},
  author = {Meshry, Moustafa and
          Suri, Saksham and
          Davis, Larry S. and
          Shrivastava, Abhinav},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year = {2021}
}

Environment setup

The code was built using tensorflow 2.2.0, cuda 10.1.243, and cudnn v7.6.5, but should be compatible with more recent tensorflow releases and cuda versions. To set up a virtual environment for the code, follow the instructions below.

  • Create a new conda environment
conda create -n lsr python=3.6
  • Activate the lsr environment
conda activate lsr
  • Set up the prerequisites
pip install -r requirements.txt
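
After installing the requirements, a quick sanity check (a minimal sketch, not part of the release) can confirm that TensorFlow and the GPU are visible from the new environment:

import tensorflow as tf

# Print the installed TensorFlow version (the code was built against 2.2.0)
print(tf.__version__)
# List the GPUs visible to TensorFlow; an empty list means CPU-only execution
print(tf.config.list_physical_devices('GPU'))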

Run a pre-trained model

  • Download our pretrained model and extract it to ./_trained_models/meta_learning.
  • To run inference for a test identity, execute the following command:
python main.py \
    --train_dir=_trained_models/meta_learning \
    --run_mode=infer \
    --K=1 \
    --source_subject_dir=_datasets/sample_fsth_eval_subset_processed/train/id00017/OLguY5ofUrY/combined \
    --driver_subject_dir=_datasets/sample_fsth_eval_subset_processed/test/id00017/OLguY5ofUrY/combined \
    --few_shot_finetuning=false 

where --K specifies the number of few-shot inputs, --few_shot_finetuning specifies whether or not to fine-tune the meta-learned model on the K-shot inputs, and --source_subject_dir and --driver_subject_dir specify the source identity and the driver sequence data respectively. Each output image contains a tuple of 5 images representing the following (concatenated along the width):

  • The input facial landmarks for the target view.
  • The output discrete layout of our model, visualized in RGB.
  • The oracle segmentation map produced by an off-the-shelf segmentation model (i.e. the pseudo ground truth), visualized in RGB.
  • The final output of our model.
  • The ground truth image of the driver subject.

A sample tuple is shown below.

Input landmarks | Output spatial map | Oracle segmentation | Output | Ground truth
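
Since each output image stores the five panels side by side at equal widths, they can be separated with a few lines of Python (a minimal sketch; the output filename below is hypothetical):

from PIL import Image

def split_output_tuple(path, num_panels=5):
    # Each output image concatenates 5 equally sized panels along the width
    img = Image.open(path)
    width, height = img.size
    panel_width = width // num_panels
    return [img.crop((i * panel_width, 0, (i + 1) * panel_width, height))
            for i in range(num_panels)]

# Panels follow the order listed above
landmarks, layout, oracle_seg, prediction, ground_truth = split_output_tuple('output_0000.png')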


Test data and pre-computed outputs

Our model is trained on the train split of the VoxCeleb2 dataset. The data used for evaluation is adopted from the "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" paper (Zakharov et al., 2019), and can be downloaded from the link provided by the authors of the aforementioned paper.

The test data contains 1600 images of 50 test identities (not seen by the model during training). Each identity has 32 input frames + 32 hold-out frames. The K-shot inputs to the model are uniformly sampled from the set of 32 input frames. If subject fine-tuning is turned on, the model is fine-tuned on the K-shot inputs; the 32 hold-out frames are never shown to the fine-tuned model. For more details about the test data, refer to the aforementioned paper (and our paper). To facilitate comparison with our method, we provide a link with our pre-computed outputs on the test subset for K={1, 4, 8, 32} and for both the subject-agnostic (meta-learned) and subject-finetuned models. For more details, please refer to the README file associated with the released outputs. Alternatively, you can run our pre-trained model on your own data or re-train our model by following the instructions for training, inference and dataset preparation.
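
As an illustration of the uniform K-shot sampling described above (a sketch only; the exact index selection used to produce the released outputs may differ):

import numpy as np

def sample_k_shot_indices(k, num_inputs=32):
    # Pick K frame indices spread uniformly over the 32 input frames
    return np.linspace(0, num_inputs - 1, num=k, dtype=int)

print(sample_k_shot_indices(4))   # -> [ 0 10 20 31]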

Dataset pre-processing

The dataset preprocessing has the following steps:

  1. Facial landmark generation
  2. Face parsing
  3. Converting the VoxCeleb2 dataset to tfrecords (for training).

We provide details for each of these steps.

Facial Landmark Generation

The landmark generation script (preprocessing/landmarks/release_landmark.py) takes the following arguments:

  1. data_dir: Path to a directory containing the data to be processed.
  2. output_dir: Path to the output directory where the processed data should be saved.
  3. k: Sampling rate for frames from video (default: 10).
  4. mode: Set to videos or images depending on whether the input data is video files or already extracted frames.

Here are example commands that process the sample data provided with this repository:

Note: Make sure the folders only contain the videos or images that are to be processed.

  • Generate facial landmarks for sample VoxCeleb2 test videos.
python preprocessing/landmarks/release_landmark.py \
    --data_dir=_datasets/sample_test_videos \
    --output_dir=_datasets/sample_test_videos_processed \
    --mode=videos

To process the full dev and test subsets of the VoxCeleb2 dataset, run the above command twice, setting --data_dir to point to the downloaded dev and test splits respectively.
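
For convenience, the two runs can also be scripted from Python; the paths below are placeholders for wherever you stored the downloaded splits:

import subprocess

for split in ['dev', 'test']:
    # Placeholder paths; point --data_dir at the downloaded VoxCeleb2 split
    subprocess.run([
        'python', 'preprocessing/landmarks/release_landmark.py',
        f'--data_dir=_datasets/voxceleb2/{split}',
        f'--output_dir=_datasets/voxceleb2_processed/{split}',
        '--mode=videos',
    ], check=True)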

  • Generate facial landmarks for the train portion of the sample evaluation subset.
python preprocessing/landmarks/release_landmark.py \
    --data_dir=_datasets/sample_fsth_eval_subset/train \
    --output_dir=_datasets/sample_fsth_eval_subset_processed/train \
    --mode=images
  • Generate facial landmarks for the test portion of the sample evaluation subset.
python preprocessing/landmarks/release_landmark.py \
    --data_dir=_datasets/sample_fsth_eval_subset/test \
    --output_dir=_datasets/sample_fsth_eval_subset_processed/test \
    --mode=images

To process the full evaluation subset, download the evaluation subset, and run the above commands on the train and test portions of it.

Facial Parsing

The facial parsing step generates the oracle segmentation maps. It uses the face parser from the CelebAMask-HQ github repository.

To set it up, follow the instructions below, and refer to the CelebAMask-HQ github repository for further guidance.

mkdir third_party
git clone https://github.com/switchablenorms/CelebAMask-HQ.git third_party
cp preprocessing/segmentation/* third_party/face_parsing/.

To process the sample data provided with this repository, run the following commands.

  • Generate oracle segmentations for sample VoxCeleb2 videos.
python -u third_party/face_parsing/generate_oracle_segmentations.py \
    --batch_size=1 \
    --test_image_path=_datasets/sample_test_videos_processed
  • Generate oracle segmentations for the train portion of the sample evaluation subset.
python -u third_party/face_parsing/generate_oracle_segmentations.py \
    --batch_size=1 \
    --test_image_path=_datasets/sample_fsth_eval_subset_processed/train
  • Generate oracle segmentations for the test portion of the sample evaluation subset.
python -u third_party/face_parsing/generate_oracle_segmentations.py \
    --batch_size=1 \
    --test_image_path=_datasets/sample_fsth_eval_subset_processed/test
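
The "visualized in RGB" step mentioned earlier simply maps each segmentation label to a color. A minimal sketch is shown below; the number of classes and the palette are assumptions, not the exact colors used by the face parser:

import numpy as np
from PIL import Image

def colorize_labels(label_map, num_classes=19, seed=0):
    # Map each integer class label to a fixed but arbitrary RGB color
    palette = np.random.RandomState(seed).randint(0, 256, size=(num_classes, 3), dtype=np.uint8)
    return Image.fromarray(palette[label_map])

# Dummy single-channel label map standing in for a parser output
labels = np.zeros((256, 256), dtype=np.int64)
colorize_labels(labels).save('seg_vis.png')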

Converting VoxCeleb2 to tfrecords

To re-train our model, you'll need to export the VoxCeleb2 dataset to the TF-record format. After downloading the VoxCeleb2 dataset and generating the facial landmarks and segmentations for it, run the following command to export them to tfrecords.

python data/export_voxceleb_to_tfrecords.py \
  --dataset_parent_dir=<parent directory of the processed dataset> \
  --output_parent_dir=<output directory for the generated tfrecords> \
  --subset=dev \
  --num_shards=1000

For example, the command to convert the sample data provided with this repository is

python data/export_voxceleb_to_tfrecords.py \
  --dataset_parent_dir=_datasets/sample_fsth_eval_subset_processed \
  --output_parent_dir=_datasets/sample_fsth_eval_subset_processed/tfrecords \
  --subset=test \
  --num_shards=1
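
To sanity-check the export, you can count the serialized examples in the generated shards (a sketch; the glob pattern assumes the shards live directly under the output directory):

import glob
import tensorflow as tf

shard_paths = glob.glob('_datasets/sample_fsth_eval_subset_processed/tfrecords/*')
# Iterate once over all shards and count the serialized examples
num_examples = sum(1 for _ in tf.data.TFRecordDataset(shard_paths))
print(f'{len(shard_paths)} shard(s), {num_examples} examples')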

Training

Training consists of two stages: first, we bootstrap the training of the layout generator by training it to predict a segmentation map for the target view. Second, we turn off the semantic segmentation loss and train our full pipeline. Our code assumes the training data is in tfrecord format (see the dataset preparation instructions above).

After you have generated the dev and test tfrecords of the VoxCeleb2 dataset, you can run the training as follows:

  • Run the layout pre-training step by executing the following command
sh scripts/train_lsr_pretrain.sh
  • Train the full pipeline: after the pre-training is complete, run the following command
sh scripts/train_lsr_meta_learning.sh

Please refer to the training scripts for details about the different training configurations and how to set the correct flags for your training data.
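
If you prefer to launch both stages from a single entry point, a thin Python wrapper like the following works (the shell scripts themselves hold the actual training flags):

import subprocess

# Stage 1: layout pre-training, then stage 2: full-pipeline meta-learning
for script in ['scripts/train_lsr_pretrain.sh', 'scripts/train_lsr_meta_learning.sh']:
    subprocess.run(['sh', script], check=True)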
