Facial Image Inpainting with Semantic Control

Overview


In this repo, we provide a model for the controllable facial image inpainting task. The model enables users to intuitively edit their images using parametric 3D faces.

The technical report is coming soon.

  • Image Inpainting results

  • Fine-grained Control
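The fine-grained control operates on 3DMM coefficients estimated by Deep3DFaceRecon. As a rough illustration of what "parametric 3D faces" means here, the sketch below splits a coefficient vector into its semantic parts and perturbs the expression component. The 80/64/80/3/27/3 layout follows Deep3DFaceRecon_pytorch's 257-dimensional convention; whether this repo consumes coefficients in exactly this form is an assumption.

# Illustration only: split a Deep3DFaceRecon-style coefficient vector and edit
# the expression part (the layout is an assumption, see the note above).
import numpy as np

def split_coeff(coeff):
    """coeff: (257,) array -> dict of semantic components."""
    return {
        "id":    coeff[0:80],     # identity (shape) basis weights
        "exp":   coeff[80:144],   # expression basis weights
        "tex":   coeff[144:224],  # texture basis weights
        "angle": coeff[224:227],  # rotation (pitch, yaw, roll)
        "gamma": coeff[227:254],  # spherical-harmonics lighting
        "trans": coeff[254:257],  # translation
    }

coeff = np.zeros(257, dtype=np.float32)   # placeholder; normally estimated per image
parts = split_coeff(coeff)
parts["exp"][:3] += 0.5                   # e.g. nudge the first expression modes
edited = np.concatenate([parts[k] for k in ("id", "exp", "tex", "angle", "gamma", "trans")])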

Quick Start

Installation

  • Clone the repository and set up a conda environment with all dependencies as follows:
git clone https://github.com/RenYurui/Controllable-Face-Inpainting.git --recursive
cd Controllable-Face-Inpainting

# 1. Create a conda virtual environment.
conda create -n cfi python=3.6
source activate cfi
conda install -c pytorch pytorch=1.7.1 torchvision cudatoolkit=10.2

# 2. install pytorch3d
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d && pip install -e .

# 3. Install other dependencies
pip install -r requirements.txt
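As an optional sanity check that the environment is usable (PyTorch sees the GPU and pytorch3d imports cleanly), you can run:

# Optional environment check after installation.
import torch
import pytorch3d

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__)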

Download Prerequisite Models

  • Follow Deep3DFaceRecon to prepare the ./BFM folder. Download 01_MorphableModel.mat and the Expression Basis Exp_Pca.bin, and put the obtained files into the ./Deep3DFaceRecon_pytorch/BFM folder. Then link the folder to the root path.
ln -s /PATH_TO_REPO_ROOT/Deep3DFaceRecon_pytorch/BFM /PATH_TO_REPO_ROOT
  • Clone the Arcface repo
cd third_part
git clone https://github.com/deepinsight/insightface.git
cp -r ./insightface/recognition/arcface_torch/ ./

Arcface is used to extract identity features for the loss computation. Download the pre-trained model from Arcface using this link. By default, the resnet50 backbone (ms1mv3_arcface_r50_fp16) is used. Put the obtained weights at ./third_part/arcface_torch/ms1mv3_arcface_r50_fp16/backbone.pth.
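For reference, the sketch below shows how an identity loss can be computed with this backbone. It assumes that ./third_part/arcface_torch is on the Python path and exposes the iresnet50 constructor (as in the insightface repo), and that inputs are 112x112 face crops normalized to [-1, 1]; it is an illustration, not the exact loss used by this repo.

# Minimal identity-loss sketch (illustration only, not the repo's exact code).
import torch
import torch.nn.functional as F
from backbones import iresnet50  # provided by third_part/arcface_torch

backbone = iresnet50()
state = torch.load("third_part/arcface_torch/ms1mv3_arcface_r50_fp16/backbone.pth",
                   map_location="cpu")
backbone.load_state_dict(state)
backbone.eval()

def identity_loss(pred, target):
    """pred/target: (B, 3, 112, 112) face crops normalized to [-1, 1]."""
    with torch.no_grad():
        feat_target = F.normalize(backbone(target))
    feat_pred = F.normalize(backbone(pred))
    # 1 - cosine similarity between the 512-d ArcFace embeddings.
    return (1.0 - (feat_pred * feat_target).sum(dim=1)).mean()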

  • Download the pretrained weights of our model from Google Drive. Save the obtained files into the ./result folder.

Inference

We provide some example images. Please run the following command for inference:

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port 1234 demo.py \
--config ./config/facial_image_renderer_ffhq.yaml \
--name facial_image_renderer_ffhq \
--output_dir ./visi_result \
--input_dir ./examples/inputs \
--mask_dir ./examples/masks
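To run the demo on your own images, you also need hole masks. The snippet below writes a simple rectangular mask; the single-channel PNG format, the white-means-hole convention, and the matching-filename assumption are guesses that should be checked against the provided ./examples/masks.

# Create a rectangular hole mask for the demo (format assumptions noted above).
import numpy as np
from PIL import Image

h = w = 512                      # match the resolution of the input image
mask = np.zeros((h, w), dtype=np.uint8)
mask[180:330, 150:360] = 255     # hypothetical hole covering part of the face
Image.fromarray(mask).save("./examples/masks/my_image.png")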

Train the model from scratch

Dataset Preparation

  • Download the datasets. We use CelebA-HQ and FFHQ for training and inference. Please download the datasets (image format) and put them under the ./dataset folder.
  • Obtain 3D faces by using Deep3DFaceRecon. Follow the Deep3DFaceRecon repo to download the trained weights and save them as ./Deep3DFaceRecon_pytorch/checkpoints/face_recon/epoch_20.pth.
# 1. Extract keypoints from the face images for cropping.
cd scripts
# extract keypoints from CelebA
python extract_kp.py \
--data_root PATH_TO_CELEBA_ROOT \
--output_dir PATH_TO_KEYPOINTS \
--dataset celeba \
--device_ids 0,1 \
--workers 6

# 2. Extract 3DMM coefficients from the face images.
cd .. #repo root
# We provide these scripts for ease of use; alternatively, the original repo can be used to extract the coefficients.
cp scripts/inference_options.py ./Deep3DFaceRecon_pytorch/options
cp scripts/face_recon.py ./Deep3DFaceRecon_pytorch
cp scripts/facerecon_inference_model.py ./Deep3DFaceRecon_pytorch/models
cp scripts/pytorch_3d.py ./Deep3DFaceRecon_pytorch/util
ln -s /PATH_TO_REPO_ROOT/third_part/arcface_torch /PATH_TO_REPO_ROOT/Deep3DFaceRecon_pytorch/models

cd Deep3DFaceRecon_pytorch

python face_recon.py \
--input_dir PATH_TO_CELEBA_ROOT \
--keypoint_dir PATH_TO_KEYPOINTS \
--output_dir PATH_TO_3DMM_COEFFICIENT \
--inference_batch_size 100 \
--name=face_recon \
--dataset_name celeba \
--epoch=20 \
--model facerecon_inference

# 3. Save images and the coefficients into a lmdb file.
cd .. #repo root
python prepare_data.py \
--root PATH_TO_CELEBA_ROOT \
--coeff_file PATH_TO_3DMM_COEFFICIENT \
--dataset celeba \
--out PATH_TO_CELEBA_LMDB_ROOT
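A generic way to sanity-check the resulting LMDB file (not tied to this repo's key layout) is to open it read-only and inspect a few entries:

# Optional LMDB sanity check: count entries and print a few keys.
import lmdb

LMDB_PATH = "PATH_TO_CELEBA_LMDB_ROOT"   # same path passed to prepare_data.py

env = lmdb.open(LMDB_PATH, readonly=True, lock=False)
with env.begin() as txn:
    print("entries:", txn.stat()["entries"])
    for i, (key, value) in enumerate(txn.cursor()):
        print(key[:60], "->", len(value), "bytes")
        if i >= 4:
            break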

Train the Model

# First, train the semantic_descriptor_recommender
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port 1234 train.py \
--config ./config/semantic_descriptor_recommender_celeba.yaml \
--name semantic_descriptor_recommender_celeba

# Then, train the facial_image_renderer for image inpainting
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port 1234 train.py \
--config ./config/facial_image_renderer_celeba.yaml \
--name facial_image_renderer_celeba