This is an unofficial repository for Lips Don't Lie: A Generalisable and Robust Approach to Face Forgery Detection (CVPR 2021).

Overview

Lips Don't Lie: A Generalisable and Robust Approach to Face Forgery Detection

This is a PyTorch implementation of the LipForensics paper.

This is an unofficial implementation that reuses parts of the official code; the repo was created to make the method more convenient to use.

If you want to see the original code, you can follow this link.

Preprocessing is required and consists of two steps: first extract the face landmarks, then crop the mouth region.

Setup

Install packages

pip install -r requirements.txt

Note: we used Python version 3.8 to test this code.

Prepare data

  1. Follow the links below to download the datasets (you will be asked to fill out some forms before downloading):

  2. Extract the frames (e.g., using code in the FaceForensics++ repo). The filenames of the frames should be numbered as follows: 0000.png, 0001.png, ....

  3. Detect the faces and compute 68 face landmarks. For example, you can use RetinaFace for face detection and FAN for landmarks to get good results (a minimal sketch of steps 2-4 is given after this list).

  4. Place face frames and corresponding landmarks into the appropriate directories:

    • For FaceForensics++, FaceShifter, and DeeperForensics, frames for a given video should be placed in data/datasets/Forensics/{dataset_name}/{compression}/images/{video}, where dataset_name is RealFF (real frames from FF++), Deepfakes, FaceSwap, Face2Face, NeuralTextures, FaceShifter, or DeeperForensics. compression is c0, c23, or c40, corresponding to no compression, low compression, and high compression, respectively. video is the video name and should be numbered as follows: 000, 001, .... For example, frame 0102 of real video 067 at c23 compression is found in data/datasets/Forensics/RealFF/c23/images/067/0102.png
    • For CelebDF-v2, frames for a given video should be placed in data/datasets/CelebDF/{dataset_name}/images/{video} where dataset_name is RealCelebDF, which should include all real videos from the test set, or FakeCelebDF, which should include all fake videos from the test set.
    • For DFDC, frames for a given video should be placed in data/datasets/DFDC/images (both real and fake). The video names from the test set we used in our experiments are given in data/datasets/DFDC/dfdc_all_vids.txt.

    The corresponding computed landmarks for each frame should be placed in .npy format in the directories defined by replacing images with landmarks above (e.g. for video "000", the .npy files for each frame should be placed in data/datasets/Forensics/RealFF/c23/landmarks/000).

  5. To crop the mouth region from each frame for all datasets, run

    python preprocessing/crop_mouths.py --dataset all

    This will write the mouth images into the corresponding cropped_mouths directory (a simplified illustration of the mouth crop is also shown below).
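
  Below is a minimal sketch of steps 2-4 (frame extraction, 68-point landmark detection, saving landmarks as .npy), assuming OpenCV, NumPy, and the face_alignment package for FAN landmarks. The helper names, detector choice, and paths are illustrative assumptions, not the repository's own preprocessing code; RetinaFace or another detector can be swapped in.

  # Illustrative only -- not the repository's preprocessing code.
  # Assumes opencv-python, numpy, and face_alignment are installed.
  import os

  import cv2
  import numpy as np
  import face_alignment

  def extract_frames(video_path, images_dir):
      """Step 2: dump frames as 0000.png, 0001.png, ..."""
      os.makedirs(images_dir, exist_ok=True)
      cap = cv2.VideoCapture(video_path)
      idx = 0
      while True:
          ret, frame = cap.read()
          if not ret:
              break
          cv2.imwrite(os.path.join(images_dir, f"{idx:04d}.png"), frame)
          idx += 1
      cap.release()

  def compute_landmarks(images_dir, landmarks_dir):
      """Steps 3-4: 68-point landmarks per frame, saved as <frame>.npy."""
      os.makedirs(landmarks_dir, exist_ok=True)
      # Newer face_alignment versions name this LandmarksType.TWO_D.
      fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device="cuda")
      for name in sorted(os.listdir(images_dir)):
          if not name.endswith(".png"):
              continue
          img = cv2.cvtColor(cv2.imread(os.path.join(images_dir, name)), cv2.COLOR_BGR2RGB)
          preds = fa.get_landmarks(img)
          if preds is None:
              continue  # no face detected in this frame
          np.save(os.path.join(landmarks_dir, name.replace(".png", ".npy")), preds[0])

  # e.g. real FF++ video 067 at c23 compression, following the layout above:
  # extract_frames("067.mp4", "data/datasets/Forensics/RealFF/c23/images/067")
  # compute_landmarks("data/datasets/Forensics/RealFF/c23/images/067",
  #                   "data/datasets/Forensics/RealFF/c23/landmarks/067")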
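
  For intuition only, a naive version of the mouth crop is sketched below: the mouth corresponds to points 48-67 in the 68-landmark convention, and the crop is centred on their mean. The crop size is arbitrary here; use preprocessing/crop_mouths.py for the real pipeline.

  # Illustration only -- the real pipeline is preprocessing/crop_mouths.py.
  import cv2
  import numpy as np

  MOUTH_IDX = slice(48, 68)  # mouth points in the 68-landmark convention

  def naive_mouth_crop(image_path, landmarks_path, out_path, size=96):
      img = cv2.imread(image_path)
      landmarks = np.load(landmarks_path)          # assumed (68, 2) array of (x, y)
      cx, cy = landmarks[MOUTH_IDX].mean(axis=0)   # mouth centre
      x0 = max(int(cx) - size // 2, 0)             # clamp at image border
      y0 = max(int(cy) - size // 2, 0)
      cv2.imwrite(out_path, img[y0:y0 + size, x0:x0 + size])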

Evaluate

  • Cross-dataset generalisation (Table 2 in paper):
    1. Download the pretrained model and place it into models/weights. This model has been trained on FaceForensics++ (Deepfakes, FaceSwap, Face2Face, and NeuralTextures) and is the one used to obtain the LipForensics video-level AUC results in Table 2 of the paper, reproduced below:

      Dataset          CelebDF-v2   DFDC    FaceShifter   DeeperForensics
      Video-level AUC  82.4%        73.5%   97.1%         97.6%
    2. To evaluate on e.g. FaceShifter, run

      python evaluate.py --dataset FaceShifter --weights_forgery ./models/weights/lipforensics_ff.pth
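
  The numbers in the table above are video-level AUCs, i.e. one score per video evaluated against the video's real/fake label. A minimal sketch of that metric with scikit-learn is shown below; averaging per-clip scores into a single per-video score is an assumption for illustration, not necessarily what evaluate.py does internally.

  # Sketch of the video-level AUC metric reported in Table 2 (scikit-learn).
  import numpy as np
  from sklearn.metrics import roc_auc_score

  def video_level_auc(scores_per_video, label_per_video):
      """scores_per_video: {video_id: [clip scores]}; label_per_video: {video_id: 0 or 1}."""
      video_ids = sorted(scores_per_video)
      scores = [float(np.mean(scores_per_video[v])) for v in video_ids]
      labels = [label_per_video[v] for v in video_ids]
      return roc_auc_score(labels, scores)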

Citation

If you find this repo useful for your research, please consider citing the following:

@inproceedings{haliassos2021lips,
  title={Lips Don't Lie: A Generalisable and Robust Approach To Face Forgery Detection},
  author={Haliassos, Alexandros and Vougioukas, Konstantinos and Petridis, Stavros and Pantic, Maja},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5039--5049},
  year={2021}
}
Owner
Minha Kim (@DASH-Lab, Sungkyunkwan University, Korea)