Overview

FFD Source Code

Provided is code that demonstrates the training and evaluation of the work presented in the paper: "On the Detection of Digital Face Manipulation" published in CVPR 2020.

Figure: The proposed network framework with the attention mechanism.

Project Webpage

See the MSU CVLab website for project details and access to the DFFD dataset.

http://cvlab.cse.msu.edu/project-ffd.html

Notes

This code is provided as example code, and may not reflect a specific combination of hyper-parameters presented in the paper.

Description of contents

  • xception.py: Defines the Xception network with the attention mechanism
  • train*.py: Trains the model on the training data
  • test*.py: Evaluates the model on the test data
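
Before training, it may help to sanity-check the data layout. The snippet below is a small, hypothetical pre-flight check; the folder names are inferred from dataset.py and the issues below rather than documented by the authors, so verify them against your checkout.

    # Hypothetical pre-flight check for the data layout train*.py/test*.py appear to expect.
    # Folder names are assumptions inferred from dataset.py and the issues below.
    from pathlib import Path

    EXPECTED = [
        "data/train/Real", "data/train/Fake", "data/train/Mask",
        "data/test/Real",  "data/test/Fake",  "data/test/Mask",
    ]

    def check_layout(root="."):
        for rel in EXPECTED:
            path = Path(root) / rel
            print(f"{rel}: {'ok' if path.is_dir() else 'MISSING'}")

    if __name__ == "__main__":
        check_layout()
        # If the layout is in place, run the repository scripts directly, e.g.:
        #   python train.py
        #   python test.py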

Acknowledgements

If you use or refer to this source code, please cite the following paper:

@inproceedings{cvpr2020-dang,
  title={On the Detection of Digital Face Manipulation},
  author={Hao Dang and Feng Liu and Joel Stehouwer and Xiaoming Liu and Anil Jain},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  address={Seattle, WA},
  year={2020}
}
Comments
  • Is it possible to release the script for generating edited images by FaceApp?

    Hi, thanks for releasing the code and dataset! Part of your dataset was generated by FaceApp (using automated scripts running on Android devices). I am wondering if you could also release this Android script? I plan to generate some edited images using FaceApp as well, and an automated script would be quite helpful! Thanks!

    opened by zjxgithub 2
  • Question about mask images in dataset

    Thank you for releasing the code and the DFFD dataset!

    I noticed that in the "faceapp" part of the dataset, there is a ground-truth manipulation mask image for each fake image. How are these mask images generated?

    The paper mentions that the ground-truth manipulation masks are computed from the source and fake images, but I still do not understand how.
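
    A rough guess at what such a "diff" mask computation could look like is sketched below: a per-pixel absolute difference between the source and the manipulated image, normalized to [0, 1]. This is only an illustration, not the authors' actual script.

    # Illustrative guess, not the authors' method: mask = normalized |fake - source|.
    import numpy as np
    from PIL import Image

    def diff_mask(src_path, fake_path):
        src = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.float32)
        fake = np.asarray(Image.open(fake_path).convert("RGB"), dtype=np.float32)
        diff = np.abs(fake - src).mean(axis=2)   # per-pixel difference magnitude (HxW)
        if diff.max() > 0:
            diff = diff / diff.max()             # normalize to [0, 1]
        return diff                              # high values = likely manipulated regions

    # mask = diff_mask("source.png", "faceapp_fake.png")
    # Image.fromarray((mask * 255).astype(np.uint8)).save("mask.png")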

    Thank you for answering my question. :)

    opened by piddnad 2
  • Several questions about the dataset

    Thanks for releasing the code and the dataset. I have some questions about the dataset:

    • In align_faces/align_faces.m inside scripts.zip, a file called box.txt is referenced, but I can't find it anywhere. It seems crucial for aligning and cropping the images.

    • All of the images in the dataset are at a resolution of 299x299. I wonder how you processed the images from CelebA; I remember the aligned and cropped CelebA images are 128x128.
    opened by wheatdog 2
  • attention map and gt mask matching

    Hi, thanks for your work. I have a small question. The attention map size is 19x19, but the gt mask (diff image) is 299x299. Are they matched by downsampling the gt mask?
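
    For reference, the kind of resizing presumably involved would look something like this in PyTorch (the interpolation mode actually used by the repo may differ):

    import torch
    import torch.nn.functional as F

    gt_mask = torch.rand(1, 1, 299, 299)  # placeholder ground-truth mask, shape (N, 1, H, W)
    gt_small = F.interpolate(gt_mask, size=(19, 19), mode="bilinear", align_corners=False)
    print(gt_small.shape)  # torch.Size([1, 1, 19, 19]), comparable to the attention map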

    opened by neverUseThisName 1
  • Is label information leaked in the testing process?

    Thanks for uploading your code and dataset. After a quick look, my understanding of the prediction process is: generate masks for the test data with scripts, then feed the test data and their masks into the trained model for prediction. But I was confused that in your test.py file, you get the dataset like this:

    def get_dataset():
      return Dataset('test', BATCH_SIZE, CONFIG['img_size'], CONFIG['map_size'], CONFIG['norms'], SEED)
    

    then you distinguish the masks of real and fake photos by using their labels in dataset.py:

      def __getitem__(self, index):
        im_name = self.images[index]
        img = self.load_image(im_name)
        if self.label_name == 'Real':
          msk = torch.zeros(1,19,19)
        else:
          msk = self.load_mask(im_name.replace('Fake/', 'Mask/'))
        return {'img': img, 'msk': msk, 'lab': self.label, 'im_name': im_name}
    

    Is it fair to distinguish masks by label_name in the testing process? I also wonder how to create the Mask/ folder when predicting fake images that do not have corresponding real images.

    If I misunderstand anything, please correct me. Thanks a lot!

    opened by insomnia1996 0
  • May I know where I can find the imagenet pretrained model?

    Hi,

    For using the pretrained model xception-b5690688.pth, may I know where I can find the model file specified here: https://github.com/JStehouwer/FFD_CVPR2020/blob/master/xception.py#L243
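
    For reference, xception-b5690688.pth is the ImageNet Xception checkpoint shipped with the pretrainedmodels package (Cadene). One way to fetch it is sketched below; the URL is taken from that package and is an assumption here, so verify it matches what xception.py expects.

    # Assumed source of the checkpoint (from the pretrainedmodels package), not confirmed by the authors.
    import torch

    URL = "http://data.lip6.fr/cadene/pretrainedmodels/xception-b5690688.pth"
    state_dict = torch.hub.load_state_dict_from_url(URL, progress=True)
    torch.save(state_dict, "xception-b5690688.pth")  # place it wherever xception.py looks for it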

    Thanks.

    opened by ilovecv 2
  • Error in get_batch in train.py

    Greetings,

    Many thanks for your work. I am very interested in it and want to try out your model. When I ran train*.py, I encountered the following issue; here is part of the error message.

    File "D:\Fake Detector\attention_map_to_detect_manipulation\FFD_CVPR2020\dataset.py", line 91
      batch = [next(_.generator, None) for _ in self.datasets]
    File "D:\Fake Detector\attention_map_to_detect_manipulation\FFD_CVPR2020\dataset.py", line 73, in get_batch
    reduction.dump(process_obj, to_child)
    File "C:\Users\xxx\anaconda3\envs\d2l\lib\multiprocessing\reduction.py", line 60, in dump
      ForkingPickler(file, protocol).dump(obj)
    TypeError: cannot pickle 'generator' object

    self = reduction.pickle.load(from_parent)
    EOFError: Ran out of input

    What I did was simply create the directories data/train/Real (and Fake), place my image dataset into the corresponding folders, and then run train.py. However, it does not seem to work. May I ask whether I missed anything? I am running the program on Windows and I don't know whether that has an effect as well.
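
    For reference, the two errors above are the usual symptom of Windows' spawn-based multiprocessing trying to pickle a generator: the parent process fails with the TypeError and the child then hits the EOFError. A common, repo-agnostic workaround is to guard the script's entry point (and, if that is not enough, avoid handing generators to worker processes); a minimal sketch:

    # Windows-specific workaround sketch (not specific to this repo): the "spawn" start
    # method pickles everything sent to worker processes, and generators cannot be pickled.
    import multiprocessing

    def main():
        # build the datasets/model and run the training loop here
        ...

    if __name__ == "__main__":
        multiprocessing.freeze_support()  # no-op except for frozen Windows executables
        main()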

    opened by bitrookie 1
  • Use pretrained model to classify own data?

    Hi @JStehouwer - thank you so much for the awesome code (v2.1)!

    I am trying to use your pretrained model on my own images in order to try out the classifier.

    Are you able to confirm:

    • Filename and format of pretrained model
    • Whether anything else is needed to perform the above classification
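
    For reference, a rough sketch of the kind of single-image inference loop implied here is shown below; the constructor name, checkpoint file, input size, and normalization values are assumptions to be checked against xception.py and the repo's CONFIG.

    # Hypothetical inference sketch; `build_model` stands in for however the attention
    # Xception is constructed in xception.py, and the 299x299 / 0.5-normalization values
    # are assumptions.
    import torch
    from PIL import Image
    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.Resize((299, 299)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ])

    def classify(model, image_path):
        model.eval()
        img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            logit = model(img)               # assumes a single real/fake logit is returned
        return torch.sigmoid(logit).item()   # probability that the image is manipulated

    # model = build_model()                                # hypothetical constructor
    # model.load_state_dict(torch.load("checkpoint.pth"))  # trained FFD weights
    # print(classify(model, "my_face.png"))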

    Thanks again

    opened by jtlz2 4
  • dataset questions

    1. Was the published dataset (FFHQ, FaceAPP, StarGAN, PGGAN, StyleGAN) randomly selected? Also, how were the StarGAN masks generated, and how did you determine which specific CelebA pictures were used?
    2. I have downloaded the FF++, CelebA, and DeepFaceLab datasets. How should the training, test, and validation sets be randomly selected, and how is the random seed set?
    3. Which datasets need alignment processing, and how is it done? Please specify.
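
    Regarding question 2, a generic way to make a reproducible random split with a fixed seed is sketched below; this is a common pattern, not the authors' protocol.

    # Generic reproducible split (not the authors' protocol).
    import random

    def split_files(files, seed=0, train_frac=0.8, val_frac=0.1):
        files = sorted(files)          # sort first so the shuffle is deterministic
        rng = random.Random(seed)
        rng.shuffle(files)
        n_train = int(len(files) * train_frac)
        n_val = int(len(files) * val_frac)
        return files[:n_train], files[n_train:n_train + n_val], files[n_train + n_val:]

    # train_set, val_set, test_set = split_files(all_image_paths, seed=1234)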

    Thank you for your work; it is very good and I will follow it. Right now, though, the dataset problem is making my work difficult, and I hope to get your help.

    opened by miaoct 2
Releases (v2.1)