Compare GAN code.

Overview


This repository offers TensorFlow implementations for many components related to Generative Adversarial Networks:

  • losses (such as the non-saturating GAN loss, the least-squares GAN loss, and the WGAN loss),
  • penalties (such as the gradient penalty),
  • normalization techniques (such as spectral normalization, batch normalization, and layer normalization),
  • neural architectures (BigGAN, ResNet, DCGAN), and
  • evaluation metrics (FID score, Inception Score, precision-recall, and KID score).

The code is configurable via Gin and runs on GPUs, TPUs, and CPUs. Several research papers make use of this repository, including:

  1. Are GANs Created Equal? A Large-Scale Study [Code]
    Mario Lucic*, Karol Kurach*, Marcin Michalski, Sylvain Gelly, Olivier Bousquet [NeurIPS 2018]

  2. The GAN Landscape: Losses, Architectures, Regularization, and Normalization [Code] [Colab]
    Karol Kurach*, Mario Lucic*, Xiaohua Zhai, Marcin Michalski, Sylvain Gelly [ICML 2019]

  3. Assessing Generative Models via Precision and Recall [Code]
    Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, Sylvain Gelly [NeurIPS 2018]

  4. GILBO: One Metric to Measure Them All [Code]
    Alexander A. Alemi, Ian Fischer [NeurIPS 2018]

  5. A Case for Object Compositionality in Deep Generative Models of Images [Code]
    Sjoerd van Steenkiste, Karol Kurach, Sylvain Gelly [2018]

  6. On Self Modulation for Generative Adversarial Networks [Code]
    Ting Chen, Mario Lucic, Neil Houlsby, Sylvain Gelly [ICLR 2019]

  7. Self-Supervised GANs via Auxiliary Rotation Loss [Code] [Colab]
    Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, Neil Houlsby [CVPR 2019]

  8. High-Fidelity Image Generation With Fewer Labels [Code] [Blog Post] [Colab]
    Mario Lucic*, Michael Tschannen*, Marvin Ritter*, Xiaohua Zhai, Olivier Bachem, Sylvain Gelly [ICML 2019]

Installation

You can install the library and all necessary dependencies by running pip install -e . from the compare_gan/ folder.
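For example, assuming you are installing from a fresh checkout of the official repository:

    git clone https://github.com/google/compare_gan.git
    cd compare_gan
    pip install -e .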

Running experiments

Simply run main.py, passing a --model_dir (this is where checkpoints are stored) and a --gin_config (this defines which model is trained on which data set, along with other training options); an example invocation and a configuration sketch follow the list below. We provide several example configurations in the example_configs/ folder:

  • dcgan_celeba64: DCGAN architecture with non-saturating loss on CelebA 64x64px
  • resnet_cifar10: ResNet architecture with non-saturating loss and spectral normalization on CIFAR-10
  • resnet_lsun-bedroom128: ResNet architecture with WGAN loss and gradient penalty on LSUN-bedrooms 128x128px
  • sndcgan_celebahq128: SN-DCGAN architecture with non-saturating loss and spectral normalization on CelebA-HQ 128x128px
  • biggan_imagenet128: BigGAN architecture with hinge loss and spectral normalization on ImageNet 128x128px
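A typical invocation looks like the following (run from the directory containing main.py; the model directory is illustrative):

    python main.py --model_dir /tmp/resnet_cifar10 --gin_config example_configs/resnet_cifar10.gin

A Gin config is a plain-text list of parameter bindings. The sketch below shows the general shape only; apart from options.batch_size (referenced later in this document), the binding names are hypothetical, so consult the files in example_configs/ for the exact ones:

    # Hypothetical sketch of a Gin config; see example_configs/ for real bindings.
    dataset.name = "cifar10"
    options.architecture = "resnet_cifar_arch"  # assumed binding name
    options.batch_size = 64
    options.training_steps = 200000             # assumed binding name
    loss.fn = @non_saturating                   # assumed binding name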

Training and evaluation

To see all available options, run python main.py --help. Main options (example invocations follow the list):

  • To train the model use --schedule=train (default). Training is resumed from the last saved checkpoint.
  • To evaluate all checkpoints use --schedule=continuous_eval --eval_every_steps=0. To evaluate only checkpoints whose step number is divisible by 5000, use --schedule=continuous_eval --eval_every_steps=5000. By default, 3 averaging runs are used to estimate the Inception Score and the FID score. Keep in mind that, when running locally on a single GPU, it may not be possible to run training and evaluation simultaneously due to memory constraints.
  • To train and evaluate the model use --schedule=eval_after_train --eval_every_steps=0.
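Concretely, the three schedules correspond to invocations like these (model and config paths are illustrative):

    # Train only (the default schedule); resumes from the last checkpoint in --model_dir.
    python main.py --model_dir /tmp/gan --gin_config example_configs/resnet_cifar10.gin \
        --schedule=train

    # Evaluate all checkpoints; set --eval_every_steps=5000 to evaluate only
    # checkpoints whose step number is divisible by 5000.
    python main.py --model_dir /tmp/gan --gin_config example_configs/resnet_cifar10.gin \
        --schedule=continuous_eval --eval_every_steps=0

    # Train, then evaluate.
    python main.py --model_dir /tmp/gan --gin_config example_configs/resnet_cifar10.gin \
        --schedule=eval_after_train --eval_every_steps=0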

Training on Cloud TPUs

We recommend using the ctpu tool to create a Cloud TPU and a corresponding Compute Engine VM. We use a v3-128 Cloud TPU v3 Pod slice for training models on ImageNet at 128x128 resolution. You can use smaller slices if you reduce the batch size (options.batch_size in the Gin config) or the model parameters; keep in mind that the model quality might change. Before training, make sure that the environment variable TPU_NAME is set. Running evaluation on TPUs is currently not supported; use a VM with a single GPU instead.
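For example, a session might look like the following; the ctpu flags shown here are assumptions based on ctpu's typical interface, so check ctpu's own documentation before relying on them:

    # Provision a Compute Engine VM plus a v3-128 TPU slice (flag names assumed).
    ctpu up --name=compare-gan --tpu-size=v3-128

    # The training script reads the TPU address from this variable.
    export TPU_NAME=compare-gan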

Datasets

Compare GAN uses TensorFlow Datasets, which will automatically download and prepare the data. For ImageNet you will need to download the archive yourself. For CelebA-HQ you need to download and prepare the images on your own. If you are using TPUs, make sure to point the training script to your Google Cloud Storage bucket (--tfds_data_dir).
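For example, a TPU run with both the dataset directory and the model directory on Cloud Storage (bucket name hypothetical) could look like:

    python main.py --model_dir gs://my-bucket/models/biggan_imagenet128 \
        --gin_config example_configs/biggan_imagenet128.gin \
        --tfds_data_dir gs://my-bucket/tensorflow_datasets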
