IB-GAN
Overview


This package contains a PyTorch implementation of IB-GAN, presented in the AAAI 2021 paper "IB-GAN: Disentangled Representation Learning with Information Bottleneck Generative Adversarial Networks".

You can reproduce the experiments on the dSprites, Color-dSprites, 3D Chairs, and CelebA datasets with this code.

The current implementation is based on pytorch==1.4.0. Please refer to environments.yml for the environment settings.

Please refer to the technical appendix for more detailed information on the hyperparameter settings for each experiment.

Contents

  • Main code for dSprites (and Color-dSprites): "main.py"

  • IB-GAN model for dSprites (and Color-dSprites): "./model/model.py"

  • Disentanglement evaluation code for dSprites (and Color-dSprites): "evaluator.py", "checkout_scores.ipynb"

  • Main code for 3D Chairs (and CelebA): "main2.py"

  • IB-GAN model for 3D Chairs (and CelebA): "./model/model2.py"

Visdom for visualization

Since the default Visdom option for main.py is True, you should start the Visdom server before executing the main program by typing

python -m visdom.server -p 8097

Then you can observe the convergence plots and generated samples for each training iteration at

localhost:8097

Reproducing the dSprites experiment

  • dSprites dataset: "./data/dsprites-dataset/dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz"
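
If the .npz file is not already present, one way to obtain it (assuming the standard DeepMind release) is to clone the official dsprites-dataset repository:

git clone https://github.com/deepmind/dsprites-dataset ./data/dsprites-dataset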

You can reproduce the dSprites experiment by typing:

python -W ignore main.py --seed 7 --z_dim 16 --r_dim 10 --batch_size 64 --optim rmsprop --dataset dsprites --viz True --viz_port 8097 --z_bias 0 --viz_name dsprites --beta 0.141 --alpha 1 --gamma 1 --G_lr 5e-5 --D_lr 1e-6 --max_iter 150000 --logiter 500 --ptriter 2500 --ckptiter 2500 --load_ckpt -1 --init_type normal --save_img True

Note that all the default parameter settings are already tuned for the dSprites experiment (in the "main.py" file). For more details on the parameter settings for the other datasets, please refer to the technical appendix.

  • dSprites dataset for Kim's disentanglement score evaluation: the evaluation file is currently not available (it will be updated soon). The evaluation process and code are the same as for the Color-dSprites experiment.

Reproducing the Color-dSprites experiment

  • Color-dSprites dataset: the Color-dSprites dataset is currently not available.

However, you can create the Color-dSprites dataset yourself by recoloring the RGB channels of the original dSprites images.

Each RGB channel takes one of 8 discrete values: [0.00, 36.42, 72.85, 109.28, 145.71, 182.14, 218.57, 255.00].
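
A minimal conversion sketch, assuming the original .npz layout above; the array key ("imgs"), the output filename, and the uniform random color per image are assumptions, not part of the released code:

import numpy as np

# Load the original binary dSprites images: shape (N, 64, 64), values in {0, 1}.
data = np.load('./data/dsprites-dataset/dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz',
               allow_pickle=True, encoding='latin1')
imgs = data['imgs']

# The 8 discrete values per RGB channel listed above, rounded to uint8.
levels = np.array([0.00, 36.42, 72.85, 109.28, 145.71, 182.14, 218.57, 255.00]).round().astype(np.uint8)

# Sample one level per channel for every image, then paint the foreground
# (non-zero) pixels with that color via broadcasting.
rng = np.random.default_rng(0)
colors = rng.choice(levels, size=(imgs.shape[0], 3))    # (N, 3)
cimgs = imgs[..., None] * colors[:, None, None, :]      # (N, 64, 64, 3); ~9 GB, chunk if memory is tight

np.savez_compressed('./data/dsprites-dataset/cdsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz',
                    imgs=cimgs)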

Then move the Color-dSprites dataset npz file (e.g., cdsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz) into the folder (./data/dsprites-dataset/).

Run the code with the following arguments:

python -W ignore main.py --seed 7 --z_dim 16 --r_dim 10 --batch_size 64 --optim rmsprop --dataset cdsprites --viz True --viz_port 8097 --z_bias 0 --viz_name dsprites --beta 0.071 --alpha 1 --gamma 1 --G_lr 5e-5 --D_lr 1e-6 --max_iter 500000 --logiter 500 --ptriter 2500 --ckptiter 2500 --load_ckpt -1 --init_type normal --save_img True

  • Color-dSprites dataset for Kim's disentanglement score evaluation: "./data/imgs4eval_cdsprites.7z".

You first need to unzip the "imgs4eval_cdsprites.7z" file using 7za and place all the extracted files under the "./data/imgs4eval_cdsprites/" folder.
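
For example (assuming the archive unpacks into an imgs4eval_cdsprites/ top-level folder; adjust the output path otherwise):

7za x data/imgs4eval_cdsprites.7z -odata/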

To run the evaluation with Kim's disentanglement metric, type

python evaluator.py --dset_dir data/imgs4eval_cdsprites --logiter 5000 --lastiter 500000 --name main

After the evaluation for every checkpoint is done, you can see the overall disentanglement scores with the "checkout_scores.ipynb" (Jupyter notebook) file, or you can simply type

import torch

# Overall disentanglement scores collected during evaluation
scores = torch.load('checkpoint/main/result.metric')
print(scores)

to see the scores in the Python console.

Reproducing the CelebA experiment

  • CelebA dataset: please download the CelebA dataset and prepare 64x64 center-cropped image files in the folder (./data/CelebA/cropped_64).
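
A minimal preprocessing sketch, assuming the aligned CelebA JPEGs sit in ./data/CelebA/img_align_celeba; the source folder name and the crop-then-resize recipe are assumptions, and the paper's exact preprocessing may differ:

import os
from PIL import Image

src = './data/CelebA/img_align_celeba'
dst = './data/CelebA/cropped_64'
os.makedirs(dst, exist_ok=True)

for name in os.listdir(src):
    img = Image.open(os.path.join(src, name))
    w, h = img.size                         # aligned CelebA images are 178x218
    s = min(w, h)                           # largest centered square
    left, top = (w - s) // 2, (h - s) // 2
    img = img.crop((left, top, left + s, top + s)).resize((64, 64), Image.LANCZOS)
    img.save(os.path.join(dst, name))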

Then run the code with the following arguments:

python -W ignore main2.py --seed 0 --z_dim 64 --r_dim 15 --batch_size 64 --optim rmsprop --dataset celeba --viz_port 8097 --z_bias 0 --r_weight 0 --viz_name celeba --beta 0.35 --alpha 1 --gamma 1 --max_iter 1000000 --G_lr 5e-5 --D_lr 2e-6 --R_lr 5e-5 --ckpt_dir checkpoint --output_dir output --logiter 500 --ptriter 20000 --ckptiter 20000 --ngf 64 --ndf 64 --label_smoothing True --instance_noise_start 0.5 --instance_noise_end 0.01 --init_type orthogonal

Reproducing the 3D Chairs experiment

  • 3D Chairs dataset: please download the 3D Chairs dataset and move the image files into the folder (./data/3DChairs/images).
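
A minimal sketch for flattening the downloaded archive, assuming the common rendered_chairs/<model>/renders/*.png layout of the original release; the layout and paths are assumptions:

import glob
import os
import shutil

src_root = './rendered_chairs'
dst = './data/3DChairs/images'
os.makedirs(dst, exist_ok=True)

for path in glob.glob(os.path.join(src_root, '*', 'renders', '*.png')):
    # Prefix each file with its chair-model id so names stay unique after flattening.
    model = os.path.basename(os.path.dirname(os.path.dirname(path)))
    shutil.copy(path, os.path.join(dst, model + '_' + os.path.basename(path)))

Then run the code with the following arguments: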
python -W ignore main2.py --seed 0 --z_dim 64 --r_dim 10 --batch_size 64 --optim rmsprop --dataset 3dchairs --viz_port 8097 --z_bias 0 --r_weight 0 --viz_name 3dchairs --beta 0.325 --alpha 1 --gamma 1 --max_iter 700000 --G_lr 5e-5 --D_lr 2e-6 --R_lr 5e-5 --ckpt_dir checkpoint --output_dir output --logiter 500 --ptriter 20000 --ckptiter 20000 --ngf 32 --ndf 32 --label_smoothing True --instance_noise_start 0.5 --instance_noise_end 0.01 --init_type orthogonal

Citing IB-GAN

If you like this work and end up using IB-GAN in your research, please cite our paper with the BibTeX entry below:

@inproceedings{jeon2021ib,
  title={IB-GAN: Disentangled Representation Learning with Information Bottleneck Generative Adversarial Networks},
  author={Jeon, Insu and Lee, Wonkwang and Pyeon, Myeongjang and Kim, Gunhee},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={9},
  pages={7926--7934},
  year={2021}
}

The disclosure and use of the currently published code are limited to research purposes only.

Owner

Insu Jeon
Stay hungry, stay foolish.