
Pluralistic Image Completion

ArXiv | Project Page | Online Demo | Video (demo)

This repository implements the training, testing and editing tools for "Pluralistic Image Completion" by Chuanxia Zheng, Tat-Jen Cham and Jianfei Cai at NTU. Given one masked image, the proposed Pluralistic model is able to generate multiple diverse and plausible results with varied structure, color and texture.

Editing example

Example results

Example completion results of our method on images of faces (CelebA), buildings (Paris), and natural scenes (Places2) with center masks (masks shown in gray). For each group, the masked input image is shown on the left, followed by sampled results from our model without any post-processing. The results are diverse and plausible.

More results on project page

Getting started

Installation

This code was tested with PyTorch 0.4.0, CUDA 9.0, Python 3.6 and Ubuntu 16.04.

  • Clone this repo:
git clone https://github.com/lyndonzheng/Pluralistic
cd Pluralistic
  • Install Python dependencies:
pip install visdom dominate
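  • Install PyTorch (the code was tested with 0.4.0). The version pin below is a sketch for reproducing the tested environment, not a requirement from the repository; the exact command depends on your CUDA setup:
pip install torch==0.4.0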

Datasets

  • face dataset: 24,183 training images and 2,824 test images from CelebA; the high-resolution CelebA-HQ dataset is obtained with the Growing GANs algorithm
  • building dataset: 14,900 training images and 100 test images from Paris
  • natural scenery: original training and val images from Places2
  • object dataset: original training images from ImageNet

Training

  • Train a model (default: random regular and irregular holes):
python train.py --name celeba_random --img_file your_image_path
  • Set --mask_type in options/base_options.py for different training masks. The --mask_file path is needed for external irregular masks, such as the irregular mask dataset provided by Liu et al. and Karim Iskakov (see the example command after this list).
  • To view training results and loss plots, run python -m visdom.server and open the URL http://localhost:8097 in your browser.
  • Trained models will be saved under the checkpoints folder.
  • More training options can be found in the options folder.
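For example, a training run with an external irregular mask dataset might look like the following sketch. The mask path is a placeholder, and the --mask_type index for external masks is an assumption based on the option list in options/base_options.py:

python train.py --name celeba_random --img_file your_image_path --mask_type 3 --mask_file your_mask_path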

Testing

  • Test the model
python test.py  --name celeba_random --img_file your_image_path
  • Set --mask_type in options/base_options.py to test various masks. The --mask_file path is needed for the external irregular mask option (see the example command after this list).
  • By default, results are saved under the results folder. Set --results_dir to save them to a different path.
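For example, a sketch of testing with an external irregular mask and a custom output directory (paths are placeholders; the --mask_type index is again an assumption from options/base_options.py):

python test.py --name celeba_random --img_file your_image_path --mask_type 3 --mask_file your_mask_path --results_dir your_results_path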

Pretrained Models

Download the pre-trained models using the following links and put them under the checkpoints/ directory.

The main novelty of this project is generating multiple diverse and plausible results for a single masked image. The center_mask models are trained on 256x256 images with 128x128 center holes, and show large diversity because a large region of information is missing. The random_mask models are trained with random regular and irregular holes, and their diversity varies with mask size and image background.
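Once a pre-trained model is placed under checkpoints/, testing uses the same interface by passing the model's folder name to --name. The name celeba_center below is a hypothetical placeholder; substitute the actual folder name of the downloaded model:

python test.py --name celeba_center --img_file your_image_path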

GUI

Download the pre-trained models from Google Drive and put them under the checkpoints/ directory.

  • Install PyQt5 for the GUI:
pip install PyQt5

Basic usage is:

python -m visdom.server
python ui_main.py

The buttons in GUI:

  • Options: Select the model and corresponding dataset for editing.
  • Brush Width: Modify the brush width for the free_form mask.
  • draw/clear: Draw a free_form or rectangle mask for a random_mask model. Clear all mask regions for a new input.
  • load: Choose an image from a directory.
  • random: Randomly load an image for editing from the datasets.
  • fill: Fill the masked regions and show the result on the right.
  • save: Save the inputs and outputs to the directory.
  • Original/Output: Switch to show the original or output image.

The steps are as follows:

1. Select a model from 'options'.
2. Click the 'random' or 'load' button to get an input image.
3. If you chose a random_mask model, click the 'draw/clear' button to draw a free_form mask.
4. If you chose a center_mask model, the center mask is given automatically.
5. Click the 'fill' button to get multiple results.
6. Click the 'save' button to save the results.

Editing Example Results

  • Results (original, input, output) for object removal
  • Results (input, output) for face editing. When the left or right half of the face is masked, diversity is small because the short+long term attention layer copies information from the unmasked side. When the top or bottom of the face is masked, diversity is large.

Next

  • Free-form masks for various datasets
  • Higher-resolution image completion

License


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

This software is for educational and academic research purposes only. If you wish to obtain a commercial royalty-bearing license to this software, please contact us at [email protected].

Citation

If you use this code for your research, please cite our paper.

@inproceedings{zheng2019pluralistic,
  title={Pluralistic Image Completion},
  author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={1438--1447},
  year={2019}
}

@article{zheng2021pluralistic,
  title={Pluralistic Free-Form Image Completion},
  author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
  journal={International Journal of Computer Vision},
  pages={1--20},
  year={2021},
  publisher={Springer}
}