Re-implement CycleGAN in TensorLayer

Overview

CycleGAN_Tensorlayer re-implements CycleGAN in TensorLayer and provides:

  • Original CycleGAN
  • Improved CycleGAN with resize convolution

Prerequisites:

  • TensorLayer
  • TensorFlow
  • Python

Run:

CUDA_VISIBLE_DEVICES=0 python main.py 

(If you collected the datasets yourself, you can use dataset_clean.py or dataset_crop.py to pre-process the images.)
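
For reference, this kind of pre-processing typically just center-crops each image to a square and resizes it to the 256x256 training resolution. The sketch below is illustrative only; it is not the repo's dataset_clean.py or dataset_crop.py, and the function name and folder layout are made up for the example.

    # Illustrative pre-processing sketch (NOT the repo's dataset_clean.py / dataset_crop.py):
    # center-crop every image in a folder to a square and resize it to 256x256.
    import os
    from PIL import Image

    def preprocess_folder(src_dir, dst_dir, size=256):  # hypothetical helper
        if not os.path.exists(dst_dir):
            os.makedirs(dst_dir)
        for name in os.listdir(src_dir):
            img = Image.open(os.path.join(src_dir, name)).convert('RGB')
            w, h = img.size
            s = min(w, h)
            # center crop to a square, then resize to the training resolution
            img = img.crop(((w - s) // 2, (h - s) // 2, (w + s) // 2, (h + s) // 2))
            img = img.resize((size, size), Image.BICUBIC)
            img.save(os.path.join(dst_dir, name))

    # preprocess_folder('raw/trainA', 'datasets/mydata/trainA')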

Theory:

The generator process:

[figure: generator architecture diagram]

The discriminator process:

[figure: discriminator architecture diagram]
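
To make the two diagrams concrete: the CycleGAN objective combines an adversarial loss for each generator/discriminator pair with a cycle-consistency loss that pulls F(G(a)) back to a and G(F(b)) back to b. The sketch below assumes TensorFlow 1.x and least-squares GAN losses as in the CycleGAN paper; the function and tensor names are illustrative and not taken from this repo's main.py.

    # Sketch of the CycleGAN objective (TensorFlow 1.x), assuming the generator and
    # discriminator outputs have already been built elsewhere. Names are illustrative.
    import tensorflow as tf

    def lsgan_losses(d_real, d_fake):
        # least-squares adversarial losses for one translation direction
        d_loss = tf.reduce_mean(tf.square(d_real - 1.0)) + tf.reduce_mean(tf.square(d_fake))
        g_loss = tf.reduce_mean(tf.square(d_fake - 1.0))
        return d_loss, g_loss

    def cycle_consistency_loss(real_a, recon_a, real_b, recon_b, lam=10.0):
        # L1 penalty keeping F(G(a)) close to a and G(F(b)) close to b
        return lam * (tf.reduce_mean(tf.abs(real_a - recon_a)) +
                      tf.reduce_mean(tf.abs(real_b - recon_b)))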

Result Improvement

  • Data augmentation
  • Resize convolution [4]
  • Instance normalization [5]

Data augmentation:

[figure: data augmentation examples]
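
A common augmentation recipe for CycleGAN training is to upscale each image slightly, take a random crop back to the training size, and flip it horizontally at random. This is a generic TensorFlow 1.x sketch, not necessarily the exact pipeline used in this repo; the sizes follow the usual 286-to-256 convention.

    # Generic augmentation sketch (TensorFlow 1.x): resize, random crop, random flip.
    import tensorflow as tf

    def augment(image, load_size=286, fine_size=256):
        image = tf.image.resize_images(image, [load_size, load_size])
        image = tf.random_crop(image, [fine_size, fine_size, 3])
        image = tf.image.random_flip_left_right(image)
        return image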

Instance normalization (comparison from the original paper, https://arxiv.org/abs/1607.08022):

[figure: instance normalization vs. batch normalization comparison from the paper]
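
Instance normalization [5] normalizes each image over its own spatial dimensions only, instead of sharing statistics across the whole batch as batch normalization does, which suits style and texture transfer. A minimal TensorFlow 1.x sketch, not this repo's exact layer; the variable and scope names are illustrative:

    # Minimal instance-normalization sketch (TensorFlow 1.x, NHWC input).
    import tensorflow as tf

    def instance_norm(x, epsilon=1e-5, name='instance_norm'):
        with tf.variable_scope(name):
            # per-image mean/variance over height and width only
            mean, var = tf.nn.moments(x, axes=[1, 2], keep_dims=True)
            channels = int(x.get_shape()[-1])
            scale = tf.get_variable('scale', [channels], initializer=tf.ones_initializer())
            offset = tf.get_variable('offset', [channels], initializer=tf.zeros_initializer())
            return scale * (x - mean) / tf.sqrt(var + epsilon) + offset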

Resize convolution (removes checkerboard artifacts):

[figures: checkerboard artifacts with deconvolution vs. resize convolution]
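
The idea behind resize convolution [4] is to replace transposed convolution, which produces checkerboard patterns when the kernel size is not divisible by the stride, with a nearest-neighbor upsample followed by an ordinary stride-1 convolution. A hedged TensorFlow 1.x sketch, with layer and scope names that are illustrative rather than taken from this repo:

    # Resize-convolution sketch (TensorFlow 1.x): upsample by nearest neighbor,
    # then apply a stride-1 convolution instead of conv2d_transpose.
    import tensorflow as tf

    def resize_conv(x, filters, kernel=3, scale=2, name='resize_conv'):
        with tf.variable_scope(name):
            new_size = tf.stack([tf.shape(x)[1] * scale, tf.shape(x)[2] * scale])
            x = tf.image.resize_nearest_neighbor(x, new_size)
            return tf.layers.conv2d(x, filters, kernel, strides=1, padding='same')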

Final Results:

[figures: final translation results]

Reference:

  [4] A. Odena, V. Dumoulin, C. Olah. "Deconvolution and Checkerboard Artifacts." Distill, 2016. https://distill.pub/2016/deconv-checkerboard/
  [5] D. Ulyanov, A. Vedaldi, V. Lempitsky. "Instance Normalization: The Missing Ingredient for Fast Stylization." arXiv:1607.08022, 2016.

Comments
  • Difference from original code

    Hi, very nice CycleGAN implementation. I have a few questions:

    1. What does "resize convolution" mean?
    2. How does this differ from the author's original code?
    opened by taki0112 7
  • Color inversion, black image and nan in loss after ~20 epochs

    I tried to train the model on the original summer2winter_yosemite dataset. After ~20 epochs all sample images turned completely black, and all loss parameters turned to nan. However, the model continued to run for 30 more epochs, regularly saving checkpoints, until I stopped it.

    I also used my own dataset, and it ran correctly for at least 70 epochs; unfortunately, the only result I got was color inversion of the images. Any advice on changing the training parameters (I used the defaults)?

    opened by victor-felicitas 0
  • How to change test output size?

    Hi! It is a great implementation of CycleGAN, providing excellent results on hiptensorflow and ROCm. However, I could not use it to generate test images of sizes other than 256x256. How can I change that?

    For now, I have trained the model on 256x256 images and am trying to test it on bigger ones. I tried adding two more flags to main.py:

        flags.DEFINE_integer("image_width", 420, "The size of image to use (will be center cropped) [256]")
        flags.DEFINE_integer("image_height", 420, "The size of image to use (will be center cropped) [256]")

    which I use later in the test section:

        test_A = tf.placeholder(tf.float32, [FLAGS.batch_size, FLAGS.image_height, FLAGS.image_width, FLAGS.c_dim], name='test_x')
        test_B = tf.placeholder(tf.float32, [FLAGS.batch_size, FLAGS.image_height, FLAGS.image_width, FLAGS.c_dim], name='test_y')

    However, I always get this error:

        Invalid argument: Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 105, computed = 64
        Traceback (most recent call last):
          File "main.py", line 285, in <module>
            tf.app.run()
          File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
            _sys.exit(main(_sys.argv[:1] + flags_passthrough))
          File "main.py", line 281, in main
            test_cyclegan()
          File "main.py", line 262, in test_cyclegan
            fake_img = sess.run(net_g_logits, feed_dict={in_var: sample_image})
          File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 767, in run
            run_metadata_ptr)
          File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 965, in _run
            feed_dict_string, options, run_metadata)
          File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1015, in _do_run
            target_list, options, run_metadata)
          File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1035, in _do_call
            raise type(e)(node_def, op, message)
        tensorflow.python.framework.errors_impl.InvalidArgumentError: Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 105, computed = 64
          [[Node: gen_A2B/u64/conv2d_transpose = Conv2DBackpropInput[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](gen_A2B/u64/conv2d_transpose/output_shape, gen_A2B/u64/W_deconv2d/read, gen_A2B/b_residual_add/8)]]

    Is there any way to choose the output image size? The original CycleGAN has a special option for it:

        resize_or_crop = 'resize_and_crop', -- resizing/cropping strategy: resize_and_crop | crop | scale_width | scale_height

    How can I implement something like that here?

    Any help would be appreciated!

    opened by victor-felicitas 0
  • About the imagepool.

    opened by Zardinality 0
  • Error in main.py?

    Hi @zsdonghao @luoxier, is there an error in your main.py?

        _, errGB2A = sess.run([g_b2a_optim, g_b2a_loss], feed_dict={real_A: batch_imgB, real_B: batch_imgB})

    Should it be:

        _, errGB2A = sess.run([g_b2a_optim, g_b2a_loss], feed_dict={real_A: batch_imgA, real_B: batch_imgB})

    Could you please check it and let me know, thanks.

    opened by yongqiangzhang1 2
  • Where are datasets shown in readme?

    opened by Zardinality 7
Releases (0.1)