[SIGGRAPH Asia 2019] Artistic Glyph Image Synthesis via One-Stage Few-Shot Learning

Overview

AGIS-Net

Introduction

This is the official PyTorch implementation of the paper Artistic Glyph Image Synthesis via One-Stage Few-Shot Learning.

paper | supplementary material

Abstract

Automatic generation of artistic glyph images is a challenging task that attracts many research interests. Previous methods either are specifically designed for shape synthesis or focus on texture transfer. In this paper, we propose a novel model, AGIS-Net, to transfer both shape and texture styles in one-stage with only a few stylized samples. To achieve this goal, we first disentangle the representations for content and style by using two encoders, ensuring the multi-content and multi-style generation. Then we utilize two collaboratively working decoders to generate the glyph shape image and its texture image simultaneously. In addition, we introduce a local texture refinement loss to further improve the quality of the synthesized textures. In this manner, our one-stage model is much more efficient and effective than other multi-stage stacked methods. We also propose a large-scale dataset with Chinese glyph images in various shape and texture styles, rendered from 35 professional-designed artistic fonts with 7,326 characters and 2,460 synthetic artistic fonts with 639 characters, to validate the effectiveness and extendability of our method. Extensive experiments on both English and Chinese artistic glyph image datasets demonstrate the superiority of our model in generating high-quality stylized glyph images against other state-of-the-art methods.

Model Architecture

[Figure: overall AGIS-Net architecture]

Skip Connection Local Discriminator
[Figure: skip-connection local discriminator]

Some Results

[Figure: comparison with other methods]

[Figure: comparison with other methods]

[Figure: cross-language synthesis results]

Prerequisites

  • Linux
  • CPU or NVIDIA GPU + CUDA cuDNN
  • Python 3
  • PyTorch 0.4.0+

Get Started

Installation

  1. Install PyTorch, torchvision, and dependencies from https://pytorch.org
  2. Install python libraries visdom and dominate:
    pip install visdom
    pip install dominate
  3. Clone this repo:
    git clone -b master --single-branch https://github.com/hologerry/AGIS-Net
    cd AGIS-Net
  4. Download the official pre-trained vgg19 model: vgg19-dcbb9e9d.pth, and put it under the models/ folder (see the snippet below)
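
If you prefer the command line, the weights can be fetched as shown below; this assumes vgg19-dcbb9e9d.pth is the standard torchvision VGG-19 release hosted at download.pytorch.org (adjust the URL if the project provides its own copy):

  mkdir -p models
  wget -P models/ https://download.pytorch.org/models/vgg19-dcbb9e9d.pth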

Datasets

The original datasets server is down; you can download the datasets from PKU Disk, Dropbox, or MEGA instead. Download them using the following script; four datasets and the raw average font style glyph images are available.

It may take a while; please be patient.

bash ./datasets/download_dataset.sh DATASET_NAME
  • base_gray_color English synthesized gradient glyph image dataset, proposed by MC-GAN.
  • base_gray_texture English artistic glyph image dataset, proposed by MC-GAN.
  • skeleton_gray_color Chinese synthesized gradient glyph image dataset proposed by us.
  • skeleton_gray_texture Chinese artistic glyph image dataset proposed by us.
  • average_skeleton Raw Chinese average font style (skeleton) glyph image dataset proposed by us.
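
For example, assuming the same script interface, the Chinese datasets and the average-skeleton images can be fetched with:

  bash ./datasets/download_dataset.sh skeleton_gray_color
  bash ./datasets/download_dataset.sh skeleton_gray_texture
  bash ./datasets/download_dataset.sh average_skeleton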

Please refer to the data for more details about our datasets and how to prepare your own datasets.

Model Training

  • To train a model, first download the training images (e.g., for English artistic glyph transfer):

    bash ./datasets/download_dataset.sh base_gray_color
    bash ./datasets/download_dataset.sh base_gray_texture
  • Train a model:

    1. Start the Visdom Visualizer

      python -m visdom.server -port PORT

      PORT is specified in train.sh

    2. Pretrain on synthesized gradient glyph image dataset

      bash ./scripts/train.sh base_gray_color GPU_ID

      GPU_ID indicates which GPU to use.

    3. Fine-tune on the artistic glyph image dataset

      bash ./scripts/train.sh base_gray_texture GPU_ID DATA_ID FEW_SIZE

      DATA_ID indicates which artistic font is fine-tuned.
      FEW_SIZE indicates the size of the few-shot set.

      The first run will raise an error saying:

      FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/base_gray_texture/base_gray_texture_DATA_ID_TIME/latest_net_G.pth'
      

      Copy the pretrained model to the above path:

      cp checkpoints/base_gray_color/base_gray_color_TIME/latest_net_* checkpoints/base_gray_texture/base_gray_texture_DATA_ID_TIME/

      Then start training again; it should work. A consolidated sketch of the whole workflow is shown after these steps.
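
Putting the steps above together, the full pretrain-then-fine-tune workflow looks roughly like the sketch below. GPU_ID, DATA_ID, FEW_SIZE, PORT, and TIME are placeholders: the first three are the arguments described above, PORT must match the value in scripts/train.sh, and TIME is the timestamp appended to each checkpoint directory.

  # start the visdom visualizer in the background
  python -m visdom.server -port PORT &

  # pretrain on the synthesized gradient glyph image dataset
  bash ./scripts/train.sh base_gray_color GPU_ID

  # first fine-tuning attempt; expected to stop with the FileNotFoundError above
  bash ./scripts/train.sh base_gray_texture GPU_ID DATA_ID FEW_SIZE || true

  # copy the pretrained weights into the fine-tuning checkpoint directory
  cp checkpoints/base_gray_color/base_gray_color_TIME/latest_net_* \
     checkpoints/base_gray_texture/base_gray_texture_DATA_ID_TIME/

  # re-run fine-tuning; it should now train to completion
  bash ./scripts/train.sh base_gray_texture GPU_ID DATA_ID FEW_SIZE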

Model Testing

  • To test a model, copy the trained model from the checkpoints folder to the pretrained_models folder (e.g., for English artistic glyph transfer; a combined example is shown after this list):

    cp checkpoints/base_gray_texture/base_gray_texture_DATA_ID_TIME/latest_net_* pretrained_models/base_gray_texture_DATA_ID/
  • Test a model:

    bash ./scripts/test_base_gray_texture.sh GPU_ID DATA_ID
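
For a concrete run, the copy-then-test sequence for one artistic font looks roughly like the sketch below; DATA_ID and TIME must match the fine-tuned checkpoint directory created during training, GPU_ID selects the GPU as before, and the mkdir is only needed if the target folder does not exist yet:

  # collect the fine-tuned weights for the chosen font
  mkdir -p pretrained_models/base_gray_texture_DATA_ID
  cp checkpoints/base_gray_texture/base_gray_texture_DATA_ID_TIME/latest_net_* pretrained_models/base_gray_texture_DATA_ID/

  # synthesize glyph images with the trained model
  bash ./scripts/test_base_gray_texture.sh GPU_ID DATA_ID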

Acknowledgements

This code is inspired by BicycleGAN.

Special thanks to MC-GAN and BicycleGAN for sharing their code and datasets.

Citation

If you find our work helpful, please cite our paper:

@article{Gao2019Artistic,
  author = {Gao, Yue and Guo, Yuan and Lian, Zhouhui and Tang, Yingmin and Xiao, Jianguo},
  title = {Artistic Glyph Image Synthesis via One-Stage Few-Shot Learning},
  journal = {ACM Trans. Graph.},
  issue_date = {November 2019},
  volume = {38},
  number = {6},
  year = {2019},
  articleno = {185},
  numpages = {12},
  url = {http://doi.acm.org/10.1145/3355089.3356574},
  publisher = {ACM}
} 

Copyright

The code and dataset are only allowed for PERSONAL and ACADEMIC usage.
