IAug_CDNet

Official PyTorch Implementation of Adversarial Instance Augmentation for Building Change Detection in Remote Sensing Images.

Overview

We propose a novel data-level solution, namely Instance-level change Augmentation (IAug), that generates bi-temporal images containing changes involving plentiful and diverse buildings by leveraging generative adversarial training. The key to IAug is to blend synthesized building instances onto appropriate positions of one of the bi-temporal images. To achieve this, a building generator is employed to produce realistic building images that are consistent with the given layouts; diverse styles are then transferred onto the generated images. We further propose context-aware blending for a realistic composite of the building and the background. We augment the existing CD datasets and also design a simple yet effective CD model, CDNet. Our method (CDNet + IAug) achieves state-of-the-art results on two building CD datasets (LEVIR-CD and WHU-CD). Notably, with only 20% of the training data we obtain results comparable to current state-of-the-art methods trained on 100% of the data. Extensive experiments validate the effectiveness of the proposed IAug. Our augmented dataset has a lower risk of class imbalance than the original one, and conventional learning on the synthesized dataset outperforms several popular cost-sensitive algorithms on the original dataset.

Building Generator

See building generator for details.

Synthesized images (256 × 256) by the generator, trained on the AIRS building dataset.

Synthesized images (64 × 64) by the generator, trained on the Inria building dataset.

Installation

This code requires PyTorch 1.0 and Python 3+. Please install the dependencies with

pip install -r requirements.txt

Generating Images Using Pretrained Model

Once the dataset is ready, the result images can be generated using pretrained models.

  1. Download the tar of the pretrained models from Google Drive.

  2. Generate images using the pretrained model.

    python test.py --model pix2pix --name $pretrained_folder --results_dir $results_dir --dataset_mode custom --label_dir $label_dir --label_nc 2 --batchSize $batchSize --load_size $size --crop_size $size --no_instance --which_epoch latest

    Here, pretrained_folder is the name of the checkpoint directory downloaded in Step 1, results_dir is the directory where the synthesized images are saved, label_dir is the directory containing the semantic label maps, and size is the size of the label map fed to the generator.

  3. The output images are stored in results_dir. You can view them using the autogenerated HTML file in that directory.

For simplicity, we also provide a test script in scripts/run_test.sh; modify label_dir and name, then run the script.

Training New Models

New models can be trained with the following commands.

  1. Prepare the dataset. Put the building image patches and the corresponding label maps in two folders (image_dir, label_dir).

  2. Train the model.

# To train on your own custom dataset
python train.py --name [experiment_name] --dataset_mode custom --label_dir [label_dir] --image_dir [image_dir] --label_nc 2

There are many options you can specify; run python train.py --help to list them. The specified options are printed to the console. To choose which GPUs to utilize, use --gpu_ids; for example, to use the second and third GPUs, pass --gpu_ids 1,2.

To log training, use --tf_log for TensorBoard. The logs are stored in [checkpoints_dir]/[name]/logs.
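
For example, keeping the bracketed placeholders, a run on the second and third GPUs with TensorBoard logging would combine these options as follows:

python train.py --name [experiment_name] --dataset_mode custom --label_dir [label_dir] --image_dir [image_dir] --label_nc 2 --gpu_ids 1,2 --tf_log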

Acknowledgments

This code borrows heavily from SPADE.

Color Transfer

See Color Transfer for details.

We resort to a simple yet effective non-learning approach to match the color distributions of the two image sets (GAN-generated images and the original images in the change detection dataset).

(Figure: color transfer examples)
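
As a rough illustration of the idea, here is a minimal Python sketch of classic Reinhard-style mean/standard-deviation matching in Lab space. This is an assumption for illustration only; the Matlab demos below are the reference implementation.

import cv2
import numpy as np

def transfer_color(source_bgr, target_bgr):
    """Shift the color statistics of source_bgr towards those of target_bgr.
    Both inputs are uint8 BGR images (e.g. read with cv2.imread)."""
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    # per-channel mean and standard deviation
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    # scale and shift each channel so its statistics match the target image
    out = (src - src_mean) / (src_std + 1e-6) * tgt_std + tgt_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)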

Requirements

  • Matlab

Usage

We provide two demos to show the color transfer.

If you do not have the object mask, edit the file ColorTransferDemo.m and modify the file paths of Im_target and Im_source. After you run this file, the transferred image is saved as result_Image.jpg.

If you have both the building image and the object mask, edit the file ColorTransferDemo_with_mask.m and modify the file paths of Im_target, Im_source, m_target and m_source. After you run this file, the transferred image is saved as result_Image.jpg.

Acknowledgments

This code borrows heavily from https://github.com/AissamDjahnine/ColorTransfer.

Shadow Extraction

We show a simple shadow extraction method. The extracted shadow information is used in the later compositing step to make the image composite more realistic.

(Figure: shadow extraction example)
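
The idea can be as simple as thresholding dark pixels adjacent to the building. The sketch below is an illustrative assumption, not necessarily what extract_shadow.py (described under Usage) implements.

import cv2
import numpy as np

def extract_shadow(image_bgr, building_mask, value_threshold=80):
    """Return a binary mask of dark, shadow-like pixels outside the building.
    building_mask is a binary mask of the building footprint."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    dark = (hsv[:, :, 2] < value_threshold).astype(np.uint8)  # low brightness
    outside = (building_mask == 0).astype(np.uint8)           # exclude the roof itself
    shadow = cv2.bitwise_and(dark, outside)
    # clean up small speckles with a morphological opening
    kernel = np.ones((3, 3), np.uint8)
    shadow = cv2.morphologyEx(shadow, cv2.MORPH_OPEN, kernel)
    return shadow * 255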

We provide some examples for shadow extraction. The samples are in the folder samples\shadow_sample.

Usage

You can edit the file extract_shadow.py and modify the paths of image_folder, label_folder, and out_folder. Make sure that the image files are in image_folder and the corresponding label files are in label_folder. Run the following script:

python extract_shadow.py

Once the script has run successfully, the results can be found in out_folder.

Instance Augmentation

Here, we provide the Python implementation of instance augmentation.


We provide some examples for instance augmentation. The samples are in the folder samples\SYN_CD.

Usage

You can edit the file composite_CD_sample.py and modify the following values:

import os

# first define some paths
A_folder = r'samples\LEVIR\A'
B_folder = r'samples\LEVIR\B'
L_folder = r'samples\LEVIR\label'
ref_folder = r'samples\LEVIR\ref'
# instance paths
src_folder = r'samples\SYN_CD\image'  # test
label_folder = r'samples\SYN_CD\shadow'  # test
out_folder = r'samples\SYN_CD\out_sample'
os.makedirs(out_folder, exist_ok=True)
# how many instances to paste per sample
M = 50

Then, run the following script:

python composite_CD_sample.py

Once the script has run successfully, the results can be found in out_folder.
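
At its core, compositing amounts to pasting a synthesized building patch (together with its shadow mask) onto one of the bi-temporal images and marking the pasted pixels as changed. The sketch below uses assumed array names and omits position sampling and context-aware blending, which are handled by composite_CD_sample.py.

import numpy as np

def paste_instance(image, change_label, instance, instance_mask, x, y):
    """Paste one synthesized building patch onto the image at (x, y) and mark
    the pasted pixels as changed. instance_mask is a binary mask covering the
    building (and its shadow)."""
    h, w = instance.shape[:2]
    roi = image[y:y + h, x:x + w]               # view into the target image
    mask = instance_mask > 0
    roi[mask] = instance[mask]                  # overwrite only the masked pixels
    change_label[y:y + h, x:x + w][mask] = 255  # the new building counts as a change
    return image, change_label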

CDNet

Coming soon~~~~

Citation

If you use this code for your research, please cite our paper:

@Article{chen2021,
    title={Adversarial Instance Augmentation for Building Change Detection in Remote Sensing Images},
    author={Hao Chen and Wenyuan Li and Zhenwei Shi},
    year={2021},
    journal={IEEE Transactions on Geoscience and Remote Sensing},
    volume={},
    number={},
    pages={1-16},
    doi={10.1109/TGRS.2021.3066802}
}