Change is Everywhere: Single-Temporal Supervised Object Change Detection in Remote Sensing Imagery (ICCV 2021)

Overview

Change is Everywhere: Single-Temporal Supervised Object Change Detection in Remote Sensing Imagery

by Zhuo Zheng, Ailong Ma, Liangpei Zhang and Yanfei Zhong

[Paper] [BibTeX]



This is an official implementation of STAR and ChangeStar in our ICCV 2021 paper Change is Everywhere: Single-Temporal Supervised Object Change Detection for High Spatial Resolution Remote Sensing Imagery.

We hope that STAR will serve as a solid baseline and help ease future research in weakly-supervised object change detection.


News

  • 2021/08/28, The code is available.
  • 2021/07/23, The code will be released soon.
  • 2021/07/23, This paper is accepted by ICCV 2021.

Features

  • Learning a good change detector from single-temporal supervision (a conceptual sketch follows this list).
  • Strong baselines for bitemporal and single-temporal supervised change detection.
  • A clean codebase for weakly-supervised change detection.
  • Support for both bitemporal and single-temporal supervised settings.
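
The sketch below illustrates the core idea behind single-temporal (STAR) supervision described in the paper: unpaired single-temporal images are paired inside a mini-batch (e.g., by a random permutation), and the change label is derived as the logical XOR of their semantic masks. This is a simplified conceptual sketch, not the repository's actual implementation; the function name, tensor shapes, and the single "building" class are illustrative assumptions.

# Conceptual sketch of single-temporal (STAR) supervision; NOT the exact code in this repo.
import torch

def pseudo_bitemporal_pairs(images, building_masks):
    # images:         float tensor of shape (B, C, H, W), single-temporal images
    # building_masks: binary tensor of shape (B, H, W), 1 = object (e.g., building), 0 = background
    perm = torch.randperm(images.size(0))      # randomly pair images within the mini-batch
    images_t2 = images[perm]                   # pseudo "time 2" images
    masks_t2 = building_masks[perm]
    # A pixel counts as "changed" when exactly one of the two masks labels it as an object.
    change_label = torch.logical_xor(building_masks.bool(), masks_t2.bool()).float()
    return images, images_t2, change_label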

Citation

If you use STAR or ChangeStar (FarSeg) in your research, please cite the following papers:

@inproceedings{zheng2021change,
  title={Change is Everywhere: Single-Temporal Supervised Object Change Detection for High Spatial Resolution Remote Sensing Imagery},
  author={Zheng, Zhuo and Ma, Ailong and Zhang, Liangpei and Zhong, Yanfei},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={},
  year={2021}
}

@inproceedings{zheng2020foreground,
  title={Foreground-Aware Relation Network for Geospatial Object Segmentation in High Spatial Resolution Remote Sensing Imagery},
  author={Zheng, Zhuo and Zhong, Yanfei and Wang, Junjue and Ma, Ailong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4096--4105},
  year={2020}
}

Getting Started

Install EVer

pip install --upgrade git+https://github.com/Z-Zheng/ever.git

Requirements:

  • pytorch >= 1.6.0
  • python >= 3.6
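
A quick, optional environment check (a minimal sketch; it only restates the requirements above, and the CUDA check reflects that the provided training scripts launch distributed GPU training):

# Optional sanity check for the environment (illustrative sketch).
import sys
import torch

assert sys.version_info >= (3, 6), "Python >= 3.6 is required"
print(torch.__version__)            # expect >= 1.6.0
print(torch.cuda.is_available())    # the provided scripts use torch.distributed / NCCL on GPUs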

Prepare Dataset

  1. Download the xView2 dataset (training set and tier3 set) and the LEVIR-CD dataset.

  2. Create soft links:

ln -s </path/to/xView2> ./xView2
ln -s </path/to/LEVIR-CD> ./LEVIR-CD
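
If you hit label-range issues with LEVIR-CD, note that its original change masks store the values {0, 255}, while the segmentation loss in this codebase expects {0, 1} (see the "[Feature] support [0~255] gt" comment below). A minimal preprocessing sketch follows; the glob pattern is an assumption about where your masks live, and it overwrites the mask files in place:

# Remap LEVIR-CD ground-truth masks from {0, 255} to {0, 1}.
# The glob pattern is a placeholder assumption; adjust it to your actual layout.
import glob
import numpy as np
from PIL import Image

for path in glob.glob('./LEVIR-CD/*/label/*.png'):
    mask = np.array(Image.open(path))
    Image.fromarray((mask > 0).astype(np.uint8)).save(path)  # 255 -> 1, overwrites in place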

Training and Evaluation under Single-Temporal Supervision

bash ./scripts/trainxView2/r50_farseg_changemixin_symmetry.sh

Training and Evaluation under Bitemporal Supervision

bash ./scripts/bisup_levircd/r50_farseg_changemixin.sh
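
A question that comes up in the comments below is whether the bitemporal supervised baseline simply concatenates bitemporal features before a change classifier. The sketch below is a generic illustration of that concatenation-based fusion pattern, not the repository's ChangeMixin module; the class name, channel sizes, and layer choices are assumptions:

# Generic concatenation-based bitemporal change head (illustration only; not ChangeMixin).
import torch
import torch.nn as nn

class ConcatChangeHead(nn.Module):
    def __init__(self, in_channels=256, hidden=96):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Conv2d(2 * in_channels, hidden, kernel_size=3, padding=1),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1),   # single-channel change logit
        )

    def forward(self, feat_t1, feat_t2):
        # Concatenate per-date features along the channel dimension, then classify change.
        return self.classifier(torch.cat([feat_t1, feat_t2], dim=1))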

License

ChangeStar is released under the Apache License 2.0.

Copyright (c) Zhuo Zheng. All rights reserved.

Comments
  • Can ChangeStar be used for general CD?

    Hi,

    Thanks for the great work. I wonder whether this work can be used for general change detection, i.e., multi-class rather than single-class.

    If yes, have you done such experiments? Thanks!

    opened by Richardych 3
  • How to add ChangeMixin when using bitemporal supervision?

    Hello, I have a few questions about your repo:

    1. How do I add ChangeMixin when using bitemporal supervision? I see it in Table 4 of your paper, but I can't find it in the code.
    2. Could ChangeStar be trained with single-temporal supervision on LEVIR-CD? (The other dataset is too big for me to download and train on.)
    3. Do your bitemporal supervised methods just use torch.cat in the final layer? Sorry for asking these questions.
    opened by csliuchang 3
  • ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)

    Traceback (most recent call last):
      File "./train_sup_change.py", line 48, in <module>
        blob = trainer.run(after_construct_launcher_callbacks=[register_evaluate_fn])
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 117, in run
        test_data_loader=kw_dataloader['testdata_loader'])
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/launcher.py", line 232, in train_by_config
        signal_loss_dict = self.train_iters(train_data_loader, test_data_loader=test_data_loader, **config)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/launcher.py", line 174, in train_iters
        is_master=self._master)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/iterator.py", line 30, in next
        data = next(self._iterator)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
        data = self._next_data()
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataset.py", line 218, in __getitem__
        return self.datasets[dataset_idx][sample_idx]
      File "/home/yujianzhi/tem/ChangeStar-master/data/levir_cd/dataset.py", line 30, in __getitem__
        blob = self.transforms(**dict(image=imgs, mask=gt))
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/composition.py", line 191, in __call__
        data = t(force_apply=force_apply, **data)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/transforms_interface.py", line 90, in __call__
        return self.apply_with_params(params, **kwargs)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/transforms_interface.py", line 103, in apply_with_params
        res[key] = target_function(arg, **dict(params, **target_dependencies))
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/augmentations/crops/transforms.py", line 48, in apply
        return F.random_crop(img, self.height, self.width, h_start, w_start)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/augmentations/crops/functional.py", line 28, in random_crop
        crop_height=crop_height, crop_width=crop_width, height=height, width=width
    ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)

    Traceback (most recent call last):
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
        main()
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/yujianzhi/anaconda3/envs/CStar/bin/python', '-u', './train_sup_change.py', '--local_rank=0', '--config_path=levircd.r50_farseg_changestar_bisup', '--model_dir=./log/bisup-LEVIRCD/r50_farseg_changestar']' returned non-zero exit status 1.

    It says "ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)", but my images are exactly 512×512.

    opened by themoongodyue 3
  • How to get the bitemporal images' labels if the model is trained on LEVIR-CD dataset?

    Hello, I'm very interested in your work, but I encountered a problem during my research. If the model is trained on the LEVIR-CD dataset, how are the change labels obtained when there are no segmentation maps for each bitemporal image in the dataset? I would appreciate it if you could resolve my problem.

    opened by SONGLEI-arch 2
  • Reproduction Problem

    Hello author.

    Your work is great!

    But I ran into a problem while running your code.

    The performance I obtained (shown in the attached screenshot) is much higher than the IoU reported in Table 1 of your paper. Can you tell me the reason?

    All hyperparameters and data are identical.

    opened by seominseok0429 1
  • AssertionError error

    Hello, this is really great work. I have one question for you. The LEVIR-CD dataset trains well, but the xView2 dataset gives the following unknown error.

    Do you have any idea how to fix it? All processes follow the recipe exactly (see the attached screenshot).

    opened by seominseok0429 1
  • RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8

    I'm going crazy, please help me.

    Traceback (most recent call last):
      File "./train_sup_change.py", line 48, in <module>
        blob = trainer.run(after_construct_launcher_callbacks=[register_evaluate_fn])
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 98, in run
        kwargs.update(dict(model=self.make_model()))
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 87, in make_model
        model = nn.parallel.DistributedDataParallel(
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__
        dist._verify_model_across_ranks(self.process_group, parameters)
    RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8
    ncclSystemError: System call (socket, malloc, munmap, etc) failed.
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 31335) of binary: /home/cy/miniconda3/envs/STAnet/bin/python
    ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed

    opened by themoongodyue 1
  • Evaluation

    Excuse me, I want to know how to run inference with this model after training. Also, if you could offer a link to usage documentation for the 'ever' library, that would be fantastic.

    opened by LIUZIJING-CHN 1
  • changestar_sisup results

    Hi, I have trained the model under single-temporal supervision, but the F1 score is only 0.73, which is worse than the result in your paper. Is there anything wrong with my experiment? Below is my training log:

    1666753326.225779.log

    After training, I only tested on the LEVIR-CD test set.

    opened by max2857 0
  • A question about PCC

    Hello, I have a question about PCC:

    PCC is mentioned in the paper. After obtaining classification results from the segmentation model, how is the change detection result derived from them? Is it a direct subtraction?

    opened by Hyd1999618 0
  • [Feature] support [0~255] gt

    The original LEVIR-CD ground truth consists of 0 and 255.

    However, the segmentation loss in this code only works when the labels consist of 0 and 1.

    Therefore, I added code to remap 255 to 1 in the ground truth.

    opened by seominseok0429 1
Releases (v0.1.0)
Owner
Zhuo Zheng
CV IN RS. Ph.D. Student.