LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation (NeurIPS 2021 Benchmark and Dataset Track)

Overview

LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation

by Junjue Wang, Zhuo Zheng, Ailong Ma, Xiaoyan Lu, and Yanfei Zhong


This is the official implementation of LoveDA from our NeurIPS 2021 paper "LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation".

Citation

If you use LoveDA in your research, please cite our NeurIPS 2021 paper.

    @inproceedings{wang2021loveda,
        title={Love{DA}: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation},
        author={Junjue Wang and Zhuo Zheng and Ailong Ma and Xiaoyan Lu and Yanfei Zhong},
        booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
        year={2021},
        url={https://openreview.net/forum?id=bLBIbVaGDu}
    }

Dataset

Coming Soon!

Comments
  • bad cbst result


    Hello, we re-ran cbst_train with the default settings you provide, but got bad results as shown in the attached figure, even worse than the source-only method. I wonder about the stability of CBST training, and I would appreciate it if you could provide the training log for CBST. Thank you very much! [Attached screenshot: 屏幕截图 2021-11-11 112454.png]

    bug 
    opened by Luffy03 14
  • About the accuracy of the CodaLab website


    Why is the domain adaptation mIoU on the CodaLab site so high? Shouldn't the "Oracle" mIoU reported in the paper be the upper bound for this domain adaptation task?

    question 
    opened by Hcshenziyang 6
  • Results submitted to Codalab


    The results submitted to CodaLab get a zero score and zero ExecutionTime. I wonder whether something is wrong with CodaLab or whether it is my own mistake. The output class indices are 0–6 with 1024×1024 pixels.
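
    As a sanity check on the file format, here is a minimal sketch of writing a submission mask as a single-channel uint8 PNG with class indices 0–6 (the helper name and use of PIL are my own illustrative assumptions, not the official specification):

    import numpy as np
    from PIL import Image

    def save_mask(pred: np.ndarray, out_path: str) -> None:
        # pred: (1024, 1024) array of class indices in [0, 6]
        assert pred.shape == (1024, 1024)
        Image.fromarray(pred.astype(np.uint8)).save(out_path)  # saved as mode 'L'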

    question 
    opened by Luffy03 6
  • Invitation to incorporate the LoveDA dataset into MMSegmentation


    Hi, I am a member of OpenMMLab, which develops MMSegmentation. Our vision is to provide up-to-date methods and datasets (i.e., benchmarks) for researchers and the community around the world.

    First, congratulations on the acceptance at NeurIPS'21. I think this dataset and benchmark will definitely help the remote sensing image field, where semantic segmentation plays an important role.

    Frankly speaking, right now we do not have much manpower. Would you like to help us incorporate your dataset into MMSegmentation? We appreciate all contributors and users; here are our contributing details.

    I think if LoveDA were provided in MMSegmentation, it could let more people use and cite this excellent work, especially those who want to establish a standard segmentation benchmark.

    Looking forward to your reply. Wish you all the best.

    Best,

    good first issue 
    opened by MengzhangLI 6
  • Potential shift in class labels


    Following up on the discussion from #23, I was wondering whether, in the context of the semantic segmentation task, there could be a shift in class labels between the data on which the pretrained model hrnetw32.pth was trained and the data provided in this repo.

    Here I have visualised the true and predicted segmentations on training image 1338 for two different COLOR_MAPs from the repo (render.py and data.loveda.py):

    [Two screenshots (2022-03-26, 10:06:23 and 10:06:31): true vs. predicted segmentations rendered with each COLOR_MAP]

    Based on the input image, we can see that the colours are correct in the top-left and bottom-right visualisations. The black colour in the top-right image corresponds to the label IGNORE with RGB values (0,0,0), while in the bottom-left the black colour has RGB values (7,7,7). This seems to be because the COLOR_MAP in data.loveda.py only has 7 classes indexed 0–6, so agriculture, which has label 7 in the mask images, is never colour-mapped.

    This seems to be related to the difference between labels in the current repo:

    Category labels: background – 1, building – 2, road – 3, water – 4, barren – 5, forest – 6, agriculture – 7. And the no-data regions were assigned 0 which should be ignored. The provided data loader will help you construct your pipeline.

    and the ones described on CodaLab:

    Classes indexes: Background - 0, Building - 1, Road - 2, Water - 3, Barren - 4, Forest - 5, Agriculture - 6

    Could this class label offset be the cause, or is there perhaps an alternative explanation that I have not thought of?
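
    If the offset is real, a minimal remapping sketch would look like this (purely illustrative; the mapping between the repo convention 0 = no-data, 1–7 = classes and the CodaLab convention 0–6 is exactly the assumption in question, and the IGNORE value is hypothetical):

    import numpy as np

    IGNORE = 255  # hypothetical ignore index after remapping

    def mask_to_codalab(mask: np.ndarray) -> np.ndarray:
        # Repo masks: 0 = no-data (ignore), 1-7 = background..agriculture.
        # CodaLab:    0-6 = background..agriculture.
        out = mask.astype(np.int16) - 1  # shift 1-7 down to 0-6
        out[mask == 0] = IGNORE          # keep no-data regions out of scoring
        return out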

    question 
    opened by keliive 3
  • Dataset links for Google drive return a 404 error


    The Google Drive links for the dataset in this repository's README.md, as well as on the competition page, are broken as of 30-01-2022 and return a 404 error. Please update them with working links.

    opened by AnkushMalaker 3
  • The different resolutions in training and testing


    I found that during training the input resolution is 512×512, while in the test phase it is 1024×1024. Could you please tell me why?
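
    One plausible reading (my assumption, not the authors' stated rationale) is that 512×512 random crops of the 1024×1024 tiles are used as training augmentation, while the fully convolutional model can consume the full tile at test time; a minimal sketch:

    import torch
    from torchvision import transforms

    train_tf = transforms.RandomCrop(512)   # training: random 512x512 crops

    tile = torch.rand(3, 1024, 1024)        # one 1024x1024 RGB tile
    crop = train_tf(tile)                   # (3, 512, 512) during training
    full = tile.unsqueeze(0)                # (1, 3, 1024, 1024) at test time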

    question 
    opened by Luffy03 3
  • Meaning of line 228 in the Unsupervised_Domian_Adaptation/utils/tools.py


    Hello,

    Thank you very much for making your excellent work open to the public.

    May I ask the meaning of line 228 in tools.py for Unsupervised Domain Adaptation? I found that running bash ./scripts/predict_cbst.sh raises AttributeError: 'NoneType' object has no attribute 'info'. The error comes from line 228 together with the default setting _default_logger=None, so I wonder what this line is for. I would also like to let you know that after commenting out line 228, the command runs successfully.

    Many thanks for your help.
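
    For context, a defensive pattern that would avoid this crash (a sketch of one possible fix, not the repo's actual code) is to guard the logger call on None:

    _default_logger = None  # module-level default, as described in the issue

    def log_info(msg: str) -> None:
        # Only log when a logger has been configured; skip silently otherwise.
        if _default_logger is not None:
            _default_logger.info(msg)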

    opened by simonep1052 2
  • [Request] Release codalab evaluation script


    Would it be possible to release the evaluation script from CodaLab? The file format details are a bit confusing. For example, if I set empty regions as transparent or embed a colour palette in the image, the evaluation script shows this warning:

    /opt/conda/lib/python2.7/site-packages/PIL/Image.py:870: UserWarning: Palette images with Transparency   expressed in bytes should be converted to RGBA images
      'to RGBA images')
    

    Even if I remove the colour palette, I get the following error:

    Traceback (most recent call last):
      File "/tmp/codalab/tmpS_IrwU/run/program/evaluate.py", line 157, in <module>
        metric.forward(gt[valid_inds], mask[valid_inds])
      File "/tmp/codalab/tmpS_IrwU/run/program/evaluate.py", line 22, in forward
        cm = sparse.coo_matrix((v, (y_true, y_pred)), shape=(self.num_classes, self.num_classes), dtype=np.float32)
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 182, in __init__
        self._check()
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 219, in _check
        nnz = self.nnz
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 196, in getnnz
        raise ValueError('row, column, and data array must all be the '
    ValueError: row, column, and data array must all be the same length
    

    I made sure all my images are 1024 × 1024 with a single uint8 channel. The class ids have been assigned as per the specification, with empty regions assigned the value 15.

    Classes indexes

    Background - 0
    Building - 1
    Road - 2
    Water - 3
    Barren - 4
    Forest - 5
    Agriculture - 6
    

    So, it would be helpful to see the evaluation script in order to generate compatible prediction images.
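
    For context, the traceback points at a confusion matrix built in scipy's COO format, which raises exactly this ValueError when the row, column, and data arrays differ in length (e.g., when prediction and ground truth cover different numbers of valid pixels). A rough sketch of that style of mIoU computation, reconstructed from the traceback rather than taken from the released script:

    import numpy as np
    from scipy import sparse

    def miou(y_true, y_pred, num_classes=7):
        # Both label arrays must flatten to the same length, otherwise
        # coo_matrix raises the "same length" ValueError seen above.
        y_true = np.asarray(y_true).ravel()
        y_pred = np.asarray(y_pred).ravel()
        v = np.ones_like(y_true, dtype=np.float32)
        cm = sparse.coo_matrix((v, (y_true, y_pred)),
                               shape=(num_classes, num_classes),
                               dtype=np.float32).toarray()
        inter = np.diag(cm)
        union = cm.sum(axis=0) + cm.sum(axis=1) - inter
        return float(np.mean(inter / np.maximum(union, 1)))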

    opened by digital-idiot 2
  • Can you provide the pretrained weights of the adversarial learning methods?


    Hi, I would like to use the visualised results of Adaptseg and CLAN for comparison. Could you provide the pretrained (Rural to Urban) weights for these two networks?

    opened by csliujw 2
  • Running pretrained model without CUDA


    Hi,

    Is there a way to run ./scripts/predict_test.sh without CUDA?

    I am using the LoveDA dataset and the pretrained model weights hrnetw32.pth as described in the README.

    Initially I got the error urllib.error.HTTPError: HTTP Error 403: Forbidden, which I fixed by setting pretrained=False as recommended here: https://github.com/Junjue-Wang/LoveDA/issues/9.

    Then when rerunning the predict_test.sh, I got the error:

    Traceback (most recent call last):
      File "predict.py", line 52, in <module>
        predict_test(args.ckpt_path, args.config_path, args.out_dir)
      File "predict.py", line 38, in predict_test
        model.cuda()
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 680, in cuda
        return self._apply(lambda t: t.cuda(device))
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 593, in _apply
        param_applied = fn(param)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 680, in <lambda>
        return self._apply(lambda t: t.cuda(device))
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    

    I then commented out line 38: https://github.com/Junjue-Wang/LoveDA/blob/4d574ce08f84cbc8d27becf2bd9dce8fbb7f50f8/Semantic_Segmentation/predict.py#L38 and, after rerunning predict_test.sh, got the output:

    Load model!
    INFO:data.loveda:./LoveDA/Val/Urban/images_png -- Dataset images: 0
    INFO:data.loveda:./LoveDA/Val/Rural/images_png -- Dataset images: 0
    INFO:ever.core.logger:HRNetEncoder: pretrained = False
    0it [00:00, ?it/s]
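
    A device-agnostic variant of the failing step (a sketch of a possible workaround, not the repo's supported path) would move the model to CUDA only when it is available:

    import torch

    def to_best_device(model: torch.nn.Module) -> torch.nn.Module:
        # Fall back to CPU when torch was built without CUDA support.
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        return model.to(device)

    Separately, the "Dataset images: 0" lines in the output suggest the image directories were not found, so the dataset path in the config may also need checking.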
    
    question 
    opened by keliive 2
  • bash eval_hrnetw32.sh Error!


    Traceback (most recent call last):
      File "/home/libowen/LoveDA-master/Semantic_Segmentation/predict.py", line 52, in <module>
        predict_test(args.ckpt_path, args.config_path, args.out_dir)
      File "/home/libowen/LoveDA-master/Semantic_Segmentation/predict.py", line 37, in predict_test
        model.load_state_dict(model_state_dict)
      File "/home/libowen/.conda/envs/bw/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for HRNetFusion:
        Missing key(s) in state_dict: "backbone.hrnet.conv1.weight", "backbone.hrnet.bn1.weight", "backbone.hrnet.bn1.bias", "backbone.hrnet.bn1.running_mean"
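
    The missing-key names suggest the checkpoint and model disagree on parameter names, e.g., a wrapper prefix added when the weights were saved from a DistributedDataParallel model. A sketch of one common fix, stripping a 'module.' prefix before loading (an assumption about the cause, not a confirmed diagnosis):

    import torch

    def load_ckpt(model: torch.nn.Module, ckpt_path: str) -> None:
        # Assumes the checkpoint file is a plain state_dict.
        state = torch.load(ckpt_path, map_location='cpu')
        # Strip a DistributedDataParallel 'module.' prefix if present.
        state = {k[len('module.'):] if k.startswith('module.') else k: v
                 for k, v in state.items()}
        model.load_state_dict(state)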

    question 
    opened by kukujoyyo 1
  • Predict.py Problem


    I downloaded the pretrained weights and used predict.py to test some images, but hit this bug. What is the problem with the fuse_layers?

    File "test4/Road/LoveDA-master/Semantic_Segmentation/module/baseline/base_hrnet/_hrnet.py", line 394, in forward y = y + self.fuse_layers[i][j](x[j]) RuntimeError: The size of tensor a (500) must match the size of tensor b (504) at non-singleton dimension 3

    question 
    opened by Acid-knight 3
  • Can this work be run with one GPU?


    Can we run this work with one GPU? If possible, how should the parameters be set?

    I've got the issue below:

    PS F:\Models\LoveDA-master\Semantic_Segmentation> bash ./scripts/train_hrnetw32.sh
    NOTE: Redirects are currently not supported in Windows or MacOs.
    Init Trainer
    Set Seed Torch
    Traceback (most recent call last):
      File "train.py", line 79, in <module>
        trainer = er.trainer.get_trainer('th_amp_ddp')()
      File "D:\ProgramData\Anaconda3\lib\site-packages\ever\api\trainer\th_amp_ddp_trainer.py", line 77, in __init__
        torch.cuda.set_device(self.args.local_rank)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\cuda\__init__.py", line 311, in set_device
        device = _get_device_index(device)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\cuda\_utils.py", line 34, in _get_device_index
        return _torch_get_device_index(device, optional, allow_cpu)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\_utils.py", line 537, in _get_device_index
        'or an integer, but got:{}'.format(device))
    ValueError: Expected a torch.device with a specified index or an integer, but got:None
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 108252) of binary: D:\ProgramData\Anaconda3\python.exe
    Traceback (most recent call last):
      File "D:\ProgramData\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "D:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "D:\ProgramData\Anaconda3\Scripts\torchrun.exe\__main__.py", line 7, in <module>
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\elastic\multiprocessing\errors\__init__.py", line 345, in wrapper
        return f(*args, **kwargs)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\run.py", line 724, in main
        run(args)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\run.py", line 718, in run
        )(*cmd_args)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\launcher\api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\launcher\api.py", line 247, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

    train.py FAILED

    Failures:
      <NO_OTHER_FAILURES>

    Root Cause (first observed failure):
    [0]:
      time       : 2022-11-13_13:10:33
      host       : KWPAACQRFTY8V05
      rank       : 0 (local_rank: 0)
      exitcode   : 1 (pid: 108252)
      error_file : <N/A>
      traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
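
    The ValueError shows torch.cuda.set_device received local_rank=None, i.e., the launcher never populated the rank. A sketch of a defensive default for single-GPU runs (an illustrative workaround, not the authors' configuration):

    import argparse
    import torch

    parser = argparse.ArgumentParser()
    # Default to GPU 0 so a plain single-GPU launch still works.
    parser.add_argument('--local_rank', type=int, default=0)
    args, _ = parser.parse_known_args()

    if torch.cuda.is_available():
        torch.cuda.set_device(args.local_rank)

    Note also that torch.distributed.launch passes --local_rank as a command-line argument, while torchrun exposes it via the LOCAL_RANK environment variable, which may explain why the argument arrives as None here.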

    question 
    opened by kukujoyyo 1
  • no such file problem when training ST 2urban scripts


    When training the self-training 2urban scripts, such as CBST_train.py and IAST_train.py, there is a problem: FileNotFoundError: No such file: '/home/xxx/ssuda/UDA/log/cbst/2urban/pseudo_label/3814.png'. I guess this is because the batch size is set to 2, and as expected the problem is solved when the batch size is changed to 1.

    So I wonder: is this a bug or expected behaviour?

    Thanks for your excellent work!
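
    As a small diagnostic (purely illustrative; the directory layout is taken from the error message, and the helper is hypothetical), one could check which pseudo-labels were never written before the self-training stage resumes:

    import os

    def check_pseudo_labels(image_ids, label_dir):
        # Report pseudo-label PNGs that the generation stage failed to write.
        missing = [i for i in image_ids
                   if not os.path.exists(os.path.join(label_dir, f'{i}.png'))]
        if missing:
            raise FileNotFoundError(
                f'{len(missing)} pseudo-labels missing, e.g. {missing[:5]}')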

    question 
    opened by lyhnsn 2
Releases
v0.2.0-alpha

Owner
Kingdrone (Deep learning in RS)