A Semantic Segmentation Network for Urban-Scale Building Footprint Extraction Using RGB Satellite Imagery

This repository is the official implementation of A Semantic Segmentation Network for Urban-Scale Building Footprint Extraction Using RGB Satellite Imagery by Aatif Jiwani, Shubhrakanti Ganguly, Chao Ding, Nan Zhou, and David Chan.

[Figure: model visualization]

Requirements

  1. To install GDAL/georaster, please follow this doc for instructions (one common setup is sketched after this list).
  2. Install the remaining dependencies from requirements.txt:
pip install -r requirements.txt
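
One way to satisfy the GDAL/georaster dependency on Debian/Ubuntu is sketched below. This is an assumption for convenience; package names and versions vary by system, and the linked doc remains authoritative:

```bash
# Assumed Debian/Ubuntu setup; the linked GDAL/georaster doc is authoritative.
sudo apt-get install -y gdal-bin libgdal-dev
pip install GDAL==$(gdal-config --version)  # match the Python binding to the system GDAL
pip install georaster
```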

Datasets

Downloading the Datasets

  1. To download the AICrowd dataset, please go here. You will have to either create an account or sign in to access the training and validation sets. Please store the training/validation sets inside <root>/AICrowd/<train | val> for ease of conversion.
  2. To download the Urban3D dataset, please run:
aws s3 cp --recursive s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/01-Provisional_Train/ <root>/Urban3D/train
aws s3 cp --recursive s3://spacenet-dataset/Hosted-Datasets/Urban_3D_Challenge/02-Provisional_Test/ <root>/Urban3D/test
  3. To download the SpaceNet Vegas dataset, please run:
aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/SN2_buildings_train_AOI_2_Vegas.tar.gz <root>/SpaceNet/Vegas/
aws s3 cp s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/AOI_2_Vegas_Test_public.tar.gz <root>/SpaceNet/Vegas/

tar xvf <root>/SpaceNet/Vegas/SN2_buildings_train_AOI_2_Vegas.tar.gz
tar xvf <root>/SpaceNet/Vegas/AOI_2_Vegas_Test_public.tar.gz
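
Note that the tar commands above extract into the current working directory, so either run them from <root>/SpaceNet/Vegas/ or add -C <root>/SpaceNet/Vegas/ to each. The resulting layout, inferred from the paths above, is assumed to be:

```
<root>/
├── AICrowd/
│   ├── train/
│   └── val/
├── Urban3D/
│   ├── train/
│   └── test/
└── SpaceNet/
    └── Vegas/
        └── (contents of the two extracted tarballs)
```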

Converting the Datasets

Please use our provided dataset converters to process the datasets. For all converters, please look at the individual files for an example of how to use them.

  1. For AICrowd, use datasets/converters/cocoAnnotationToMask.py (a generic sketch of this style of conversion follows this list).
  2. For Urban3D, use datasets/converters/urban3dDataConverter.py.
  3. For SpaceNet, use datasets/converters/spaceNetDataConverter.py.
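
For intuition, here is a minimal, self-contained sketch of how COCO-style polygon annotations can be rasterized into a binary footprint mask with pycocotools. The file name annotation.json is a placeholder, and the repo's converter remains the reference implementation:

```python
# Illustrative COCO-annotation-to-mask sketch (not the repo's converter).
# Assumes pycocotools and an AICrowd-style "annotation.json".
import numpy as np
from pycocotools.coco import COCO
from pycocotools import mask as mask_utils

coco = COCO("annotation.json")                      # placeholder path
img_id = coco.getImgIds()[0]                        # first image as an example
info = coco.loadImgs(img_id)[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))

# Union all building instances into a single binary footprint mask.
footprint = np.zeros((info["height"], info["width"]), dtype=np.uint8)
for ann in anns:
    rle = coco.annToRLE(ann)                        # polygon -> run-length encoding
    footprint |= mask_utils.decode(rle)             # RLE -> HxW {0,1} array
```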

Creating the Boundary Weight Maps

In order to train with the exponentially weighted boundary loss, you first need to create the weight maps as a pre-processing step (a sketch of the idea follows below). Please use datasets/converters/weighted_boundary_processor.py and follow the example usage. The inc parameter exists purely for computational reasons; decrease it if you notice very high memory usage.

Note: these maps are not required for evaluation / testing.
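
For intuition, the sketch below shows one standard way to build such a map, with weights decaying exponentially with distance from the building boundaries (in the style of the U-Net weight map). The function and its parameters are illustrative; weighted_boundary_processor.py is the reference implementation:

```python
# Illustrative boundary-weight sketch (weighted_boundary_processor.py is authoritative).
import numpy as np
from scipy import ndimage

def boundary_weight_map(mask: np.ndarray, w0: float = 10.0, sigma: float = 5.0) -> np.ndarray:
    """mask: HxW boolean building footprint. Returns HxW float32 pixel weights."""
    # Boundary pixels = footprint pixels removed by a one-pixel erosion.
    boundary = mask ^ ndimage.binary_erosion(mask)
    # Euclidean distance from every pixel to its nearest boundary pixel.
    dist = ndimage.distance_transform_edt(~boundary)
    # Base weight of 1 everywhere, plus an exponentially decaying bonus near boundaries.
    return (1.0 + w0 * np.exp(-(dist ** 2) / (2.0 * sigma ** 2))).astype(np.float32)
```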

Training and Evaluation

To train / evaluate the DeepLabV3+ models described in the paper, use train_deeplab.sh or test_deeplab.sh. These scripts expose the following primary command-line arguments:

| Parameter | Default | Description (final value in parentheses) |
| --- | --- | --- |
| --backbone | resnet | The DeepLabV3+ backbone (final model used drn_c42) |
| --out-stride | 16 | The backbone compression factor (8) |
| --dataset | urban3d | The dataset to train / evaluate on (other choices: spaceNet, crowdAI, combined) |
| --data-root | /data/ | Replace this with the root folder of the dataset samples |
| --workers | 2 | Number of workers for dataset retrieval |
| --loss-type | ce_dice | Type of objective function; use wce_dice for the exponentially weighted boundary loss |
| --fbeta | 1 | The beta value to use with the F-Beta measure (0.5) |
| --dropout | 0.1 0.5 | Dropout values to use in the DeepLabV3+ (0.3 0.5) |
| --epochs | None | Number of epochs to train (60 for training, 1 for testing) |
| --batch-size | None | Training batch size (3/4) |
| --test-batch-size | None | Testing batch size (1/4) |
| --lr | 1e-4 | Learning rate (1e-3) |
| --weight-decay | 5e-4 | L2 regularization constant (1e-4) |
| --gpu-ids | 0 | GPU IDs (use --no-cuda for CPU-only runs) |
| --checkname | None | Experiment name |
| --use-wandb | False | Track the experiment using WandB |
| --resume | None | Experiment name to load weights from (i.e., urban for weights/urban/checkpoint.pth.tar) |
| --evalulate | False | Enable this flag for testing |
| --best-miou | False | Enable this flag to report the best mIoU when testing |
| --incl-bounds | False | Enable this flag when training with wce_dice as the loss |
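
As an illustration, a training command assembled from these flags might look like the following. The Python entrypoint name train.py is an assumption; in practice, edit and run train_deeplab.sh, which wraps the equivalent invocation:

```bash
# Hypothetical invocation built from the documented flags; train.py is an assumed
# entrypoint, and the provided train_deeplab.sh wraps the equivalent call.
python train.py --dataset=urban3d --data-root=/data/ \
    --backbone=drn_c42 --out-stride=8 \
    --loss-type=wce_dice --incl-bounds --fbeta=0.5 \
    --dropout 0.3 0.5 \
    --epochs=60 --batch-size=4 --test-batch-size=4 \
    --lr=1e-3 --weight-decay=1e-4 \
    --gpu-ids=0 --checkname=urban3d_final
```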

To train with the cross-task training strategy (sketched below), you need to:

  1. Train a model using --dataset=combined until the best loss has been achieved
  2. Train a model using --resume=<checkname> on one of the three primary datasets until the best mIoU is achieved
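
Concretely, and reusing the assumed entrypoint from above, the two stages might look like this (the checknames are placeholders):

```bash
# Stage 1: pre-train on the combined dataset until the best loss (assumed entrypoint).
python train.py --dataset=combined --checkname=combined_pretrain \
    --loss-type=wce_dice --incl-bounds --fbeta=0.5 --epochs=60

# Stage 2: fine-tune on one primary dataset, resuming from stage 1, until the best mIoU.
python train.py --dataset=urban3d --resume=combined_pretrain --checkname=urban3d_final \
    --loss-type=wce_dice --incl-bounds --fbeta=0.5 --epochs=60
```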

Pre-Trained Weights

We provide pre-trained model weights in the weights/ directory. Please use Git LFS to download these weights. These weights correspond to our best model on all three datasets.
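
Since the checkpoints are tracked with Git LFS, fetching them after cloning typically looks like:

```bash
git lfs install   # one-time setup on the local machine
git lfs pull      # download the LFS-tracked files in weights/
```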

Results

Our final model is a DeepLabV3+ model with a Dilated ResNet C42 backbone trained using the F-Beta measure + exponentially weighted cross-entropy loss (Beta = 0.5). We employ the cross-task training strategy only for Urban3D and SpaceNet.
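
For reference, the F-Beta measure referred to here is the standard generalized F-measure; with Beta = 0.5 it weights precision more heavily than recall:

$$
F_\beta = \frac{(1+\beta^2)\,\mathrm{TP}}{(1+\beta^2)\,\mathrm{TP} + \beta^2\,\mathrm{FN} + \mathrm{FP}}
$$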

Our model achieves the following:

| Dataset | Avg. Precision | Avg. Recall | F1 Score | mIoU |
| --- | --- | --- | --- | --- |
| Urban3D | 83.8% | 82.2% | 82.4% | 83.3% |
| SpaceNet | 91.4% | 91.8% | 91.6% | 90.2% |
| AICrowd | 96.2% | 96.3% | 96.3% | 95.4% |

Acknowledgements

We would like to thank jfzhang95 for his DeepLabV3+ model and training template. You can access this repository here.
