Official implementation of TMANet.

Overview

Temporal Memory Attention for Video Semantic Segmentation, arXiv


Introduction

We propose a Temporal Memory Attention Network (TMANet) that adaptively integrates long-range temporal relations over a video sequence using the self-attention mechanism, without exhaustive optical-flow prediction. Our method achieves new state-of-the-art performance on two challenging video semantic segmentation datasets: 80.3% mIoU on Cityscapes and 76.5% mIoU on CamVid with ResNet-50. (Accepted by ICIP 2021)
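To make the idea concrete, below is a minimal PyTorch sketch of memory-based self-attention between the current frame and a memory of past frame features. It illustrates the general mechanism only; it is not the repository's actual TMANet module, and every class name, projection size, and the residual fusion here are assumptions made for this sketch.

    # Illustrative sketch only -- not the repository's actual TMANet module.
    import torch
    import torch.nn as nn


    class TemporalMemoryAttentionSketch(nn.Module):
        def __init__(self, channels, key_channels):
            super().__init__()
            # 1x1 convolutions project features into query/key/value spaces.
            self.query_proj = nn.Conv2d(channels, key_channels, kernel_size=1)
            self.key_proj = nn.Conv2d(channels, key_channels, kernel_size=1)
            self.value_proj = nn.Conv2d(channels, channels, kernel_size=1)

        def forward(self, current, memory):
            # current: (B, C, H, W) feature of the frame being segmented.
            # memory:  (B, T, C, H, W) features of T past frames.
            b, t, c, h, w = memory.shape
            query = self.query_proj(current).flatten(2)                 # (B, Ck, HW)
            mem = memory.reshape(b * t, c, h, w)
            key = self.key_proj(mem).reshape(b, t, -1, h * w)           # (B, T, Ck, HW)
            value = self.value_proj(mem).reshape(b, t, c, h * w)        # (B, T, C, HW)
            key = key.permute(0, 2, 1, 3).reshape(b, -1, t * h * w)     # (B, Ck, T*HW)
            value = value.permute(0, 2, 1, 3).reshape(b, c, t * h * w)  # (B, C, T*HW)
            # Affinity between every current-frame position and every memory position.
            attn = torch.softmax(query.transpose(1, 2) @ key, dim=-1)   # (B, HW, T*HW)
            out = value @ attn.transpose(1, 2)                          # (B, C, HW)
            # Fuse the memory readout back into the current-frame feature.
            return current + out.reshape(b, c, h, w)


    # Example with random tensors: batch of 2, 256-channel features, 3 memory frames.
    tma = TemporalMemoryAttentionSketch(channels=256, key_channels=64)
    cur = torch.randn(2, 256, 33, 33)
    mem = torch.randn(2, 3, 256, 33, 33)
    print(tma(cur, mem).shape)  # torch.Size([2, 256, 33, 33])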

If this codebase is helpful to you, please consider giving it a star ⭐ 😊.


Updates

2021/1: TMANet training and evaluation code released.

2021/6: Updated README.md:

  • added some CamVid dataset download links;
  • updated the camvid_video_process.py script.

Usage

  • Install mmseg

    • Please refer to mmsegmentation for the installation guide.
    • This repository is based on mmseg-0.7.0 and PyTorch 1.6.0.
  • Clone the repository

    git clone https://github.com/wanghao9610/TMANet.git
    cd TMANet
    pip install -e .
  • Prepare the datasets

    • Download the Cityscapes dataset and the CamVid dataset.

    • For the CamVid dataset, we need to extract frames from the downloaded videos as follows:

      • Download the raw videos from here (a Google Drive link is provided).
      • Put the downloaded raw videos (e.g. 0016E5.MXF, 0006R0.MXF, 0005VD.MXF, 01TP_extract.avi) into ./data/camvid/raw.
      • Download the extracted images and labels from here and the split.txt file from here, then untar the tar.gz file into ./data/camvid. This yields two subdirectories: ./data/camvid/images (the annotated images) and ./data/camvid/labels (the semantic segmentation ground truth). Reference the following shell commands:
        cd TMANet
        cd ./data/camvid
        wget https://drive.google.com/file/d/1FcVdteDSx0iJfQYX2bxov0w_j-6J7plz/view?usp=sharing
        # or first download on your PC then upload to your server.
        tar -xf camvid.tar.gz 
      • Generate the image_sequence directory by extracting frames from the raw videos (a minimal sketch of this step is shown after the directory layout below). Reference the following shell commands:
        cd TMANet
        python tools/convert_datasets/camvid_video_process.py
    • For the Cityscapes dataset, we need to request the download link for leftImg8bit_sequence_trainvaltest.zip from the official Cityscapes website.

    • The converted/downloaded datasets should be stored under ./data/camvid and ./data/cityscapes.

      The file structure of the video semantic segmentation datasets is as follows.

      ├── data
      │   ├── cityscapes
      │   │   ├── gtFine
      │   │   │   ├── train
      │   │   │   │   ├── xxx{seg_map_suffix}
      │   │   │   │   ├── yyy{seg_map_suffix}
      │   │   │   │   ├── zzz{seg_map_suffix}
      │   │   │   ├── val
      │   │   ├── leftImg8bit
      │   │   │   ├── train
      │   │   │   │   ├── xxx{img_suffix}
      │   │   │   │   ├── yyy{img_suffix}
      │   │   │   │   ├── zzz{img_suffix}
      │   │   │   ├── val
      │   │   ├── leftImg8bit_sequence
      │   │   │   ├── train
      │   │   │   │   ├── xxx{sequence_suffix}
      │   │   │   │   ├── yyy{sequence_suffix}
      │   │   │   │   ├── zzz{sequence_suffix}
      │   │   │   ├── val
      │   ├── camvid
      │   │   ├── images
      │   │   │   ├── xxx{img_suffix}
      │   │   │   ├── yyy{img_suffix}
      │   │   │   ├── zzz{img_suffix}
      │   │   ├── annotations
      │   │   │   ├── train.txt
      │   │   │   ├── val.txt
      │   │   │   ├── test.txt
      │   │   ├── labels
      │   │   │   ├── xxx{seg_map_suffix}
      │   │   │   ├── yyy{seg_map_suffix}
      │   │   │   ├── zzz{seg_map_suffix}
      │   │   ├── image_sequence
      │   │   │   ├── xxx{sequence_suffix}
      │   │   │   ├── yyy{sequence_suffix}
      │   │   │   ├── zzz{sequence_suffix}
      
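      As a rough illustration of what the frame-extraction step above does, the sketch below decodes a raw video into per-frame images with OpenCV. The function name, output naming scheme, and paths are assumptions for illustration only; use the provided tools/convert_datasets/camvid_video_process.py script for the actual conversion.

        # Illustrative sketch only -- not the repository's camvid_video_process.py.
        # Assumes OpenCV (pip install opencv-python) is available.
        import os
        import cv2

        def extract_frames(video_path, out_dir, prefix):
            """Decode every frame of video_path into PNG images under out_dir."""
            os.makedirs(out_dir, exist_ok=True)
            cap = cv2.VideoCapture(video_path)
            idx = 0
            while True:
                ok, frame = cap.read()
                if not ok:  # end of video
                    break
                # Hypothetical naming scheme; the real script matches CamVid's own names.
                cv2.imwrite(os.path.join(out_dir, f"{prefix}_{idx:06d}.png"), frame)
                idx += 1
            cap.release()

        # Example: dump one raw CamVid video into the image_sequence directory.
        extract_frames("./data/camvid/raw/0016E5.MXF",
                       "./data/camvid/image_sequence", "0016E5")
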
  • Evaluation

    • Download the trained models for Cityscapes and CamVid, and put them under ./work_dirs/{config_file}.
    • Run the following command (for Cityscapes):
    sh eval.sh configs/video/cityscapes/tmanet_r50-d8_769x769_80k_cityscapes_video.py
  • Training

    • Please download the pretrained ResNet-50 model and put it under ./init_models.
    • Run the following command (for Cityscapes):
    sh train.sh configs/video/cityscapes/tmanet_r50-d8_769x769_80k_cityscapes_video.py

    Note: the above evaluation and training commands run on Cityscapes. To evaluate or train on CamVid, replace the config file in the command with the corresponding CamVid config file.

Citation

If you find TMANet useful in your research, please consider citing:

@misc{wang2021temporal,
    title={Temporal Memory Attention for Video Semantic Segmentation}, 
    author={Hao Wang and Weining Wang and Jing Liu},
    year={2021},
    eprint={2102.08643},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Acknowledgement

Thanks to mmsegmentation for its contribution to the community!
