Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition in CVPR19

Related tags

Deep Learning, 2s-AGCN
Overview

2s-AGCN

Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition in CVPR19

Note

The original code targeted PyTorch 0.3; for PyTorch 0.4 or higher it needed modification.
The code has now been updated to support PyTorch >= 0.4.
A new model named AAGCN has been added, which achieves better performance.

Data Preparation

  • Download the raw data from NTU-RGB+D and Skeleton-Kinetics. Then put them under the data directory:

     -data\  
       -kinetics_raw\  
         -kinetics_train\
           ...
         -kinetics_val\
           ...
         -kinetics_train_label.json
          -kinetics_val_label.json
       -nturgbd_raw\  
         -nturgb+d_skeletons\
           ...
         -samples_with_missing_skeletons.txt
    
  • Preprocess the data with

    python data_gen/ntu_gendata.py

    python data_gen/kinetics_gendata.py

  • Generate the bone data with:

    python data_gen/gen_bone_data.py
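
For reference, the bone stream is derived from coordinate differences between pairs of connected joints. The snippet below is only a minimal sketch of that idea, not the actual gen_bone_data.py: the file paths and the (child, parent) pairs are illustrative placeholders and must match the skeleton you actually generated (25 joints for NTU, 18 for Kinetics).

    # Minimal sketch of bone-feature generation; paths and joint pairs are placeholders.
    import numpy as np

    # (child, parent) joint pairs, 1-based, for the skeleton being processed
    bone_pairs = [(2, 1), (3, 21), (4, 3)]  # illustrative NTU-style pairs only

    joints = np.load('./data/ntu/xview/val_data_joint.npy')  # shape (N, C, T, V, M)
    bones = np.zeros_like(joints)
    for child, parent in bone_pairs:
        # each bone is the vector pointing from the parent joint to the child joint
        bones[:, :, :, child - 1, :] = joints[:, :, :, child - 1, :] - joints[:, :, :, parent - 1, :]
    np.save('./data/ntu/xview/val_data_bone.npy', bones)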

Training & Testing

Change the config file depending on what you want.

`python main.py --config ./config/nturgbd-cross-view/train_joint.yaml`

`python main.py --config ./config/nturgbd-cross-view/train_bone.yaml`

To ensemble the results of the joint and bone streams, first run the test step to generate the softmax scores.

`python main.py --config ./config/nturgbd-cross-view/test_joint.yaml`

`python main.py --config ./config/nturgbd-cross-view/test_bone.yaml`

Then combine the generated scores with:

`python ensemble.py --datasets ntu/xview`
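
The combination step is a simple score-level sum over the two streams. The snippet below is a minimal sketch of that idea rather than the actual ensemble.py; it assumes each test run saved per-sample softmax scores to a pickle keyed by sample name, that the label pickle stores a (sample_names, labels) pair, and that the file paths shown are placeholders.

    # Minimal score-fusion sketch; pickle paths and layouts are assumptions.
    import pickle

    with open('work_dir/ntu/xview/test_joint_score.pkl', 'rb') as f:
        joint_scores = dict(pickle.load(f))      # {sample_name: NumPy class-score vector}
    with open('work_dir/ntu/xview/test_bone_score.pkl', 'rb') as f:
        bone_scores = dict(pickle.load(f))
    with open('data/ntu/xview/val_label.pkl', 'rb') as f:
        sample_names, labels = pickle.load(f)    # assumed (names, labels) layout

    correct = 0
    for name, label in zip(sample_names, labels):
        fused = joint_scores[name] + bone_scores[name]   # element-wise sum of the two streams
        correct += int(fused.argmax() == int(label))
    print('ensemble top-1 accuracy:', correct / len(labels))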

Citation

Please cite the following paper if you use this repository in your research.

@inproceedings{2sagcn2019cvpr,  
      title     = {Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition},  
      author    = {Lei Shi and Yifan Zhang and Jian Cheng and Hanqing Lu},  
      booktitle = {CVPR},  
      year      = {2019},  
}

@article{shi_skeleton-based_2019,
    title = {Skeleton-{Based} {Action} {Recognition} with {Multi}-{Stream} {Adaptive} {Graph} {Convolutional} {Networks}},
    journal = {arXiv:1912.06971 [cs]},
    author = {Shi, Lei and Zhang, Yifan and Cheng, Jian and Lu, Hanqing},
    month = dec,
    year = {2019},
}

Contact

For any questions, feel free to contact: [email protected]

Comments
  • Memory overloading issue

    Memory overloading issue

    First of all, thanks a lot for making your code public. I am trying to run the experiment on the NTU-RGB+D 120 dataset, and I have split the data into training and testing for the cross-subject (CS) setting as given in the NTU-RGB+D 120 paper. I have 63026 training samples and 54702 testing samples. I am training the model on a GPU cluster, but after running for one epoch it exceeds the memory limit (see the attached screenshot). I tried to clear the cache explicitly with gc.collect, but the memory usage still keeps growing. It would be great if you could help with this.
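
    Not an official fix, but a common cause of steadily growing memory in PyTorch training loops is accumulating loss tensors (which keep their autograd graphs alive) instead of plain numbers. A hedged, self-contained sketch of the pattern to check for (the tiny model and loop below are stand-ins, not the repo's code):

        # Hedged sketch: log losses with .item(); appending the loss tensor itself keeps
        # every batch's autograd graph alive and memory grows over time.
        import torch
        import torch.nn as nn

        model = nn.Linear(10, 60)                      # stand-in for the real network
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

        losses = []
        for _ in range(5):                             # stand-in for the data loader
            data, label = torch.randn(8, 10), torch.randint(0, 60, (8,))
            loss = criterion(model(data), label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            losses.append(loss.item())                 # .item(), not the tensor itself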

    opened by Anirudh257 46
  • I got some errors when I was training the net

    I got some errors when I was training the net

    First, I got the following error (screenshot 1).

    After commenting out that parameter, I got another error.

    I got this error but I don't know how to solve it. Could you give me some advice?

    Traceback (most recent call last):
      File "/home/sues/Desktop/2s-AGCN-master/main.py", line 550, in <module>
        processor.start()
      File "/home/sues/Desktop/2s-AGCN-master/main.py", line 491, in start
        self.train(epoch, save_model=save_model)
      File "/home/sues/Desktop/2s-AGCN-master/main.py", line 372, in train
        loss.backward()
      File "/home/sues/anaconda3/envs/2sAGCN/lib/python3.5/site-packages/torch/autograd/variable.py", line 167, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
      File "/home/sues/anaconda3/envs/2sAGCN/lib/python3.5/site-packages/torch/autograd/__init__.py", line 99, in backward
        variables, grad_variables, retain_graph)
    RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/torch/lib/THC/generic/THCTensorMath.cu:26
    /pytorch/torch/lib/THCUNN/ClassNLLCriterion.cu:101: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed. (screenshot 2)
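
    For what it's worth, the assertion `t >= 0 && t < n_classes` usually means a label in the generated data falls outside the class range the model was configured with. A hedged sanity check, assuming the label pickle stores a (sample_names, labels) pair and that the path below matches your setup:

        # Hedged sanity check for `t >= 0 && t < n_classes`: make sure every label in the
        # generated pickle lies inside [0, num_class) used by the model config.
        import pickle

        num_class = 60  # e.g. NTU RGB+D cross-view; set this to the value in your config
        with open('./data/ntu/xview/train_label.pkl', 'rb') as f:
            _, labels = pickle.load(f)            # assumed (sample_names, labels) layout
        print('label range:', min(labels), 'to', max(labels))
        assert 0 <= min(labels) and max(labels) < num_class, 'labels outside [0, num_class)'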

    opened by Dongjiuqing 10
  • Not enough memory: Unable to allocate 29.0 GiB for an array with shape (7790126400,) and data type float32

    Not enough memory: Unable to allocate 29.0 GiB for an array with shape (7790126400,) and data type float32

    When running python data_gen/gen_bone_data.py, at File "data_gen/gen_bone_data.py", line 62, in data = np.load('./data/{}/{}_data.npy'.format(dataset, set)) I hit the error MemoryError: Unable to allocate 29.0 GiB for an array with shape (7790126400,) and data type float32. How can I solve this?
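
    Not an official answer, but a common way around this kind of MemoryError is to memory-map the arrays instead of loading them fully into RAM. A hedged sketch using standard NumPy facilities (the paths and the chunked copy are placeholders):

        # Hedged workaround sketch: read the joint array lazily with mmap_mode and write the
        # bone array incrementally via open_memmap, so the full 29 GiB never sits in RAM at once.
        import numpy as np
        from numpy.lib.format import open_memmap

        data = np.load('./data/kinetics/train_data.npy', mmap_mode='r')          # lazy read
        bones = open_memmap('./data/kinetics/train_data_bone.npy',
                            dtype='float32', mode='w+', shape=data.shape)        # lazy write
        for i in range(0, data.shape[0], 100):        # process in chunks instead of all at once
            bones[i:i + 100] = data[i:i + 100]        # placeholder op; real bone math goes here
        bones.flush()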

    opened by XieLinMofromsomewhere 7
  • augmentation in feeder

    augmentation in feeder

    Hi, I want to know whether the data augmentation in the feeder brings any improvement. Does the length of the input have a big influence? Also, have you trained the model on the NTU-RGB+D 120 dataset? How is the accuracy?

    opened by VSunN 7
  • problem with gen_bone_data.py

    problem with gen_bone_data.py

    Hello, when I run gen_bone_data.py I get an error; it looks like the matrix dimensions are wrong. How can I solve this? Thanks.
    [email protected]:~/2s-AGCN-master/data_gen$ python gen_bone_data.py ntu/xsub train
    4%|█▋ | 1/25 [06:49<2:43:40, 409.20s/it]
    Traceback (most recent call last):
      File "gen_bone_data.py", line 50, in <module>
        fp_sp[:, :, :, v1, :] = data[:, :, :, v1, :] - data[:, :, :, v2, :]
    IndexError: index 20 is out of bounds for axis 3 with size 18
    4%|█▋ | 1/25 [06:49<2:43:45, 409.41s/it]

    opened by JaxferZ 5
  • Accuracy of aagcn

    Accuracy of aagcn

    I ran your implemented code using J-AAGCN on the NTU-RGB+D cross-view (CV) dataset, but the accuracy is 94.64, not the 95.1 reported in your paper. What is the difference? The batch size was 32 instead of 64 because of a resource limit. Are there any other things to be aware of? I used your implemented code as is.

    opened by ilikeokoge 4
  • Testing with the released model reports Unexpected key(s) in state_dict:

    Testing with the released model reports Unexpected key(s) in state_dict:

    python main.py --config ./config/nturgbd-cross-view/test_joint.yaml reproduces the results from the paper, but python main.py --config ./config/nturgbd-cross-view/test_bone.yaml raises RuntimeError: Error(s) in loading state_dict for Model:

    Unexpected key(s) in state_dict: "l1.gcn1.conv_res.0.weight", "l1.gcn1.conv_res.0.bias", "l1.gcn1.conv_res.1.weight", "l1.gcn1.conv_res.1.bias", "l1.gcn1.conv_res.1.running_mean", "l1.gcn1.conv_res.1.running_var", "l5.gcn1.conv_res.0.weight", "l5.gcn1.conv_res.0.bias", "l5.gcn1.conv_res.1.weight", "l5.gcn1.conv_res.1.bias", "l5.gcn1.conv_res.1.running_mean", "l5.gcn1.conv_res.1.running_var", "l8.gcn1.conv_res.0.weight", "l8.gcn1.conv_res.0.bias", "l8.gcn1.conv_res.1.weight", "l8.gcn1.conv_res.1.bias", "l8.gcn1.conv_res.1.running_mean", "l8.gcn1.conv_res.1.running_var".

    It looks like this pretrained model does not match the provided code. What should I do to get the results? Looking forward to your reply!
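
    Not an official fix, but if the mismatch really is only the extra conv_res.* entries from a newer model definition, a standard PyTorch workaround is to filter them out and load non-strictly. A hedged sketch (the helper below is hypothetical, not part of the repo):

        # Hedged workaround sketch: drop the unexpected keys, then load with strict=False
        # so the remaining parameters are still checked against the model definition.
        import torch

        def load_filtered(model, weight_path):
            weights = torch.load(weight_path, map_location='cpu')
            dropped = [k for k in weights if 'conv_res' in k]
            weights = {k: v for k, v in weights.items() if 'conv_res' not in k}
            model.load_state_dict(weights, strict=False)
            print('dropped unexpected keys:', dropped)
            return model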

    opened by tailin1009 3
  • dataload error

    dataload error

    Thanks for your source code, but when I run it, the following error occurs: ValueError: num_samples should be a positive integer value, but got num_samples=0

    I had run python data_gen/ntu_gendata.py beforehand, and some files were generated: train_data_joint.npy, train_label.pkl, val_data_joint.npy, val_label.pkl

    but their sizes are all only 1 KB.

    How should I deal with this? I would appreciate some directions.

    thanks

    opened by xuanshibin 3
  • RuntimeError: running_mean should contain 126 elements not 63 (example).

    RuntimeError: running_mean should contain 126 elements not 63 (example).

    What is the number of elements for your number of joints (18)? When I run your code, I get this error: "RuntimeError: running_mean should contain 126 elements not 63". The 63 comes from my changing the number of nodes. How do I adjust these elements, and how do you get 126 elements in your experiment?
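
    Not an official answer, but the numbers fit the usual input batch-norm sizing in ST-GCN/AGCN-style models, where that layer is as wide as num_person * in_channels * num_point. A hedged illustration:

        # Hedged illustration: 3 * 21 = 63 and 2 * 3 * 21 = 126, so the error suggests the
        # model's input BatchNorm1d and the flattened data disagree on num_person,
        # in_channels, or num_point (most likely the number of bodies per sample).
        import torch.nn as nn

        in_channels, num_point, num_person = 3, 21, 2     # 21 joints as in the question
        data_bn = nn.BatchNorm1d(num_person * in_channels * num_point)   # 126 channels expected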

    opened by JasOlean 3
  • what is (N, C, T, V, M) in agcn.py?

    what is (N, C, T, V, M) in agcn.py?

    Thank you for sharing the code and information :) I have some questions about the agcn.py code.

    1. What is (N, C, T, V, M) in agcn.py? I guess T is 300 frames, V is the similarity between nodes, and M is the number of people in one video, but I am not sure that is right.

    2. Are the bone training code and the joint training code (agcn.py) the same? If not, is the bone training code aagcn.py?
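
    Not an authoritative answer, but in ST-GCN/AGCN-style code the input convention is commonly N = batch size, C = coordinate channels, T = frames, V = graph vertices (skeleton joints), and M = bodies per clip. A hedged illustration of a typical first reshaping step:

        # Hedged illustration of the (N, C, T, V, M) convention commonly used in this code family:
        #   N - batch size, C - coordinates per joint (3), T - frames (300),
        #   V - skeleton joints (25 for NTU, 18 for Kinetics), M - bodies per clip (2)
        import torch

        x = torch.randn(8, 3, 300, 25, 2)                                 # dummy NTU-style batch
        N, C, T, V, M = x.size()
        x = x.permute(0, 4, 3, 1, 2).contiguous().view(N, M * V * C, T)   # fold persons/joints into channels
        print(x.shape)                                                     # torch.Size([8, 150, 300])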

    opened by lodado 2
  • No module named 'data_gen'  and  No such file or directory: '../data/kinetics_raw/kinetics_val'

    No module named 'data_gen' and No such file or directory: '../data/kinetics_raw/kinetics_val'

    When I run "python data_gen/ntu_gendata.py", gets the error : ModuleNotFoundError: No module named 'data_gen'.

    When I run "python data_gen/kinetics_gendata.py", gets the error : FileNotFoundError: [Errno 2] No such file or directory: '../data/kinetics_raw/kinetics_val'.

    My raw data has put in the ./data.

    Needs your help!

    opened by XiongXintyw 2
  • Question about running MS-AAGCN

    Question about running MS-AAGCN

    Hello! I was fortunate to read your paper "Skeleton-Based Action Recognition with Multi-Stream Adaptive Graph Convolutional Networks" and benefited a lot from it. I have already got the 2s-AGCN code running; could you tell me how to run the MS-AAGCN code?

    opened by 15762260991 1
  • Definition of the parameter A in the attention module

    Definition of the parameter A in the attention module

    While reproducing the code, I cannot find the definition of the parameter A used in the graph convolution layer. What does this A refer to? class TCN_GCN_unit(nn.Module): def __init__(self, in_channels, out_channels, A, stride=1, residual=True, adaptive=True, attention=True):

    opened by wangxx0101 1
  • Question about the tanh and softmax functions in the adaptive module

    Question about the tanh and softmax functions in the adaptive module

    Hello, I have two questions. (1) The tanh activation returns a value in the range [-1, 1], while the softmax activation returns a value in [0, 1]. When we model the correlation between joints, if tanh returns a negative value, does that mean the two joints are negatively correlated? (2) Why does tanh work slightly better than softmax? I do not quite understand this; could you explain it in more detail?
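
    Not an authoritative answer, but the numerical difference the question describes can be seen directly. A hedged illustration with a dummy similarity map:

        # Hedged illustration: tanh yields signed values in [-1, 1] (a negative entry can
        # weaken or invert a connection's contribution), while softmax yields a non-negative
        # row distribution in [0, 1] whose entries always sum to 1.
        import torch

        raw = torch.randn(25, 25)                     # dummy joint-to-joint similarity map
        signed = torch.tanh(raw)                      # entries in [-1, 1]
        normalized = torch.softmax(raw, dim=-1)       # rows sum to 1, entries in [0, 1]
        print(signed.min().item(), normalized.sum(dim=-1)[0].item())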

    opened by blue-q 0
  • Where is the code for visualization in Figure 8 and 9?

    Where is the code for visualization in Figure 8 and 9?

    Dear Authors,

    I have already read your "Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition". In that paper, you showed some experimental results in Figures 8 and 9. I would like to know which part of the code produces them, or how to use the layers to produce these visualizations. If you answer my question, I will really appreciate it. Thank you.

    opened by JasOlean 3
Releases(v0.0)
Owner
LShi
Video Analysis, Action Recognition.