DeepStack_ActionNET

A custom DeepStack model for detecting 16 human actions.

Overview

This repository provides a custom DeepStack model, trained on the ActionNET dataset, that can be used to create an object detection API for detecting the 16 human actions present in the dataset. The dataset itself, with YOLO annotations, is also included in this repository.

>> Watch Video Demo

  • Download DeepStack Model and Dataset
  • Create API and Detect Actions
  • Discover more Custom Models
  • Train your own Model

Download DeepStack Model and Dataset

You can download the pre-trained DeepStack_ActionNET model and the annotated dataset via the links below.

Create API and Detect Actions

The trained model can detect the following actions in images and videos:

  • calling
  • clapping
  • cycling
  • dancing
  • drinking
  • eating
  • fighting
  • hugging
  • kissing
  • laughing
  • listening-to-music
  • running
  • sitting
  • sleeping
  • texting
  • using-laptop

To start detecting, follow the steps below.

  • Install DeepStack: Install DeepStack AI Server by following the instructions in DeepStack's documentation at https://docs.deepstack.cc

  • Download Custom Model: Download the trained custom model actionnetv2.pt from this GitHub release. Create a folder on your machine and move the downloaded model into it. Note that DeepStack names a custom model's API endpoint after the model's filename, so rename the file to actionnet.pt if you want the /v1/vision/custom/actionnet endpoint used in the rest of this guide.

    E.g. a folder on a Windows machine: C:\Users\MyUser\Documents\DeepStack-Models, which makes your model file path C:\Users\MyUser\Documents\DeepStack-Models\actionnet.pt

  • Run DeepStack: To run DeepStack AI Server with the custom ActionNET model, run the command that applies to your machine, as detailed in DeepStack's documentation.

    E.g. on Windows, you run the command below:

    deepstack --MODELSTORE-DETECTION "C:\Users\MyUser\Documents\DeepStack-Models" --PORT 80

    On a Linux machine, mount the models folder into the container's model store:

    sudo docker run -v /home/MyUser/Documents/DeepStack-Models:/modelstore/detection -p 80:5000 deepquestai/deepstack

    Once DeepStack runs, you will see a log in your Terminal/Console confirming that it loaded your custom actionnet.pt model and is now ready to detect actions in images via the API endpoint http://localhost:80/v1/vision/custom/actionnet or http://your_machine_ip:80/v1/vision/custom/actionnet

  • Detect actions in an image: You can detect actions in an image by sending a POST request to the URL mentioned above, with the parameter image set to an image file, using any programming language or a tool like Postman. For the purpose of this repository, sample Python code is provided; see the sketches after the steps below.

    • A sample image can be found in images/test.jpg of this repository

    • Install Python, then install the DeepStack Python SDK via the command below

      pip install deepstack_sdk
    • Run the Python file detect.py in this repository.

      python detect.py
    • After the code runs, you will find a new image in images/test_detected.jpg with the detections visualized, and the following results printed in the Terminal/Console.

      Name: dancing
      Confidence: 0.91482425
      x_min: 270
      x_max: 516
      y_min: 18
      y_max: 480
      -----------------------
      

    • You can try running action detection on other images.
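
    Below is a minimal sketch of what detect.py might look like. It assumes the deepstack_sdk package exposes ServerConfig and Detection classes, that a custom model is selected with name="actionnet", and that each detected object carries label, confidence, x_min, x_max, y_min and y_max attributes; treat these names as assumptions and check the SDK's documentation if they differ.

      # detect.py - sketch using the DeepStack Python SDK (class and attribute names assumed)
      from deepstack_sdk import ServerConfig, Detection

      # Point the SDK at the running DeepStack server
      config = ServerConfig("http://localhost:80")

      # "actionnet" must match the model filename (actionnet.pt) loaded by the server
      detection = Detection(config, name="actionnet")

      # Run detection and save a copy of the image with the boxes drawn on it
      response = detection.detectObject("images/test.jpg", output="images/test_detected.jpg")

      # Print each detected action and its bounding box
      for obj in response:
          print(f"Name: {obj.label}")
          print(f"Confidence: {obj.confidence}")
          print(f"x_min: {obj.x_min}")
          print(f"x_max: {obj.x_max}")
          print(f"y_min: {obj.y_min}")
          print(f"y_max: {obj.y_max}")
          print("-----------------------")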
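
    If you prefer to call the REST endpoint directly instead of using the SDK, a plain HTTP POST works as well. The sketch below uses the requests library; the image form field and the predictions/label/confidence keys follow DeepStack's standard detection response format.

      # Calling the custom model endpoint directly over HTTP
      import requests

      with open("images/test.jpg", "rb") as image_file:
          response = requests.post(
              "http://localhost:80/v1/vision/custom/actionnet",
              files={"image": image_file},
          )

      # DeepStack returns JSON with a "predictions" list of labeled bounding boxes
      for prediction in response.json()["predictions"]:
          print(prediction["label"], prediction["confidence"])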

Discover more Custom Models

For more custom DeepStack models that have been trained and are ready to use, visit the Custom Models samples page in DeepStack's documentation: https://docs.deepstack.cc/custom-models-samples/

Train your own Model

If you would like to train a custom model yourself, follow the instructions below.

  • Prepare and Annotate: Collect and annotate images of the object(s) you plan to detect, as detailed here
  • Train your Model: Train the model as detailed here

Comments
  • How to load the custom model actionnetv2.pt in the DeepStack Server Docker?

    How do I correctly load the custom ActionNET model actionnetv2.pt in the DeepStack server Docker container? Did I do the right thing?

    DeepStack: Version 2021.09.01. I created the /modelstore/detection folders and placed the actionnetv2.pt file there.

    After the reboot, I got a /v1/vision/custom/actionnetv2 entry in the logs. Did I do the right thing? It just confuses me that there is a /v1/vision/custom/actionnetv2 entry in the logs, while the rest are written like this:

    /v1/vision/face
    /v1/vision/face/recognize
    ....

    Is it necessary to register the model here separately, as in the case of face and object recognition?

    opened by DivanX10 0
Releases(v2)
  • v2(Aug 26, 2021)

    Version 2 of the DeepStack custom model for an object detection API that detects human actions in images and videos. It detects the following actions:

    • calling
    • clapping
    • cycling
    • dancing
    • drinking
    • eating
    • fighting
    • hugging
    • kissing
    • laughing
    • listening-to-music
    • running
    • sitting
    • sleeping
    • texting
    • using-laptop

    Download the model actionnetv2.pt from the Assets section (below) in this release.

    This model is a YOLOv5x DeepStack custom model trained for 150 epochs, generating a best model with the following evaluation results.

    mAP@0.5: 0.995
    mAP@0.5:0.95: 0.913

    Source code(tar.gz)
    Source code(zip)
    actionnetv2.pt(169.41 MB)
  • v1(Aug 14, 2021)

    A DeepStack custom model for an object detection API that detects human actions in images and videos. It detects the following actions:

    • calling
    • clapping
    • cycling
    • dancing
    • drinking
    • eating
    • fighting
    • hugging
    • kissing
    • laughing
    • listening-to-music
    • running
    • sitting
    • sleeping
    • texting
    • using-laptop

    Download the model actionnet.pt from the Assets section (below) in this release.

    This model is a YOLOv5x DeepStack custom model trained for 150 epochs, generating a best model with the following evaluation results.

    mAP@0.5: 0.9858
    mAP@0.5:0.95: 0.8051

    Source code(tar.gz)
    Source code(zip)
    actionnet.pt(169.41 MB)
Owner
MOSES OLAFENWA
Software Engineer @Microsoft, a self-taught computer programmer, and a Deep Learning and Computer Vision researcher and developer. Creator of ImageAI.