Fight Recognition from Still Images in the Wild @ WACVW2022, Real-world Surveillance Workshop

Overview

Fight Detection from Still Images in the Wild

Detecting fights in still images is an important task for limiting the distribution of social media images with fight content and thereby preventing the negative effects of such violent media. For this reason, in this study we address the problem of fight detection from still images collected from the web and social media, and we explore how well fights can be detected from just a single still image.

In this context, a new image dataset for the task of fight recognition from still images, named the Social Media Fight Images (SMFI) dataset, is collected. The dataset samples are gathered from social media (Twitter and Google) and from the NTU CCTV-Fights [1] dataset. Since the main concern is recognizing fight actions in the wild, the dataset covers real-world scenarios, most of which are spontaneous recordings of fight actions. By using different keywords while crawling the data, regional diversity is also maintained, since social media uploads are mostly regional and users share content in their own language. Some example images from the dataset are given below:

[Figure: example fight and non-fight images from the SMFI dataset]

Both fight and non-fight samples are collected from the same domain, so the non-fight samples are also content likely to be shared on social media. Hard non-fight samples are included as well, showing actions that might be misinterpreted as fights, such as hugging, throwing a ball, and dancing. This reduces dataset bias, so that trained models focus on the actions and the performers in the scene rather than exploiting other characteristics such as motion blur. The distribution of the dataset samples among classes and sources is given below:

            Twitter   Google   NTU CCTV-Fights   Total
Fight          2247      162               330    2739
Non-fight      2642      146               164    2952
Total          4889      308               494    5691

Due to copyright issues, the dataset images are not shared directly; instead, links to the images / videos are shared. As the dataset samples might be deleted over time by the users or the authorities, the size of the dataset is subject to change.

Dataset Format

The dataset samples are shared through a CSV file where the columns are as follows (a loading sketch is given after the list):

  • Image ID: Unique ID assigned to each image.
  • Class: The class of the image, as fight / nofight.
  • Source: The source of the image or video, as twitter_img / twitter_video / google / ntu-cctv.
  • URL: The link to the image / video.
    • For Twitter and Google data, image and video URLs are shared.
    • For the NTU CCTV-Fights data, the path to the original video is shared.
  • Frame number: If the image is extracted from a video, this column indicates the index of the frame within that video.
    • For Twitter videos, the frame number is the index (0-9) among 10 uniformly sampled frames from each video.
    • For NTU CCTV-Fights videos, the frame number is the index (0-N) among all N frames extracted from each video.
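
As a rough illustration, the sketch below shows how the CSV could be loaded and split by source with pandas. The file name SMFI.csv and the exact column labels are assumptions based on the descriptions above and may differ from the released file.

```python
import pandas as pd

# Hypothetical file name and column labels; adjust them to match the released CSV.
df = pd.read_csv("SMFI.csv")

# Sample counts per source and class, comparable to the distribution table above.
print(df.groupby(["Source", "Class"]).size())

# Rows pointing to plain images vs. rows pointing to video frames.
image_rows = df[df["Source"].isin(["twitter_img", "google"])]
video_rows = df[df["Source"].isin(["twitter_video", "ntu-cctv"])]
```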

In order to retrieve the dataset, you should first download the NTU CCTV-Fights dataset here.
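
For entries that point to videos, one possible way to recover the referenced frame is sketched below with OpenCV. The helper name and the exact sampling convention for Twitter videos are assumptions; the dataset's own extraction scripts may differ.

```python
import cv2

def extract_frame(video_path, source, frame_number):
    """Illustrative sketch: return the frame referenced by a CSV row."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

    if source == "twitter_video":
        # Frame number indexes 10 uniformly sampled frames (0-9); this spacing
        # is one common convention and may differ from the original scripts.
        target = int(frame_number * total / 10)
    else:
        # ntu-cctv: the frame number indexes the extracted frames directly.
        target = int(frame_number)

    cap.set(cv2.CAP_PROP_POS_FRAMES, target)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None
```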

Citation

TBA

References

[1] Mauricio Perez, Alex C. Kot, and Anderson Rocha, "Detection of Real-world Fights in Surveillance Videos," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019.

Owner
Şeymanur Aktı