Efficient face emotion recognition in photos and videos

Overview

This repository contains code for face emotion recognition that was developed within the RSF (Russian Science Foundation) project no. 20-71-10010 (Efficient audiovisual analysis of dynamical changes in emotional state based on information-theoretic approach).

Our approach is described in the arXiv paper published at IEEE SISY 2021. An extended version of this paper is under consideration at an international journal.

All the models were pre-trained on the face identification task using the VGGFace2 dataset. In order to train the PyTorch models, the SAM code was borrowed.

We provide several models that achieve state-of-the-art results on the AffectNet dataset. The facial features extracted by these models lead to state-of-the-art accuracy of face-only models on the video datasets from the EmotiW 2019 and 2020 challenges: AFEW (Acted Facial Expressions in the Wild), VGAF (Video-level Group AFfect) and EngageWild.

Here are the accuracies measured on the test sets of the above-mentioned datasets:

Model                  | AffectNet (8 classes), original | AffectNet (8 classes), aligned | AffectNet (7 classes), original | AffectNet (7 classes), aligned | AFEW  | VGAF
-----------------------|---------------------------------|--------------------------------|---------------------------------|--------------------------------|-------|------
mobilenet_7.h5         | -                               | -                              | 64.71                           | -                              | 55.35 | 68.92
enet_b0_8_best_afew.pt | 60.95                           | 60.18                          | 64.63                           | 64.54                          | 59.89 | 66.80
enet_b0_8_best_vgaf.pt | 61.32                           | 61.03                          | 64.57                           | 64.89                          | 55.14 | 68.29
enet_b0_7.pt           | -                               | -                              | 65.74                           | 65.74                          | 56.99 | 65.18
enet_b2_8.pt           | 63.025                          | 62.40                          | 66.29                           | -                              | 57.78 | 70.23
enet_b2_7.pt           | -                               | -                              | 65.91                           | 66.34                          | 59.63 | 69.84

Please note that we report the accuracies for AFEW and VGAF only on the subsets in which MTCNN detects facial regions. The code also computes the overall accuracy on the complete test set, which is slightly lower due to absent faces or failed face detection.
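
To make the difference concrete, the two accuracies relate as follows (illustrative numbers only, not taken from the paper):

# Illustrative numbers only: subset accuracy vs. overall accuracy.
detected_total = 100   # test clips in which MTCNN found a face
detected_correct = 60  # correctly classified among those
all_total = 105        # complete test set, incl. clips with no detected face

subset_acc = detected_correct / detected_total  # the value reported in the table
overall_acc = detected_correct / all_total      # lower: undetected clips count as errors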

In order to run our code on these datasets, please first prepare them using our TensorFlow notebooks: train_emotions.ipynb, AFEW_train.ipynb and VGAF_train.ipynb.

If you want to run our mobile application, please run the following scripts inside the mobile_app folder:

python to_tflite.py
python to_pytorchlite.py
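
For reference, the Keras-to-TFLite conversion in to_tflite.py might look roughly like this (a minimal sketch; the model path is illustrative and the actual script may differ):

import tensorflow as tf

# Load the Keras emotion model and convert it for mobile deployment.
model = tf.keras.models.load_model('../models/affectnet_emotions/mobilenet_7.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open('emotions_mobilenet.tflite', 'wb') as f:
    f.write(tflite_model)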

Please note that the EfficientNet models for PyTorch are based on the old timm 0.4.5 package, so exactly this version should be installed with the following command:

pip install timm==0.4.5
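
With the matching timm version installed, one of the checkpoints can be loaded roughly as follows (a sketch; the path is illustrative, and we assume the .pt files store whole serialized model objects, which is why the timm version matters):

import torch

# Load a fully serialized model object; unpickling requires the same
# timm version (0.4.5) that was active when the checkpoint was saved.
model = torch.load('../models/affectnet_emotions/enet_b0_8_best_afew.pt',
                   map_location='cpu')
model.eval()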
Comments
  • Can you share your Manually_Annotated_file csv files?

    I tested the AffectNet validation data, but got 0.5965 using enet_b2_8.pt. Can you share the Manually_Annotated_file validation.csv and training.csv with me for debugging?

    opened by Dian-Yi 10
  • AffectNet March 2021 version training script update

    As mentioned in #14, there are different versions of AffectNet. I updated the PyTorch training script for the AffectNet March 2021 version. Two notes (see the sketch below):

    • I used horizontal flip for training augmentation,
    • and the emotion order in the logits is different.
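
    A minimal sketch of such a flip augmentation, assuming torchvision (not the exact training script):

    from torchvision import transforms

    # Training-time augmentation with a random horizontal flip; the input
    # resolution and normalization constants here are illustrative.
    train_transforms = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])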
    opened by sunggukcha 6
  • Confidence range for inference using the Python library

    Hi,

    First of all, thank you so much for such a convenient setup to use!

    I'm using the Python face emotion library in my code with model_name = 'enet_b0_8_best_afew'. I was wondering what the range of the confidence returned by the library, or by this model in particular, is; I wasn't able to figure that out.
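
    (For illustration: if the model returns raw logits, a softmax maps them to confidences in [0, 1] that sum to 1; this is a hypothetical post-processing sketch, not necessarily what the library does internally.)

    import numpy as np

    # Hypothetical post-processing: a softmax turns raw logits into
    # class confidences in [0, 1] that sum to 1.
    def softmax(logits):
        e = np.exp(logits - np.max(logits))
        return e / e.sum()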

    Thank you

    opened by varunsingh3000 4
  • Preprocessing of images to run inference

    Hello, thank you very much for your work.

    I am trying to preprocess a batch of images (my own dataset) the way you prepared your data. I'm following the notebook train_emotions.ipynb, since it is in TensorFlow and I'm using that framework.

    I have a question about the preprocessing steps, so I would like to ask if you can tell me the correct ones. These are the steps I'm following; let me know if I'm right or if something is missing:

    1. I already have my images with the faces detected and cropped, i.e., I have a dataset full of face crops.

    2. img = cv2.imread(img_path)

    3. img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    4. img = cv2.resize(img,(224,224))

    5. Then your notebook shows that you apply a normalization:

       def mobilenet_preprocess_input(x, **kwargs):
           x[..., 0] -= 103.939
           x[..., 1] -= 116.779
           x[..., 2] -= 123.68
           return x

       preprocessing_function = mobilenet_preprocess_input

    Here I am having an issue because I cannot apply the subtraction between an integer array and a float, so I changed it to

       def mobilenet_preprocess_input(x, **kwargs):
           x[..., 0] = x[..., 0] - 103.939
           x[..., 1] = x[..., 1] - 116.779
           x[..., 2] = x[..., 2] - 123.68
           return x

       preprocessing_function = mobilenet_preprocess_input

    So, let me know if the process I'm following is correct or if there's something missing.
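
    (For reference, the steps above combine into a single function like this; a sketch in which the explicit float cast avoids the integer-subtraction issue:)

       import cv2
       import numpy as np

       # Sketch of the full pipeline: read a cropped face, convert BGR to RGB,
       # resize, cast to float, and subtract the per-channel means.
       def preprocess_face(img_path):
           img = cv2.imread(img_path)
           img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
           img = cv2.resize(img, (224, 224)).astype(np.float32)
           img[..., 0] -= 103.939
           img[..., 1] -= 116.779
           img[..., 2] -= 123.68
           return img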

    Thank you!

    opened by isa-tr 4
  • AttributeError: 'SqueezeExcite' object has no attribute 'gate'

    Excuse me, this problem occurs when using the 'enet_b2_7.pt' model for testing. I followed the steps you gave, but I really couldn't find the cause of this problem. Do you have any suggestions?

    opened by evercy 4
  • Age/gender/ethnicity model giving the same output for different inputs

    import numpy as np
    import tensorflow as tf

    class CNN(object):

        def __init__(self, model_filepath):
            self.model_filepath = model_filepath
            self.load_graph(model_filepath=self.model_filepath)

        def load_graph(self, model_filepath):
            print('Loading model...')
            self.graph = tf.Graph()
            self.sess = tf.compat.v1.InteractiveSession(graph=self.graph)

            with tf.compat.v1.gfile.GFile(model_filepath, 'rb') as f:
                graph_def = tf.compat.v1.GraphDef()
                graph_def.ParseFromString(f.read())

            print('Check out the input placeholders:')
            nodes = [n.name + ' => ' + n.op for n in graph_def.node if n.op == 'Placeholder']
            for node in nodes:
                print(node)

            # Define the input tensor
            self.input = tf.compat.v1.placeholder(np.float32, shape=[None, 224, 224, 3], name='input')

            tf.import_graph_def(graph_def, {'input_1': self.input})
            print('Model loading complete!')

            # Print the layer names
            layers = [op.name for op in self.graph.get_operations()]
            for layer in layers:
                print(layer)

        def test(self, data):
            # Output node names (note: no space before the tensor index)
            output_tensor1 = self.graph.get_tensor_by_name('import/age_pred/Softmax:0')
            output_tensor2 = self.graph.get_tensor_by_name('import/gender_pred/Sigmoid:0')
            output_tensor3 = self.graph.get_tensor_by_name('import/ethnicity_pred/Softmax:0')
            output = self.sess.run([output_tensor1, output_tensor2, output_tensor3],
                                   feed_dict={self.input: data})
            return output

    I use this code to load "age_gender_ethnicity_224_deep-03-0.13-0.97-0.88.pb" and predict with it, but when predicting on images I get the same output array every time:

    [array([[0.01319346, 0.00229602, 0.00176407, 0.00270929, 0.01408699, 0.00574261, 0.00756087, 0.01012164, 0.01221055, 0.01821703, 0.01120028, 0.00936489, 0.01003029, 0.00912451, 0.00813381, 0.00894791, 0.01277262, 0.01034999, 0.01053109, 0.0133063 , 0.01423471, 0.01610439, 0.01528896, 0.01825454, 0.01722076, 0.01933933, 0.01908059, 0.01899827, 0.01919533, 0.0278129 , 0.02204996, 0.02146631, 0.02125309, 0.02146868, 0.02230236, 0.02054285, 0.02096066, 0.01976574, 0.01990371, 0.02064857, 0.01843528, 0.01697922, 0.01610838, 0.01458549, 0.01581902, 0.01377539, 0.01298613, 0.01378927, 0.01191105, 0.01335083, 0.01154454, 0.01118198, 0.01019558, 0.01038121, 0.00920709, 0.00902615, 0.00936321, 0.00969135, 0.00867239, 0.00838663, 0.00797724, 0.00756043, 0.00890809, 0.00758041, 0.00743711, 0.00584346, 0.00555749, 0.00639214, 0.0061864 , 0.00784793, 0.00532241, 0.00567684, 0.00481544, 0.0052173 , 0.00513186, 0.00394571, 0.00415856, 0.00384584, 0.00452774, 0.0041736 , 0.00328163, 0.00327138, 0.00297012, 0.00369216, 0.00284221, 0.00255897, 0.00285459, 0.00232105, 0.00228869, 0.00218005, 0.0021927 , 0.00236659, 0.00233843, 0.00204793, 0.00209861, 0.00231407, 0.00145706, 0.00179674, 0.00186183, 0.00221309]], dtype=float32), array([[0.62949586]], dtype=float32), array([[0.21338916, 0.19771543, 0.19809113, 0.19525865, 0.19554558]], dtype=float32)]

    Is there something I am missing, or is this .pb file not meant for prediction?

    opened by sneakatyou 4
  • Provide the validation script/notebook.

    Hi,

    I like your work and paper, but I cannot find any validation script to reproduce your results, especially the best result with EfficientNet-B2 (8 classes) on AffectNet.

    Or could you please provide a separate script to preprocess the input images, so we can validate the weights provided in your GitHub repository?

    Thank you,

    opened by ltkhang 4
  • A few suggestions.

    Hello!

    I have a couple of ideas:

    1. Could you please add a text description of the differences between the models, especially between the general b0 and b2 types?
    2. Please consider adding the hsemotion-onnx package to the pip repository.
    opened by ioctl-user 3
  • Cannot load pretrained models

     File "/Users/xxx/Library/Python/3.8/lib/python/site-packages/timm/models/efficientnet_blocks.py", line 47, in forward
        return x * self.gate(x_se)
      File "/Users/xxx/Library/Python/3.8/lib/python/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'SqueezeExcite' object has no attribute 'gate'
    
    opened by DefTruth 3
  • An error when running the code

    When running AFEW_train.ipynb, an error occurred:

    could not broadcast input array from shape (0,112,3) into shape (60,112,3)

    at facial_anylysis.py line 274:

    tmp[dy[k]-1:edy[k],dx[k]-1:edx[k],:] = img[y[k]-1:ey[k],x[k]-1:ex[k],:]

    Why does this occur? Could you please fix it?

    opened by kiva12138 3
  • Valence and arousal

    Hello again! I've read your paper and I've seen that you use the circumplex model's variables, arousal and valence. How do those variables appear in the code? I can't find them :( Thank you, Amaia

    opened by AmaiaBiomedicalEngineer 2
  • Question about this work.

    Dear Andrey Savchenko,

    I'm a student and I'm going to build a small system to detect students' emotions for my thesis. While searching for a solution, I found your work. However, I can't run https://github.com/HSE-asavchenko/face-emotion-recognition/blob/main/src/affectnet/train_emotions.ipynb with the current version of the AffectNet dataset. Please correct me if I'm wrong. My question is: can I run https://github.com/HSE-asavchenko/face-emotion-recognition/blob/main/src/affectnet/train_affectnet_march2021_pytorch.ipynb with MobileNet? I intend to build a small application that detects emotions on the client side and then sends the results to a server.

    Many thanks,

    Son Nguyen.

    opened by sonnguyen1996 2
Releases (v0.2.1)

Owner: Andrey Savchenko