Py-FEAT: Python Facial Expression Analysis Toolbox

Overview

Py-FEAT is a suite for facial expression (FEX) research written in Python. The package includes tools to detect faces and to extract emotional facial expressions (e.g., happiness, sadness, anger), facial muscle movements (e.g., action units), and facial landmarks from videos and images of faces, as well as methods to preprocess, analyze, and visualize FEX data.

For detailed examples, tutorials, and the API reference, please refer to the Py-FEAT website.

Installation

Option 1: Easy installation for quick use

pip install py-feat

Option 2: Installation in development mode

git clone https://github.com/cosanlab/feat.git
cd feat && pip install -e .

Usage examples

1. Detect FEX data from images or videos

Py-FEAT is intended for use in a Jupyter Notebook or JupyterLab environment. In a notebook cell, you can run the following to detect faces, facial landmarks, action units, and emotional expressions from images or videos. On first run, the default model files are downloaded automatically. You can also swap in other detection models from the list of supported models (see the sketch after the example below).

from feat.detector import Detector
detector = Detector() 
# Detect FEX from video
out = detector.detect_video("input.mp4")
# Detect FEX from image
out = detector.detect_image("input.png")
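
You can also construct a Detector with specific models. A minimal sketch, assuming the keyword arguments and model names documented in the supported models list (verify the options available in your installed version):

from feat.detector import Detector
# Model names below are illustrative; see the Supported Models section.
detector = Detector(
    face_model="retinaface",
    landmark_model="mobilenet",
    au_model="svm",
    emotion_model="resmasknet",
)
out = detector.detect_image("input.png")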

2. Visualize FEX data

Visualize FEX detection results.

from feat.detector import Detector
detector = Detector() 
out = detector.detect_image("input.png")
out.plot_detections()

3. Preprocessing & analyzing FEX data

We provide a number of preprocessing and analysis functionalities, including baselining, feature extraction (e.g., timeseries descriptors and wavelet decompositions), prediction, regression, and intersubject correlation. See the examples in our tutorial.
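
As a minimal sketch of a downstream workflow, assuming detections were saved to a CSV file and treating the import path and method names below as assumptions about the Fex API in your installed version:

from feat.utils.io import read_feat  # older versions: from feat.utils import read_feat

fex = read_feat("detections.csv")        # load saved detector output as a Fex object
fex = fex.interpolate(method="linear")   # fill short gaps in the timeseries
summary = fex.extract_mean()             # one row of per-feature means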

Supported Models

Please respect the usage licenses for each model.

Face detection models

Facial landmark detection models

Action Unit detection models

Emotion detection models

Contributing

  1. Fork the repository on GitHub.
  2. Run the tests with pytest tests/ to confirm that all tests pass on your system. If some tests fail, try to find out why; common causes are missing model files or missing dependencies.
  3. Create your feature AND add tests to verify that it works.
  4. Run the tests again with pytest tests/ to make sure everything still passes, including your new feature. If you broke something, edit your feature so that it doesn't break existing code.
  5. Create a pull request to the main repository's master branch.

Licenses

Py-FEAT is provided under the MIT license. You also need to respect the licenses of each model you are using. Please see the LICENSE file for links to each model's license information.

Comments
  • A few models cannot be found

    Hi,

    I was able to install py-feat with no issues. When I tried to run Detector, it seemed that a few models were downloaded but others were not and appear to be missing.

    These files show up in the folder "/py-feat/feat/resources/"

    • hog_pca_all_emotio.joblib
    • hog_scalar_aus.joblib
    • mobilefacenet_model_best.pth.tar
    • mobilenet0.25_Final.pth
    • mobilenet_224_model_best_gdconv_external.pth.tar
    • model_list.json

    Could you please let me know if I am doing something wrong?

    Thanks!

    detector = Detector(verbose=True)
    Downloading https://objects.githubusercontent.com/github-production-release-asset-2e65be/118517740/... to /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/mobilenet0.25_Final.pth
    100% 1786259/1786259 [00:00<00:00, 3251717.53it/s]
    Downloading https://objects.githubusercontent.com/... to .../site-packages/feat/resources/mobilenet_224_model_best_gdconv_external.pth.tar
    100% 45256601/45256601 [00:06<00:00, 7100594.44it/s]
    Downloading https://objects.githubusercontent.com/... to .../site-packages/feat/resources/hog_pca_all_emotio.joblib
    100% 51773375/51773375 [00:01<00:00, 29761468.32it/s]
    Using downloaded and verified file: .../site-packages/feat/resources/hog_pca_all_emotio.joblib
    Downloading https://objects.githubusercontent.com/... to .../site-packages/feat/resources/hog_scalar_aus.joblib
    100% 130390/130390 [00:00<00:00, 109379059.71it/s]

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File ".../site-packages/feat/detector.py", line 108, in __init__
        face, landmark, au, emotion, facepose = get_pretrained_models(
      File ".../site-packages/feat/pretrained.py", line 239, in get_pretrained_models
        download_url(url, get_resource_path(), verbose=verbose)
      File ".../site-packages/feat/utils.py", line 1063, in download_url
        return tv_download_url(*args, **kwargs)
      File ".../site-packages/torchvision/datasets/utils.py", line 147, in download_url
        url = _get_redirect_url(url, max_hops=max_redirect_hops)
      File ".../site-packages/torchvision/datasets/utils.py", line 95, in _get_redirect_url
        with urllib.request.urlopen(urllib.request.Request(url, headers=headers)) as response:
      File ".../python3.9/urllib/request.py", line 214, in urlopen
        return opener.open(url, data, timeout)
      ...
      File ".../python3.9/urllib/request.py", line 641, in http_error_default
        raise HTTPError(req.full_url, code, msg, hdrs, fp)
    urllib.error.HTTPError: HTTP Error 404: Not Found

    opened by ethanfa 11
  • GPU not really utilized well

    So I've run a few tests, as I've noticed that Py-FEAT is quite slow in comparison to something like OpenFace2. It turns out that while the CPU is always utilized, the GPU is not. This seems to be because the data is loaded one-by-one through OpenCV instead of using a proper GPU pipeline for lists of images. I think it makes sense, if a list of images is given to a Detector for detect_image, to load and predict the images batch-wise through torch DataLoaders.
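
    A minimal sketch of the proposed batching, using plain torch/torchvision APIs; paths is a list of image file paths and the model call at the end is a stand-in, not the actual Py-FEAT internals:

    import torch
    from torch.utils.data import DataLoader, Dataset
    from torchvision.io import read_image
    from torchvision.transforms.functional import resize

    class ImageList(Dataset):
        """Load images lazily in DataLoader workers instead of one-by-one on the main thread."""
        def __init__(self, paths, size=(224, 224)):
            self.paths, self.size = paths, size
        def __len__(self):
            return len(self.paths)
        def __getitem__(self, idx):
            img = read_image(self.paths[idx]).float() / 255.0  # (C, H, W) in [0, 1]
            return resize(img, list(self.size))  # uniform size so batches can be stacked

    loader = DataLoader(ImageList(paths), batch_size=16, num_workers=4, pin_memory=True)
    for batch in loader:
        preds = model(batch.to("cuda"))  # hypothetical per-batch GPU prediction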

    enhancement 
    opened by ichitaka 8
  • Sessions Attribute Functionality Discussion.

    Working on sessions attribute for Fex objects. Right now users can pass in a sessions array that can be iterated over using Fex.itersessions(), exactly the same way that pd.DataFrame.iterrows() works.

    Just wanted to run a few things by @jcheong0428 and @Nathaniel-Haines. If Fex.sessions is not None, should we iterate over every unique session for all preprocessing/descriptive/feature-extraction methods? For example, Fex.downsample() should probably downsample separately for each unique session (e.g., trial, subject), right? And Fex.clean() should do the same, as should Fex.extract_boft().
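
    A quick sketch of how the attribute behaves today, mirroring pd.DataFrame.iterrows():

    # fex is a Fex object constructed with a sessions array
    for session, session_fex in fex.itersessions():
        print(session, session_fex.shape)  # one sub-Fex per unique session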

    opened by ljchang 8
  • added Fextractor class for extracting features from a Fex instance

    The Fextractor class works like this:

    # imports added for completeness; paths assumed for the py-feat version this PR targets
    import numpy as np
    from feat.data import Fex, Fextractor
    from feat.utils import read_facet

    # cleaned data: label every 10 samples as one session
    df = read_facet('iMotions_Test.txt')
    sessions = np.array([[x]*10 for x in range(1+int(len(df)/10))]).flatten()[:-1]
    dat = Fex(df, sampling_freq=30, sessions=sessions)
    dat = dat.interpolate(method='linear')

    # Create instance of class
    extractor = Fextractor()

    # Extraction methods build a list of different features within extractor.extracted_features
    extractor.mean(fex_object=dat)
    extractor.max(fex_object=dat)
    extractor.min(fex_object=dat)
    #extractor.boft(fex_object=dat, min_freq=.01, max_freq=.20, bank=1) # boft not working yet
    extractor.multi_wavelet(fex_object=dat)
    f, num_cyc = 0.5, 3  # example wavelet frequency (Hz) and cycle count; undefined in the original snippet
    extractor.wavelet(fex_object=dat, freq=f, num_cyc=num_cyc)

    # Merge and return all extracted features as a single wide or long DataFrame
    newdat = extractor.merge(out_format='long')
    

    Currently, it returns a pandas DataFrame. Also, the boft extraction does not work (though I am not sure whether that was already the case).

    Anyway, this is a first attempt at a feature extraction class. Let me know what you think!

    opened by Nathaniel-Haines 7
  • calculation of sampling freq

    For FACET and OpenFace data, we can calculate the sampling frequency by taking a .diff() of the timestamps. It would be nice to have that as the default when sampling_freq is not passed at class initialization.
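
    A hypothetical sketch of that default; the column name and time units depend on the exporter, so treat them as assumptions:

    import pandas as pd

    df = pd.read_csv("openface_output.csv")
    dt = df["timestamp"].diff().median()  # median inter-sample interval, in seconds
    sampling_freq = 1.0 / dt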

    enhancement low priority 
    opened by jcheong0428 7
  • How to use au_model="rf" and emotion_model = "rf" in version 0.4

    How to use au_model="rf" and emotion_model = "rf" in version 0.4

    Hi,

    I wish to use au_model="rf" for my analysis, since the RF model gives AU intensity as a continuous variable, unlike SVM, which just gives the probability of detecting AUs.

    It was possible in version 0.3.7 but is not in the latest version. What's the alternative for this? Also, my requirement is to use HOG-based models only for AUs and emotions, so I can only use either RF or SVM. Is there any alternative for it?

    opened by ritika24-s 6
  • added mean, min, and max feature extraction methods

    Let me know what you all think of this method of feature extraction. The output format is the same as the boft extractor (1 row, and a column for each feature), and specifying the 'by' argument allows users to group observations by other features in the data before summarizing (e.g. by subjects, trials, or whatever). By default, the functions will summarize data across all rows.

    This is all default pandas functionality too, so it is quick and easy.

    opened by Nathaniel-Haines 6
  • Modify Interpolate method

    At some point we should write a new interpolate method. We can base it off of the upsample function.

    It should:

    1. be able to accommodate nonlinear methods such as cubic and spline; the current one only seems to work with linear.
    2. add a 'limit' flag so that it won't try to interpolate when the missing chunk of time is too large (see the pandas sketch below).
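
    Both pieces already exist in pandas, which Fex builds on, so the new method could delegate to them; a minimal sketch:

    import numpy as np
    import pandas as pd

    s = pd.Series([0.0, np.nan, 2.0, np.nan, np.nan, np.nan, 6.0, 8.0, 10.0])
    s.interpolate(method="cubic")            # nonlinear interpolation (requires scipy)
    s.interpolate(method="linear", limit=2)  # fill at most 2 consecutive NaNs
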
    enhancement low priority 
    opened by ljchang 5
  • RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0

    Hello, I encountered the following error when attempting to execute the example from notebook 3. It's the same code, run in a newly-initialized environment, so I couldn't figure out what may be causing this problem.

    Traceback (most recent call last):
      File "C:\research\feat\feat\detector.py", line 787, in process_frame
        detected_faces = self.detect_faces(frame=frames)
      File "C:\research\feat\feat\detector.py", line 325, in detect_faces
        faces, poses = self.face_detector(frame)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_test.py", line 99, in __call__
        preds = self.scale_and_predict(img)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_test.py", line 132, in scale_and_predict
        preds = self.predict(img, border_size, scale)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_test.py", line 169, in predict
        pred = self.model.predict([self.transform(img)])[0]
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_model.py", line 107, in predict
        predictions = self.run_model(imgs)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_model.py", line 95, in run_model
        outputs = self.fpn_model(imgs, targets)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\data_parallel.py", line 168, in forward
        outputs = self.parallel_apply(replicas, inputs, kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\data_parallel.py", line 178, in parallel_apply
        return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 86, in parallel_apply
        output.reraise()
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\_utils.py", line 457, in reraise
        raise exception
    RuntimeError: Caught RuntimeError in replica 0 on device 0.
    Original Traceback (most recent call last):
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 61, in _worker
        output = module(*input, **kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\research\feat\feat\facepose_detectors\img2pose\deps\generalized_rcnn.py", line 59, in forward
        images, targets = self.transform(images, targets)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torchvision\models\detection\transform.py", line 127, in forward
        image = self.normalize(image)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torchvision\models\detection\transform.py", line 152, in normalize
        return (image - mean[:, None, None]) / std[:, None, None]
    RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0
    
    exception occurred in the batch
    Since singleframe4error=FALSE, giving up this entire batch result
    (the same traceback and batch-failure message repeat four more times, once per batch)
    opened by TalBarami 4
  • Exception Occurred when using Detector

    An exception occurs when using "mobilenet" as the landmark model on my machine with CUDA installed. The other landmark models don't have this issue. The exception seems to be caused by not all of the torch module's parameters and buffers being on the GPU.

    Traceback (most recent call last):
      File "pyfeat_test.py", line 26, in <module>
        image_prediction = detector.detect_image(test_image)
      File "venv/lib/python3.8/site-packages/feat/detector.py", line 689, in detect_image
        df = self.process_frame(frame)
      File "venv/lib/python3.8/site-packages/feat/detector.py", line 540, in process_frame
        landmarks = self.detect_landmarks(
      File "venv/lib/python3.8/site-packages/feat/detector.py", line 366, in detect_landmarks
        landmark = self.landmark_detector(input).cpu().data.numpy()
      File "venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "venv/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
        raise RuntimeError("module must have its parameters and buffers "
    RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
    

    Does anyone have an idea on how to solve this? Thanks!
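
    A hedged workaround sketch, not a confirmed fix: torch.nn.DataParallel requires the wrapped module to already be on device_ids[0], so explicitly moving the landmark model to the GPU may resolve the mismatch (the attribute name is taken from the traceback above; treat it as illustrative):

    # assumes a CUDA device is available
    detector.landmark_detector = detector.landmark_detector.to("cuda:0")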

    investigate 
    opened by sulibo 4
  • WIP: Added sessions and updated wavelet

    Added a new sessions attribute and an itersessions() method for grouping by session during preprocessing and feature extraction. Still needs to be tested.

    Updated the wavelet and extract_wavelet() methods. They can now output a complex wavelet and calculate the filtered signal, phase angle, or power. Still needs to be tested.

    • changed sampling_rate -> sampling_freq to be consistent
    • made duration automatically detected
    • still might want to zero-pad data before convolving

    @jcheong0428: Feel free to modify if you want. Once the two new functions are tested and working, I would recommend updating your extract_boft() method to use them.

    opened by ljchang 4
  • Getting Error: List index out of range while using detect_video(). Please help

    I am using the py-feat 0.5.0 PyPI version and its detect_video function to extract features from a video. The program started successfully, but in the middle of execution it's failing with the error below.

    IndexError                                Traceback (most recent call last)
    Input In [44], in <cell line: 7>()
         11 print(f"Processing: {video}")
         13 # This is the line that does detection!
    ---> 14 fex = detector.detect_video(video)
         16 fex.to_csv(out_name, index=False)

    File ~\.conda\envs\LIE\lib\site-packages\feat\detector.py:802, in Detector.detect_video(self, video_path, skip_frames, output_size, batch_size, num_workers, pin_memory, **detector_kwargs)
        800 frames = list(batch_data["Frame"].numpy())
        801 landmarks = _inverse_landmark_transform(landmarks, batch_data)
    --> 802 output = self._create_fex(
        803     faces, landmarks, poses, aus, emotions, batch_data["FileName"], frames
        804 )
        805 batch_output.append(output)
        807 batch_output = pd.concat(batch_output)

    File ~\.conda\envs\LIE\lib\site-packages\feat\detector.py:872, in Detector._create_fex(self, faces, landmarks, poses, aus, emotions, file_names, frame_counter)
        856 for j, face_in_frame in enumerate(frame):
        857     facebox_df = pd.DataFrame(
        ...
        871 facepose_df = pd.DataFrame(
    --> 872     [poses[i][j].flatten(order="F")],
        873     columns=self.info["facepose_model_columns"],
        874     index=[j],
        875 )
        877 landmarks_df = pd.DataFrame(
        878     [landmarks[i][j].flatten(order="F")],
        879     columns=self.info["face_landmark_columns"],
        880     index=[j],
        881 )
        883 aus_df = pd.DataFrame(
        884     aus[i][j, :].reshape(1, len(self["au_presence_columns"])),
        885     columns=self.info["au_presence_columns"],
        886     index=[j],
        887 )

    IndexError: list index out of range

    opened by Abid-S 5
  • Make `.detect_video` more memory efficient

    @ljchang after chatting with @TiankangXie it looks like we can fairly easily roll our own read_video function, because torch also provides a lower-level API with their VideoReader class.

    Just like in their examples, we can write a function that wraps the next(reader) calls and returns a generator, so we load at most batch_size frames into memory on each loop iteration. That way even long videos shouldn't be a problem on low-RAM/VRAM machines, and more memory will simply allow for bigger batch sizes.

    The downside of trying to get this to work right now is that torch needs to be compiled with support for it and requires a working ffmpeg install:

    *** RuntimeError: Not compiled with video_reader support, to enable video_reader support, please install ffmpeg (version 4.2 is currently supported) and build torchvision from source.
    Traceback (most recent call last):
      File "/Users/Esh/anaconda3/envs/py-feat/lib/python3.8/site-packages/torchvision/io/__init__.py", line 130, in __init__
        raise RuntimeError(
    

    So it seems like the real cost of rolling our own solution with VideoReader, until torch allows for a more memory-efficient read_video(), is an added dependency on ffmpeg and potentially more installation hassle. Or we can try a different library or solution for loading video frames. From a brief search on GitHub it looks like there are lots of custom solutions as third-party libraries, because this isn't quite "solved." But most libraries "cheat" a bit IMO, e.g., by expecting that you've pre-saved each frame as a separate image file on disk.
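
    A sketch of that wrapper, assuming torchvision was built with video_reader support (the helper name is hypothetical):

    import torch
    from torchvision.io import VideoReader

    def iter_video_batches(path, batch_size=32):
        """Yield stacked frame tensors, holding at most batch_size frames in memory at a time."""
        reader = VideoReader(path, "video")
        batch = []
        for frame in reader:  # each frame is a dict with "data" (C, H, W) and "pts"
            batch.append(frame["data"])
            if len(batch) == batch_size:
                yield torch.stack(batch)
                batch = []
        if batch:  # flush the final partial batch
            yield torch.stack(batch)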

    opened by ejolly 0
  • Image size degrades emotion classification accuracy when holding face pixel size constant

    Using v0.4.0 or the current m1_testing branch, using the default detectors specified in the documentation, I'm encountering an issue where using large images seems to degrade performance, when holding the pixel size of the actual faces constant. Simply cropping out face-free parts of the image improves performance considerably. I suspect that this might be happening because the image is downsampled for face detection, and then when the faces are extracted using the resulting bounding boxes, the downsampled rather than original image is used. This would result in the faces being unnecessarily downsampled in large images that are mostly free of faces, leading to degraded performance. If this is the problem, I would suggest upsampling the bounding boxes back to the original image resolution, and then extracting the faces from the original. They could always be downsampled from this point if necessary for the emotion model, but at least it wouldn't be based on something arbitrary like the overall image size.
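
    A minimal sketch of the suggested fix, with hypothetical variable names: rescale each detected box from the downsampled image back to the original resolution before cropping the face:

    # scale factor between the original image and the downsampled copy used for detection
    scale = original_width / downsampled_width
    x1, y1, x2, y2 = (int(round(v * scale)) for v in box)
    face_crop = original_image[y1:y2, x1:x2]  # crop from the full-resolution image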

    WIP 
    opened by markallenthornton 2
  • Different AU values when passing to detector.detect_image a single image or a batch of images

    Hi there,

    I am trying to extract AU values from some images. Calling detector.detect_image on each image individually, I obtain values in the AU columns different from those I obtain by passing the function a list containing all the images (the same images used in the single-image case).

    The values in the other columns are the same (same detection coordinates and same emotion values); only the values in the AU columns differ.

    Is this behavior expected?

    WIP 
    opened by VaianiLorenzo 2
Releases
v0.5.0

Owner
Computational Social Affective Neuroscience Laboratory