Py-FEAT: Python Facial Expression Analysis Toolbox

Overview

Py-FEAT is a suite for facial expressions (FEX) research written in Python. This package includes tools to detect faces, extract emotional facial expressions (e.g., happiness, sadness, anger), facial muscle movements (e.g., action units), and facial landmarks, from videos and images of faces, as well as methods to preprocess, analyze, and visualize FEX data.

For detailed examples, tutorials, and the API reference, please refer to the Py-FEAT website.

Installation

Option 1: Easy installation for quick use

pip install py-feat

Option 2: Installation in development mode

git clone https://github.com/cosanlab/feat.git
cd feat && pip install -e .

Usage examples

1. Detect FEX data from images or videos

Py-FEAT is intended for use in a Jupyter Notebook or Jupyter Lab environment. In a notebook cell, you can run the following to detect faces, facial landmarks, action units, and emotional expressions from images or videos. On first execution, the default model files are downloaded automatically. You can also choose different detection models from the list of supported models, as sketched after the example below.

from feat.detector import Detector
detector = Detector() 
# Detect FEX from video
out = detector.detect_video("input.mp4")
# Detect FEX from image
out = detector.detect_image("input.png")
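To swap detectors, pass model names to the constructor. A minimal sketch — the model names below are illustrative; check the supported-models list for the options available in your version:

from feat.detector import Detector
# Model names here are assumptions; consult the supported-models list
detector = Detector(
    face_model="retinaface",          # face detection
    landmark_model="mobilefacenet",   # facial landmark detection
    au_model="xgb",                   # action unit detection
    emotion_model="resmasknet",       # emotion detection
)
out = detector.detect_image("input.png")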

2. Visualize FEX data

Visualize FEX detection results.

from feat.detector import Detector
detector = Detector() 
out = detector.detect_image("input.png")
out.plot_detections()

3. Preprocessing & analyzing FEX data

We provide a number of preprocessing and analysis functionalities, including baselining, feature extraction (e.g., timeseries descriptors and wavelet decompositions), prediction, regression, and intersubject correlations. See the examples in our tutorials; a sketch follows.
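This sketch assumes a Fex result from the detector and the method names described in the tutorials (names and arguments may differ across versions):

from feat.detector import Detector
detector = Detector()
fex = detector.detect_video("input.mp4")
# Baseline each feature against its median (assumed API)
baselined = fex.baseline(baseline="median")
# Extract per-video summary statistics as feature descriptors (assumed API)
means = fex.extract_mean()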

Supported Models

Please respect the usage licenses for each model.

Face detection models

Facial landmark detection models

Action Unit detection models

Emotion detection models

Contributing

  1. Fork the repository on GitHub.
  2. Run the tests with pytest tests/ to confirm that all tests pass on your system. If some tests fail, try to find out why. Common causes are missing model files or missing dependencies.
  3. Create your feature and add tests to make sure it works.
  4. Run the tests again with pytest tests/ to make sure everything still passes, including your new feature. If you broke something, fix your feature so that it doesn't break existing code.
  5. Create a pull request to the main repository's master branch.

Licenses

Py-FEAT is provided under the MIT license. You also need to respect the licenses of each model you are using. Please see the LICENSE file for links to each model's license information.

Comments
  • A few models cannot be found

    Hi,

    I was able to install py-feat with no issues. When I tried to run the Detector, it seemed that a few models were downloaded but others were not and appear to be missing.

    These files show up in the folder "/py-feat/feat/resources/"

    • hog_pca_all_emotio.joblib
    • hog_scalar_aus.joblib
    • mobilefacenet_model_best.pth.tar
    • mobilenet0.25_Final.pth
    • mobilenet_224_model_best_gdconv_external.pth.tar
    • model_list.json

    Could you please let me know if I am doing something wrong?

    Thanks!

    detector = Detector(verbose=True)
    Downloading https://objects.githubusercontent.com/github-production-release-asset-2e65be/118517740/675aae00-6f8d-11eb-991e-c7886284a630?X-Amz-... to /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/mobilenet0.25_Final.pth
    100%|██████████| 1786259/1786259 [00:00<00:00, 3251717.53it/s]
    Downloading https://objects.githubusercontent.com/github-production-release-asset-2e65be/118517740/046a1680-6f8f-11eb-997d-d1266747f4bf?X-Amz-... to /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/mobilenet_224_model_best_gdconv_external.pth.tar
    100%|██████████| 45256601/45256601 [00:06<00:00, 7100594.44it/s]
    Downloading https://objects.githubusercontent.com/github-production-release-asset-2e65be/118517740/ee9cd200-8b88-11eb-9992-cc9383e9a7eb?X-Amz-... to /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/hog_pca_all_emotio.joblib
    100%|██████████| 51773375/51773375 [00:01<00:00, 29761468.32it/s]
    Using downloaded and verified file: /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/hog_pca_all_emotio.joblib
    Downloading https://objects.githubusercontent.com/github-production-release-asset-2e65be/118517740/ecfc2580-8da5-11eb-9d71-275376e20c4c?X-Amz-... to /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/hog_scalar_aus.joblib
    100%|██████████| 130390/130390 [00:00<00:00, 109379059.71it/s]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/detector.py", line 108, in __init__
        face, landmark, au, emotion, facepose = get_pretrained_models(
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/pretrained.py", line 239, in get_pretrained_models
        download_url(url, get_resource_path(), verbose=verbose)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/utils.py", line 1063, in download_url
        return tv_download_url(*args, **kwargs)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/torchvision/datasets/utils.py", line 147, in download_url
        url = _get_redirect_url(url, max_hops=max_redirect_hops)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/torchvision/datasets/utils.py", line 95, in _get_redirect_url
        with urllib.request.urlopen(urllib.request.Request(url, headers=headers)) as response:
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 214, in urlopen
        return opener.open(url, data, timeout)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 523, in open
        response = meth(req, response)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 632, in http_response
        response = self.parent.error(
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 555, in error
        result = self._call_chain(*args)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 494, in _call_chain
        result = func(*args)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 747, in http_error_302
        return self.parent.open(new, timeout=req.timeout)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 523, in open
        response = meth(req, response)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 632, in http_response
        response = self.parent.error(
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 561, in error
        return self._call_chain(*args)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 494, in _call_chain
        result = func(*args)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 641, in http_error_default
        raise HTTPError(req.full_url, code, msg, hdrs, fp)
    urllib.error.HTTPError: HTTP Error 404: Not Found

    opened by ethanfa 11
  • GPU not really utilized well

    So I've run a few tests, as I've noticed that Py-FEAT is quite slow compared to something like OpenFace 2. It turns out that while the CPU is always utilized, the GPU is not. This seems to be because the data is loaded one-by-one through OpenCV instead of using a proper GPU pipeline for lists of images. I think it makes sense, if a list of images is given to a Detector via detect_image, to load and predict the images batch-wise through torch DataLoaders, as sketched below.

    [Screenshots in the original issue show GPU vs. CPU utilization.]
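    A minimal sketch of the proposed batching, assuming a plain torch Dataset over image paths (image_paths and the batch size are placeholders; the default collate also assumes equally sized images):

    import torch
    from torch.utils.data import Dataset, DataLoader
    from torchvision.io import read_image

    class ImageList(Dataset):
        """Load images lazily so the DataLoader can batch and prefetch them."""
        def __init__(self, paths):
            self.paths = paths
        def __len__(self):
            return len(self.paths)
        def __getitem__(self, idx):
            return read_image(self.paths[idx]).float() / 255.0

    loader = DataLoader(ImageList(image_paths), batch_size=16, num_workers=4)
    for batch in loader:
        batch = batch.to("cuda")  # run prediction on the whole batch at once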

    enhancement 
    opened by ichitaka 8
  • Sessions Attribute Functionality Discussion.

    Working on the sessions attribute for Fex objects. Right now users can pass in a sessions array that can be iterated over using Fex.itersessions(), exactly the same way that pd.DataFrame.iterrows() works.

    Just wanted to run a few things by @jcheong0428 and @Nathaniel-Haines. If Fex.sessions is not None, should we iterate over every unique session for all preprocessing/descriptive/feature-extraction methods? For example, Fex.downsample() should probably downsample separately for each unique session (e.g., trial, subject), right? And Fex.clean() should do the same, as should Fex.extract_boft().
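    For reference, a minimal sketch of that per-session pattern, assuming itersessions() yields (session, data) pairs like iterrows() (the downsample target is a placeholder):

    # `dat` is a Fex object constructed with a sessions array
    for session, session_data in dat.itersessions():
        # e.g., downsample each session (trial, subject) separately
        downsampled = session_data.downsample(target=10)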

    opened by ljchang 8
  • added Fextractor class for extracting features from a Fex instance

    The Fextractor class works like this:

    # imports assumed; module paths may differ across versions
    import numpy as np
    from feat import Fex, Fextractor
    from feat.utils import read_facet

    # cleaned data
    df = read_facet('iMotions_Test.txt')
    sessions = np.array([[x]*10 for x in range(1+int(len(df)/10))]).flatten()[:-1]
    dat = Fex(df, sampling_freq=30, sessions=sessions)
    dat = dat.interpolate(method='linear')

    # Create an instance of the class
    extractor = Fextractor()

    # Extraction methods build a list of different features within extractor.extracted_features
    extractor.mean(fex_object=dat)
    extractor.max(fex_object=dat)
    extractor.min(fex_object=dat)
    # extractor.boft(fex_object=dat, min_freq=.01, max_freq=.20, bank=1)  # boft not working yet
    extractor.multi_wavelet(fex_object=dat)
    f, num_cyc = 0.5, 3  # example wavelet frequency (Hz) and number of cycles
    extractor.wavelet(fex_object=dat, freq=f, num_cyc=num_cyc)

    # Merge and return all extracted features as a single wide or long DataFrame
    newdat = extractor.merge(out_format='long')
    

    Currently, it returns a pandas DataFrame. Also the boft extraction does not work (but I am not sure if that was already the case?).

    Anyway, this is a first attempt at a feature extraction class. Let me know what you think!

    opened by Nathaniel-Haines 7
  • calculation of sampling freq

    For FACET and OpenFace data, we can calculate the sampling frequency by doing a .diff() on the timestamps. It would be nice to have that as the default when sampling_freq is not passed at class initialization.
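    A minimal sketch of the idea, assuming a timestamp column in seconds (the column name is a placeholder):

    # Infer the sampling frequency (Hz) from the median inter-frame interval
    sampling_freq = 1.0 / df["Timestamp"].diff().median()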

    enhancement low priority 
    opened by jcheong0428 7
  • How to use au_model="rf" and emotion_model="rf" in version 0.4

    Hi,

    I wish to use au_model="rf" for my analysis, since the RF model gives intensity as a continuous variable, unlike SVM, which just gives the presence of AUs.

    It was possible in version 0.3.7 but not in the latest version. What's the alternative for this? Also, my requirement is to use HOG-based models only for AUs and emotions, so I can only use either RF or SVM. Is there an alternative for it?

    opened by ritika24-s 6
  • added mean, min, and max feature extraction methods

    Let me know what you all think of this method of feature extraction. The output format is the same as the boft extractor (one row, with a column for each feature), and specifying the 'by' argument allows users to group observations by other features in the data before summarizing (e.g., by subject or trial). By default, the functions summarize data across all rows.

    This is all default pandas functionality too, so it is quick and easy (see the sketch below).
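    A minimal sketch of that usage (the 'subject' column is a placeholder):

    extractor = Fextractor()
    extractor.mean(fex_object=dat)               # summarize across all rows
    extractor.max(fex_object=dat, by="subject")  # or within each subject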

    opened by Nathaniel-Haines 6
  • Modify Interpolate method

    At some point we should write a new interpolate method. We can base it off of the upsample function.

    It should:

    1. be able to accommodate nonlinear methods such as cubic and spline interpolation. The current one only seems to work with linear.
    2. add a 'limit' flag so it won't try to interpolate if the missing chunk of time is too large (see the sketch below).
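    A minimal sketch of the requested behavior, assuming Fex inherits pandas' interpolate (the limit value is illustrative):

    # Cubic interpolation, but give up on gaps longer than 5 consecutive samples
    filled = dat.interpolate(method="cubic", limit=5, limit_direction="both")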
    enhancement low priority 
    opened by ljchang 5
  • RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0

    Hello, I encountered the following error when attempting to execute the example from notebook 3. It's the same code on a newly initialized environment, so I couldn't figure out what might be causing this problem.

    Traceback (most recent call last):
      File "C:\research\feat\feat\detector.py", line 787, in process_frame
        detected_faces = self.detect_faces(frame=frames)
      File "C:\research\feat\feat\detector.py", line 325, in detect_faces
        faces, poses = self.face_detector(frame)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_test.py", line 99, in __call__
        preds = self.scale_and_predict(img)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_test.py", line 132, in scale_and_predict
        preds = self.predict(img, border_size, scale)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_test.py", line 169, in predict
        pred = self.model.predict([self.transform(img)])[0]
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_model.py", line 107, in predict
        predictions = self.run_model(imgs)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_model.py", line 95, in run_model
        outputs = self.fpn_model(imgs, targets)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\data_parallel.py", line 168, in forward
        outputs = self.parallel_apply(replicas, inputs, kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\data_parallel.py", line 178, in parallel_apply
        return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 86, in parallel_apply
        output.reraise()
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\_utils.py", line 457, in reraise
        raise exception
    RuntimeError: Caught RuntimeError in replica 0 on device 0.
    Original Traceback (most recent call last):
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 61, in _worker
        output = module(*input, **kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\research\feat\feat\facepose_detectors\img2pose\deps\generalized_rcnn.py", line 59, in forward
        images, targets = self.transform(images, targets)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torchvision\models\detection\transform.py", line 127, in forward
        image = self.normalize(image)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torchvision\models\detection\transform.py", line 152, in normalize
        return (image - mean[:, None, None]) / std[:, None, None]
    RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0
    
    exception occurred in the batch
    Since singleframe4error=FALSE, giving up this entire batch result
    (The same traceback and messages repeat for each subsequent batch.)
    opened by TalBarami 4
  • Exception Occurred when using Detector

    An exception occurs when using "mobilenet" as the landmark model on my machine with CUDA installed. The other landmark models don't have this issue. The exception seems to be caused by not all of the torch module's parameters and buffers being on the GPU.

    Traceback (most recent call last):
      File "pyfeat_test.py", line 26, in <module>
        image_prediction = detector.detect_image(test_image)
      File "venv/lib/python3.8/site-packages/feat/detector.py", line 689, in detect_image
        df = self.process_frame(frame)
      File "venv/lib/python3.8/site-packages/feat/detector.py", line 540, in process_frame
        landmarks = self.detect_landmarks(
      File "venv/lib/python3.8/site-packages/feat/detector.py", line 366, in detect_landmarks
        landmark = self.landmark_detector(input).cpu().data.numpy()
      File "venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "venv/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
        raise RuntimeError("module must have its parameters and buffers "
    RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
    

    Does anyone have an idea on how to solve this? Thanks!

    investigate 
    opened by sulibo 4
  • WIP: Added sessions and updated wavelet

    Added a new sessions attribute and an itersessions() method for grouping by session during preprocessing and feature extraction. Still needs to be tested.

    Updated the wavelet() and extract_wavelet() methods. They can now output a complex wavelet and calculate the filtered signal, phase angle, or power. Still needs to be tested.

    • changed sampling_rate -> sampling_freq to be consistent
    • made duration automatically detected
    • still might want to zero-pad data before convolving

    @jcheong0428: Feel free to modify if you want. Once the two new functions are tested and working, I would recommend updating your extract_boft() method to use them.

    opened by ljchang 4
  • Getting Error: List index out of range while using detect_video(). Please help

    I am using py-feat 0.5.0 (PyPI version) and the detect_video function to extract features from a video. The program started successfully, but in the middle of execution it's failing with the error below.

    IndexError                                Traceback (most recent call last)
    Input In [44], in <cell line: 7>()
         11 print(f"Processing: {video}")
         13 # This is the line that does detection!
    ---> 14 fex = detector.detect_video(video)
         16 fex.to_csv(out_name, index=False)

    File ~\.conda\envs\LIE\lib\site-packages\feat\detector.py:802, in Detector.detect_video(self, video_path, skip_frames, output_size, batch_size, num_workers, pin_memory, **detector_kwargs)
        800 frames = list(batch_data["Frame"].numpy())
        801 landmarks = _inverse_landmark_transform(landmarks, batch_data)
    --> 802 output = self._create_fex(
        803     faces, landmarks, poses, aus, emotions, batch_data["FileName"], frames
        804 )
        805 batch_output.append(output)
        807 batch_output = pd.concat(batch_output)

    File ~\.conda\envs\LIE\lib\site-packages\feat\detector.py:872, in Detector._create_fex(self, faces, landmarks, poses, aus, emotions, file_names, frame_counter)
        856 for j, face_in_frame in enumerate(frame):
        857     facebox_df = pd.DataFrame(
        858         [
        859             [
        (...)
        868         index=[j],
        869     )
        871 facepose_df = pd.DataFrame(
    --> 872     [poses[i][j].flatten(order="F")],
        873     columns=self.info["facepose_model_columns"],
        874     index=[j],
        875 )
        877 landmarks_df = pd.DataFrame(
        878     [landmarks[i][j].flatten(order="F")],
        879     columns=self.info["face_landmark_columns"],
        880     index=[j],
        881 )
        883 aus_df = pd.DataFrame(
        884     aus[i][j, :].reshape(1, len(self["au_presence_columns"])),
        885     columns=self.info["au_presence_columns"],
        886     index=[j],
        887 )

    IndexError: list index out of range

    opened by Abid-S 5
  • Make `.detect_video` more memory efficient

    @ljchang after chatting with @TiankangXie, it looks like we can fairly easily roll our own read_video function, because torch also provides a lower-level API via their VideoReader class.

    Just like in their examples, we can write a function that wraps the next(reader) calls and returns a generator, so we load at most batch_size frames into memory on each loop iteration (see the sketch below). That way even long videos shouldn't be a problem on low RAM/VRAM machines, and more memory simply allows for bigger batch sizes.
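    A minimal sketch of that wrapper, assuming torchvision was built with video_reader support (the batch size is a placeholder):

    import torch
    from torchvision.io import VideoReader

    def frame_batches(path, batch_size=32):
        """Yield stacked frame tensors, holding at most batch_size frames in memory."""
        reader = VideoReader(path, "video")
        batch = []
        for frame in reader:  # each frame is a dict: {"data": Tensor[C,H,W], "pts": float}
            batch.append(frame["data"])
            if len(batch) == batch_size:
                yield torch.stack(batch)
                batch = []
        if batch:  # final partial batch
            yield torch.stack(batch)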

    The downside to getting this to work right now is that torch needs to be compiled with support for it, which requires a working ffmpeg install:

    *** RuntimeError: Not compiled with video_reader support, to enable video_reader support, please install ffmpeg (version 4.2 is currently supported) and build torchvision from source.
    Traceback (most recent call last):
      File "/Users/Esh/anaconda3/envs/py-feat/lib/python3.8/site-packages/torchvision/io/__init__.py", line 130, in __init__
        raise RuntimeError(
    

    So it seems like the real cost of rolling our own solution with VideoReader, until torch allows for a more memory-efficient read_video(), is an added dependency on ffmpeg and potentially more installation hassle. Alternatively, we could try a different library or solution for loading video frames. From a brief search on GitHub, it looks like there are lots of custom solutions as third-party libraries, because this isn't quite "solved." But most libraries "cheat" a bit, IMO — e.g., by expecting that you've pre-saved each frame as a separate image file on disk.

    opened by ejolly 0
  • Image size degrades emotion classification accuracy when holding face pixel size constant

    Using v0.4.0 or the current m1_testing branch, with the default detectors specified in the documentation, I'm encountering an issue where large images seem to degrade performance when holding the pixel size of the actual faces constant. Simply cropping out face-free parts of the image improves performance considerably.

    I suspect this is happening because the image is downsampled for face detection, and then the faces are extracted from the downsampled rather than the original image using the resulting bounding boxes. This would unnecessarily downsample the faces in large images that are mostly free of faces, degrading performance. If so, I would suggest upsampling the bounding boxes back to the original image resolution and then extracting the faces from the original (see the sketch below). They could always be downsampled from that point if necessary for the emotion model, but at least it wouldn't be based on something as arbitrary as the overall image size.
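    A minimal sketch of the suggested fix (all names are illustrative, not py-feat internals):

    # Map a box detected on the downsampled image back to full resolution
    scale = original.shape[1] / downsampled.shape[1]  # width ratio
    x1, y1, x2, y2 = (int(round(c * scale)) for c in box)
    face_crop = original[y1:y2, x1:x2]  # crop the face from the original image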

    WIP 
    opened by markallenthornton 2
  • Different AU values when passing to detector.detect_image a single image or a batch of images

    Hi there,

    I am trying to extract AU values from some images. Calling detector.detect_image on each image individually, I obtain values in the AU columns that differ from those I obtain by passing the function a list containing all the images (the same images used in the single-image case).

    The values in the other columns are the same (same detection coordinates and same emotion values); only the AU columns differ.

    Is this behavior expected?

    WIP 
    opened by VaianiLorenzo 2
Releases(v0.5.0)
Owner
Computational Social Affective Neuroscience Laboratory