Py-FEAT: Python Facial Expression Analysis Toolbox

Overview

Py-FEAT is a suite for facial expressions (FEX) research written in Python. This package includes tools to detect faces, extract emotional facial expressions (e.g., happiness, sadness, anger), facial muscle movements (e.g., action units), and facial landmarks, from videos and images of faces, as well as methods to preprocess, analyze, and visualize FEX data.

For detailed examples, tutorials, and API documentation, please refer to the Py-FEAT website.

Installation

Option 1: Easy installation for quick use

pip install py-feat

Option 2: Installation in development mode

git clone https://github.com/cosanlab/feat.git
cd feat && pip install -e .

Usage examples

1. Detect FEX data from images or videos

Py-FEAT is intended for use in a Jupyter Notebook or Jupyter Lab environment. In a notebook cell, you can run the following to detect faces, facial landmarks, action units, and emotional expressions from images or videos. On first execution, the default model files are downloaded automatically. You can also change the detection models, choosing from the list of supported models.

from feat.detector import Detector
detector = Detector() 
# Detect FEX from video
out = detector.detect_video("input.mp4")
# Detect FEX from image
out = detector.detect_image("input.png")

2. Visualize FEX data

Visualize FEX detection results.

from feat.detector import Detector
detector = Detector() 
out = detector.detect_image("input.png")
out.plot_detections()
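
The returned object behaves like a pandas DataFrame, so results can be sliced and saved as usual. The accessor names below are assumptions based on recent py-feat versions:

out.emotions.head()   # emotion probabilities per detected face
out.aus.head()        # action unit values
out.to_csv("output.csv", index=False)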

3. Preprocessing & analyzing FEX data

We provide a number of preprocessing and analysis functionalities including baselining, feature extraction such as timeseries descriptors and wavelet decompositions, predictions, regressions, and intersubject correlations. See examples in our tutorial.
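
As a rough sketch of that workflow (method names follow the py-feat tutorials; treat this as illustrative rather than exact):

# detect, then preprocess and summarize FEX data
fex = detector.detect_video("input.mp4")
fex = fex.baseline(baseline="median")  # baseline each feature
means = fex.extract_mean()             # simple timeseries descriptors
isc = fex.isc(col="happiness")         # intersubject correlations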

Supported Models

Please respect the usage licenses for each model.

Face detection models

Facial landmark detection models

Action Unit detection models

Emotion detection models

Contributing

  1. Fork the repository on GitHub.
  2. Run the tests with pytest tests/ to confirm that all tests pass on your system. If some tests fail, try to find out why they are failing; common causes are missing model files or missing dependencies.
  3. Create your feature AND add tests to make sure they are working.
  4. Run the tests again with pytest tests/ to make sure everything still passes, including your new feature. If you broke something, edit your feature so that it doesn't break existing code.
  5. Create a pull request to the main repository's master branch.

Licenses

Py-FEAT is provided under the MIT license. You also need to respect the licenses of each model you are using. Please see the LICENSE file for links to each model's license information.

Comments
  • A few models cannot be found

    Hi,

    I was able to install py-feat with no issues. When I tried to run "Detector", it seemed that a few models were downloaded while others were not, and those appear to be missing.

    These files show up in the folder "/py-feat/feat/resources/"

    • hog_pca_all_emotio.joblib
    • hog_scalar_aus.joblib
    • mobilefacenet_model_best.pth.tar
    • mobilenet0.25_Final.pth
    • mobilenet_224_model_best_gdconv_external.pth.tar
    • model_list.json

    Could you please let me know if I am doing something wrong?

    Thanks!

    detector = Detector(verbose=True)
    Downloading https://objects.githubusercontent.com/github-production-release-asset-2e65be/118517740/675aae00-6f8d-11eb-991e-c7886284a630?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221123%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221123T070446Z&X-Amz-Expires=300&X-Amz-Signature=74b29bce2b4913901b3db5574d330b63ec44e1348fd84a85910bb93b40450cc0&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=118517740&response-content-disposition=attachment%3B%20filename%3Dmobilenet0.25_Final.pth&response-content-type=application%2Foctet-stream to /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/mobilenet0.25_Final.pth
    100%|██████████| 1786259/1786259 [00:00<00:00, 3251717.53it/s]
    Downloading https://objects.githubusercontent.com/github-production-release-asset-2e65be/118517740/046a1680-6f8f-11eb-997d-d1266747f4bf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221123%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221123T070447Z&X-Amz-Expires=300&X-Amz-Signature=7e7a241f1533a768a774e192e216ea18fecfe6bab10c32b6b8fa436b747d2b82&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=118517740&response-content-disposition=attachment%3B%20filename%3Dmobilenet_224_model_best_gdconv_external.pth.tar&response-content-type=application%2Foctet-stream to /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/mobilenet_224_model_best_gdconv_external.pth.tar
    100%|██████████| 45256601/45256601 [00:06<00:00, 7100594.44it/s]
    Downloading https://objects.githubusercontent.com/github-production-release-asset-2e65be/118517740/ee9cd200-8b88-11eb-9992-cc9383e9a7eb?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221123%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221123T070455Z&X-Amz-Expires=300&X-Amz-Signature=8cad35bc21ac17cc6f9939cee2824e1a37c3ac37c2e8a0e445c714ced9f396b2&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=118517740&response-content-disposition=attachment%3B%20filename%3Dhog_pca_all_emotio.joblib&response-content-type=application%2Foctet-stream to /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/hog_pca_all_emotio.joblib
    100%|██████████| 51773375/51773375 [00:01<00:00, 29761468.32it/s]
    Using downloaded and verified file: /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/hog_pca_all_emotio.joblib
    Downloading https://objects.githubusercontent.com/github-production-release-asset-2e65be/118517740/ecfc2580-8da5-11eb-9d71-275376e20c4c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20221123%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221123T070458Z&X-Amz-Expires=300&X-Amz-Signature=8c3ed03733a6bff706e6b0e1f89fa815f23f42674a37f338fd61e38a627474ac&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=118517740&response-content-disposition=attachment%3B%20filename%3Dhog_scalar_aus.joblib&response-content-type=application%2Foctet-stream to /home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/resources/hog_scalar_aus.joblib
    100%|██████████| 130390/130390 [00:00<00:00, 109379059.71it/s]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/detector.py", line 108, in __init__
        face, landmark, au, emotion, facepose = get_pretrained_models(
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/pretrained.py", line 239, in get_pretrained_models
        download_url(url, get_resource_path(), verbose=verbose)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/feat/utils.py", line 1063, in download_url
        return tv_download_url(*args, **kwargs)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/torchvision/datasets/utils.py", line 147, in download_url
        url = _get_redirect_url(url, max_hops=max_redirect_hops)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/site-packages/torchvision/datasets/utils.py", line 95, in _get_redirect_url
        with urllib.request.urlopen(urllib.request.Request(url, headers=headers)) as response:
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 214, in urlopen
        return opener.open(url, data, timeout)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 523, in open
        response = meth(req, response)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 632, in http_response
        response = self.parent.error(
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 555, in error
        result = self._call_chain(*args)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 494, in _call_chain
        result = func(*args)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 747, in http_error_302
        return self.parent.open(new, timeout=req.timeout)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 523, in open
        response = meth(req, response)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 632, in http_response
        response = self.parent.error(
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 561, in error
        return self._call_chain(*args)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 494, in _call_chain
        result = func(*args)
      File "/home/temp/miniconda3/envs/env100/lib/python3.9/urllib/request.py", line 641, in http_error_default
        raise HTTPError(req.full_url, code, msg, hdrs, fp)
    urllib.error.HTTPError: HTTP Error 404: Not Found

    opened by ethanfa 11
  • GPU not really utilized well

    So I've run a few tests, since I noticed that Py-FEAT is quite slow in comparison to something like OpenFace2. It turns out that while the CPU is always utilized, the GPU is not. This seems to be because the data is loaded one-by-one through OpenCV instead of using a proper GPU library for lists of images. I think it makes sense, if a list of images is given to a Detector for detect_image, to load and predict the images batch-wise through torch DataLoaders.

    enhancement 
    opened by ichitaka 8
  • Sessions Attribute Functionality Discussion.

    Working on the sessions attribute for Fex objects. Right now users can pass in a sessions array that can be iterated over using Fex.itersessions(), exactly the same way that pd.DataFrame.iterrows() works.

    Just wanted to run a few things by @jcheong0428 and @Nathaniel-Haines. If Fex.sessions is not None, should we iterate over every unique session for all preprocessing/descriptive/feature extraction methods? For example, Fex.downsample() should probably downsample separately for each unique session (e.g., trial, subject), right? And Fex.clean() should do the same, as should Fex.extract_boft().
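
    For concreteness, a hypothetical loop mirroring pd.DataFrame.iterrows():

    # assumes `dat` is a Fex object built with a sessions array, as in the
    # Fextractor example further below
    for session, session_data in dat.itersessions():
        # session_data contains only the rows belonging to this session
        print(session, session_data.mean())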

    opened by ljchang 8
  • added Fextractor class for extracting features from a Fex instance

    The Fextractor class works like this:

    import numpy as np
    from feat import Fex, Fextractor  # import paths may differ by version
    from feat.utils import read_facet
    
    # cleaned data
    df = read_facet('iMotions_Test.txt')
    sessions = np.array([[x]*10 for x in range(1+int(len(df)/10))]).flatten()[:-1]
    dat = Fex(df, sampling_freq=30, sessions=sessions)
    dat = dat.interpolate(method='linear')
    
    # Create an instance of the class
    extractor = Fextractor()
    
    # Extraction methods build a list of different features within extractor.extracted_features
    extractor.mean(fex_object=dat)
    extractor.max(fex_object=dat)
    extractor.min(fex_object=dat)
    # extractor.boft(fex_object=dat, min_freq=.01, max_freq=.20, bank=1)  # boft not working yet
    extractor.multi_wavelet(fex_object=dat)
    f, num_cyc = 0.5, 3  # example wavelet frequency (Hz) and number of cycles
    extractor.wavelet(fex_object=dat, freq=f, num_cyc=num_cyc)
    
    # Merge and return all extracted features as a single wide or long DataFrame
    newdat = extractor.merge(out_format='long')
    

    Currently, it returns a pandas DataFrame. Also, the boft extraction does not work (but I am not sure whether that was already the case).

    Anyway, this is a first attempt at a feature extraction class. Let me know what you think!

    opened by Nathaniel-Haines 7
  • calculation of sampling freq

    For our FACET and OpenFace data we can calculate the sampling frequency by doing a .diff() on the timestamps. It would be nice to have that as the default when sampling_freq is not passed at class initialization.
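
    A minimal sketch of the idea, assuming a timestamp column in seconds (column values here are made up):

    import numpy as np
    import pandas as pd

    timestamps = pd.Series([0.000, 0.033, 0.066, 0.100])
    # infer sampling frequency from the median inter-sample interval
    sampling_freq = 1.0 / np.median(np.diff(timestamps.values))  # ~30 Hz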

    enhancement low priority 
    opened by jcheong0428 7
  • How to use au_model="rf" and emotion_model="rf" in version 0.4

    Hi,

    For my analysis I wish to use au_model="rf", since the RF model gives AU intensity as a continuous value, unlike SVM, which just gives the probability of detecting AUs.

    It was possible in version 0.3.7 but is not in the latest version. What's the alternative for this? Also, my requirement is to use HOG-based models only for AUs and emotions, so I can only use either RF or SVM. Is there any alternative for it?

    opened by ritika24-s 6
  • added mean, min, and max feature extraction methods

    Let me know what you all think of this method of feature extraction. The output format is the same as the boft extractor (1 row, with a column for each feature), and specifying the 'by' argument allows users to group observations by other features in the data before summarizing (e.g., by subject, trial, or whatever). By default, the functions summarize data across all rows.

    This is all default pandas functionality too, so it is quick and easy; see the hypothetical calls below.
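
    # hypothetical calls, reusing `extractor` and `dat` from the example above;
    # the 'by' value is a made-up column name
    extractor.mean(fex_object=dat)                # summarize across all rows
    extractor.mean(fex_object=dat, by="subject")  # one summary row per subject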

    opened by Nathaniel-Haines 6
  • Modify Interpolate method

    At some point we should write a new interpolate method. We can base it off of the upsample function.

    It should:

    1. be able to accommodate nonlinear methods such as cubic and spline interpolation. The current one only seems to work with linear.
    2. add a 'limit' flag so that it won't try to interpolate if the missing chunk of time is too large (see the pandas sketch below).
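
    Both features already exist in pandas; a sketch of what the new method could wrap (arguments here are pandas', not Fex's):

    import numpy as np
    import pandas as pd

    s = pd.Series([0.0, np.nan, np.nan, 3.0, 4.0, np.nan, 6.0, 7.0, 8.0])
    # nonlinear method plus a cap on how many consecutive NaNs get filled
    s_interp = s.interpolate(method="cubic", limit=2)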
    enhancement low priority 
    opened by ljchang 5
  • RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0

    Hello, I encountered the following error when attempting to execute the example from notebook 3. It's the same code on a newly-initialized environment, so I couldn't figure out what might be causing this problem.

    Traceback (most recent call last):
      File "C:\research\feat\feat\detector.py", line 787, in process_frame
        detected_faces = self.detect_faces(frame=frames)
      File "C:\research\feat\feat\detector.py", line 325, in detect_faces
        faces, poses = self.face_detector(frame)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_test.py", line 99, in __call__
        preds = self.scale_and_predict(img)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_test.py", line 132, in scale_and_predict
        preds = self.predict(img, border_size, scale)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_test.py", line 169, in predict
        pred = self.model.predict([self.transform(img)])[0]
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_model.py", line 107, in predict
        predictions = self.run_model(imgs)
      File "C:\research\feat\feat\facepose_detectors\img2pose\img2pose_model.py", line 95, in run_model
        outputs = self.fpn_model(imgs, targets)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\data_parallel.py", line 168, in forward
        outputs = self.parallel_apply(replicas, inputs, kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\data_parallel.py", line 178, in parallel_apply
        return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 86, in parallel_apply
        output.reraise()
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\_utils.py", line 457, in reraise
        raise exception
    RuntimeError: Caught RuntimeError in replica 0 on device 0.
    Original Traceback (most recent call last):
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 61, in _worker
        output = module(*input, **kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\research\feat\feat\facepose_detectors\img2pose\deps\generalized_rcnn.py", line 59, in forward
        images, targets = self.transform(images, targets)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torchvision\models\detection\transform.py", line 127, in forward
        image = self.normalize(image)
      File "C:\Users\owner\anaconda3\envs\py38\lib\site-packages\torchvision\models\detection\transform.py", line 152, in normalize
        return (image - mean[:, None, None]) / std[:, None, None]
    RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0
    
    exception occurred in the batch
    Since singleframe4error=FALSE, giving up this entire batch result
    [the identical traceback repeats for each subsequent batch]
    
    opened by TalBarami 4
  • Exception Occurred when using Detector

    An exception occurs when using "mobilenet" as the landmark model on my machine with CUDA installed. The other landmark models don't have this issue. The exception seems to be caused by not all of the torch module's parameters and buffers being on the GPU.

    Traceback (most recent call last):
      File "pyfeat_test.py", line 26, in <module>
        image_prediction = detector.detect_image(test_image)
      File "venv/lib/python3.8/site-packages/feat/detector.py", line 689, in detect_image
        df = self.process_frame(frame)
      File "venv/lib/python3.8/site-packages/feat/detector.py", line 540, in process_frame
        landmarks = self.detect_landmarks(
      File "venv/lib/python3.8/site-packages/feat/detector.py", line 366, in detect_landmarks
        landmark = self.landmark_detector(input).cpu().data.numpy()
      File "venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "venv/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
        raise RuntimeError("module must have its parameters and buffers "
    RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
    

    Does anyone have an idea on how to solve this? Thanks!
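
    For reference, a generic PyTorch sketch of the usual fix for this error (not Py-FEAT's actual code): move the module to device_ids[0] before wrapping it in DataParallel.

    import torch

    model = torch.nn.Linear(10, 2)  # stand-in for the landmark model
    if torch.cuda.is_available():
        model = model.to("cuda:0")  # parameters must live on device_ids[0]
        model = torch.nn.DataParallel(model, device_ids=[0])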

    investigate 
    opened by sulibo 4
  • WIP: Added sessions and updated wavelet

    Added a new sessions attribute and an itersessions() method for grouping by session during preprocessing and feature extraction. Still needs to be tested.

    Updated the wavelet and extract_wavelet() methods. They can now output a complex wavelet and calculate the filtered signal, phase angle, or power. Still needs to be tested.

    • changed sampling_rate -> sampling_freq to be consistent
    • made duration automatically detected.
    • still might want to zero-pad data before convolving.

    @jcheong0428: Feel free to modify if you want. Once the two new functions seem to be tested and working, I would recommend updating your extract_boft() method to use the new functions.

    opened by ljchang 4
  • Getting Error: List index out of range while using detect_video(). Please help

    I am using the py-feat 0.5.0 PyPI version and the detect_video function to extract features from video. The program started successfully, but in the middle of execution it fails with the error below.

    IndexError                                Traceback (most recent call last)
    Input In [44], in <cell line: 7>()
         11 print(f"Processing: {video}")
         13 # This is the line that does detection!
    ---> 14 fex = detector.detect_video(video)
         16 fex.to_csv(out_name, index=False)

    File ~\.conda\envs\LIE\lib\site-packages\feat\detector.py:802, in Detector.detect_video(self, video_path, skip_frames, output_size, batch_size, num_workers, pin_memory, **detector_kwargs)
        800 frames = list(batch_data["Frame"].numpy())
        801 landmarks = _inverse_landmark_transform(landmarks, batch_data)
    --> 802 output = self._create_fex(
        803     faces, landmarks, poses, aus, emotions, batch_data["FileName"], frames
        804 )
        805 batch_output.append(output)
        807 batch_output = pd.concat(batch_output)

    File ~\.conda\envs\LIE\lib\site-packages\feat\detector.py:872, in Detector._create_fex(self, faces, landmarks, poses, aus, emotions, file_names, frame_counter)
        856 for j, face_in_frame in enumerate(frame):
        857     facebox_df = pd.DataFrame(
        858         [
        859             [
        (...)
        868         index=[j],
        869     )
        871     facepose_df = pd.DataFrame(
    --> 872         [poses[i][j].flatten(order="F")],
        873         columns=self.info["facepose_model_columns"],
        874         index=[j],
        875     )
        877     landmarks_df = pd.DataFrame(
        878         [landmarks[i][j].flatten(order="F")],
        879         columns=self.info["face_landmark_columns"],
        880         index=[j],
        881     )
        883     aus_df = pd.DataFrame(
        884         aus[i][j, :].reshape(1, len(self["au_presence_columns"])),
        885         columns=self.info["au_presence_columns"],
        886         index=[j],
        887     )

    IndexError: list index out of range

    opened by Abid-S 5
  • Make `.detect_video` more memory efficient

    @ljchang after chatting with @TiankangXie it looks like we can fairly easily roll our own read_video function, because torch also provides a lower-level API with their VideoReader class.

    Just like in their examples, we can write a function that wraps the next(reader) calls and returns a generator, so we load at most batch_size frames into memory on each loop iteration. That way even long videos shouldn't be a problem on low RAM/VRAM machines, and more memory will simply allow for bigger batch sizes; see the sketch below.
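
    A minimal sketch of that generator, assuming torchvision was built with video_reader support:

    from torchvision.io import VideoReader

    def frame_batches(path, batch_size):
        """Yield lists of at most batch_size decoded frames."""
        reader = VideoReader(path, "video")
        batch = []
        for frame in reader:  # each frame is a dict with a "data" tensor
            batch.append(frame["data"])
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:  # flush the final partial batch
            yield batch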

    The downside of trying to get it to work right now is that torch needs to be compiled with support for it and requires a working ffmpeg install:

    *** RuntimeError: Not compiled with video_reader support, to enable video_reader support, please install ffmpeg (version 4.2 is currently supported) and build torchvision from source.
    Traceback (most recent call last):
      File "/Users/Esh/anaconda3/envs/py-feat/lib/python3.8/site-packages/torchvision/io/__init__.py", line 130, in __init__
        raise RuntimeError(
    

    So it seems like the real cost of rolling our own solution with VideoReader, until torch allows for a more memory-efficient read_video(), is an added dependency on ffmpeg and potentially more installation hassle. Or we can try a different library or solution for loading video frames. From a brief search on GitHub, it looks like there are lots of custom solutions as third-party libraries, because this isn't quite "solved." But most libraries "cheat" a bit IMO, e.g., expecting that you've pre-saved each frame as a separate image file on disk.

    opened by ejolly 0
  • Image size degrades emotion classification accuracy when holding face pixel size constant

    Using v0.4.0 or the current m1_testing branch, using the default detectors specified in the documentation, I'm encountering an issue where using large images seems to degrade performance, when holding the pixel size of the actual faces constant. Simply cropping out face-free parts of the image improves performance considerably. I suspect that this might be happening because the image is downsampled for face detection, and then when the faces are extracted using the resulting bounding boxes, the downsampled rather than original image is used. This would result in the faces being unnecessarily downsampled in large images that are mostly free of faces, leading to degraded performance. If this is the problem, I would suggest upsampling the bounding boxes back to the original image resolution, and then extracting the faces from the original. They could always be downsampled from this point if necessary for the emotion model, but at least it wouldn't be based on something arbitrary like the overall image size.
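
    A hypothetical sketch of the suggested fix, scaling detected boxes back up and cropping faces from the original image rather than the downsampled one:

    import numpy as np

    def crop_from_original(original, box, scale):
        """box = (x1, y1, x2, y2) in downsampled coords; scale = orig_width / downsampled_width."""
        x1, y1, x2, y2 = (int(round(c * scale)) for c in box)
        return original[y1:y2, x1:x2]

    original = np.zeros((1080, 1920, 3), dtype=np.uint8)  # dummy full-resolution image
    face = crop_from_original(original, (100, 120, 160, 190), scale=3.0)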

    WIP 
    opened by markallenthornton 2
  • Different AU values when passing to detector.detect_image a single image or a batch of images

    Hi there,

    I am trying to extract AU values from some images. Calling detector.detect_image on each image individually, I obtain values in the AU columns that differ from those I obtain by passing the function a list containing all the images (the same ones used in the single-image case).

    The values in the other columns are the same (same detection coordinates and same emotion values); only the values in the AU columns are different.

    Is this behavior expected?

    WIP 
    opened by VaianiLorenzo 2
Releases(v0.5.0)
Owner
Computational Social Affective Neuroscience Laboratory