Face recognition. Redefined.



Logo

FaceFinder

Use a powerful CNN to identify faces in images!

TABLE OF CONTENTS
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgements

About The Project

screenshot

There is a lot of face recognition software on GitHub, but most of it prioritizes speed over accuracy and relies on lightweight models such as HOG. FaceFinder takes the opposite approach: it uses a large CNN to make more accurate predictions.

Here's why:

  • Several modern technologies make use of face recognition, and its importance is constantly increasing.
  • You shouldn't have to train a full neural net of your own every time you want to perform face recognition.
  • FaceFinder's code runs approximately 3.7 times faster than the average comparable implementation.

If you're making an app of your own and want it to perform face recognition, this is your go-to option.

Commonly used resources that I find helpful are listed in the acknowledgements.

Built With

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

  • Latest versions of pip and setuptools
    pip install --upgrade pip setuptools
  • Conda (install Miniconda or Anaconda; installing conda with pip is not supported)
  • Dlib
    conda install -c conda-forge dlib
  • Required packages
    pip install -r requirements.txt

Installation

  1. Ensure you're in your home directory:

    cd ~

    When you clone the repository it should show up as a subfolder in your home folder. You can change its location whenever you want.

  2. Clone the repo:

    git clone https://github.com/BleepLogger/FaceFinder

    Clone the repository by its URL.

  3. Navigate to cloned repository:

    cd FaceFinder

    All subsequent commands should be run from inside the cloned repository.

  4. To run the program, execute tasks.py with command line arguments:

    python Scripts/tasks.py --data-dir '<data folder path>' --input_image '<path to image>'

    Replace <data folder path> and <path to image> with the real paths. They're just placeholders.

Usage

To run it from the command line, you will need to pass two arguments.

python Scripts/tasks.py --data-dir '<data folder path>' --input_image '<path to image>'

Replace <data folder path> and <path to image> with the real paths.

The program needs a data directory of images grouped into subdirectories, each labelled with the name of the person whose face appears in its images, plus one input image to classify.
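
For example, a dataset for two people might be laid out like this (directory and file names are illustrative):

    dataset/
    ├── mom/
    │   ├── mom_1.jpg
    │   └── mom_2.jpg
    └── dad/
        ├── dad_1.jpg
        └── dad_2.jpg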

So if you want to check whether an image is an image of your mom or your dad, then this is how you could do it:

  1. Create a directory called dataset/ in the FaceFinder directory in ~.
  2. Create two subdirectories, dataset/mom and dataset/dad.
  3. Add images of your mother to the mom subdir and images of your father to the dad subdir.
  4. Take a photo of either your mom or your dad (whichever you want to classify), title it 2bclassified.jpg, and put it in the FaceFinder directory.
  5. Run this command:
    python Scripts/tasks.py --data-dir 'dataset/' --input_image '2bclassified.jpg'

Then, after about 20 minutes of processing (6-7 minutes if you have a GPU), a window will open displaying your image, with a box highlighting the detected face and a text label reading either "Mom" or "Dad".

You will have to build dlib from source with CUDA support if you want your GPU to be used; see dlib's documentation for instructions.
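
FaceFinder's internal model code isn't reproduced in this README. As a rough illustration of what a CNN-based recognition step looks like, here is a minimal sketch using the face_recognition package (built on dlib); this is an assumption for illustration only and may differ from what FaceFinder actually does:

    # Minimal sketch using the face_recognition package (an assumption for
    # illustration; FaceFinder's internals may differ).
    import face_recognition

    # Load one labelled reference image and the image to classify
    known_image = face_recognition.load_image_file("dataset/mom/mom_1.jpg")
    unknown_image = face_recognition.load_image_file("2bclassified.jpg")

    # "cnn" selects the slower but more accurate CNN-based face detector,
    # as opposed to the faster "hog" detector mentioned earlier
    known_locations = face_recognition.face_locations(known_image, model="cnn")
    unknown_locations = face_recognition.face_locations(unknown_image, model="cnn")

    # 128-dimensional encodings of the first detected face in each image
    known_encoding = face_recognition.face_encodings(known_image, known_locations)[0]
    unknown_encoding = face_recognition.face_encodings(unknown_image, unknown_locations)[0]

    # True if the unknown face matches the labelled face within the default tolerance
    match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
    print("Mom" if match else "Not mom")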

Roadmap

See the open issues for a list of proposed features (and known issues).

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Kanav Bhasin - @kanav_bhasin - [email protected]

Project Link: https://github.com/BleepLogger/FaceFinder


Thank you!