Virtual hand gesture mouse using a webcam

Overview

NonMouse


The Japanese README is available here.

This is an application that lets you use your hand itself as a mouse.
It uses a webcam to recognize your hand and control the mouse cursor.

A demo video is available on YouTube.
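
To give a sense of how this works, below is a minimal sketch of the general pipeline: grab webcam frames with OpenCV, detect the index fingertip with MediaPipe Hands, and move the cursor to match. This is an illustration only, not NonMouse's actual code; in particular, using pyautogui for cursor control is an assumption.

import cv2
import mediapipe as mp
import pyautogui  # assumption: NonMouse may use a different mouse-control library

hands = mp.solutions.hands.Hands(max_num_hands=1)
screen_w, screen_h = pyautogui.size()

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB images; OpenCV captures BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the tip of the index finger in MediaPipe's hand model
        tip = results.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    cv2.imshow("sketch", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()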


Features

  • An entirely new mouse: a mouse with a software approach. It recognizes your hand and works as a mouse.
  • NonMouse can be invoked with a global hotkey even when the application is inactive.
  • Works well with typing.
  • You can make any display behave like a touch display by pointing the webcam at it.
  • You can use the mouse wherever you want.
  • Just download it from the latest release (Windows and macOS only).

Installation

📁 Run as executable file

Download the zip file that matches your environment from the latest release.

OR

🐍 Run as python

Run the following script.

$ git clone https://github.com/takeyamayuki/NonMouse
$ cd NonMouse
$ pip install -r requirements.txt

If you have trouble installing mediapipe, please visit the official website.

† For macOS, you need to add the application you run NonMouse from (e.g. Terminal or VS Code) to the Accessibility and Camera sections under Security & Privacy in System Preferences.

Usage

1. Install a camera

The following three camera placements are assumed.

  • Normal: Place a webcam normally and point it at yourself (or use your laptop's built-in camera)

    Point the palm of your hand at the camera.
  • Above: Place it above your hand and point it towards your hand.

    Point the back of your hand at the camera.
  • Behind: Place it behind you and point it at the display.

    Point the back of your hand at the camera.

2. Run

  • Run the executable as described in the GitHub wiki.

    OR

  • Continuing from the installation steps above, run the following script.

    For Windows and Linux (the global hotkey function does not work on Linux):

    $ python3 app.py

    For macOS, you need to run it with sudo.

    $ sudo python3 app.py

3. Settings

When you run the program, a settings window appears. On this screen, you can set the camera, its placement, and the sensitivity.


  • Camera
    Select a camera device. If multiple cameras are connected, try them in order, starting with the smallest number (a small probe script is sketched after this section).

  • How to place
    Select where you placed the camera: Normal, Above, or Behind, as described in [1. Install a camera].

  • Sensitivity
    Set the sensitivity. If set too high, the mouse cursor will shake slightly.

When you are done with the settings, click continue. The camera image will then be displayed, and you can use NonMouse with the settings you selected.
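
If you are unsure which camera number corresponds to which device, the snippet below is one way to probe them. It is not part of NonMouse; it just uses OpenCV (which the application also relies on) to list the indices that can actually deliver a frame.

import cv2

# Probe the first few device numbers and report which ones deliver a frame.
for index in range(5):
    cap = cv2.VideoCapture(index)
    ok, _ = cap.read()
    cap.release()
    if ok:
        print(f"camera index {index} is available")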

4. Hand Movements

(Demo GIFs: stop cursor, left click, right click, scroll)

The following hand movements are enabled only while you hold down Alt (Windows) or Command (macOS). You can define your own global hotkey by rewriting here. This works even when the application window is not active. This feature is only available on Windows and macOS. (A rough sketch of such a modifier-key listener appears after the list below.)

  • cursor
    • Move the cursor: the tip of your index finger → a blue circle will appear at the tip of your index finger.
    • Stop the cursor: touch the tip of your index finger to the tip of your middle finger. → The blue circle disappears.
  • left click
    • Left click: touch the tip of your thumb to the second joint of your index finger. → A yellow circle will appear at the tip of your index finger.
    • Release left click: separate your thumb tip from the second joint of your index finger. → The yellow circle disappears.
    • Double click: left click twice within 0.5 seconds.
  • other
    • Right click: hold the click state for 1.5 seconds without moving the cursor. → A red circle will appear at the tip of your index finger.
    • Scroll: scroll with your index finger folded → a black circle will appear.
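
As a rough illustration of the modifier-key mechanism described above, the sketch below uses pynput to track whether Alt is held down. This is an assumption for illustration only; the actual hotkey handling lives in the NonMouse source linked from "here" above.

from pynput import keyboard

alt_held = False  # gestures would only be honored while this is True

def on_press(key):
    global alt_held
    if key == keyboard.Key.alt:  # keyboard.Key.cmd would be Command on macOS
        alt_held = True

def on_release(key):
    global alt_held
    if key == keyboard.Key.alt:
        alt_held = False

listener = keyboard.Listener(on_press=on_press, on_release=on_release)
listener.start()
listener.join()  # keep listening; stop with Ctrl+C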

† Use it with bright lighting around your hand.
† Keep your hand facing the camera as squarely as possible.

5. Quit

Press Ctrl+C when the terminal window is active.
Press the close button (Windows and Linux only) or the Esc key when the application window is active.

Build

† The built binary files can be downloaded from the latest release.

In app-mac.spec and app-win.spec, change pathex to fit your environment.
Run the following scripts for each OS.

  • windows

    Copy the Location path reported by pip show mediapipe into datas, following the format of the existing entry (a hypothetical sketch of such an entry appears after this list).
    Run the following script.

    $ pip show mediapipe
    ...
    Location: c:\users\namik\appdata\local\programs\python\python37\lib\site_packages
    ...
    # Copy and paste the Location into the datas list in app-win.spec
    $ pyinstaller app-win.spec
    ...
  • mac

    Create a venv and run pip install inside it, because the directory specified in datas assumes a venv environment.

    $ python3 -m venv venv
    $ . venv/bin/activate
    (venv)$ pip install -r requirements.txt
    (venv)$ pyinstaller app-mac.spec
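
For reference, here is a hypothetical illustration of what a mediapipe entry in a PyInstaller spec's datas list can look like. The literal contents of app-win.spec and app-mac.spec may differ; the path below is just the example Location shown above.

# Hypothetical illustration, not the literal contents of app-win.spec:
# join the Location reported by `pip show mediapipe` with the folder that
# holds MediaPipe's bundled models and add it to PyInstaller's datas list.
site_packages = r"c:\users\namik\appdata\local\programs\python\python37\lib\site_packages"

datas = [
    (site_packages + r"\mediapipe\modules", "mediapipe/modules"),
]
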
Comments
  • error: PyObjC requires macOS to build

    Hello. When installing requirements on Linux I get this error:

    Collecting pyobjc-core==7.3
      Downloading pyobjc-core-7.3.tar.gz (684 kB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 684.2/684.2 KB 5.4 MB/s eta 0:00:00
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
      
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [2 lines of output]
          running egg_info
          error: PyObjC requires macOS to build
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    opened by jbrepogmailcom 2
  • Crashes with ZeroDivisionError

    cfps = int(cap.get(cv2.CAP_PROP_FPS)) returns 1, and even after passing through if cfps < 30: cap.set(cv2.CAP_PROP_FRAME_WIDTH, cap_width) cap.set(cv2.CAP_PROP_FRAME_HEIGHT, cap_height) cfps = int(cap.get(cv2.CAP_PROP_FPS)), cfps is still 1. As a result, ran = int(cfps/10) becomes 0 and the program crashes with a ZeroDivisionError. (A possible guard is sketched after this list.)

    Environment: MacBook Pro 2020 (Intel), macOS Monterey, built-in Mac webcam, Python 3.9.7

    opened by takpika 2
  • Camera FPS drops to about 5 Hz on macOS Monterey

    https://github.com/takeyamayuki/NonMouse/blob/1e7f7bc5e9f52a29196ae631b4cb7e83a910aa27/app.py#L117-L121

    The camera FPS obtained by cfps = int(cap.get(cv2.CAP_PROP_FPS)) ends up around 5 Hz, which makes the cursor jitter slightly. This is probably an issue in cv2.

    opened by takeyamayuki 0
  • How to set screen size?

    Hello, the mouse only works on the left two thirds of the screen. I guess it has something to do with screen size; my laptop has 1920x1080. Can I reconfigure it somewhere?

    opened by jbrepogmailcom 1
  • When the camera or hand is tilted, the mouse movement no longer matches the fingertip movement

    For example, if the camera is mounted at an angle, the fingers it sees are compressed vertically, so their apparent movement is correspondingly smaller. In my current setup the camera points straight at my hand, so this is not a problem, but for first-time users it is a major obstacle to using this application.

    Proposed solution: prepare a set of reference finger-joint coordinates, compare them with the current coordinates to estimate how tilted the hand is, and adjust dx, dy accordingly.

    opened by takeyamayuki 0
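
As referenced in the ZeroDivisionError issue above, one possible guard (an assumption, not the project's actual fix) is to clamp the value derived from the reported FPS so it can never reach zero:

import cv2

cap = cv2.VideoCapture(0)
cfps = int(cap.get(cv2.CAP_PROP_FPS))  # may be reported as 1 on some cameras
ran = max(1, int(cfps / 10))           # clamp so later divisions never hit zero
cap.release()
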
Releases (2.6.0)
Codebase for "Revisiting spatio-temporal layouts for compositional action recognition" (Oral at BMVC 2021).

Revisiting spatio-temporal layouts for compositional action recognition Codebase for "Revisiting spatio-temporal layouts for compositional action reco

Gorjan 20 Dec 15, 2022
PushForKiCad - AISLER Push for KiCad EDA

AISLER Push for KiCad Push your layout to AISLER with just one click for instant

AISLER 31 Dec 29, 2022
To Design and Implement Logistic Regression to Classify Between Benign and Malignant Cancer Types

To Design and Implement Logistic Regression to Classify Between Benign and Malignant Cancer Types, from a Database Taken From Dr. Wolberg reports his Clinic Cases.

Astitva Veer Garg 1 Jul 31, 2022
CLADE - Efficient Semantic Image Synthesis via Class-Adaptive Normalization (TPAMI 2021)

Efficient Semantic Image Synthesis via Class-Adaptive Normalization (Accepted by TPAMI)

tzt 49 Nov 17, 2022
Code for the paper "JANUS: Parallel Tempered Genetic Algorithm Guided by Deep Neural Networks for Inverse Molecular Design"

JANUS: Parallel Tempered Genetic Algorithm Guided by Deep Neural Networks for Inverse Molecular Design This repository contains code for the paper: JA

Aspuru-Guzik group repo 55 Nov 29, 2022
CT-Net: Channel Tensorization Network for Video Classification

[ICLR2021] CT-Net: Channel Tensorization Network for Video Classification @inproceedings{ li2021ctnet, title={{\{}CT{\}}-Net: Channel Tensorization Ne

33 Nov 15, 2022
Biomarker identification for COVID-19 Severity in BALF cells Single-cell RNA-seq data

scBALF Covid-19 dataset Analysis Here is the Github page that has the codes for the bioinformatics pipeline described in the paper COVID-Datathon: Bio

Nami Niyakan 2 May 21, 2022
ADOP: Approximate Differentiable One-Pixel Point Rendering

ADOP: Approximate Differentiable One-Pixel Point Rendering Abstract: We present a novel point-based, differentiable neural rendering pipeline for scen

Darius Rückert 1.9k Jan 06, 2023
Official Pytorch implementation of Meta Internal Learning

Official Pytorch implementation of Meta Internal Learning

10 Aug 24, 2022
A PyTorch Implementation of "Neural Arithmetic Logic Units"

Neural Arithmetic Logic Units [WIP] This is a PyTorch implementation of Neural Arithmetic Logic Units by Andrew Trask, Felix Hill, Scott Reed, Jack Ra

Kevin Zakka 181 Nov 18, 2022
End-to-End Dense Video Captioning with Parallel Decoding (ICCV 2021)

PDVC Official implementation for End-to-End Dense Video Captioning with Parallel Decoding (ICCV 2021) [paper] [valse论文速递(Chinese)] This repo supports:

Teng Wang 118 Dec 16, 2022
El-Gamal on Elliptic Curve (Python)

El-Gamal-on-EC El-Gamal on Elliptic Curve (Python) References: https://docsdrive.com/pdfs/ansinet/itj/2005/299-306.pdf https://arxiv.org/ftp/arxiv/pap

3 May 04, 2022
Pytorch implementation of PTNet for high-resolution and longitudinal infant MRI synthesis

Pyramid Transformer Net (PTNet) Project | Paper Pytorch implementation of PTNet for high-resolution and longitudinal infant MRI synthesis. PTNet: A Hi

Xuzhe Johnny Zhang 6 Jun 08, 2022
ROMP: Monocular, One-stage, Regression of Multiple 3D People, ICCV21

Monocular, One-stage, Regression of Multiple 3D People ROMP, accepted by ICCV 2021, is a concise one-stage network for multi-person 3D mesh recovery f

Yu Sun 937 Jan 04, 2023
A Python library for Deep Graph Networks

PyDGN Wiki Description This is a Python library to easily experiment with Deep Graph Networks (DGNs). It provides automatic management of data splitti

Federico Errica 194 Dec 22, 2022
📝 Wrapper library for text generation / language models at char and word level with RNN in TensorFlow

tensorlm Generate Shakespeare poems with 4 lines of code. Installation tensorlm is written in / for Python 3.4+ and TensorFlow 1.1+ pip3 install tenso

Kilian Batzner 63 May 22, 2021
Code for 2021 NeurIPS --- Towards Multi-Grained Explainability for Graph Neural Networks

ReFine: Multi-Grained Explainability for GNNs We are trying hard to update the code, but it may take a while to complete due to our tight schedule rec

Shirley (Ying-Xin) Wu 47 Dec 16, 2022
💛 Code and Dataset for our EMNLP 2021 paper: "Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes"

Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes Official PyTorch implementation and EmoCause evaluatio

Hyunwoo Kim 51 Jan 06, 2023
Compute execution plan: A DAG representation of work that you want to get done. Individual nodes of the DAG could be simple python or shell tasks or complex deeply nested parallel branches or embedded DAGs themselves.

Hello from magnus Magnus provides four capabilities for data teams: Compute execution plan: A DAG representation of work that you want to get done. In

12 Feb 08, 2022
An implementation for the loss function proposed in Decoupled Contrastive Loss paper.

Decoupled-Contrastive-Learning This repository is an implementation for the loss function proposed in Decoupled Contrastive Loss paper. Requirements P

Ramin Nakhli 71 Dec 04, 2022