Free like Freedom

Overview

This is all very much a work in progress! More to come!

(We're working on it, though! Stay tuned!)

Installation

  • Open an Anaconda Prompt (on Windows) or any terminal (on Mac/Linux) and enter the following commands:

conda create -n freemocap-env python=3.7

conda activate freemocap-env

pip install freemocap -v

ipython

import freemocap as fmc
fmc.RunMe() #this is where the magic happens.
(Demo video: 2021-06-12_FreeMoCap_Clips_16MB.mp4)

Prerequisites

Required

  • A Python 3.7 environment: We recommend installing Anaconda from here (https://www.anaconda.com/products/individual#Downloads) to create your Python environment.

  • Two or more USB webcams attached to viable USB ports

    • (USB hubs typically don't work)
  • Each recording must (for now) start with an unobstructed view of a Charuco board, generated with the following Python commands (or equivalent):

     import cv2

     aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_250) #note `cv2.aruco` can be installed via `pip install opencv-contrib-python`

     board = cv2.aruco.CharucoBoard_create(7, 5, 1, .8, aruco_dict)

     charuco_board_image = board.draw((2000, 2000)) #`(2000, 2000)` is the resolution of the resulting image. Increase it if printing a large board (bigger is better, especially for large spaces!)

     cv2.imwrite('charuco_board_image.png', charuco_board_image)
     
    

Optional: If you would like to use OpenPose for body tracking, install CUDA and the Windows Portable Demo of OpenPose.

Follow the GitHub Repository and/or Join the Discord (https://discord.gg/HX7MTprYsK) for updates!

Stay Tuned for more soon!

Comments
  • Ubuntu Support?

    The following diffs were needed to make it work under Linux

    +++ b/freemocap/webcam/camsetup.py
    @@ -21,7 +21,7 @@ class VideoSetup(threading.Thread):
             camName = "Camera" + str(self.camID)
     
             cv2.namedWindow(camName)
    -        cap = cv2.VideoCapture(self.camID, cv2.CAP_DSHOW)
    +        cap = cv2.VideoCapture(self.camID, cv2.CAP_ANY)
             cap.set(cv2.CAP_PROP_FRAME_WIDTH, resWidth)
             cap.set(cv2.CAP_PROP_FRAME_HEIGHT, resHeight)
             cap.set(cv2.CAP_PROP_EXPOSURE, exposure)
    diff --git a/freemocap/webcam/checkcams.py b/freemocap/webcam/checkcams.py
    index dda1348..fb4a6a0 100644
    --- a/freemocap/webcam/checkcams.py
    +++ b/freemocap/webcam/checkcams.py
    @@ -3,7 +3,7 @@ import cv2
     
     
     def TestDevice(source):
    -    cap = cv2.VideoCapture(source, cv2.CAP_DSHOW)
    +    cap = cv2.VideoCapture(source, cv2.CAP_ANY)
         # if cap is None or not cap.isOpened():
         # print('Warning: unable to open video source: ', source)
     
    diff --git a/freemocap/webcam/startcamrecording.py b/freemocap/webcam/startcamrecording.py
    index a0cdec1..011d853 100644
    --- a/freemocap/webcam/startcamrecording.py
    +++ b/freemocap/webcam/startcamrecording.py
    @@ -44,7 +44,7 @@ def CamRecording(
         flag = False
     
         cv2.namedWindow(camID)  # name the preview window for the camera its showing
    -    cam = cv2.VideoCapture(camInput, cv2.CAP_DSHOW)  # create the video capture object
    +    cam = cv2.VideoCapture(camInput, cv2.CAP_ANY)  # create the video capture object
         # if not cam.isOpened():
         #         raise RuntimeError('No camera found at input '+ str(camID))
         # pulling out all the dictionary paramters
    
    

    But while everything works with one camera at a time, it seems to block with two identical cameras.

    The error manifests itself outside of your code.

    I get:

    [  806.191512] uvcvideo: Failed to query (SET_CUR) UVC control 4 on unit 1: -32 (exp. 4).
    

    Here is lsusb

    $ lsusb
    Bus 001 Device 005: ID 046d:09a4 Logitech, Inc. QuickCam E 3500
    Bus 001 Device 004: ID 046d:09a4 Logitech, Inc. QuickCam E 3500
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 002: ID 046d:c542 Logitech, Inc. Wireless Receiver
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 004: ID 060b:7a16 Solid Year MD800
    Bus 002 Device 003: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Hub
    Bus 002 Device 002: ID 0409:55aa NEC Corp. Hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    

    Do each of the cameras need to be on a separate bus?

    enhancement question 
    opened by kognat-docs 27
  • Multi Camera Calibration Failure

    My team and I are attempting to use FreeMoCap for a system that records eight views in synchrony. We are currently having issues calibrating all eight cameras, which face opposing directions in our hallway set-up, making it difficult for them to see one CharUco board at once. To resolve this, we printed a double-sided CharUco board, allowing us to create a 3D skeleton output; however, the skeleton seems to be both inverted and translated compared to the subject from certain camera angles. We then tried to calibrate our system using hallway cameras on one side, which all face each other and can observe a charUco board simultaneously. Still, we saw the output skeleton as translated and inverted, ruling out the double-sided board as the reason for the translation. Any suggestions on how to calibrate a system set up like this or why the skeletons seem to be inverted/displaced?

    Attached is a link with the videos and data array/calibration files for the 8 camera set-up. https://drive.google.com/drive/folders/1wv4RHzPFDLeIXr9xONC72fhbW3MKQT_q?usp=sharing

    opened by DestroytheCity 18
  • Using Blender from Wrong Session

    I started a new FMC session, yet when running Stage 5, FMC keeps using the .blend from my original project instead of the new one I made for it in the right session folder. It then fails, saying that the file doesn't exist. Any thoughts? *replaced login with X

    Running sesh 2 from /home/X/bac_mj_ar/FreeMocap_Data
    Using blender executable located at: /home/X/bac_mj_ar/FreeMocap_Data/sesh/Session_ID.blend
    Skipping Video Recording
    Skipping Video Syncing
    Skipping Calibration
    Skipping 2d point tracking
    EXPORTING FILES...
    Hijacking Blender's file format converters to export FreeMoCap data as vari…
    Traceback (most recent call last):
      File "/home/X/.conda/envs/freemocap-env/lib/python3.7/site-packages/freemocap/fmc_runme.py", line 335, in RunMe
        stderr=subprocess.PIPE
      File "/home/X/.conda/envs/freemocap-env/lib/python3.7/subprocess.py", line 800, in __init__
        restore_signals, start_new_session)
      File "/home/X/.conda/envs/freemocap-env/lib/python3.7/subprocess.py", line 1551, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    FileNotFoundError: [Errno 2] No such file or directory: '/home/X/bac_mj_ar/FreeMocap_Data/sesh/Session_ID.blend --background --python /home/X/.conda/envs/freemocap-env/lib/python3.7/site-packages/freemocap/freemocap_blender_megascript.py -- /home/X/bac_mj_ar/FreeMocap_Data/sesh 2 0'

    opened by DestroytheCity 13
  • macOS BigSur Support

    Seems like the threading is the cause of the issue here.

    (freemocap) MacBook-Pro:freemocap sam$ python runme_FreeMoCap.py 
    Starting initialization for stage 1
      0%|                                                    | 0/20 [00:00<?, ?it/s]Oct  2 15:40:31  ThetaUVC_blender[33664] <Notice>: ------------ ThetaUVC_blender plugin start (version:2.0.1 built:Fri Aug 19 15:54:46 JST 2016 pid=33664 RELEASE). ------------ #thetauvc
      5%|██▏                                         | 1/20 [00:01<00:37,  1.98s/it]OpenCV: out device of bound (0-0): 1
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 2
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 3
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 4
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 5
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 6
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 7
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 8
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 9
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 10
    OpenCV: camera failed to properly initialize!
     55%|███████████████████████▋                   | 11/20 [00:02<00:01,  7.18it/s]OpenCV: out device of bound (0-0): 11
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 12
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 13
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 14
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 15
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 16
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 17
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 18
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 19
    OpenCV: camera failed to properly initialize!
    100%|███████████████████████████████████████████| 20/20 [00:02<00:00,  9.17it/s]
    2021-10-02 15:40:41.571 python[33664:261625353] WARNING: NSWindow drag regions should only be invalidated on the Main Thread! This will throw an exception in the future. Called from (
    	0   AppKit                              0x00007fff22b6ded1 -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 352
    	1   AppKit                              0x00007fff22b58aa2 -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1296
    	2   AppKit                              0x00007fff22b5858b -[NSWindow initWithContentRect:styleMask:backing:defer:] + 42
    	3   AppKit                              0x00007fff22e6283c -[NSWindow initWithContentRect:styleMask:backing:defer:screen:] + 52
    	4   cv2.cpython-37m-darwin.so           0x000000010ead0c94 cvNamedWindow + 564
    	5   cv2.cpython-37m-darwin.so           0x000000010eacdd1a _ZN2cv11namedWindowERKNS_6StringEi + 58
    	6   cv2.cpython-37m-darwin.so           0x000000010dc4a487 _ZL23pyopencv_cv_namedWindowP7_objectS0_S0_ + 231
    	7   python                              0x0000000104cd3f2b _PyMethodDef_RawFastCallKeywords + 395
    	8   python                              0x0000000104e0b9bb call_function + 251
    	9   python                              0x0000000104e034eb _PyEval_EvalFrameDefault + 20171
    	10  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	11  python                              0x0000000104e0b977 call_function + 183
    	12  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	13  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	14  python                              0x0000000104e0b977 call_function + 183
    	15  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	16  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	17  python                              0x0000000104e0b977 call_function + 183
    	18  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	19  python                              0x0000000104cd1fea _PyFunction_FastCallDict + 234
    	20  python                              0x0000000104cd66ba method_call + 122
    	21  python                              0x0000000104cd445f PyObject_Call + 127
    	22  python                              0x0000000104ef7e3a t_bootstrap + 122
    	23  python                              0x0000000104e7c764 pythread_wrapper + 36
    	24  libsystem_pthread.dylib             0x00007fff2031a8fc _pthread_start + 224
    	25  libsystem_pthread.dylib             0x00007fff20316443 thread_start + 15
    )
    __________________________________________
    cv2::videocapture properties for Camera# 0
    CV_CAP_PROP_FRAME_WIDTH: '1280.0'
    CV_CAP_PROP_FRAME_HEIGHT : '720.0'
    CAP_PROP_FPS : '30.0'
    CAP_PROP_EXPOSURE : '0.0'
    CAP_PROP_POS_MSEC : '0.0'
    CAP_PROP_FRAME_COUNT  : '0.0'
    CAP_PROP_BRIGHTNESS : '0.0'
    CAP_PROP_CONTRAST : '0.0'
    CAP_PROP_SATURATION : '0.0'
    CAP_PROP_HUE : '0.0'
    CAP_PROP_GAIN  : '0.0'
    CAP_PROP_CONVERT_RGB : '0.0'
    __________________________________________
    2021-10-02 15:40:42.898 python[33664:261625353] WARNING: nextEventMatchingMask should only be called from the Main Thread! This will throw an exception in the future.
    
    opened by kognat-docs 13
  • installation requirements or instructions

    I have some undergrad students using this for a class project, and they ran into two installation issues that were easily solved, and should be solvable by either modifying the installation requirements, or just editing the instructions.

    All on windows.

    The first is that ipython is not automatically installed, so the start instructions fail. For those with new conda installations, after activating the env, conda install ipython is a simple fix.

    The second error comes after recording, with the end of the Traceback being:

    ~\Miniconda3\envs\freemocap\lib\site-packages\moviepy\video\io\ffmpeg_writer.py in __init__(self, filename, size, fps, codec, audiofile, preset, bitrate, withmask, logfile, threads, ffmpeg_params)
         86             '-s', '%dx%d' % (size[0], size[1]),
         87             '-pix_fmt', 'rgba' if withmask else 'rgb24',
    ---> 88             '-r', '%.02f' % fps,
         89             '-an', '-i', '-'
         90         ]
    
    TypeError: must be real number, not NoneType
    

    This is a problem with moviepy not finding ffmpeg. It's possible to install ffmpeg on windows and add to the path independently, but I prefer to install it in conda with conda install ffmpeg. However, moviepy can't find it, so after installing ffmpeg, running

    pip uninstall moviepy
    pip install moviepy
    

    works. I didn't have time to try to install ffmpeg first, but I think just adding it to the env creation line should fix the problem.
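    As a quick diagnostic, the stdlib-only snippet below (a hypothetical helper, not part of freemocap or moviepy) checks whether an ffmpeg binary is resolvable at all, either via the `FFMPEG_BINARY` environment variable that moviepy honors or via the PATH:

```python
import os
import shutil

def find_ffmpeg():
    """Return a usable ffmpeg executable path, or None if not found.

    Checks the FFMPEG_BINARY override (which moviepy honors) first,
    then falls back to searching the PATH.
    """
    override = os.environ.get("FFMPEG_BINARY")
    if override:
        return shutil.which(override)
    return shutil.which("ffmpeg")

path = find_ffmpeg()
if path is None:
    print("ffmpeg not found; install it (e.g. `conda install ffmpeg`) and reinstall moviepy")
else:
    print("ffmpeg found at", path)
```

    If this prints a path but moviepy still fails, reinstalling moviepy (as above) so it re-resolves its ffmpeg backend is the likely fix.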

    opened by backyardbiomech 8
  • not enough values to unpack (expected 2, got 0)

    Hello,

    Could someone help me solve this problem, please! Thank you in advance for your help.

    Here is what displays for me, knowing that I used two webcam-type cameras (screenshots attached).

    opened by Ramdane-HACHOUR 7
  • FMC on Greyscale Videos

    Hello! We are running into an issue in which FMC seems to be swapping the skeletons generated from one camera onto the view of another camera (see video). Is this a potential result of using greyscale cameras? The calibration seems to work fine as the video generated reflects accurate skeletons, just placed on the wrong camera (ie camera 1 generates skeleton 1, but skeleton 1 is placed onto the view of camera 3). Has anyone encountered similar issues/have any suggestions on how to resolve this issue? Thank you in advance!

    https://user-images.githubusercontent.com/114196168/201419033-18be83ad-e852-4a82-b116-9fbfe470bec2.mp4

    opened by DestroytheCity 6
  • Need to raise a more informative Exception/Error when Charuco points not detected (and allow to continue if only 1 camera is selected)

    Currently, if no charuco points are detected, the code fails in this way -

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\__init__.py", line 124, in RunMe
        sesh, board
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\calibrate.py", line 84, in CalibrateCaptureVolume
        error,merged = cgroup.calibrate_videos(vidnames, board)
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\fmc_anipose.py", line 1740, in calibrate_videos
        **kwargs
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\fmc_anipose.py", line 1672, in calibrate_rows
        objp, imgp = zip(*mixed)
    ValueError: not enough values to unpack (expected 2, got 0)
    

    ...which should instead -

    • [ ] Raise an informative Error/Exception/Whatever thing
    • [ ] (Optional) If only one camera is selected, this is a warning. If more than one selected, this is an Error
    • [ ] (Optional) maybe in general there should be a 'single-camera' mode? so that people can just use this wrapper on their single webcam to play with OpenPose, MediaPipe, DLC, etc?
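    The checklist above could be sketched roughly like this (hypothetical names; the real guard would live around the `objp, imgp = zip(*mixed)` line in `fmc_anipose.py`):

```python
import warnings

class CharucoNotDetectedError(Exception):
    """Raised when no Charuco points were detected in the calibration videos."""

def unpack_detections(mixed, n_cameras):
    """Hypothetical guard around `objp, imgp = zip(*mixed)`.

    With a single camera a missing board could be survivable (warn and
    continue); with multiple cameras calibration cannot proceed (raise).
    """
    if not mixed:
        msg = ("No Charuco board points detected - check that each recording "
               "starts with an unobstructed view of the Charuco board.")
        if n_cameras <= 1:
            warnings.warn(msg)
            return [], []
        raise CharucoNotDetectedError(msg)
    objp, imgp = zip(*mixed)
    return list(objp), list(imgp)
```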
    opened by jonmatthis 6
  • Can't use multiple cameras on Ubuntu 20.04

    Hi, thanks for all your work on this project. I'm excited to see where it goes.

    I'm having an issue where I can't seem to get this working with multiple cameras. I have three webcams plugged directly into my laptop (I also tried plugging into a hub, but didn't change anything).

    When I go through the camera setup process and click "Submit", only the first camera lights up and only a blank rectangular window appears.

    This is the output I see:

    __________________________________________
    cv2::videocapture properties for Camera# 0
    CV_CAP_PROP_FRAME_WIDTH: '640.0'
    CV_CAP_PROP_FRAME_HEIGHT : '480.0'
    CAP_PROP_FPS : '30.0'
    CAP_PROP_EXPOSURE : '0.008200820082008202'
    CAP_PROP_POS_MSEC : '0.0'
    CAP_PROP_FRAME_COUNT  : '-1.0'
    CAP_PROP_BRIGHTNESS : '0.5019607843137255'
    CAP_PROP_CONTRAST : '0.12549019607843137'
    CAP_PROP_SATURATION : '0.12549019607843137'
    CAP_PROP_HUE : '-1.0'
    CAP_PROP_GAIN  : '0.20392156862745098'
    CAP_PROP_CONVERT_RGB : '1.0'
    __________________________________________
    

    It looks like it's getting stuck on the first camera and never manages to start up the other cameras?

    It works fine if I select just one camera.

    opened by frnsys 4
  • SPIKE: Look into HackMD / MkDocs Integration for Knowledge Base

    Workflow:

    1. Use mkdocs to create our knowledge base skin
    2. Use HackMd with MkDocs configuration protocols to write KB articles

    Notes: https://www.mkdocs.org/getting-started/

    PR #212

    opened by endurance 4
  • list index out of range on stage 3

    Hey, I've followed the setup but I'm having the following on Stage 3 - Calibrate Capture Volume: list index out of range.

    Here are some screenshots of the error: Image1 Image2.

    Thanks a lot for any help.

    opened by tomazsaraiva 4
  • Pre-recorded MP4s are not recognized by alpha GUI on Mac/Linux

    Following the process in the documentation for processing previously recorded videos, videos with the file extension MP4 are not recognized on Mac/Linux. The non-capitalized variant mp4 is recognized by Mac/Linux, and both cases should be recognized by windows (although I can't test this personally). The case sensitivity issue is due to the changing behavior of glob.glob() across operating systems, as explained here.

    To resolve this issue, the file search should either be made case insensitive on Mac/Linux to match the Windows behavior, or case sensitive on Windows to match the Mac/Linux behavior (the linked article above describes a function to make glob case sensitive on Windows).
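    One portable approach (a sketch, not the actual freemocap code) is to skip `glob.glob` entirely and filter on a lowercased suffix, which behaves identically on every operating system:

```python
from pathlib import Path

def find_videos(folder, extension=".mp4"):
    """Return video files in `folder` whose extension matches
    case-insensitively, so both `clip.mp4` and `CLIP.MP4` are found
    regardless of operating system."""
    return sorted(
        p for p in Path(folder).iterdir()
        if p.is_file() and p.suffix.lower() == extension.lower()
    )
```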

    opened by philipqueen 0
  • Question Regarding Output Files

    Have successfully gotten FMC off the ground for our project, but we had some questions regarding the output files produced.

    1. In the Mediapipe_body+3d+xyz.csv file produced via the Alpha GUI, what is the unit for time? We see that our videos are roughly 19 seconds long and contain 2063 frames, yet we have 1753 measurements of each key point. Is this the number when all cameras are active and tracking?
    2. How are the x,y,z coordinates established? Are they consistent between videos using the same calibration or different calibrations within the same hallway?

    EDIT: For the Mediapipe_body+3d+xyz.csv, also wondering what the coordinates represent/what the unit for each measurement is.

    opened by DestroytheCity 4
  • Set timer to start and stop recording

    Life improvement:

    What? Make an option to start and stop a recording after a set duration.

    e.g. press button, recording starts after 5 seconds and ends 30 seconds after that.

    Why? Because it makes recording alone easier (press button, walk to the recording volume, record). Because it can help standardize patient recordings (e.g. a 30-second sit-to-stand test).

    BONUS POINTS if FreeMoCap plays a sound when the recordings start and stop.

    opened by steenharsted 0
  • Choose camera to use for orientation and start of global coordinate system

    Down the line, users should be able to control the start and orientation of the global coordinate system using the Charuco board (https://github.com/freemocap/freemocap/issues/282).

    Until that is implemented, a workable fix could be to allow users to select what camera is being used for the origin and orientation of the global coordinate system.

    This will allow users to have some control over the global coordinate system by having one camera set at a specific height and with a level orientation.

    It's not perfect, but it would be a great improvement.

    opened by steenharsted 0
  • Set Floor with Charuco board (global coordinate system)

    Add a final option during calibration, "Set Floor"

    Place the charuco board on the floor and use the middle (or a set corner) as (0, 0, 0) in the global coordinate system. Use the orientation of the charuco board to assign the x and z axes (the y axis will point straight up from the board).

    This would greatly improve FreeMoCap's use in biomechanical settings.
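    The geometry can be sketched with plain numpy (hypothetical helper; the corner ordering and axis signs are assumptions, not the freemocap implementation): take the board's center as the origin, one board edge as the x axis, the board normal as the y axis, and complete a right-handed frame.

```python
import numpy as np

def floor_frame_from_board(board_corners):
    """Build a global frame from a Charuco board lying on the floor.

    `board_corners` is an (N, 3) array of detected corner positions in the
    capture-volume coordinates. The board center becomes the origin, the
    direction from the first corner to its neighbour becomes the x axis,
    and y points out of the board plane (sign depends on corner ordering).
    """
    corners = np.asarray(board_corners, dtype=float)
    origin = corners.mean(axis=0)

    x_axis = corners[1] - corners[0]          # along one board edge
    x_axis /= np.linalg.norm(x_axis)
    in_plane = corners[-1] - corners[0]       # a second in-plane direction
    y_axis = np.cross(in_plane, x_axis)       # board normal, i.e. "up"
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)         # completes a right-handed frame
    R = np.column_stack([x_axis, y_axis, z_axis])
    return origin, R

def to_board_frame(points, origin, R):
    """Express (N, 3) points in the board-centred coordinate system."""
    return (np.asarray(points, dtype=float) - origin) @ R
```

    Applying `to_board_frame` to the skeleton data would then put the floor at y = 0 with the board center at the origin.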

    opened by steenharsted 1
  • Add sample `.blend` file output to repo

    Adding a sample of the blender output somewhere in the repository (or in the documentation...) would be great for helping folks see what freemocap produces!

    Minor 
    opened by trentwirth 0
Releases (v0.0.54)
  • v0.0.54(Jul 16, 2022)

    This is the FreeMoCap Pre-Alpha Release, which creates raw 3D skeletons from USB-connected webcams.

    Here is the relevant README for this version of freemocap v0.0.54: https://github.com/freemocap/freemocap/blob/main/OLD_README.md

    Source code(tar.gz)
    Source code(zip)