Training and Evaluation Code for Neural Volumes

Overview

This repository contains training and evaluation code for the paper Neural Volumes. The method learns a 3D volumetric representation of objects & scenes that can be rendered and animated from only calibrated multi-view video.

Citing Neural Volumes

If you use Neural Volumes in your research, please cite the paper:

@article{Lombardi:2019,
 author = {Stephen Lombardi and Tomas Simon and Jason Saragih and Gabriel Schwartz and Andreas Lehrmann and Yaser Sheikh},
 title = {Neural Volumes: Learning Dynamic Renderable Volumes from Images},
 journal = {ACM Trans. Graph.},
 issue_date = {July 2019},
 volume = {38},
 number = {4},
 month = jul,
 year = {2019},
 issn = {0730-0301},
 pages = {65:1--65:14},
 articleno = {65},
 numpages = {14},
 url = {http://doi.acm.org/10.1145/3306346.3323020},
 doi = {10.1145/3306346.3323020},
 acmid = {3323020},
 publisher = {ACM},
 address = {New York, NY, USA},
}

File Organization

The root directory contains several subdirectories and files:

data/ --- custom PyTorch Dataset classes for loading included data
eval/ --- utilities for evaluation
experiments/ --- location of input data and training and evaluation output
models/ --- PyTorch modules for Neural Volumes
render.py --- main evaluation script
train.py --- main training script

Requirements

  • Python (3.6+)
    • PyTorch (1.2+)
    • NumPy
    • Pillow
    • Matplotlib
  • ffmpeg (in PATH, needed to render videos)
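
One way to install the Python dependencies, assuming pip (the package names below are the standard PyPI ones; ffmpeg itself is installed separately via your system package manager):

pip install torch numpy pillow matplotlib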

How to Use

There are two main scripts in the root directory: train.py and render.py. The scripts take a configuration file for the experiment that defines the dataset used and the options for the model (e.g., the type of decoder that is used).

A sample set of input data is provided with the v0.1 release; download it and extract it into the root directory of the repository. experiments/dryice1/data contains the input images and camera calibration data, and experiments/dryice1/experiment1 contains an example experiment configuration file (experiments/dryice1/experiment1/config.py).

To train the model:

python train.py experiments/dryice1/experiment1/config.py

To render a video of a trained model:

python render.py experiments/dryice1/experiment1/config.py Render
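
Both commands point the script at the experiment's config.py described above. As an illustrative sketch only (not the repository's actual implementation; the function and variable names here are assumptions), a script can load such a configuration file as a Python module from its path:

import importlib.util
import sys

def load_config(path):
    """Load an experiment config.py as a Python module, given its file path (illustrative sketch)."""
    spec = importlib.util.spec_from_file_location("config", path)
    config = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(config)
    return config

if __name__ == "__main__":
    cfg = load_config(sys.argv[1])
    # Experiment options (dataset, decoder type, ...) would then be read from cfg.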

License

See the LICENSE file for details.

Comments
  • Training with our own data

    Hi,
    I have a few questions about how the data should be formatted and about the data format of the provided dryice1.

    • Does the model expect world-space coordinates in meters? I.e., if my extrinsics are already in meters, do I still need world_scale=1/256. in the config.py file?
    • Are the extrinsics world-to-camera, with an OpenCV-like rotation convention (y-down, z-forward, x-right), assuming identity for the pose.txt file?
    • How long do I need to train for about 200 frames? Also, in the config.py file it seems you are skipping some frames; is that OK to do for my own sequence as well?
    • In the KRT file, I see that there are 5 parameters above the RT matrix. Is this the distortion correction in OpenCV format, and is it correct that it is not used? (A parsing sketch for this layout follows this issue.)
    • I did not visualize your cameras, so I am not sure how they are distributed. Will it be a problem if I use 50 cameras equally distributed over a half-hemisphere, with the subject already at the world origin and 3.5 meters from every camera? In other words, do I need to filter the training cameras so that the back side of the subject, which is not seen by the 3 input cameras, is excluded?
    • How do I choose the input cameras? I have a visualization of the cameras. Which camera config should I use? Is this more a question of which testing camera poses I intend to have, i.e., the narrower the testing cameras' range of view, the closer together the input training cameras can be? Config_0 is more orthogonal and Config_1 sees less of the backside.
    opened by zawlin 32
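
    A minimal parsing sketch for the KRT layout discussed above, assuming each camera entry consists of a name line, three rows of the 3x3 intrinsic matrix K, one row of five distortion coefficients, three rows of the 3x4 world-to-camera [R|t] matrix, and a blank separator line (this layout is an assumption for illustration, not the repository's own loader; see data/ for that):

    import numpy as np

    def load_krt(path):
        """Parse a KRT-style calibration file into {camera_name: {"K", "dist", "Rt"}} (assumed layout)."""
        cameras = {}
        with open(path, "r") as f:
            while True:
                name = f.readline().strip()
                if not name:
                    break
                K = np.array([f.readline().split() for _ in range(3)], dtype=np.float64)
                dist = np.array(f.readline().split(), dtype=np.float64)  # assumed: 5 OpenCV-style coefficients
                Rt = np.array([f.readline().split() for _ in range(3)], dtype=np.float64)
                f.readline()  # blank separator between cameras
                cameras[name] = {"K": K, "dist": dist, "Rt": Rt}
        return cameras
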
  • Some questions about coordinate transformation

    Hello, thanks for releasing your code. I am impressed by your work. Now I hope to run your code with my own dataset. I have a few questions.

    First, I see that pose.txt is used in the code to put the objects in the center. If I use my own data, will this file still work?

    Second, I see the code constrains ray positions to lie between -1 and 1. Is it the matrix in this pose file that narrows the range to [-1, 1]? My own dataset's range is different. (A sketch of this normalization follows this issue.)

    Third, does the code limit the range of the template? Does it have to be between 0 and 255?

    Thanks a lot in advance!

    opened by maobenz 3
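
    A hedged sketch of the normalization being asked about: a rigid transform (e.g. from a pose file) recenters the object, and a global scale factor (e.g. world_scale=1/256.) maps the recentred coordinates into the normalized [-1, 1] volume that the ray marcher samples. The names and the world-to-object convention below are illustrative assumptions, not the repository's exact code:

    import numpy as np

    def world_to_normalized(points_world, pose, world_scale):
        """points_world: (N, 3) world-space points in the dataset's units.
        pose: 3x4 [R | t] assumed to map world coordinates to an object-centered frame.
        world_scale: global scale chosen so the object of interest fits inside [-1, 1]^3."""
        R, t = pose[:, :3], pose[:, 3]
        points_centered = points_world @ R.T + t
        return points_centered * world_scale
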
  • Location of the volume

    Hi there,

    I wonder whether the origin of the volume is (0,0,0)?

    I'm testing the method on a public dataset (http://people.csail.mit.edu/drdaniel/mesh_animation), and I know exactly where (0,0,0) is in the images. But the volume seems to float around the scene. This is the first preview from the training process (attached image prog_000001).

    Each camera points at the opposite side of the scene, so I expect the same for the volume's location in the images. But for some reason, they end up on the same side in the images. Can you help?

    Thank you.

    opened by lochuynh1989 3
  • Any plan to release all data that presented in the paper?

    Hi @stephenlombardi ,

    Thanks for sharing this great work. I was wondering whether you have any plans to release all of the data used in the paper (apart from dryice)?

    Best, Zirui

    opened by ziruiw-dev 2
  • Block-wise initialization scheme

    Hi, is there any paper describing the block-wise weight initialization scheme used here? (A generic sketch of the idea follows this issue.)

    https://github.com/facebookresearch/neuralvolumes/blob/8c5fad49b2b05b4b2e79917ee87299e7c1676d59/models/utils.py#L73

    opened by denkorzh 2
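
    The repository's exact scheme is in the file linked above. As a generic illustration of block-wise initialization (not necessarily what models/utils.py does), one can initialize rectangular sub-blocks of a weight matrix independently, e.g. with Xavier-uniform per block; the function name and block-size parameters below are hypothetical:

    import torch

    def blockwise_xavier_(weight, block_rows, block_cols):
        """Initialize each (block_rows x block_cols) sub-block of a 2D weight tensor
        independently with Xavier-uniform. Generic illustration only."""
        out_dim, in_dim = weight.shape
        with torch.no_grad():
            for i in range(0, out_dim, block_rows):
                for j in range(0, in_dim, block_cols):
                    torch.nn.init.xavier_uniform_(weight[i:i + block_rows, j:j + block_cols])
        return weight

    # Example: blockwise_xavier_(torch.nn.Linear(256, 256).weight, block_rows=64, block_cols=64)
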
  • Is there a way to render a 3D file from this?

    Hello, I was wondering if there is a way to export an .obj/.fbx file along with corresponding materials from this? If not, do you have any suggestions on how to go about it if I were to try to extend the code to add that functionality? (One possible direction is sketched after this issue.)

    opened by arlorostirolla 1
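
    One hedged, illustrative direction for such an export: evaluate the learned opacity volume on a dense grid, run marching cubes on it, and write the resulting triangles as a Wavefront .obj. This assumes scikit-image is available and that an opacity grid can be obtained as a NumPy array; the function and parameter names are hypothetical:

    import numpy as np
    from skimage import measure

    def opacity_grid_to_obj(opacity, path, level=0.5):
        """Extract an isosurface from a (D, H, W) opacity grid and write it as a Wavefront .obj."""
        verts, faces, _, _ = measure.marching_cubes(opacity, level=level)
        with open(path, "w") as f:
            for v in verts:
                f.write("v {} {} {}\n".format(*v))
            for tri in faces + 1:  # .obj vertex indices are 1-based
                f.write("f {} {} {}\n".format(*tri))
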
  • How Can I train and render a Person Image

    Hi, my name is Luan. I am trying to render a person image but I am not able to get it running. Could you create a folder for me with the settings set up to use a person image? Thank you.

    opened by LuanDalOrto 1
  • code for hybrid rendering (section 6.2) doesn't exist?

    Hello,

    First of all, thank you for releasing the code for your seminal work. I really think Neural Volumes is one of the works that popularized differentiable rendering and inspired later work such as neural radiance fields.

    My question is whether this codebase includes the code for the hybrid rendering method outlined in section 6.2 of the paper. I'm trying to fit Neural Volumes to multi-view video of a full-body human being, similar to the 5th subfigure in Fig. 1 of the main paper, but after reading it more carefully it seems as though I would need to use hybrid rendering to be able to render the fine details of the human being.

    Could you

    1. confirm whether hybrid rendering exists in this codebase, and
    2. clarify whether or not it was used to render the full-body human in Fig. 1 of the main paper?

    Thank you in advance.

    opened by andrewsonga 1
  • Misaligned views in rendering

    Hi,

    I am working with the MIT dataset to test the network. When I specify a camera to render, it looks fine throughout the timeline. However, when rendering the rotating video, the cameras are misaligned as shown in the attached screenshot. All cameras appear clustered at the center, and the views are spread around within the range the cameras cover. Could this be an error in the KRT file or the configuration?

    Any suggestion is welcome. (Attached screenshot: issue_MIT_5_cams)

    opened by CorneliusHsiao 1

Releases
v0.1

Owner
Meta Research