Official repo for AutoInt: Automatic Integration for Fast Neural Volume Rendering (CVPR 2021)

Overview

AutoInt: Automatic Integration for Fast Neural Volume Rendering
CVPR 2021

Project Page | Video | Paper

Open Colab
PyTorch implementation of automatic integration.
AutoInt: Automatic Integration for Fast Neural Volume Rendering
David B. Lindell*, Julien N. P. Martel*, Gordon Wetzstein
Stanford University
*denotes equal contribution
in CVPR 2021

Quickstart

To get started quickly, we provide a Colab link above. Otherwise, you can clone this repo and follow the instructions below.

To setup a conda environment, download example training data, begin the training process, and launch Tensorboard:

conda env create -f environment.yml
conda activate autoint 
cd experiment_scripts
python train_1d_integral.py
tensorboard --logdir=../logs --port=6006

This example will fit a grad network to a 1D signal and evaluate the integral. You can monitor the training in your browser at localhost:6006. You can also train a network on the sparse tomography problem presented in the paper with python train_sparse_tomography.py.
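
For intuition, here is a minimal sketch of the idea behind this example, assuming a plain PyTorch MLP and toy hyperparameters rather than the repo's actual grad/integral network construction: the MLP parameterizes an antiderivative F, its derivative dF/dx is supervised on a 1D signal, and a definite integral is then evaluated as F(b) - F(a) with just two forward passes.

import math
import torch

# Hedged sketch of the AutoInt idea (illustrative, not the repo's implementation).
signal = lambda x: torch.sin(3 * x)               # example 1D signal to integrate

F = torch.nn.Sequential(                          # "integral network" F(x)
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))

opt = torch.optim.Adam(F.parameters(), lr=1e-3)
for _ in range(2000):
    x = (torch.rand(128, 1) * 2 * math.pi).requires_grad_(True)
    dF = torch.autograd.grad(F(x).sum(), x, create_graph=True)[0]  # derivative of F
    loss = ((dF - signal(x)) ** 2).mean()          # supervise the "grad network"
    opt.zero_grad(); loss.backward(); opt.step()

a, b = torch.zeros(1, 1), torch.full((1, 1), math.pi)
print((F(b) - F(a)).item())                        # ≈ ∫_0^π sin(3x) dx = 2/3

In the actual code, the grad network is built explicitly so that it shares parameters with the integral network; see experiment_scripts/train_1d_integral.py for the real training script.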

AutoInt for Neural Rendering

Automatic integration can be used to learn closed-form solutions to the volume rendering equation, an integral equation that accumulates transmittance and emittance along rays to render an image. While conventional neural renderers require hundreds of samples along each ray to evaluate these integrals (and hence hundreds of costly forward passes through a network), AutoInt evaluates them with far fewer forward passes.
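
As a rough sketch of how such piecewise evaluation could look, the snippet below splits a ray into a handful of sections and accumulates radiance from per-section definite integrals. The interfaces sigma_int and color_int (antiderivative networks for density and density-weighted color along the ray) are assumptions for illustration, not the repo's API.

import torch

def render_ray(sigma_int, color_int, t):
    # t: (N+1,) section boundaries along the ray
    d_sigma = sigma_int(t[1:]) - sigma_int(t[:-1])     # (N,)   integral of sigma per section
    d_color = color_int(t[1:]) - color_int(t[:-1])     # (N, 3) integral of sigma*color per section
    # transmittance entering each section, accumulated from the density integrals
    T = torch.exp(d_sigma - torch.cumsum(d_sigma, dim=0))
    return (T.unsqueeze(-1) * d_color).sum(dim=0)      # (3,) rendered color for the ray

# toy usage with analytic antiderivatives standing in for trained networks
t = torch.linspace(2.0, 6.0, 9)                        # 8 sections along the ray
sigma_int = lambda s: 0.1 * s                          # antiderivative of constant sigma = 0.1
color_int = lambda s: 0.1 * s.unsqueeze(-1) * torch.tensor([1.0, 0.5, 0.2])
print(render_ray(sigma_int, color_int, t))

Each ray then costs one forward pass per section boundary rather than hundreds of densely sampled network evaluations.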

Training

To run AutoInt for neural rendering, first set up the conda environment with

conda env create -f environment.yml
conda activate autoint 

Then, download the datasets to the data folder. Training is supported on three datasets. The synthetic Blender data from NeRF and the LLFF scenes are hosted here. The DeepVoxels data are hosted here.

Finally, use the provided config files in the experiment_scripts/configs folder to train on these datasets. For example, to train on a NeRF Blender dataset, run the following

python train_autoint_radiance_field.py --config ./configs/config_blender_tiny.ini
tensorboard --logdir=../logs/ --port=6006

This will train a small, low-resolution scene. To train scenes at high resolution (which requires a few days of training time), use the config_blender.ini, config_deepvoxels.ini, or config_llff.ini config files.

Rendering

Rendering from a trained model can be done with the following command.

python train_autoint_radiance_field.py --config /path/to/config/file --render_model ../logs/path/to/log/directory <epoch number> --render_output /path/to/output/folder

Here, the --render_model argument indicates the log directory where the code saves the models and checkpoints. For example, this would be ../logs/blender_lego for the default Blender dataset. The epoch number can be found by looking at the numbers in the saved checkpoint filenames in ../logs/blender_lego/checkpoints/. Finally, --render_output should specify a folder where the output rendered images will be generated.

Citation

@inproceedings{autoint2021,
  title={AutoInt: Automatic Integration for Fast Neural Volume Rendering},
  author={David B. Lindell and Julien N. P. Martel and Gordon Wetzstein},
  year={2021},
  booktitle={Proc. CVPR},
}