😮 The official implementation of "CoNeRF: Controllable Neural Radiance Fields" 😮

Overview

CoNeRF: Controllable Neural Radiance Fields


This is the official implementation for "CoNeRF: Controllable Neural Radiance Fields"

The codebase is based on HyperNeRF, implemented in JAX and building on JaxNeRF.

Setup

The code can be run under any environment with Python 3.8 and above. (It may run with lower versions, but we have not tested them.)

We recommend using Miniconda and setting up an environment:

conda create --name conerf python=3.8
conda activate conerf

Next, install the required packages:

pip install -r requirements.txt

Install the appropriate JAX distribution for your environment by following the instructions here. For example:

# For CUDA version 11.1
pip install --upgrade "jax[cuda111]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
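
To quickly verify that the GPU build of JAX was installed correctly, you can list the devices JAX sees (standard JAX API; you should see GPU devices rather than only the CPU):

import jax
print(jax.devices())  # should list GPU device(s), not just CpuDevice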

Dataset

Basic structure

The dataset uses the same format as Nerfies for the image extraction and camera estimation.

For annotations, we create an additional file, annotations.yml, that lists attribute values and their corresponding frames, plus a folder of [frame_id].json files (only annotated frames need a corresponding .json file), where each *.json file is a segmentation mask created with LabelMe. In summary, each dataset has to have the following structure:

<dataset>
    ├── annotations
    │   └── ${item_id}.json
    ├── annotations.yml
    ├── camera
    │   └── ${item_id}.json
    ├── camera-paths
    ├── colmap
    ├── rgb
    │   ├── ${scale}x
    │   │   └── ${item_id}.png
    ├── metadata.json
    ├── dataset.json
    ├── scene.json
    └── mapping.yml

The mapping.yml file can be created manually and maps class indices to the class names created with LabelMe. It has the following format:

<index-from-0>: <class-name>

for example:

0: left eye
1: right eye

The annotations.yml file can be created manually as well (though we encourage using the provided notebook for this task) and has the following format:

- class: <id>
  frame: <number>
  value: <attribute-value> # between -1 and 1

for example:

- class: 0 # corresponding to left eye
  frame: 128
  value: -1
- class: 1 # corresponding to right eye
  frame: 147
  value: 1
- class: 2 # corresponding to mouth
  frame: 147
  value: -1 
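
As a quick sanity check before training, the two files can be loaded and cross-validated with a short snippet. This is a minimal sketch, not part of the codebase; it assumes PyYAML is installed and that /path/to/dataset is your dataset root:

import yaml

dataset = "/path/to/dataset"  # hypothetical dataset root

with open(f"{dataset}/mapping.yml") as f:
    mapping = yaml.safe_load(f)        # e.g. {0: 'left eye', 1: 'right eye'}
with open(f"{dataset}/annotations.yml") as f:
    annotations = yaml.safe_load(f)    # list of {class, frame, value} entries

for entry in annotations:
    assert entry["class"] in mapping, f"unknown class id: {entry['class']}"
    assert -1 <= entry["value"] <= 1, f"value outside [-1, 1]: {entry}"
print(f"{len(annotations)} annotations over {len(mapping)} attributes look valid")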

Principles of annotating the data

  • Our framework works well with just a handful of annotations (of the extreme points, for example). For our main face visualizations, we used just 2 annotations per attribute.
  • We highly recommend annotating frames that are at the extremes of the possible controllability; for example, a fully closed eye should have the value -1 and a fully open eye the value +1. The extremes need not be exact, but the more accurate the annotations, the more accurate the controllability you can expect.
  • Each attribute can be annotated independently, i.e., there is no need to look for frames where all attributes take exactly extreme values. For example, left eye=-1 and left eye=+1 can be annotated in frames 28 and 47, while right eye=-1 and right eye=+1 can be annotated in any other frames.
  • Masks can be rough and slightly oversized; it is generally better to annotate too much than too little.
  • The general annotation pipeline looks like this:
  1. Find a set of frames that contain extreme attribute values (e.g., closed eye, open eye, etc.).
  2. Provide the values for the attributes to be controlled in annotations.yml.
  3. Set names for these attributes (necessary for the masking part).
  4. Run LabelMe.
  5. Save the annotated frames in annotations/ (a sketch of how these files can be parsed is shown below).
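
For reference, here is a rough sketch of how a single LabelMe *.json file can be rasterized into per-class binary masks. This is not the repository's own loader, just an illustration; it assumes numpy and Pillow are installed and that all annotated shapes are polygons:

import json

import numpy as np
from PIL import Image, ImageDraw

def labelme_to_masks(json_path, name_to_class):
    """Rasterize LabelMe polygons into one boolean mask per class id."""
    with open(json_path) as f:
        ann = json.load(f)
    height, width = ann["imageHeight"], ann["imageWidth"]
    masks = {}
    for shape in ann["shapes"]:
        class_id = name_to_class[shape["label"]]
        canvas = Image.new("L", (width, height), 0)
        polygon = [tuple(point) for point in shape["points"]]
        ImageDraw.Draw(canvas).polygon(polygon, outline=1, fill=1)
        masks[class_id] = np.array(canvas, dtype=bool)
    return masks

# e.g.: labelme_to_masks("annotations/000128.json", {"left eye": 0, "right eye": 1})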

Now you can run the training! Also, check out our datasets (52GB of data) to avoid any preprocessing steps on your own.

We tried our best to make the CoNeRF codebase general enough to support novel view synthesis validation datasets (see conerf/datasets/nerfies.py), but we mainly focused on the interpolation task. If you have access to a novel view synthesis rig like the one used in NeRFies or HyperNeRF and you find that something doesn't work, please open an issue.

Providing value annotations

We extended the basic notebook used in NeRFies and HyperNeRF for processing the data so that you can annotate the necessary images with attributes. Please check out notebooks/Capture_Processing.ipynb for more details. In addition to all the files produced by the NeRFies pipeline, the notebook will also generate the <dataset>/annotations.yml and <dataset>/mapping.yml files.

Providing masking annotations

We adapted the data loading class to handle annotations from LabelMe (we used its Docker version). An example annotation for one of our datasets looks like this:

[Image: example LabelMe annotation for one of our datasets]

LabelMe generates *.json files in the directory set under File->Output Dir, which should point to the <dataset>/annotations/ folder.

Training

After preparing a dataset, you can train the model by running:

export DATASET_PATH=/path/to/dataset
export EXPERIMENT_PATH=/path/to/save/experiment/to
python train.py \
    --base_folder $EXPERIMENT_PATH \
    --gin_bindings="data_dir='$DATASET_PATH'" \
    --gin_configs configs/test_local_attributes.gin

To log telemetry to TensorBoard and render checkpoints on the fly, also launch an evaluation job by running:

python eval.py \
    --base_folder $EXPERIMENT_PATH \
    --gin_bindings="data_dir='$DATASET_PATH'" \
    --gin_configs configs/test_local_attributes.gin

The two jobs should use a mutually exclusive set of GPUs. This division allows the training job to run without having to stop for evaluation.
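
For example, on a machine with two GPUs, you could pin each job to its own device with CUDA_VISIBLE_DEVICES (a standard CUDA environment variable):

CUDA_VISIBLE_DEVICES=0 python train.py \
    --base_folder $EXPERIMENT_PATH \
    --gin_bindings="data_dir='$DATASET_PATH'" \
    --gin_configs configs/test_local_attributes.gin

CUDA_VISIBLE_DEVICES=1 python eval.py \
    --base_folder $EXPERIMENT_PATH \
    --gin_bindings="data_dir='$DATASET_PATH'" \
    --gin_configs configs/test_local_attributes.gin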

Configuration

  • We use Gin for configuration.
  • We provide a couple of preset configurations.
  • Please refer to config.py for documentation on what each configuration does.
  • Preset configs:
    • baselines/: All configs that were used to perform quantitative evaluation in the experiments, including baseline methods. The _proj suffix denotes a method that uses a learnable projection.
      • ours.gin: The full CoNeRF architecture with masking.
      • hypernerf_ap[_proj].gin: The axis-aligned plane configuration for HyperNeRF.
      • hypernerf_ds[_proj].gin: The deformable surface configuration for HyperNeRF.
      • nerf_latent[_proj].gin: The configuration for a simple baseline where we concatenate a learnable latent with each coordinate (resembles HyperNeRF AP without the warping field).
      • nerfies[_proj].gin: The configuration for the NeRFies model.
      • nerf.gin: The configuration for the simplest NeRF architecture.
    • full-hd/, hd/ and post/: We repurposed our baselines/ours.gin configuration for training at different resolutions and with different sampling parameters that increase the quality of the generated images. Using post/ours.gin required training on 4x A100 GPUs for 2 weeks to converge.

Synthetic dataset

We generated the synthetic dataset using Kubric. You can find the generation script here. After generating the dataset, you can run prepare_kubric_dataset.py to convert it to the format CoNeRF expects. The dataset is already included in the provided zip file.

Additional scripts

All scripts below are invoked the same way as the training script: they require $EXPERIMENT_PATH and $DATASET_PATH to be specified, and they save their results into $EXPERIMENT_PATH.

  • render_changing_attributes.py: Renders each of the changing attributes under a fixed camera.
  • render_video.py: Renders a changing view under a fixed set of attributes.
  • render_all.py: Renders dynamically changing attributes and camera parameters.
  • train_lr.py: Estimates the parameters of a linear regression that maps the high-dimensional embedding to the controllable attributes (see the sketch below).
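
To illustrate the idea behind train_lr.py, here is a minimal numpy sketch of fitting a linear map from embeddings to attributes. The variable names and shapes are hypothetical; the actual script handles loading the learned embeddings and annotations itself:

import numpy as np

# Hypothetical inputs: per-frame embeddings (N, D) and annotated
# attribute values (N, K) in [-1, 1].
embeddings = np.random.randn(100, 64)
attributes = np.random.uniform(-1, 1, size=(100, 3))

# Least-squares fit of a linear map (plus bias) from embedding space
# to attribute space: attributes ≈ [embeddings, 1] @ W.
X = np.concatenate([embeddings, np.ones((len(embeddings), 1))], axis=1)
W, *_ = np.linalg.lstsq(X, attributes, rcond=None)

predicted = X @ W  # predicted attribute values for each frame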

Additional notes

  • We have used notebooks/Results.ipynb to generate the tables and visualizations for the article. While it may not be particularly useful for your case, we have left it so you can copy or reuse some of its snippets. It is especially useful because it shows how to extract data from TensorBoard logs.
  • We removed some of the notebooks that were available in the HyperNeRF codebase (e.g., for training) but were no longer applicable to CoNeRF. We highly recommend using the available scripts instead. If you manage to adapt HyperNeRF's notebooks, please open a pull request.

Citing

If you find our work useful, please consider citing:

@inproceedings{kania2022conerf,
  title     = {{CoNeRF: Controllable Neural Radiance Fields}},
  author    = {Kania, Kacper and Yi, Kwang Moo and Kowalski, Marek and Trzci{\'n}ski, Tomasz and Tagliasacchi, Andrea},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2022}
}