[ICCV21] Code for RetrievalFuse: Neural 3D Scene Reconstruction with a Database

Overview

RetrievalFuse

Paper | Project Page | Video

RetrievalFuse: Neural 3D Scene Reconstruction with a Database
Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
ICCV 2021

This repository contains the code for the ICCV 2021 paper RetrievalFuse, a novel approach for 3D reconstruction from low-resolution distance field grids and from point clouds.

In contrast to traditional generative learned models, which encode the full generative process into a neural network and can struggle to maintain local details at the scene level, we introduce a new method that directly leverages scene geometry from the training database.

Files and Folders


Broad code structure is as follows:

| File / Folder | Description |
| --- | --- |
| `config/super_resolution` | Super-resolution experiment configs |
| `config/surface_reconstruction` | Surface reconstruction experiment configs |
| `config/base` | Defaults for configurations |
| `config/config_handler.py` | Config file parser |
| `data/splits` | Training and validation splits for different datasets |
| `dataset/scene.py` | SceneHandler class for managing access to scene data samples |
| `dataset/patched_scene_dataset.py` | PyTorch dataset class for scene data |
| `external/ChamferDistancePytorch` | For calculating rough Chamfer distance between prediction and target while training |
| `model/attention.py` | Attention, folding and unfolding modules |
| `model/loss.py` | Loss functions |
| `model/refinement.py` | Refinement network |
| `model/retrieval.py` | Retrieval network |
| `model/unet.py` | U-Net model used as a backbone in the refinement network |
| `runs/` | Checkpoints and visualizations for experiments are dumped here |
| `trainer/train_retrieval.py` | Lightning module for training the retrieval network |
| `trainer/train_refinement.py` | Lightning module for training the refinement network |
| `util/arguments.py` | Argument parsing (additional arguments apart from those in config) |
| `util/filesystem_logger.py` | For copying source code for each run into the experiment log directory |
| `util/metrics.py` | Rough metrics for logging during training |
| `util/mesh_metrics.py` | Final metrics on meshes |
| `util/retrieval.py` | Script to dump retrievals once retrieval networks have been trained; needed for training refinement |
| `util/visualizations.py` | Utility scripts for visualizations |

Further, the `data/` directory has the following layout:

```
data                    # root data directory
├── sdf_008             # low-res (8^3) distance fields
├── sdf_016             # low-res (16^3) distance fields
├── sdf_064             # high-res (64^3) distance fields
├── pc_20K              # point cloud inputs
├── splits              # train/val splits
├── size                # data needed by SceneHandler class (autocreated on first run)
└── occupancy           # data needed by SceneHandler class (autocreated on first run)
```

Dependencies


Install the dependencies using pip:

```bash
pip install -r requirements.txt
```

Be sure to pull the `ChamferDistancePytorch` submodule in `external`.
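If the `external/ChamferDistancePytorch` directory is empty after cloning, it can typically be fetched with the standard git submodule workflow (general git usage, not specific to this repository):

```bash
# Fetch submodules declared in .gitmodules (run from the repository root)
git submodule update --init --recursive
```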

Data Preparation


For ShapeNetV2 and Matterport3D, get the appropriate meshes from the respective datasets. For 3DFRONT, get the 3DFUTURE meshes and the 3DFRONT scripts, then use our fork of 3D-FRONT-ToolBox to create the room meshes.

Once you have the meshes, use our fork of sdf-gen to create the low-res distance field inputs and the high-res targets. To create point cloud inputs, simply use trimesh.sample.sample_surface (check util/misc/sample_scene_point_clouds); a minimal sketch is given after the list below. Place the processed data in the appropriate directories:

  • data/sdf_008/ or data/sdf_016/ for low-res inputs

  • data/pc_20K/ for point cloud inputs

  • data/sdf_064/ for targets
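A minimal sketch of point cloud generation with trimesh is shown below. The input scene path and the `.npy` output filename/format are assumptions for illustration; refer to `util/misc/sample_scene_point_clouds` for the authoritative version used in the paper.

```python
# Minimal sketch: sample a 20K-point cloud from a processed scene mesh.
# Paths and the .npy output format are illustrative assumptions only.
import numpy as np
import trimesh

mesh = trimesh.load("scene.obj", force="mesh")               # hypothetical input mesh
points, _ = trimesh.sample.sample_surface(mesh, 20000)       # sample 20K points on the surface
np.save("data/pc_20K/scene.npy", points.astype(np.float32))  # hypothetical output location
```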

Training the Retrieval Network


To train retrieval networks, use the following command:

python trainer/train_retrieval.py --config config/<config> --val_check_interval 5 --experiment retrieval --wandb_main --sanity_steps 1

We provide some sample configurations for retrieval.

For super-resolution, e.g.

  • config/super_resolution/ShapeNetV2/retrieval_008_064.yaml
  • config/super_resolution/3DFront/retrieval_008_064.yaml
  • config/super_resolution/Matterport3D/retrieval_016_064.yaml

For surface-reconstruction, e.g.

  • config/surface_reconstruction/ShapeNetV2/retrieval_128_064.yaml
  • config/surface_reconstruction/3DFront/retrieval_128_064.yaml
  • config/surface_reconstruction/Matterport3D/retrieval_128_064.yaml

Once trained, create the retrievals for the train/validation sets using the following commands:

python util/retrieval.py --mode map --retrieval_ckpt <trained_retrieval_ckpt> --config <retrieval_config>
python util/retrieval.py --mode compose --retrieval_ckpt <trained_retrieval_ckpt> --config <retrieval_config>
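As a concrete example, the full retrieval stage for the ShapeNetV2 super-resolution config listed above might look like this (the checkpoint path under `runs/` is a hypothetical placeholder, not an exact filename):

```bash
# Train the retrieval network on the ShapeNetV2 8^3 -> 64^3 super-resolution task
python trainer/train_retrieval.py --config config/super_resolution/ShapeNetV2/retrieval_008_064.yaml --val_check_interval 5 --experiment retrieval --wandb_main --sanity_steps 1

# Dump retrievals for the train/validation sets with the trained checkpoint
# (the checkpoint path below is an illustrative placeholder)
python util/retrieval.py --mode map --retrieval_ckpt runs/<retrieval_experiment>/checkpoints/<ckpt>.ckpt --config config/super_resolution/ShapeNetV2/retrieval_008_064.yaml
python util/retrieval.py --mode compose --retrieval_ckpt runs/<retrieval_experiment>/checkpoints/<ckpt>.ckpt --config config/super_resolution/ShapeNetV2/retrieval_008_064.yaml
```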

Training the Refinement Network


Use the following command to train the refinement network:

python trainer/train_refinement.py --config <config> --val_check_interval 5 --experiment refinement --sanity_steps 1 --wandb_main --retrieval_ckpt <retrieval_ckpt>

Again, sample configurations for refinement are provided in the config directory.

For super-resolution, e.g.

  • config/super_resolution/ShapeNetV2/refinement_008_064.yaml
  • config/super_resolution/3DFront/refinement_008_064.yaml
  • config/super_resolution/Matterport3D/refinement_016_064.yaml

For surface-reconstruction, e.g.

  • config/surface_reconstruction/ShapeNetV2/refinement_128_064.yaml
  • config/surface_reconstruction/3DFront/refinement_128_064.yaml
  • config/surface_reconstruction/Matterport3D/refinement_128_064.yaml
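Continuing the ShapeNetV2 super-resolution example from the retrieval stage above, a concrete invocation might look like this (the retrieval checkpoint path is again a hypothetical placeholder):

```bash
# Train the refinement network using the retrievals dumped in the previous step
python trainer/train_refinement.py --config config/super_resolution/ShapeNetV2/refinement_008_064.yaml --val_check_interval 5 --experiment refinement --sanity_steps 1 --wandb_main --retrieval_ckpt runs/<retrieval_experiment>/checkpoints/<ckpt>.ckpt
```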

Visualizations and Logs


Visualizations and checkpoints are dumped in the `runs/` directory. Logs are uploaded to the user's [Weights&Biases](https://wandb.ai/site) dashboard.
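Weights & Biases logging typically requires an authenticated session; if you have not used wandb on the machine before, logging in once beforehand should suffice (standard wandb CLI, not specific to this repository):

```bash
# One-time authentication with Weights & Biases (paste your API key when prompted)
wandb login
```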

Citation


If you find our work useful in your research, please consider citing:

```
@inproceedings{siddiqui2021retrievalfuse,
  title = {RetrievalFuse: Neural 3D Scene Reconstruction with a Database},
  author = {Siddiqui, Yawar and Thies, Justus and Ma, Fangchang and Shan, Qi and Nie{\ss}ner, Matthias and Dai, Angela},
  booktitle = {Proc. International Conference on Computer Vision (ICCV)},
  month = oct,
  year = {2021}
}
```

License


The code from this repository is released under the MIT license.