This repository is a fork of the NVlabs nvdiffmodeling code that uses our own data. (Differentiable rasterization applied to 3D model simplification tasks.)

Overview

nvdiffmodeling [original code]

Teaser image

Differentiable rasterization applied to 3D model simplification tasks, as described in the paper:

Appearance-Driven Automatic 3D Model Simplification
Jon Hasselgren, Jacob Munkberg, Jaakko Lehtinen, Miika Aittala and Samuli Laine
https://research.nvidia.com/publication/2021-04_Appearance-Driven-Automatic-3D
https://arxiv.org/abs/2104.03989

License

Copyright © 2021, NVIDIA Corporation. All rights reserved.

This work is made available under the Nvidia Source Code License.

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

Citation

@inproceedings{Hasselgren2021,
  title     = {Appearance-Driven Automatic 3D Model Simplification},
  author    = {Jon Hasselgren and Jacob Munkberg and Jaakko Lehtinen and Miika Aittala and Samuli Laine},
  booktitle = {Eurographics Symposium on Rendering},
  year      = {2021}
}

Installation

Requirements:

Tested in Anaconda3 with Python 3.6 and PyTorch 1.8.

One time setup (Windows)

  1. Install Microsoft Visual Studio 2019+ with Microsoft Visual C++.
  2. Install CUDA 10.2 or above. Note: install the CUDA toolkit from https://developer.nvidia.com/cuda-toolkit (not through Anaconda).
  3. Install the version of PyTorch that is compatible with the installed CUDA toolkit. Below is an example with CUDA 11.1:
conda create -n dmodel python=3.6
activate dmodel
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
conda install imageio
pip install PyOpenGL glfw
  4. Install nvdiffrast in the dmodel conda env, following its installation instructions (a sketch is shown below).
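
A minimal sketch of step 4, assuming nvdiffrast is installed with pip from a clone of its GitHub repository; defer to the nvdiffrast installation instructions if they differ:

git clone https://github.com/NVlabs/nvdiffrast
cd nvdiffrast
pip install .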

Every new command prompt

activate dmodel
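
On Linux, or if the activate script is not on the path, the equivalent command (assuming a standard conda installation) is:

conda activate dmodel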

Examples

Sphere to cow example:

python train.py --config configs/spot.json

The results will be stored in the out folder. The Spot model was created and released into the public domain by Keenan Crane.

Additional assets can be downloaded here [205MB]. Unzip and place the subfolders in the project data folder, e.g., data\skull. All assets are copyright of their respective authors, see included license files for further details.
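
For instance, on a shell with unzip available, placing the assets could look like this (the archive name below is hypothetical; use the file obtained from the download link):

unzip assets.zip -d data/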

Included examples

  • building.json - Our data
  • skull.json - Joint normal map and shape optimization on a skull
  • ewer.json - Ewer model from a reduced mesh as initial guess
  • gardenina.json - Aggregate geometry example
  • hibiscus.json - Aggregate geometry example
  • figure_brushed_gold_64.json - LOD example, trained against a supersampled reference
  • figure_displacement.json - Joint shape, normal map, and displacement map example

The json files that end in _paper.json are configs with the settings used for the results in the paper. They take longer and require a GPU with sufficient memory.
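
A paper config is run the same way as the other examples; the config name below is hypothetical, so substitute an actual *_paper.json file that exists in the configs folder:

python train.py --config configs/ewer_paper.json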

Server usage (through Docker)

  • Build the Docker image (run the command from the code root folder). Requires a driver that supports CUDA 10.1 or newer.

docker build -f docker/Dockerfile -t diffmod:v1 .

  • Start an interactive Docker container:

docker run --gpus device=0 -it --rm -v /raid:/raid diffmod:v1 bash

  • Detached Docker run:

docker run --gpus device=1 -d -v /raid:/raid -w=[path to the code] diffmod:v1 python train.py --config configs/spot.json
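
To monitor a detached run, standard Docker commands apply, for example:

docker ps                      # find the ID of the detached container
docker logs -f <container ID>  # follow the training output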
