Nsdf: A mesh SDF with just some code we can directly paste into our raymarcher


imgs/bunny.png

nsdf

Representing SDFs of arbitrary meshes has been a bit tricky so far. Expressing a mesh SDF as a combination of simpler analytical SDFs is usually not possible, so the options have been pre-computed 3D SDF textures or acceleration structures over the triangle mesh itself. The downside is that neither is as plug-and-play as an analytical SDF, because you need to push additional data to the shader (which is not really possible in something like Shadertoy). Wouldn't it be cool to have a way of representing a mesh SDF with just some code we can directly paste into our raymarcher, the same way we do with simple analytical SDFs?

Over the past few years, another promising option for representing SDFs of arbitrary meshes has emerged: neural approximations of SDFs (let's call them nsdfs).

Are these nsdfs usable outside of the lab? The networks described in the papers are either too big (millions of parameters) to be represented purely in code, or require additional 3D textures as inputs (again, millions of parameters). So, can we turn them into copy-pastable distance functions that are usable in Shadertoy? Yes, yes we can:

imgs/dragon_big_loop.gif

See in action on Shadertoy

This is quite a large nsdf of the Stanford dragon running in Shadertoy at ~25 fps in 640x360 resolution on a 3080RTX. Not perfect, but not bad at all.

The nsdf function in the shader looks something like this:

float nsdf(vec3 x) {
    vec4 x_e_0 = mat3x4(vec4(-0.6761706471443176, -0.5204018950462341, -0.725279688835144, 0.6860896944999695), vec4(0.4600033164024353, 2.345594644546509, 0.4790898859500885, -1.7588382959365845), vec4(0.0854012668132782, 0.11334510892629623, 1.3206489086151123, 1.0468124151229858)) * x * 5.0312042236328125;vec4 x_0_0 = sin(x_e_0);vec4 x_0_12 = cos(x_e_0);vec4 x_e_1 = mat3x4(vec4(-1.151658296585083, 0.3811194896697998, -1.270230770111084, -0.28512871265411377), vec4(-0.4783991575241089, 1.5332365036010742, -1.1580479145050049, -0.038533274084329605), vec4(1.764098882675171, -0.8132078647613525, 0.607886552810669, -0.9051652550697327)) .....
}

The second line continues for much, much longer and would take up most of the space in this README.

imgs/monkey_big_loop.gif

There's actually no magic needed to make this work; it's enough to train a smallish network with Fourier features as inputs.
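
To make that concrete, here is a rough PyTorch sketch of such a model: Fourier features of the query point fed into a small MLP that regresses the signed distance. This is a minimal illustration of the idea, not the repo's actual train.py; the class name and sizes are placeholders.

import torch
import torch.nn as nn

class FourierSDF(nn.Module):
    # Minimal sketch: Fourier-feature encoding + small MLP -> signed distance.

    def __init__(self, num_features=64, hidden=64, scale=5.0):
        super().__init__()
        # Random projection used for the Fourier features; fixed after init.
        self.register_buffer("B", torch.randn(num_features, 3) * scale)
        self.net = nn.Sequential(
            nn.Linear(2 * num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):                      # x: (batch, 3) points in [-1, 1]
        proj = x @ self.B.t()                  # (batch, num_features)
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.net(feats).squeeze(-1)     # predicted signed distance

# Training sketch: `points` and `true_sdf` would come from sampling the mesh
# (e.g. with mesh-to-sdf); the loss is a plain regression on the distances.
# model = FourierSDF()
# loss = nn.functional.l1_loss(model(points), true_sdf)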

Surprisingly (not!), the smaller the network, the lower the detail of the resulting model (though on the flip side, the model looks more stylized); the presets below are also summarized in a short sketch after the list:

  • 32 Fourier features, 2 hidden layers of 16 neurons
  • should work in real time on most modern-ish GPUs

imgs/bunny_small_loop.gif

  • 64 Fourier features, 2 hidden layers of 64 neurons
  • a 3080RTX can still run this at 60 FPS at 640x360
  • Note that it takes a few seconds to compile the shader

imgs/bunny_normal_loop.gif

  • 96 Fourier features, 1 hidden layer of 96 neurons
  • ~25 fps at 640x360 on 3080RTX
  • Note that it can take tens of seconds to compile the shader

imgs/bunny_big_loop.gif
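
For reference, the three presets above can be summarized as a small configuration table; note that pairing the --model_size names with these exact architectures is my reading of the list, not something copied from the repo's code:

# Hypothetical mapping of --model_size presets to the architectures listed above.
MODEL_PRESETS = {
    "small":  {"fourier_features": 32, "hidden_layers": 2, "hidden_size": 16},
    "normal": {"fourier_features": 64, "hidden_layers": 2, "hidden_size": 64},
    "bigly":  {"fourier_features": 96, "hidden_layers": 1, "hidden_size": 96},
}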

Using sigmoid as the activation function

Replacing ReLU with sigmoid as the activation function makes the model produce an SDF with a smoother but less detailed surface.

imgs/bunny_normal_smooth_loop.gif
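
In terms of the sketch above, this is just a different activation passed to the MLP; the helper below is again a hypothetical illustration, not code from the repo:

import torch.nn as nn

def make_mlp(num_features, hidden, activation=nn.ReLU):
    # Swap nn.ReLU for nn.Sigmoid to trade surface detail for smoothness.
    return nn.Sequential(
        nn.Linear(2 * num_features, hidden), activation(),
        nn.Linear(hidden, hidden), activation(),
        nn.Linear(hidden, 1),
    )

smooth_net = make_mlp(num_features=64, hidden=64, activation=nn.Sigmoid)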

Generating your own nsdf

To generate your own nsdf, you first have to train an nsdf model:

python train.py $YOUR_MESH_FILE --output $OUTPUT_MODEL_FILE --model_size {small, normal, bigly}

Once the model is trained, you can generate the GLSL nsdf function:

python generate_glsl.py $OUTPUT_MODEL_FILE

Then you can just copy-paste the generated code into your raymarcher.
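
To give an idea of what the generation step does, here is a hypothetical sketch of how a trained weight block could be baked into a GLSL matrix literal. generate_glsl.py's real output is far longer and more involved; emit_mat3x4 is an illustrative helper, not part of the repo:

import numpy as np

def emit_mat3x4(W):
    # W: (4, 3) weight block -> GLSL mat3x4 literal built from three column vec4s.
    cols = []
    for c in range(3):
        cols.append("vec4(" + ", ".join("%.9f" % v for v in W[:, c]) + ")")
    return "mat3x4(" + ", ".join(cols) + ")"

W = np.random.randn(4, 3)
print("vec4 x_e_0 = " + emit_mat3x4(W) + " * x;")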

WARNING: The "bigly" models can crash your browser if your GPU is not powerful enough.

Setup

The following pip packages are required for training:

mesh-to-sdf
numpy
torch
trimesh

(you can just run pip install -r requirements.txt)

Notes:

  • The nsdf function is defined only inside the [-1, 1] cube; you have to handle evaluation outside of that range yourself.
  • Related to the above: I handle evaluation outside the [-1, 1] cube by first raymarching against the cube itself and only switching to the nsdf once the ray reaches it (see the sketch after these notes). This has a positive performance impact, so keep that in mind when reading the FPS numbers above.
  • For smaller models, it might be best to train several models and pick the best one, since there's visible variance in the quality.
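
Here is a rough sketch of that bounding-cube trick (in Python for readability; the shader version is the same idea in GLSL). The nsdf argument stands in for the generated network:

import numpy as np

def sdf_box(p, half_extent=1.0):
    # Exact SDF of an axis-aligned cube centered at the origin.
    q = np.abs(p) - half_extent
    return np.linalg.norm(np.maximum(q, 0.0)) + min(max(q[0], q[1], q[2]), 0.0)

def scene_sdf(p, nsdf, threshold=0.01):
    # Outside the [-1, 1] cube the cube distance is a safe lower bound on the
    # mesh distance, so the expensive network only runs near or inside the cube.
    d_cube = sdf_box(p)
    if d_cube > threshold:
        return d_cube
    return nsdf(p)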