
Learning Generative Models of Textured 3D Meshes from Real-World Images

This is the reference implementation of "Learning Generative Models of Textured 3D Meshes from Real-World Images", accepted at ICCV 2021.

Dario Pavllo, Jonas Kohler, Thomas Hofmann, Aurelien Lucchi. Learning Generative Models of Textured 3D Meshes from Real-World Images. In IEEE/CVF International Conference on Computer Vision (ICCV), 2021.

This work is a follow-up to Convolutional Generation of Textured 3D Meshes, in which we learn a GAN that generates 3D triangle meshes and the corresponding texture maps using 2D supervision. In this work, we relax the requirement for keypoints in the pose estimation step and generalize the approach to unannotated collections of images and to new categories/datasets such as ImageNet.

Setup

Instructions on how to set up dependencies, datasets, and pretrained models can be found in SETUP.md.

Quick start

In order to test our pretrained models, the minimal setup described in SETUP.md is sufficient. No dataset setup is required. We provide an interface for evaluating FID scores, as well as an interface for exporting a sample of generated 3D meshes (both as a grid of renderings and as .obj meshes).

Exporting a sample

You can export a sample of generated meshes using --export_sample. Here are some examples:

python run_generation.py --name pretrained_imagenet_car_singletpl --dataset imagenet_car --gpu_ids 0 --batch_size 10 --export_sample --how_many 40
python run_generation.py --name pretrained_imagenet_airplane_singletpl --dataset imagenet_airplane --gpu_ids 0 --batch_size 10 --export_sample --how_many 40
python run_generation.py --name pretrained_imagenet_elephant_singletpl --dataset imagenet_elephant --gpu_ids 0 --batch_size 10 --export_sample --how_many 40
python run_generation.py --name pretrained_cub_singletpl --dataset cub --gpu_ids 0 --batch_size 10 --export_sample --how_many 40
python run_generation.py --name pretrained_all_singletpl --dataset all --conditional_class --gpu_ids 0 --batch_size 10 --export_sample --how_many 40

This will generate a sample of 40 meshes, render them from random viewpoints, and export the final result to the output directory as a .png image. In addition, the script will export the meshes as .obj files (along with their material and texture files), which can be imported into Blender or other modeling tools. You can switch between the single-template and multi-template settings by appending either _singletpl or _multitpl to the experiment name.
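
For example, assuming the corresponding multi-template checkpoint has been downloaded as described in SETUP.md, the CUB export above becomes (only the experiment name changes):

python run_generation.py --name pretrained_cub_multitpl --dataset cub --gpu_ids 0 --batch_size 10 --export_sample --how_many 40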

Evaluating FID on pretrained models

You can evaluate the FID of a model by specifying --evaluate. For the models trained to generate a single category (setting A):

python run_generation.py --name pretrained_cub_singletpl --dataset cub --gpu_ids 0,1,2,3 --batch_size 64 --evaluate
python run_generation.py --name pretrained_p3d_car_singletpl --dataset p3d_car --gpu_ids 0,1,2,3 --batch_size 64 --evaluate
python run_generation.py --name pretrained_imagenet_zebra_singletpl --dataset imagenet_zebra --gpu_ids 0,1,2,3 --batch_size 64 --evaluate

For the conditional models trained to generate all classes (setting B), you can specify the category to evaluate (e.g. motorcycle):

python run_generation.py --name pretrained_all_singletpl --dataset all --conditional_class --gpu_ids 0,1,2,3 --batch_size 64 --evaluate --filter_class motorcycle

As before, you can switch between the single-template and multi-template settings by appending either _singletpl or _multitpl to the experiment name. You can of course also adjust the number of GPUs and batch size to suit your computational resources. For evaluation, 16 elements per GPU is a sensible choice. You can also tune the number of data-loading threads using the --num_workers argument (default: 4 threads). Note that the FID will exhibit a small variance depending on the chosen batch size.
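
For example, assuming a single GPU is available, the CUB evaluation above can follow the 16-elements-per-GPU guideline with a larger number of data-loading threads as follows (adjust the values to match your hardware):

python run_generation.py --name pretrained_cub_singletpl --dataset cub --gpu_ids 0 --batch_size 16 --evaluate --num_workers 8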

Training

See TRAINING.md for instructions on how to generate the pseudo-ground-truth dataset and train a new model from scratch. The documentation also explains how to run the pose estimation steps and how to run the full pipeline on a custom dataset.

Citation

If you use this work in your research, please consider citing our paper(s):

@inproceedings{pavllo2021textured3dgan,
  title={Learning Generative Models of Textured 3D Meshes from Real-World Images},
  author={Pavllo, Dario and Kohler, Jonas and Hofmann, Thomas and Lucchi, Aurelien},
  booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

@inproceedings{pavllo2020convmesh,
  title={Convolutional Generation of Textured 3D Meshes},
  author={Pavllo, Dario and Spinks, Graham and Hofmann, Thomas and Moens, Marie-Francine and Lucchi, Aurelien},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2020}
}

License and Acknowledgments

Our work is licensed under the MIT license. For more details, see LICENSE. This repository builds upon convmesh and includes third-party libraries which may be subject to their respective licenses: Synchronized-BatchNorm-PyTorch, the data loader from CMR, and FID evaluation code from pytorch-fid.

Comments
  • CVE-2007-4559 Patch

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559, a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing the input, a maliciously crafted .tar file can perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks that all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • how to test with the picture

    I really appreciate your work, but I am wondering how I can test it with my own picture. For example, could I input an image of a car and directly get the .obj and .png?

    opened by lisentao 1
  • caffe2 error for detectron

    Hi,

    I am trying to test the code on a custom dataset. I downloaded seg_every_thing in the root, copied detections_vg3k.py to the tools directory of the former, and built Detectron from scratch, but it still gives me: "AssertionError: Detectron ops lib not found; make sure that your Caffe2 version includes Detectron module". There is no makefile in the ops lib of Detectron. How can I fix this?

    opened by sinAshish 2
  • Person mesh and reconstruction reconstructing texture

    Thanks for your great work! I would like to work on the person class to create a mesh as well as the corresponding texture. Can you point me to a suitable dataset and the steps to get started?

    opened by sharoseali 0
  • training on custom dataset

    Thank you for your great work! Currently, I'm following your work and trying to train on custom datasets. When I moved on to the data preparation part, I found that the model weights in the seg_every_thing repo are no longer available. Would it be possible for you to share the weights ('lib/datasets/data/trained_models/33219850_model_final_coco2vg3k_seg.pkl') used in tools/detection_tool_vg3k.py with us? Looking forward to your reply! Thanks~

    opened by pingping-lu 1