We evaluate our method on several datasets (ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming other supervised and unsupervised methods and 3D representations in both accuracy and training time.

Overview

An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering

Papers with code | Paper

Nikola Zubić   Pietro Lio  

University of Novi Sad   University of Cambridge

AIAI 2021

Citation

In addition to AIAI 2021, our paper appears in the Springer book "Artificial Intelligence Applications and Innovations": link

Please cite our paper if you find this code useful for your research.

@article{zubic2021effective,
  title={An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering},
  author={Zubi{\'c}, Nikola and Li{\`o}, Pietro},
  journal={arXiv preprint arXiv:2103.03390},
  year={2021}
}

Prerequisites

  • Download the code:
    Clone the repository with the following command:

    git clone https://github.com/NikolaZubic/2dimageto3dmodel.git
    
  • Open the project in a Conda environment (Python 3.7)

  • Install packages:

    conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
    

    Then clone the Kaolin library into the root (2dimageto3dmodel) folder at the referenced commit and run the following commands (a consolidated sketch of the environment and Kaolin setup follows this list):

    cd kaolin
    python setup.py install
    pip install --no-dependencies nuscenes-devkit opencv-python-headless scikit-learn joblib pyquaternion cachetools
    pip install packaging
    
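For reference, a minimal sketch of the two steps above that are described but not shown as commands (creating the Conda environment and pinning Kaolin). The environment name is an arbitrary choice, and the commit hash below is the one a user references in the kaolin.graphics comment further down; verify both against the links in this README:

    # create and activate a Python 3.7 Conda environment (the name is an arbitrary choice)
    conda create -n 2dimageto3dmodel python=3.7
    conda activate 2dimageto3dmodel

    # clone Kaolin into the repository root and pin it to the referenced commit
    # (hash taken from the kaolin.graphics comment below; verify it before use)
    cd 2dimageto3dmodel
    git clone https://github.com/NVIDIAGameWorks/kaolin.git
    cd kaolin
    git checkout e7e513173bd4159ae45be6b3e156a3ad156a3eb9
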

Run the program

Run the following commands from the root/code/ (2dimageto3dmodel/code/) directory:

python main.py --dataset cub --batch_size 16 --weights pretrained_weights_cub --save_results

for the CUB Birds Dataset.

python main.py --dataset p3d --batch_size 16 --weights pretrained_weights_p3d --save_results

for the Pascal 3D+ Dataset.

The results will be saved under the 2dimageto3dmodel/code/results/ path.

Continue training

To continue the training process, run the following commands (without --save_results) from the root/code/ (2dimageto3dmodel/code/) directory:

python main.py --dataset cub --batch_size 16 --weights pretrained_weights_cub

for the CUB Birds Dataset.

python main.py --dataset p3d --batch_size 16 --weights pretrained_weights_p3d

for the Pascal 3D+ Dataset.
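
Note that training a model requires the pseudo-ground-truth to be set up beforehand (see the ValueError comment in the issues below). A hedged sketch of that step, based on the command a user quotes in the comments; the Pascal 3D+ variant and its flags are assumptions to verify against run_reconstruction.py:

# pseudo-ground-truth generation for CUB, as quoted in the comments below
python run_reconstruction.py --name pretrained_reconstruction_cub --dataset cub --batch_size 10 --generate_pseudogt

# assumed analogue for Pascal 3D+ (the exact flags are an assumption; check run_reconstruction.py)
python run_reconstruction.py --name pretrained_reconstruction_p3d --dataset p3d --batch_size 10 --generate_pseudogt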

License

MIT

Acknowledgment

This work builds on the architecture of Insafutdinov & Dosovitskiy.
Poisson Surface Reconstruction was used for Point Cloud to 3D Mesh transformation.
The GAN architecture (used for texture mapping) is a mixture of Xian's TextureGAN and Li's GAN.

Comments
  • Where is cmr_data?

    Keep running into this issue: from cmr_data.p3d import P3dDataset and from cmr_data.p3d import CUBDataset, but you do not have these files in your repo. I tried using cub_200_2011_dataset.py, but it does not take the same number of arguments as the CUBDataset class used in run_reconstruction.py.

    opened by achhabria7 6
  • ModuleNotFoundError: No module named 'kaolin.graphics'

    Pascal 3D+ dataset with 4722 images is successfully loaded.

    Traceback (most recent call last):
      File "main.py", line 149, in <module>
        from rendering.renderer import Renderer
      File "/home/ujjawal/my_work/object_recon/2d3d/code/rendering/renderer.py", line 1, in <module>
        from kaolin.graphics.dib_renderer.rasterizer import linear_rasterizer
    ModuleNotFoundError: No module named 'kaolin.graphics'

    I also downloaded the graphics folder from here https://github.com/NVIDIAGameWorks/kaolin/tree/e7e513173bd4159ae45be6b3e156a3ad156a3eb9 and tried to place it in the graphics folder in the kaolin folder locally, and here is the error:

    Traceback (most recent call last):
      File "main.py", line 149, in <module>
        from rendering.renderer import Renderer
      File "/home/ujjawal/my_work/object_recon/2d3d/code/rendering/renderer.py", line 1, in <module>
        from kaolin.graphics.dib_renderer.rasterizer import linear_rasterizer
      File "/usr/local/lib/python3.6/dist-packages/kaolin-0.9.0-py3.6-linux-x86_64.egg/kaolin/graphics/__init__.py", line 2, in <module>
      File "/usr/local/lib/python3.6/dist-packages/kaolin-0.9.0-py3.6-linux-x86_64.egg/kaolin/graphics/nmr/__init__.py", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/kaolin-0.9.0-py3.6-linux-x86_64.egg/kaolin/graphics/nmr/rasterizer.py", line 30, in <module>
    ImportError: cannot import name rasterize_cuda

    opened by ujjawalcse 6
  • No module named 'models.reconstruction'

    Dear NikolaZubic:
    Thanks for updating the code recently. Did you put reconstruction.py in the models folder? When I run “python run_reconstruction.py --name pretrained_reconstruction_cub --dataset cub --batch_size 10 --generate_pseudogt”, it displays:
    No module named 'models.reconstruction'.

    opened by lw0210 2
  • inference with single RGB pictures

    Hi, I am interested in your work; it is wonderful. I want to use my own picture to test the model. Could you provide the pretrained model and inference scripts?

    opened by 523997931 2
  • can't find the pseudogt_512*512.npz file

    Dear NikolaZubic: I want to cite your paper, but I can't find the pseudogt_512*512.npz file and can't reproduce the results. Can you give me the pseudogt_512*512.npz file and help me reproduce them? Thanks

    opened by Yangfuha 1
  • ValueError: Training a model requires the pseudo-ground-truth to be setup beforehand.

    I recently read your paper and was very interested in it. I want to reproduce the code of this paper. When I followed your instructions, I found it difficult to run the commands (python main.py --dataset cub --batch_size 16 --weights pretrained_weights_cub and python main.py --dataset p3d --batch_size 16 --weights pretrained_weights_p3d). The program displayed a ValueError saying that training a model requires the pseudo-ground-truth to be set up beforehand. I don't know how to solve this problem, so I am turning to you for help. I'm sorry to bother you, but I'm really eager to solve it, and I hope to get your reply. Thank you!

    opened by lw0210 1
  • Added step: switch to the correct Kaolin branch

    This step will help others to avoid the "ModuleNotFoundError: No module named kaolin.graphics" error.

    Fix to issue: https://github.com/NikolaZubic/2dimageto3dmodel/issues/2

    opened by ricklentz 1
  • Shapenet V2 not training

    Great work, guys. I was able to run the code on the CUB dataset, but when I tried to run training_test_shape_net.py on the ShapeNet v2 chair class, I got errors because of missing files, mismatched file names, etc.

    It would be helpful if you could provide the ShapeNet dataset folder structure and a description of its files (images, masks), or a sample folder and clear instructions for training the model on the ShapeNet dataset. If possible, please also provide pre-trained weights for the ShapeNet models.

    Thank you

    opened by girishdhegde 0
  • Pretrained model

    Hi, I find it hard to understand how to train the model on ShapeNet. It would be very helpful if you could provide a pretrained model on ShapeNet planes (I need it to test performance in my project). If pretrained models are not available, it would also be helpful to explain how to train the model on ShapeNet.

    opened by YYYYYHC 0
  • How can I train on the boat set of the Pascal 3D+ dataset

    I find that the training command "python run_reconstruction.py --name pretrained_reconstruction_p3d --dataset p3d --optimize_z0 --batch_size 50 --tensorboard" uses the car.mat data in the sfm and data folders. Even if I rename the .mat file to boat.mat and use the boat ImageNet images from the Pascal 3D+ dataset, the resulting shape looks more like a car than a boat. So I am wondering how to train on the boat set.

    opened by lisentao 0
  • Custom Dataset

    Hi!

    Love the work you guys have done. I am currently conducting research. Could you please tell me how I would train on a custom dataset, and how I would run inference on an image or create a 3D model from an image with the pretrained weights you have provided?

    opened by mahnoor-fatima-saad 0
  • How do I make my own dataset?

    Dear NikolaZubic: I want to use my own dataset to replace the CUB or P3D dataset for training. Are there any considerations or requirements for the images when building a dataset?

    opened by lw0210 0