Dynamic Environments with Deformable Objects (DEDO)

Overview

DEDO is a lightweight and customizable suite of environments with deformable objects, aimed at researchers in the machine learning, reinforcement learning, robotics, and computer vision communities. The suite provides a set of everyday tasks that involve deformables, such as hanging cloth, dressing a person, and buttoning buttons. We provide examples for integrating two popular reinforcement learning libraries: Stable Baselines3 and RLlib. We also provide reference implementations for training various Variational Autoencoder (VAE) variants with our environment. DEDO is easy to set up, has few dependencies, is highly parallelizable, and supports a wide range of customizations: loading custom objects and textures, adjusting material properties, and more.

Note: updates for this repo are in progress (until the presentation at NeurIPS 2021 in mid-December).

@inproceedings{dedo2021,
  title={Dynamic Environments with Deformable Objects},
  author={Rika Antonova and Peiyang Shi and Hang Yin and Zehang Weng and Danica Kragic},
  booktitle={Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
  year={2021},
}

Table of Contents:
Installation
Getting Started
Tasks
Use with RL
Use with VAE
Customization

Please refer to the Wiki for the full documentation.

Installation

Optional initial step: create a new conda environment with conda create --name dedo python=3.7 and activate it with conda activate dedo. Conda is not strictly needed; alternatives like virtualenv can be used, and a direct install without a virtual environment works as well.

git clone https://github.com/contactrika/dedo
cd dedo
pip install numpy  # important: necessary for compiling NumPy-enabled PyBullet
pip install -e .
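
To verify that the installed PyBullet was indeed compiled with NumPy support, PyBullet exposes an isNumpyEnabled() helper (it returns 1 when NumPy was enabled at build time):

python -c "import pybullet; print(pybullet.isNumpyEnabled())"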

Python 3.7 is recommended: on some OS + CPU combinations, the pip-installed PyBullet for Python 3.8 could not be compiled with NumPy enabled. To enable recording/logging videos, install ffmpeg:

sudo apt-get install ffmpeg

See more in the Installation Guide in the wiki.

Getting started

To get started, run one of the following commands to visualize a task via a hard-coded policy.

python -m dedo.demo --env=HangGarment-v1 --viz --debug
  • dedo.demo is the demo module
  • --env=HangGarment-v1 specifies the environment
  • --viz enables the GUI
  • --debug outputs additional information in the console
  • --cam_resolution 400 specifies the size of the output window

See more in Usage-guide
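
Beyond the demo scripts, the environments can be used programmatically. Below is a minimal sketch of a random-policy rollout (assuming, as the demo modules do, that importing dedo registers the tasks with Gym):

import gym
import dedo  # importing dedo registers the DEDO tasks with Gym

env = gym.make('HangGarment-v1')
obs = env.reset()
for _ in range(100):
    act = env.action_space.sample()  # stand-in for a trained policy
    obs, reward, done, info = env.step(act)
    if done:
        obs = env.reset()
env.close()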

Tasks

See more in Task Overview

We provide a set of 10 tasks involving deformable objects; most tasks contain 5 hand-made deformable objects. There are also two procedurally generated tasks, ButtonProc and HangProcCloth, in which the deformable objects are procedurally generated. Furthermore, to improve generalization, the v0 variant of each task randomizes textures and meshes.

All tasks have -v1 and -v2 versions with a particular choice of meshes and textures that is not randomized. Most tasks have versions up to -v5 with additional mesh and texture variations.

Tasks with procedurally generated cloth (ButtonProc and HangProcCloth) generate random cloth objects for all versions (but randomize textures only in v0).

HangBag

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangBag-v1 --viz

HangBag-v0: selects one of 108 bag meshes; randomized textures

HangBag-v[1-3]: three bag versions with textures shown below:

images/imgs/hang_bags_annotated.jpg

HangGarment

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangGarment-v1 --viz

HangGarment-v0: hang garment with randomized textures (a few examples below):

HangGarment-v[1-5]: 5 apron meshes and texture combos shown below:

images/imgs/hang_garments_5.jpg

HangGarment-v[6-10]: 5 shirt meshes and texture combos shown below:

images/imgs/hang_shirts_5.jpg

HangProcCloth

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangProcCloth-v1 --viz

HangProcCloth-v0: random textures, procedurally generated cloth with 1 or 2 holes.

HangProcCloth-v[1-2]: same, but with either 1 or 2 holes

images/imgs/hang_proc_cloth.jpg

Buttoning

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Button-v1 --viz

ButtonProc-v0: randomized textures and procedurally generated cloth with 2 holes, randomized hole/button positions.

ButtonProc-v[1-2]: procedurally generated cloth with 1 or 2 holes.

images/imgs/button_proc.jpg

Button-v0: randomized textures, but fixed cloth and button positions.

Button-v1: fixed cloth and button positions with one texture (see image below):

images/imgs/button.jpg

Hoop

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Hoop-v1 --viz

Hoop-v0: randomized textures

Hoop-v1: pre-selected textures

images/imgs/hoop_and_lasso.jpg

Lasso

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Lasso-v1 --viz

Lasso-v0: randomized textures

Lasso-v1: pre-selected textures

DressBag

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=DressBag-v1 --viz

DressBag-v0, DressBag-v[1-5]: demo for -v1 shown below

images/imgs/dress_bag.jpg

Visualizations of the 5 backpack mesh and texture variants for DressBag-v[1-5]:

images/imgs/backpack_meshes.jpg

DressGarment

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=DressGarment-v1 --viz

DressGarment-v0, DressGarment-v[1-5]: demo for -v1 shown below

images/imgs/dress_garment.jpg

Mask

python -m dedo.demo_preset --env=Mask-v1 --viz

Mask-v0, Mask-v[1-5]: a few texture variants shown below:

images/imgs/dress_garment.jpg

RL Examples

dedo/run_rl_sb3.py gives an example of how to train an RL algorithm from Stable Baselines3:

python -m dedo.run_rl_sb3 --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug
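
For orientation, the core of such an integration is small. Below is a minimal sketch written directly against the Stable Baselines3 API (an illustrative standalone script, not the contents of run_rl_sb3.py):

import gym
import dedo  # registers DEDO tasks with Gym
from stable_baselines3 import PPO

env = gym.make('HangGarment-v0')
model = PPO('MlpPolicy', env, verbose=1, tensorboard_log='/tmp/dedo')
model.learn(total_timesteps=100_000)
model.save('/tmp/dedo/ppo_hang_garment')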

dedo/run_rllib.py gives an example of how to train an RL algorithm using RLlib:

python -m dedo.run_rllib --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug
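
A minimal RLlib sketch might look as follows (illustrative only and version-sensitive: it assumes a Ray 1.x-era tune API, not the contents of run_rllib.py):

import gym
import dedo  # registers DEDO tasks with Gym
from ray import tune
from ray.tune.registry import register_env

# RLlib needs an env creator; reuse the Gym-registered DEDO task.
register_env('dedo_hang_garment', lambda cfg: gym.make('HangGarment-v0'))
tune.run(
    'PPO',
    config={'env': 'dedo_hang_garment', 'framework': 'torch', 'num_workers': 2},
    stop={'timesteps_total': 100_000},
    local_dir='/tmp/dedo',
)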

For documentation, please refer to the Arguments Reference page in the wiki.

To launch the Tensorboard:

tensorboard --logdir=/tmp/dedo --bind_all --port 6006 \
  --samples_per_plugin images=1000

SVAE Examples

dedo/run_svae.py gives an example of how to train various flavors of VAE:

python -m dedo.run_svae --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug
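
For orientation, the VAE flavors optimize variants of the standard evidence lower bound; below is a generic sketch of that objective in PyTorch (illustrative only, not DEDO's exact implementation):

import torch
import torch.nn.functional as F

def vae_loss(recon, target, mu, logvar, beta=1.0):
    # Reconstruction term plus beta-weighted KL divergence between
    # the approximate posterior N(mu, sigma^2) and the prior N(0, I).
    recon_loss = F.mse_loss(recon, target, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kld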

To launch the Tensorboard:

tensorboard --logdir=/tmp/dedo --bind_all --port 6006 \
  --samples_per_plugin images=1000

Customization

To load a custom object, first add an entry to DEFORM_INFO in task_info.py. The key should be the .obj file path relative to data/:

DEFORM_INFO = {
    ...
    # An example of info for a custom item.
    'bags/custom.obj': {
        'deform_init_pos': [0, 0.47, 0.47],
        'deform_init_ori': [np.pi/2, 0, 0],
        'deform_scale': 0.1,
        'deform_elastic_stiffness': 1.0,
        'deform_bending_stiffness': 1.0,
        'deform_true_loop_vertices': [
            [0, 1, 2, 3]  # placeholder, since we don't know the true loops
        ],
    },
    ...
}

Then you can use the --override_deform_obj flag:

python -m dedo.demo --env=HangBag-v0 --cam_resolution 200 --viz --debug \
    --override_deform_obj bags/custom.obj

For items not in DEFORM_INFO you will need to specify sensible defaults, for example:

python -m dedo.demo --env=HangGarment-v0 --viz --debug \
  --override_deform_obj=generated_cloth/generated_cloth.obj \
  --deform_init_pos 0.02 0.41 0.63 --deform_init_ori 0 0 1.5708

Example of scaling up a custom mesh object:

python -m dedo.demo --env=HangGarment-v0 --viz --debug \
   --override_deform_obj=generated_cloth/generated_cloth.obj \
   --deform_init_pos 0.02 0.41 0.55 --deform_init_ori 0 0 1.5708 \
   --deform_scale 2.0 --anchor_init_pos -0.10 0.40 0.70 \
   --other_anchor_init_pos 0.10 0.40 0.70

See more in Customization Wiki

Additional Assets

The BGarment dataset is adapted from the Berkeley Garment Library.

The Sewing dataset is adapted from Generating Datasets of 3D Garments with Sewing Patterns.

Comments
  • Adding Point Cloud Observations to DEDO


    This PR adds point cloud (pcd) rendering to DEDO. Summary of changes:

    • Point cloud data extracted from sim environment based on a set of object ids that we want to retain
    • Depth cameras are instantiated using a cameraConfig class, which abstracts out the various camera configurations needed.
    • The cameraConfig class loads camera configs from JSON (for easy loading & sharing of camera configs), or directly by instantiation (if you know how you want to dynamically set your camera).
    • Some sample JSON camera configs are provided (4 total)
    • Unprojecting from the depth image to the point cloud is vectorized, so rendering point cloud observations adds negligible runtime to the overall pipeline (should benchmark this?); a generic sketch of such an unprojection appears below.
    • The original deform_env had to be adjusted so that the deformable object would have ID 0. For some reason, pybullet only renders the deformable if this is true.

    Known issues:

    • The floor has disappeared from the visual.
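
    For reference, the vectorized unprojection mentioned above can be sketched as follows (a generic pinhole-camera routine in NumPy, not the PR's actual code; fx, fy, cx, cy are assumed camera intrinsics):

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        # Vectorized pinhole unprojection: lift every pixel of an
        # (H, W) depth image to a 3D point at once, no per-pixel loop.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]  # drop pixels with no valid depth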
    opened by edwin-pan 3
  • Enables base motion on fetch robot with 1 anchor


    Changes allow the fetch robot to move towards the hanger with an apron.

    Google Doc that explains the changes: https://docs.google.com/document/d/18_9K29K4N6atvtqUxIqKhq6Bt0YSPhQgfWldUWdyvLM/edit?usp=sharing

    There are some TODOs related to removing some hardcoded values and improving the results.

    opened by Nishantjannu 0
Releases(v0.1)
  • v0.1 (Jan 11, 2022)

    This is the initial release of the code and functionality presented at the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks in December 2021.
