RGB-stacking 🛑 🟩 🔷 for robotic manipulation

BLOG | PAPER | VIDEO

Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes,
Alex X. Lee*, Coline Devin*, Yuxiang Zhou*, Thomas Lampe*, Konstantinos Bousmalis*, Jost Tobias Springenberg*, Arunkumar Byravan, Abbas Abdolmaleki, Nimrod Gileadi, David Khosid, Claudio Fantacci, Jose Enrique Chen, Akhil Raju, Rae Jeong, Michael Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, Martin Riedmiller, Raia Hadsell, Francesco Nori.
In Conference on Robot Learning (CoRL), 2021.

The RGB environment

This repository contains an implementation of the simulation environment described in the paper "Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes". Note that this is a re-implementation of the environment, done to remove dependencies on internal libraries. As a result, not all of the features described in the paper are available at this point; notably, domain randomization is not included in this release. We also aim to provide reference performance metrics of trained policies on this environment in the near future.

In this environment, the agent controls a robot arm with a parallel gripper above a basket containing three objects: one red, one green, and one blue (hence the name RGB). The agent's task is to stack the red object on top of the blue object within 20 seconds, while the green object serves as an obstacle and distractor. The agent controls the robot with a 4D Cartesian controller; the controlled DOFs are x, y, z, and rotation around the z-axis. The simulation is a MuJoCo environment built using the Modular Manipulation (MoMa) framework.
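For illustration, a minimal interaction loop might look like the following. This is a sketch assuming the environment follows the standard dm_env interface (as MoMa-based environments typically do); the exact action dimensions and bounds should be read from the environment's action spec rather than hard-coded:

import numpy as np

from rgb_stacking import environment

# Build the default environment (see the installation and configuration
# sections below for details).
env = environment.rgb_stacking()

action_spec = env.action_spec()
timestep = env.reset()
while not timestep.last():
    # Sample a random action within the controller's bounds: the 4D
    # Cartesian command (x, y, z, rotation around z) plus, presumably,
    # the gripper actuation.
    action = np.random.uniform(
        action_spec.minimum, action_spec.maximum, size=action_spec.shape)
    timestep = env.step(action)
env.close()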

Corresponding method

The RGB-stacking paper "Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes" also contains a description and thorough evaluation of our initial solution to both the 'Skill Mastery' challenge (training on the 5 designated test triplets and evaluating on them) and the 'Skill Generalization' challenge (training on triplets of training objects and evaluating on the 5 test triplets). Our approach was to first train a state-based policy in simulation with a standard RL algorithm (we used MPO), then interactively distill the state-based policy into a vision-based policy (using a domain-randomized version of the environment), which we deployed to the robot via zero-shot sim-to-real transfer. We finally improved the policy further via offline RL (we used CRR) on data collected with the sim-to-real policy. For details on our method and results, please consult the paper.
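The observation sets exposed by this release (see "Specifying the observation space" below) mirror these three stages. As a sketch, one might construct an environment for each stage as follows (keeping in mind that the domain randomization used in stage 2 of the paper is not included in this release):

from rgb_stacking import environment

# Stage 1: state-based RL in simulation (the paper used MPO).
state_env = environment.rgb_stacking(
    observations=environment.ObservationSet.STATE_ONLY)

# Stage 2: interactive distillation of the state-based policy into a
# vision-based policy.
distill_env = environment.rgb_stacking(
    observations=environment.ObservationSet.INTERACTIVE_IMITATION_LEARNING)

# Stage 3: one-step offline policy improvement (the paper used CRR) on
# data collected with the sim-to-real policy.
offline_env = environment.rgb_stacking(
    observations=environment.ObservationSet.OFFLINE_POLICY_IMPROVEMENT)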

Installing and visualizing the environment

Please ensure that you have a working MuJoCo200 installation and a valid MuJoCo licence.

  1. Clone this repository:

    git clone https://github.com/deepmind/rgb_stacking.git
    cd rgb_stacking
  2. Prepare a Python 3 environment - venv is recommended.

    python3 -m venv rgb_stacking_venv
    source rgb_stacking_venv/bin/activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Run the environment viewer:

    python -m rgb_stacking.main

Steps 2-4 can also be performed by running the run.sh script:

./run.sh

Specifying the object triplet

The default environment will load with Triplet 4 (see Sect. 3.2.1 in the paper). If you wish to use a different triplet, you can use the following code:

from rgb_stacking import environment

env = environment.rgb_stacking(object_triplet=NAME_OF_SET)

The possible values of NAME_OF_SET are:

  • rgb_test_triplet{i} where i is one of 1, 2, 3, 4, 5: Loads test triplet i.
  • rgb_test_random: Randomly loads one of the 5 test triplets.
  • rgb_train_random: Loads a triplet composed of blocks from the training set.
  • rgb_heldout_random: Loads a triplet composed of blocks from the held-out set.

For more information on the blocks and the possible options, please refer to the rgb_objects repository.
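As an example, here is a sketch of an evaluation sweep over the five designated test triplets (assuming a dm_env-style reset/step loop as above):

from rgb_stacking import environment

# Evaluate on each of the five designated test triplets in turn.
for i in range(1, 6):
    env = environment.rgb_stacking(object_triplet=f'rgb_test_triplet{i}')
    try:
        timestep = env.reset()
        # ... run your policy for one or more episodes here ...
    finally:
        env.close()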

Specifying the observation space

By default, the observations exposed by the environment are only the ones we used for training our state-based agents. To use a different set of observations, use the following code snippet:

from rgb_stacking import environment

env = environment.rgb_stacking(
    observations=environment.ObservationSet.CHOSEN_SET)

The possible values of CHOSEN_SET are:

  • STATE_ONLY: Only the state observations, used for training expert policies from state in simulation (stage 1).
  • VISION_ONLY: Only image observations.
  • ALL: All observations.
  • INTERACTIVE_IMITATION_LEARNING: Pair of image observations and a subset of proprioception observations, used for interactive imitation learning (stage 2).
  • OFFLINE_POLICY_IMPROVEMENT: Pair of image observations and a subset of proprioception observations, used for the one-step offline policy improvement (stage 3).
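
To see exactly which observations a given set exposes, you can inspect the observation spec. This is a sketch assuming the standard dm_env observation_spec() mapping from observation names to array specs; the exact key names are specific to this release:

from rgb_stacking import environment

env = environment.rgb_stacking(
    observations=environment.ObservationSet.VISION_ONLY)

# Print the name, shape, and dtype of every exposed observation.
for name, spec in env.observation_spec().items():
    print(name, spec.shape, spec.dtype)
env.close()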

Real RGB-Stacking Environment: CAD models and assembly instructions

The CAD model of the setup is available on Onshape.

We also provide the following documents for the assembly of the real cell:

  • Assembly instructions for the basket.
  • Assembly instructions for the robot.
  • Assembly instructions for the cell.
  • The bill of materials of all the necessary parts.
  • A diagram with the wiring of the cell.

The RGB-objects themselves can be 3D-printed using the STLs available in the rgb_objects repository.

Citing

If you use rgb_stacking in your work, please cite the accompanying paper:

@inproceedings{lee2021rgbstacking,
    title={Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes},
    author={Alex X. Lee and
            Coline Devin and
            Yuxiang Zhou and
            Thomas Lampe and
            Konstantinos Bousmalis and
            Jost Tobias Springenberg and
            Arunkumar Byravan and
            Abbas Abdolmaleki and
            Nimrod Gileadi and
            David Khosid and
            Claudio Fantacci and
            Jose Enrique Chen and
            Akhil Raju and
            Rae Jeong and
            Michael Neunert and
            Antoine Laurens and
            Stefano Saliceti and
            Federico Casarini and
            Martin Riedmiller and
            Raia Hadsell and
            Francesco Nori},
    booktitle={Conference on Robot Learning (CoRL)},
    year={2021},
    url={https://openreview.net/forum?id=U0Q8CrtBJxJ}
}