Code & Experiments for "LILA: Language-Informed Latent Actions" to be presented at the Conference on Robot Learning (CoRL) 2021.

LILA

LILA: Language-Informed Latent Actions

Code and Experiments for Language-Informed Latent Actions (LILA), for using natural language to guide assistive teleoperation.

This repository bundles code that can be deployed on a Franka Emika Panda Arm, including utilities for processing collected demonstrations (you can find our actual demo data in the data/ directory!), training various LILA and Imitation Learning models, and running live studies.


Quickstart

Assumes lila is the current working directory! This repository also comes with out-of-the-box linting and strict pre-commit checking; should you wish to turn off this functionality, omit the pre-commit install lines below. If you do choose to use these features, run make autoformat to automatically clean code and make check to identify any violations.
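For example, once an environment is set up (see below), the typical linting loop looks like this; the targets are the ones named above, wired to the Black/isort/Flake8 configurations in this repository:

make autoformat   # reformat code in place (Black + isort, per pyproject.toml)
make check        # report remaining violations (Flake8 / pre-commit hooks)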

Repository Structure

High-level overview of repository file-tree:

  • conf/ - Quinine Configurations (.yaml) for various runs (used in lieu of argparse or typed-argument-parser)
  • environments/ - Serialized Conda Environments for running on CPU and GPU (CUDA 11.0). Other architecture/CUDA-toolkit environments can be added here as necessary.
  • robot/ - Core libfranka robot control code -- simple joint-velocity control w/ gripper control.
  • src/ - Source Code -- all preprocessing utilities and Lightning Module definitions.
    • preprocessing/ - Preprocessing Code for creating Torch Datasets for Training LILA/Imitation Models.
    • models/ - Lightning Modules for LILA-FiLM and Imitation-FiLM Architectures.
  • train.py - Top-Level (main) entry point to the repository, for training and evaluating models. Run this first, pointing it at the appropriate configuration in conf/!
  • Makefile - Top-level Makefile (by default, supports conda serialization, and linting). Expand to your needs.
  • .flake8 - Flake8 Configuration File (Sane Defaults).
  • .pre-commit-config.yaml - Pre-Commit Configuration File (Sane Defaults).
  • pyproject.toml - Black and isort Configuration File (Sane Defaults).
  • README.md - You are here!
  • LICENSE - By default, research code is made available under the MIT License.

Local Development - CPU (Mac OS & Linux)

Note: Assumes that conda (Miniconda or Anaconda are both fine) is installed and on your path. Use the -cpu environment file.

conda env create -f environments/environment-cpu.yaml
conda activate lila
pre-commit install

GPU Development - Linux w/ CUDA 11.0

conda env create -f environments/environment-gpu.yaml  # Choose CUDA Kernel based on Hardware - defaults to 11.0!
conda activate lila
pre-commit install

Note: This codebase should work natively for all PyTorch > 1.7 and any CUDA version; if you run into trouble building this repository, please file an issue!


Training LILA or Imitation Models

To train models using the already-collected demonstrations, run one of the following:

# LILA
python train.py --config conf/lila-config.yaml

# No-Language Latent Actions
python train.py --config conf/no-lang-config.yaml

# Imitation Learning (Behavioral Cloning w/ DART-style Augmentation)
python train.py --config conf/imitation-config.yaml

This will dump models to runs/{lila-final, no-lang-final, imitation-final}/. These paths are hard-coded in the respective teleoperation/execution files below; if you change these paths, be sure to change the below files as well!

Teleoperating with LILA or End-Effector Control

First, add the custom Velocity Controller for the Franka Emika Panda Robot Arm (written using libfranka) to ~/libfranka/examples on your robot control box. The controller can be found in robot/libfranka/lilaVelocityController.cpp.
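The exact build steps depend on how libfranka is set up on your control box; the following is only a sketch under the standard libfranka examples layout (the CMake lines are illustrative, not copied from this repository):

cp robot/libfranka/lilaVelocityController.cpp ~/libfranka/examples/

# Register the controller as an example target in ~/libfranka/examples/CMakeLists.txt, e.g.:
#   add_executable(lilaVelocityController lilaVelocityController.cpp)
#   target_link_libraries(lilaVelocityController Franka::Franka examples_common)

# Rebuild the examples.
cd ~/libfranka/build && cmake .. && make lilaVelocityController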

Then, make sure to update the path of the model trained in the previous step (for LILA) in teleoperate.py. Finally, you can drop into controlling the robot with a LILA model (and Joystick - make sure it's plugged in!) with:

# LILA Control
python teleoperate.py

# For No-Language Control, just change the arch!
python teleoperate.py --arch no-lang

# Pure End-Effector Control is also implemented by Default
python teleoperate.py --arch endeff
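For reference, the only thing to update in teleoperate.py is the checkpoint path produced by train.py. A hypothetical illustration (the variable and class names here are assumptions; check teleoperate.py for the actual ones):

# Hypothetical sketch -- names are illustrative, not taken from teleoperate.py.
CHECKPOINT = "runs/lila-final/<your-checkpoint>.ckpt"   # checkpoint dumped by train.py

# Lightning modules expose `load_from_checkpoint`, so loading looks roughly like:
#   model = LILAFiLM.load_from_checkpoint(CHECKPOINT)   # `LILAFiLM` is an assumed class name
#   model.eval()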

Running Imitation Learning

Add the Velocity Controller as described above. Then, make sure to update the path to the trained model in imitate.py and run the following:

python imitate.py

Collecting Kinesthetic Demonstrations

Each lab (and corresponding robot) has a different stack and a different preferred way of recording kinesthetic demonstrations. We provide a rudimentary script, record.py, that shows how we do this using sockets and the default libfranka readState.cpp built-in example; it dumps demonstrations that can be used immediately to train latent-action models.
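The host, port, message format, and output path are all defined in record.py and readState.cpp; the sketch below only illustrates the general socket pattern and makes assumptions about all of them:

# Minimal sketch of socket-based kinesthetic recording. Assumed: the robot side streams
# newline-delimited, whitespace-separated joint states; check record.py for the real format.
import pickle
import socket

HOST, PORT = "0.0.0.0", 8080     # assumed; not necessarily what record.py uses
OUT_PATH = "demo-000.pkl"        # assumed output path

states = []
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind((HOST, PORT))
    server.listen(1)
    conn, _ = server.accept()                 # wait for the robot-side client
    with conn, conn.makefile("r") as stream:
        for line in stream:                   # one serialized state per line (assumed)
            states.append([float(x) for x in line.split()])

with open(OUT_PATH, "wb") as f:
    pickle.dump(states, f)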

Start-Up from Scratch

In case the above conda environment loading does not work for you, here are the concrete package dependencies required to run LILA:

conda create --name lila python=3.8
conda activate lila
conda install pytorch torchvision torchaudio -c pytorch
conda install ipython jupyter
conda install pytorch-lightning -c conda-forge

pip install black flake8 isort matplotlib pre-commit pygame quinine transformers typed-argument-parser wandb
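A quick sanity check that the core dependencies resolved correctly (this only prints versions and CUDA availability; nothing here is LILA-specific):

# Verify the training dependencies installed above.
import pytorch_lightning as pl
import torch
import transformers

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("PyTorch Lightning:", pl.__version__)
print("Transformers:", transformers.__version__)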