Code & Experiments for "LILA: Language-Informed Latent Actions" to be presented at the Conference on Robot Learning (CoRL) 2021.


LILA

LILA: Language-Informed Latent Actions

Code and Experiments for Language-Informed Latent Actions (LILA), for using natural language to guide assistive teleoperation.

This repository bundles code that can be deployed on a Franka Emika Panda Arm, including utilities for processing collected demonstrations (you can find our actual demo data in the data/ directory!), training various LILA and Imitation Learning models, and running live user studies.


Quickstart

Assumes lila is the current working directory! This repository also comes with out-of-the-box linting and strict pre-commit checking... should you wish to turn off this functionality, you can omit the pre-commit install lines below. If you do choose to use these features, you can run make autoformat to automatically clean code, and make check to identify any violations.

Repository Structure

High-level overview of repository file-tree:

  • conf - Quinine Configurations (.yaml) for various runs (used in lieu of argparse or typed-argument-parser)
  • environments - Serialized Conda Environments for running on CPU. Other architectures/CUDA toolkit environments can be added here as necessary.
  • robot/ - Core libfranka robot control code -- simple joint velocity control w/ Gripper control.
  • src/ - Source Code - all preprocessing utilities and Lightning Model definitions.
    • preprocessing/ - Preprocessing Code for creating Torch Datasets for Training LILA/Imitation Models.
    • models/ - Lightning Modules for LILA-FiLM and Imitation-FiLM Architectures.
  • train.py - Top-Level (main) entry point to repository, for training and evaluating models. Run this first, pointing it at the appropriate configuration in conf/! (A small config-inspection sketch follows this list.)
  • Makefile - Top-level Makefile (by default, supports conda serialization, and linting). Expand to your needs.
  • .flake8 - Flake8 Configuration File (Sane Defaults).
  • .pre-commit-config.yaml - Pre-Commit Configuration File (Sane Defaults).
  • pyproject.toml - Black and isort Configuration File (Sane Defaults).
  • README.md - You are here!
  • LICENSE - By default, research code is made available under the MIT License.
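
A minimal sketch for inspecting one of these configurations by hand -- note this is an illustrative stand-in using plain PyYAML (assumed to be available alongside the Lightning/transformers stack), not the Quinine parsing that train.py actually performs:

# Illustrative stand-in only -- train.py actually parses configs via Quinine.
# Assumes PyYAML is installed in the environment.
import argparse

import yaml

parser = argparse.ArgumentParser()
parser.add_argument("--config", default="conf/lila-config.yaml")
args = parser.parse_args()

with open(args.config, "r") as f:
    config = yaml.safe_load(f)

print(config)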

Local Development - CPU (macOS & Linux)

Note: Assumes that conda (Miniconda or Anaconda are both fine) is installed and on your path. Use the -cpu environment file.

conda env create -f environments/environment-cpu.yaml
conda activate lila
pre-commit install

GPU Development - Linux w/ CUDA 11.0

conda env create -f environments/environment-gpu.yaml  # Choose CUDA Kernel based on Hardware - defaults to 11.0!
conda activate lila
pre-commit install

Note: This codebase should work natively for all PyTorch > 1.7 and any CUDA version; if you run into trouble building this repository, please file an issue!
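
A quick sanity check of the resulting environment (standard PyTorch introspection calls, nothing repository-specific):

# Environment sanity check -- confirms PyTorch sees the expected CUDA setup.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version:   ", torch.version.cuda)
    print("Device:         ", torch.cuda.get_device_name(0))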


Training LILA or Imitation Models

To train models using the already collected demonstrations:

# LILA
python train.py --config conf/lila-config.yaml

# No-Language Latent Actions
python train.py --config conf/no-lang-config.yaml

# Imitation Learning (Behavioral Cloning w/ DART-style Augmentation)
python train.py --config conf/imitation-config.yaml

This will dump models to runs/{lila-final, no-lang-final, imitation-final}/. These paths are hard-coded in the respective teleoperation/execution files below; if you change these paths, be sure to update those files as well!
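
To poke at a trained checkpoint outside of the teleoperation scripts, the standard PyTorch Lightning loading pattern applies. Note that the class name and checkpoint filename below are placeholders, not the repository's actual identifiers -- substitute the Lightning module you trained from src/models/ and the file written to runs/:

# Offline checkpoint inspection -- `LILAModel` and the checkpoint path are
# placeholders; use the real Lightning module from src/models/ and the file
# that train.py wrote to runs/lila-final/.
import torch
from src.models import LILAModel  # hypothetical import path

model = LILAModel.load_from_checkpoint("runs/lila-final/last.ckpt")
model.eval()
with torch.no_grad():
    pass  # feed a (state, language instruction) pair through the decoder here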

Teleoperating with LILA or End-Effector Control

First, make sure to add the custom Velocity Controller written for the Franka Emika Panda Robot Arm (written using Libfranka) to ~/libfranka/examples on your robot control box. The controller can be found in robot/libfranka/lilaVelocityController.cpp.

Then, make sure to update the path of the model trained in the previous step (for LILA) in teleoperate.py. Finally, you can drop into controlling the robot with a LILA model (and Joystick - make sure it's plugged in!) with:

# LILA Control
python teleoperate.py

# For No-Language Control, just change the arch!
python teleoperate.py --arch no-lang

# Pure End-Effector Control is also implemented by Default
python teleoperate.py --arch endeff
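
For reference, joystick input on the Python side follows the usual pygame pattern; the axis indices below are assumptions about a generic controller, not necessarily the mapping teleoperate.py uses:

# Minimal pygame joystick read -- axis indices are controller-dependent
# assumptions, not necessarily the mapping used by teleoperate.py.
import pygame

pygame.init()
pygame.joystick.init()
joystick = pygame.joystick.Joystick(0)
joystick.init()

pygame.event.pump()
latent = [joystick.get_axis(0), joystick.get_axis(1)]  # 2-DoF latent input
print("Latent action input:", latent)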

Running Imitation Learning

Add the Velocity Controller as described above. Then, make sure to update the path to the trained model in imitate.py and run the following:

python imitate.py

Collecting Kinesthetic Demonstrations

Each lab (and corresponding robot) is built with a different stack and has different preferred ways of recording kinesthetic demonstrations. We provide a rudimentary script, record.py, that shows how we do this using sockets and the default libfranka readState.cpp built-in script. This script dumps demonstrations that can be used immediately to train latent action models.
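
As a rough sketch of the socket side -- the host, port, and message format here are assumptions for illustration only; record.py defines the real protocol:

# Hypothetical socket client for streaming joint states -- host, port, and
# wire format are assumptions; see record.py for the actual protocol.
import socket

HOST, PORT = "127.0.0.1", 8080  # assumed address of the state-streaming process

states = []
with socket.create_connection((HOST, PORT)) as conn:
    while len(states) < 1000:  # cap the recording at ~1000 state messages
        data = conn.recv(4096)
        if not data:
            break
        # Assumes one whitespace-separated joint-state vector per message.
        states.append([float(x) for x in data.decode().split()])

print(f"Recorded {len(states)} states")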

Start-Up from Scratch

In case the above conda environment loading does not work for you, here are the concrete package dependencies required to run LILA:

conda create --name lila python=3.8
conda activate lila
conda install pytorch torchvision torchaudio -c pytorch
conda install ipython jupyter
conda install pytorch-lightning -c conda-forge

pip install black flake8 isort matplotlib pre-commit pygame quinine transformers typed-argument-parser wandb
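
Once installed, a quick import smoke test (nothing repository-specific) confirms the core dependencies resolved correctly:

# Import smoke test for the core dependencies listed above.
import pygame
import pytorch_lightning
import torch
import transformers

print(torch.__version__, pytorch_lightning.__version__, transformers.__version__)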