Lipschitz-constrained Unsupervised Skill Discovery

This repository is the official implementation of Lipschitz-constrained Unsupervised Skill Discovery (LSD), published at ICLR 2022.

The implementation is based on the codebase of Unsupervised Skill Discovery with Bottleneck Option Learning (IBOL) and the garage reinforcement learning library.

Visit our project page for more results including videos.
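
For context, LSD trains skill-conditioned policies by rewarding movement of a 1-Lipschitz state representation along the skill direction; the --spectral_normalization 1 flag used in the example commands below corresponds to enforcing this Lipschitz constraint via spectral normalization. The snippet below is only an illustrative PyTorch sketch of that intrinsic reward, with placeholder network sizes and names, not this repository's actual code:

import torch
import torch.nn as nn

# Illustrative 1-Lipschitz state representation phi. Spectral normalization bounds
# each linear layer's Lipschitz constant by 1, and ReLU is 1-Lipschitz, so the
# composition is 1-Lipschitz. The sizes below are placeholders, not the repo's presets.
obs_dim, skill_dim = 32, 2
phi = nn.Sequential(
    nn.utils.spectral_norm(nn.Linear(obs_dim, 256)),
    nn.ReLU(),
    nn.utils.spectral_norm(nn.Linear(256, skill_dim)),
)

def lsd_intrinsic_reward(obs, next_obs, skill):
    # Per-transition reward: (phi(s') - phi(s)) . z for a continuous skill z.
    with torch.no_grad():
        return ((phi(next_obs) - phi(obs)) * skill).sum(dim=-1)

# Toy usage: a batch of 4 transitions with 2-D continuous skills.
reward = lsd_intrinsic_reward(torch.randn(4, obs_dim), torch.randn(4, obs_dim), torch.randn(4, skill_dim))
print(reward.shape)  # torch.Size([4])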

Requirements

Install requirements:

pip install -r requirements.txt
pip install -e .
pip install -e garaged

Examples

Ant with 2-D continuous skills:

python tests/main.py --run_group EXP --env ant --max_path_length 200 --dim_option 2 --common_lr 0.0001 --seed 0 --normalizer_type ant_preset --use_gpu 1 --traj_batch_size 20 --n_parallel 8 --n_epochs_per_eval 5000 --n_thread 1 --model_master_dim 1024 --record_metric_difference 0 --n_epochs_per_tb 100 --n_epochs_per_save 50000 --n_epochs_per_pt_save 5000 --n_epochs_per_pkl_update 1000 --eval_record_video 1 --n_epochs 200001 --spectral_normalization 1 --n_epochs_per_log 50 --discrete 0 --num_random_trajectories 200 --sac_discount 0.99 --alpha 0.01 --sac_lr_a -1 --lr_te 3e-05 --sac_scale_reward 0 --max_optimization_epochs 1 --trans_minibatch_size 2048 --trans_optimization_epochs 4 --eval_plot_axis -50 50 -50 50

Ant with 16 discrete skills:

python tests/main.py --run_group EXP --env ant --max_path_length 200 --dim_option 16 --common_lr 0.0001 --seed 0 --normalizer_type ant_preset --use_gpu 1 --traj_batch_size 20 --n_parallel 8 --n_epochs_per_eval 5000 --n_thread 1 --model_master_dim 1024 --record_metric_difference 0 --n_epochs_per_tb 100 --n_epochs_per_save 50000 --n_epochs_per_pt_save 5000 --n_epochs_per_pkl_update 1000 --eval_record_video 1 --n_epochs 200001 --spectral_normalization 1 --n_epochs_per_log 50 --discrete 1 --num_random_trajectories 200 --sac_discount 0.99 --alpha 0.003 --sac_lr_a -1 --lr_te 3e-05 --sac_scale_reward 0 --max_optimization_epochs 1 --trans_minibatch_size 2048 --trans_optimization_epochs 4 --eval_plot_axis -50 50 -50 50

Humanoid with 2-D continuous skills:

python tests/main.py --run_group EXP --env humanoid --max_path_length 1000 --dim_option 2 --common_lr 0.0003 --seed 0 --normalizer_type humanoid_preset --use_gpu 1 --traj_batch_size 5 --n_parallel 8 --n_epochs_per_eval 5000 --n_thread 1 --model_master_dim 1024 --record_metric_difference 0 --n_epochs_per_tb 100 --n_epochs_per_save 50000 --n_epochs_per_pt_save 5000 --n_epochs_per_pkl_update 1000 --eval_record_video 1 --n_epochs 200001 --spectral_normalization 1 --n_epochs_per_log 50 --discrete 0 --video_skip_frames 3 --num_random_trajectories 200 --sac_discount 0.99 --alpha 0.03 --sac_lr_a -1 --lr_te 0.0001 --lsd_alive_reward 0.03 --sac_scale_reward 0 --max_optimization_epochs 1 --trans_minibatch_size 2048 --trans_optimization_epochs 4 --sac_replay_buffer 1 --te_max_optimization_epochs 1 --te_trans_optimization_epochs 2

Humanoid with 16 discrete skills:

python tests/main.py --run_group EXP --env humanoid --max_path_length 1000 --dim_option 16 --common_lr 0.0003 --seed 0 --normalizer_type humanoid_preset --use_gpu 1 --traj_batch_size 5 --n_parallel 8 --n_epochs_per_eval 5000 --n_thread 1 --model_master_dim 1024 --record_metric_difference 0 --n_epochs_per_tb 100 --n_epochs_per_save 50000 --n_epochs_per_pt_save 5000 --n_epochs_per_pkl_update 1000 --eval_record_video 1 --n_epochs 200001 --spectral_normalization 1 --n_epochs_per_log 50 --discrete 1 --video_skip_frames 3 --num_random_trajectories 200 --sac_discount 0.99 --alpha 0.03 --sac_lr_a -1 --lr_te 0.0001 --lsd_alive_reward 0.03 --sac_scale_reward 0 --max_optimization_epochs 1 --trans_minibatch_size 2048 --trans_optimization_epochs 4 --sac_replay_buffer 1 --te_max_optimization_epochs 1 --te_trans_optimization_epochs 2

HalfCheetah with 8 discrete skills:

python tests/main.py --run_group EXP --env half_cheetah --max_path_length 200 --dim_option 8 --common_lr 0.0001 --seed 0 --normalizer_type half_cheetah_preset --use_gpu 1 --traj_batch_size 20 --n_parallel 8 --n_epochs_per_eval 5000 --n_thread 1 --model_master_dim 1024 --record_metric_difference 0 --n_epochs_per_tb 100 --n_epochs_per_save 50000 --n_epochs_per_pt_save 5000 --n_epochs_per_pkl_update 1000 --eval_record_video 1 --n_epochs 200001 --spectral_normalization 1 --n_epochs_per_log 50 --discrete 1 --num_random_trajectories 200 --sac_discount 0.99 --alpha 0.01 --sac_lr_a -1 --lr_te 3e-05 --sac_scale_reward 0 --max_optimization_epochs 1 --trans_minibatch_size 2048 --trans_optimization_epochs 4
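
In all of the commands above, --dim_option sets the dimensionality (or number) of skills and --discrete switches between continuous and discrete skill spaces. As a rough illustration of the difference (the repository's exact skill distribution may differ), continuous skills can be pictured as real-valued vectors and discrete skills as one-hot vectors:

import torch
import torch.nn.functional as F

def sample_skills(batch_size, dim_option, discrete):
    # Illustrative only; the repository's actual sampling scheme may differ.
    if discrete:
        # One category per trajectory, uniform over dim_option choices, encoded one-hot.
        idx = torch.randint(dim_option, (batch_size,))
        return F.one_hot(idx, dim_option).float()
    # Continuous skills: a dim_option-dimensional real-valued vector per trajectory.
    return torch.randn(batch_size, dim_option)

print(sample_skills(3, 2, discrete=False))   # cf. --dim_option 2 --discrete 0
print(sample_skills(3, 16, discrete=True))   # cf. --dim_option 16 --discrete 1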