Learning High-Speed Flight in the Wild

Overview

This repo contains the code associated with the paper Learning High-Speed Flight in the Wild. For more information, please check the project webpage.

Paper, Video, and Datasets

If you use this code in an academic context, please cite the following publication:

Paper: Learning High-Speed Flight in the Wild

Video (Narrated): YouTube

Datasets: Zenodo

Science Paper: DOI

@inproceedings{Loquercio2021Science,
  title={Learning High-Speed Flight in the Wild},
  author={Loquercio, Antonio and Kaufmann, Elia and Ranftl, Ren{\'e} and M{\"u}ller, Matthias and Koltun, Vladlen and Scaramuzza, Davide},
  booktitle={Science Robotics},
  year={2021},
  month={October},
}

Installation

Requirements

The code was tested with Ubuntu 20.04, ROS Noetic, Anaconda v4.8.3, and gcc/g++ 7.5.0. Other OS and ROS versions may work but are not supported.

Before you start, make sure that your compiler versions match gcc/g++ 7.5.0. To do so, use the following commands:

sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 100
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 100
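
You can verify the active compiler versions afterwards:

gcc --version   # should report 7.5.0
g++ --version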

Step-by-Step Procedure

Use the following commands to create a new catkin workspace and a virtual environment with all the required dependencies.

export ROS_VERSION=noetic
mkdir agile_autonomy_ws
cd agile_autonomy_ws
export CATKIN_WS=./catkin_aa
mkdir -p $CATKIN_WS/src
cd $CATKIN_WS
catkin init
catkin config --extend /opt/ros/$ROS_VERSION
catkin config --merge-devel
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=-fdiagnostics-color
cd src

git clone [email protected]:uzh-rpg/agile_autonomy.git
vcs-import < agile_autonomy/dependencies.yaml
cd rpg_mpl_ros
git submodule update --init --recursive

# Install extra dependencies (you might need more depending on your OS)
sudo apt-get install libqglviewer-dev-qt5

# Install external libraries for rpg_flightmare
sudo apt install -y libzmqpp-dev libeigen3-dev libglfw3-dev libglm-dev

# Install dependencies for rpg_flightmare renderer
sudo apt install -y libvulkan1 vulkan-utils gdb

# Add environment variables (Careful! Modify path according to your local setup)
echo 'export RPGQ_PARAM_DIR=/home/<path/to>/catkin_aa/src/rpg_flightmare' >> ~/.bashrc

Now open a new terminal and type the following commands.

# Build and re-source the workspace
catkin build
. ../devel/setup.bash

# Create your learning environment
roscd planner_learning
conda create --name tf_24 python=3.7
conda activate tf_24
conda install tensorflow-gpu
pip install rospkg==1.2.3 pyquaternion open3d opencv-python
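
To sanity-check the environment, you can verify that TensorFlow imports correctly and detects your GPU (the list is empty on CPU-only machines):

python -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))"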

Now download the flightmare standalone available at this link, extract it, and put it in the flightrender folder.
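
For example, assuming the download is an archive named standalone.tar and the flightrender folder sits inside the rpg_flightmare package (both names are assumptions; adjust to your actual download and layout):

tar -xvf standalone.tar -C catkin_aa/src/rpg_flightmare/flightrender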

Let's Fly!

Once you have installed the dependencies, you will be able to fly in simulation with our pre-trained checkpoint. You don't necessarily need a GPU for execution, but note that if the network cannot run at at least 15 Hz, you won't be able to fly successfully.
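
If you want to verify the inference rate once everything is running, rostopic hz can help (the topic name below is purely hypothetical; use rostopic list to find the network's actual output topic):

rostopic list                    # find the network's output topic
rostopic hz /your/network/topic  # hypothetical name; substitute the real one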

Launch the simulation! Open a terminal and type:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
roslaunch agile_autonomy simulation.launch

Run the network in another terminal:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
conda activate tf_24
python test_trajectories.py --settings_file=config/test_settings.yaml

Change execution speed or environment

You can change the average speed at which the policy flies, as well as the environment type, by editing the following files.

Environment Change:

rosed agile_autonomy flightmare.yaml

Set either spawn_trees or spawn_objects to true. Enabling both at the same time is possible but would make the environment too dense for navigation. Also adapt the spacings parameter in test_settings.yaml to the environment.
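
For example, to fly among trees, the relevant entries would look like this (a sketch; the keys come from the text above, but the surrounding file layout may differ):

spawn_trees: true
spawn_objects: false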

Speed Change:

rosed agile_autonomy default.yaml

Set test_time_velocity and maneuver_velocity to the desired speed. Note that the checkpoint we provide works for all speeds in the range [1, 10] m/s. However, to reach the best performance at a specific speed, consider finetuning the checkpoint at that speed (see the training section below).
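
For example, to fly at an average speed of 7 m/s (a sketch; keys from the text above):

test_time_velocity: 7.0
maneuver_velocity: 7.0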

Train your own navigation policy

There are two ways to train your own policy: one easy and one more involved. The trained checkpoint can then be used to control a physical platform (if you have one!).

Use pre-collected dataset

The first method, requiring the least effort, is to use a dataset that we pre-collected. The dataset can be found at this link. It was used to train the model we provide and was collected at an average speed of 7 m/s. To use it, adapt train_settings.yaml to point to the train and test folders (a sketch of the relevant entries follows the commands below) and run:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
conda activate tf_24
python train.py --settings_file=config/train_settings.yaml
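
The entries to adapt in train_settings.yaml would look roughly like this (the key names here are assumptions; check the file for the exact ones):

train_dir: /path/to/dataset/train
val_dir: /path/to/dataset/test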

Feel free to ablate the impact of each parameter!

Collect your own dataset

You can use the following commands to generate data in simulation and train your model on it. Note that training a policy from scratch can require a lot of data, and depending on the speed of your machine this could take several days. We therefore recommend finetuning the provided checkpoint for your use case. As a general rule of thumb, you need a dataset of comparable size to ours to train a policy from scratch, but only about a tenth of that to finetune.

Generate data

To train or finetune a policy, use the following commands. Launch the simulation in one terminal:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
roslaunch agile_autonomy simulation.launch

Launch data collection (with DAgger) in another terminal:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
conda activate tf_24
python dagger_training.py --settings_file=config/dagger_settings.yaml

You can change parameters (number of rollouts, DAgger constants, tracking a global trajectory, etc.) in the file dagger_settings.yaml. Keep in mind that if you change the network or its input, you will also need to adapt test_settings.yaml for compatibility.

When training from scratch, the expert follows a pre-computed global trajectory to produce consistent labels. To activate this, set the perform_global_planning flag to true in both default.yaml and label_generation.yaml (see the sketch below). Note that this makes the simulation slower, since a global plan has to be computed at each iteration. The network does not have access to this global plan, but only to the straight (possibly in-collision) reference.
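
For example (rosed usage follows the earlier sections; the package containing label_generation.yaml is assumed to be agile_autonomy, so adjust if it lives elsewhere):

rosed agile_autonomy default.yaml           # set perform_global_planning: true
rosed agile_autonomy label_generation.yaml  # set perform_global_planning: true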

Visualize the Data

You can visualize the generated trajectories in open3d using the visualize_trajectories.py script.

python visualize_trajectories.py --data_dir /PATH/TO/rollout_21-09-21-xxxx --start_idx 0 --time_steps 100 --pc_cutoff_z 2.0 --max_traj_to_plot 100

The result should look more or less like the following:

[Labels figure]

Test the Network

To test the network you trained, adapt test_settings.yaml with the new checkpoint path. You might consider setting the perform_global_planning flag in default.yaml back to false to make the simulation faster. Then follow the instructions in the Let's Fly! section above to test.
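
A sketch of the relevant edit (the key name below is an assumption; reuse whatever key test_settings.yaml already defines for the checkpoint):

checkpoint_path: /path/to/your/new/checkpoint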

Acknowledgements

We would like to thank Yunlong Song and Selim Naji for their help with the implementation of the simulation environment. The code for global planning is strongly inspired by that of Search-based Motion Planning for Aggressive Flight in SE(3).
