This repository contains the code for designing risk-bounded motion plans for a car-like robot using the CARLA simulator.

Overview

Nonlinear Risk Bounded Robot Motion Planning

This code simulates the bicycle dynamics of a car steering along a road while avoiding a static car obstacle in the CARLA simulator. The ego vehicle has to account for all system and perception uncertainties in order to generate a risk-bounded motion plan and execute it with coherent risk assessment. Coherent risk assessment for a nonlinear robot such as the car in this simulation is made possible by a nonlinear model predictive control (NMPC) based steering law combined with an Unscented Kalman Filter (UKF) for state estimation. Finally, distributionally robust chance constraints, applied through temporal logic specifications, evaluate the risk of a trajectory before it is added to the sequence of trajectories forming the motion plan from the start to the destination.
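
As a concrete illustration of the dynamics being simulated, the sketch below performs one noisy Euler step of a kinematic bicycle model. The function name, state ordering, step size, and wheelbase are illustrative assumptions, not the repository's exact model.

```python
import numpy as np

def bicycle_step(state, control, dt=0.1, wheelbase=2.5, noise_std=None):
    """One Euler step of a kinematic bicycle model (illustrative sketch;
    the repository's model and parameters may differ).

    state   = [x, y, heading, speed]
    control = [acceleration, steering angle]
    """
    x, y, theta, v = state
    accel, delta = control
    next_state = np.array([
        x + dt * v * np.cos(theta),
        y + dt * v * np.sin(theta),
        theta + dt * v * np.tan(delta) / wheelbase,
        v + dt * accel,
    ])
    if noise_std is not None:
        # Additive process noise stands in for the system uncertainty
        # that the planner and estimator must account for.
        next_state += np.random.normal(0.0, noise_std, size=4)
    return next_state

state = bicycle_step([0.0, 0.0, 0.0, 5.0], control=[0.5, 0.05], noise_std=0.01)
print(state)
```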

Click the picture to watch the corresponding YouTube video supporting our work.

Motion Planning Using the CARLA Simulator

The code in this repository implements the algorithms and ideas from the following paper:

  1. V. Renganathan, S. Safaoui, A. Kothari, I. Shames, T. Summers, Risk Bounded Nonlinear Robot Motion Planning With Integrated Perception & Control, Submitted to the Special Issue on Risk-aware Autonomous Systems: Theory and Practice, Artificial Intelligence Journal, 2021.

Dependencies

  • Python 3.5+ (tested with 3.7.6)
  • Numpy
  • Scipy
  • Matplotlib
  • Casadi
  • Namedlist
  • Pickle
  • Carla

Installing

You will need the following two items to run the code. Beyond that, there is no formal package installation procedure; simply download this repository and run the Python files.

  • CARLA SIMULATOR VERSION: 0.9.10
  • UNREAL ENGINE VERSION: 4.24.3

Modules of an autonomy stack

There are two main modules to understand in this package:

  1. First, a high-level motion planner runs and generates a reference trajectory for the car from the start to the destination.
  2. Second, a low-level tracking controller enables the car to track the reference trajectory despite the realized noises (a minimal tracking sketch is given after this list).
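
The sketch below is a minimal CasADi-based NMPC tracking step in the spirit of the low-level controller, using the kinematic bicycle model sketched above. The horizon, weights, bounds, and reference point are placeholder values, not the repository's settings.

```python
import casadi as ca
import numpy as np

N, dt, wheelbase = 10, 0.1, 2.5                    # horizon, step, wheelbase (assumed)
x0 = np.array([0.0, 0.0, 0.0, 5.0])                # current state estimate (placeholder)
x_ref = np.array([5.0, 0.0, 0.0, 5.0])             # reference point to track (placeholder)

opti = ca.Opti()
X = opti.variable(4, N + 1)                        # states: x, y, heading, speed
U = opti.variable(2, N)                            # controls: acceleration, steering

opti.subject_to(X[:, 0] == x0)
cost = 0
for k in range(N):
    x, y, th, v = X[0, k], X[1, k], X[2, k], X[3, k]
    a, d = U[0, k], U[1, k]
    # Kinematic bicycle dynamics imposed as equality constraints
    opti.subject_to(X[:, k + 1] == ca.vertcat(x + dt * v * ca.cos(th),
                                              y + dt * v * ca.sin(th),
                                              th + dt * v * ca.tan(d) / wheelbase,
                                              v + dt * a))
    opti.subject_to(opti.bounded(-0.5, d, 0.5))    # steering limits (assumed)
    cost += ca.sumsqr(X[:, k + 1] - x_ref) + 0.1 * ca.sumsqr(U[:, k])

opti.minimize(cost)
opti.solver('ipopt')
sol = opti.solve()
print(sol.value(U[:, 0]))                          # first control to apply to the car
```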

Procedure to run the code

  1. Run the Python script Generate_Monte_Carlo_Noises.py, which generates the noise parameters and data required for the simulation and stores them in pickle files.
  2. Run the Python script Run_Path_Planner.py.
  3. The code will run for the specified number of iterations and produce all the required data.
  4. Load the corresponding pickle file data in main.py at line #488 (a hypothetical loading snippet is shown after this list).
  5. Run main.py with the CARLA executable already open.
  6. The simulation will run in the CARLA simulator, where the car tracks the reference trajectory; the results are stored in pickle files.
  7. To see the tracking results, run the Python file Tracked_Path_Plotter.py.
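
For step 4, loading the planner output typically amounts to a standard pickle load like the hypothetical snippet below; the file name and variable name are placeholders for whatever your planner run produced, not the exact code at line #488 of main.py.

```python
import pickle

# Hypothetical example for step 4: point main.py at the planner's output.
# Replace the file name with the pickle file produced by Run_Path_Planner.py.
with open('planner_output.pkl', 'rb') as f:
    reference_trajectory_data = pickle.load(f)
```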

Running Monte-Carlo Simulations

  1. Create a new folder called monte_carlo_results in the same directory as the Python file monte_carlo_car.py.
  2. Update trial_num at line #1554 in monte_carlo_car.py and run it while the CARLA executable is open (it will automatically load the noise realizations corresponding to trial_num from the pickle files).
  3. After the simulation is over, the results are automatically stored under the monte_carlo_results folder with a trial-specific name.
  4. Repeat the process by changing the trial number in step 2 and running again.
  5. Once all trials are completed, run the Python file monte_carlo_results_plotter.py to plot the Monte Carlo simulation results (a sketch for loading the stored trials is shown below).
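
If you want to inspect the trials yourself before plotting, the hedged sketch below loads every stored result from monte_carlo_results; the one-pickle-per-trial naming pattern is an assumption, so match it to whatever monte_carlo_car.py actually writes.

```python
import glob
import pickle

# Assumed naming: one pickle file per trial under monte_carlo_results/.
results = []
for path in sorted(glob.glob('monte_carlo_results/*.pkl')):
    with open(path, 'rb') as f:
        results.append(pickle.load(f))
print(f'Loaded {len(results)} Monte Carlo trials')
```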

Variations

  • If you would like simple Gaussian chance constraints instead of distributionally robust chance constraints, set self.DRFlag = False at line 852 in the file DR_RRTStar_Planner.py (a short comparison of the two tightenings is given below).
  • Choose your own state estimator (UKF or EKF) by commenting and uncommenting the corresponding estimator at lines 26-27 of the file State_Estimator.py.
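
To see what the DRFlag switch changes conceptually, the short standalone comparison below contrasts the constraint-tightening multipliers used by a Gaussian chance constraint and by its distributionally robust counterpart for a risk level alpha; it is an illustration, not code from the planner.

```python
import numpy as np
from scipy.stats import norm

alpha = 0.05                                  # per-constraint risk level (example value)
gaussian_multiplier = norm.ppf(1 - alpha)     # ~1.645: Gaussian chance constraint
dr_multiplier = np.sqrt((1 - alpha) / alpha)  # ~4.359: distributionally robust bound
print(gaussian_multiplier, dr_multiplier)     # the DR tightening is more conservative
```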

Funding Acknowledgement

This work is partially supported by Defence Science and Technology Group, through agreement MyIP: ID10266 entitled Hierarchical Verification of Autonomy Architectures, the Australian Government, via grant AUSMURIB000001 associated with ONR MURI grant N00014-19-1-2571, and by the United States Air Force Office of Scientific Research under award number FA2386-19-1-4073.

Contributing Authors

  1. Venkatraman Renganathan - UT Dallas
  2. Sleiman Safaoui - UT Dallas
  3. Aadi Kothari - UT Dallas
  4. Benjamin Gravell - UT Dallas
  5. Dr. Iman Shames - Australian National University
  6. Dr. Tyler Summers - UT Dallas

Affiliation

TSummersLab - Control, Optimization & Networks Laboratory (CONLab)
