Code accompanying the NeurIPS 2021 paper "Generating High-Quality Explanations for Navigation in Partially-Revealed Environments"

Overview


This work presents an approach to explainable navigation under uncertainty.

This is the code release associated with the NeurIPS 2021 paper Generating High-Quality Explanations for Navigation in Partially-Revealed Environments. In this repository, we provide all the code, data, and simulation environments necessary to reproduce our results; this includes (1) training, (2) large-scale evaluation, (3) explaining robot behavior, and (4) intervening-via-explaining. Below is an example of an explanation automatically generated by our approach in one of our simulated environments, in which the green path on the ground indicates a likely route to the goal:

An example explanation automatically generated by our approach in our simulated 'Guided Maze' environment.

@inproceedings{stein2021xailsp,
  title = {Generating High-Quality Explanations for Navigation in Partially-Revealed Environments},
  author = {Gregory J. Stein},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = 2021,
  keywords = {explainability; planning under uncertainty; subgoal-based planning; interpretable-by-design},
}

Getting Started

We use Docker (with the NVIDIA runtime) and GNU Make to run our code, so both are required. First, install Docker by following the official Docker install guide. Second, our Docker environments require that the NVIDIA container runtime (nvidia-container-toolkit) is installed; follow the install instructions on the nvidia-docker GitHub page to get it.
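
To quickly confirm that both Docker and the NVIDIA runtime are working, a check like the following should print your GPU information from inside a container (the CUDA image tag below is only an illustrative example; any CUDA base image should work):

# Confirm Docker is installed
docker --version
# Confirm the NVIDIA runtime can see the GPU from inside a container (image tag is illustrative)
docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi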

Generating Explanations

We provide a make target that generates the two explanations included in the paper. Running the following in a terminal will build the repository and generate them:

# Build the repo
make build
# Generate explanation plots
make xai-explanations

For each, the planner is run for a set number of steps, and the agent then uses its learned model to generate an explanation justifying its behavior relative to the action the oracle planner identifies as the one known to lead to the unseen goal. A plot is generated for each explanation and saved to ./data/explanations.

Re-Running Results Experiments

We also provide targets for re-running the results for each of our simulated experimental setups:

# Build the repo
make build

# Ensure data timestamps are in the correct order
# Only necessary on the first pass
make fix-target-timestamps

# Maze Environments
make xai-maze EXPERIMENT_NAME=base_allSG
make xai-maze EXPERIMENT_NAME=base_4SG SP_LIMIT_NUM=4
make xai-maze EXPERIMENT_NAME=base_0SG SP_LIMIT_NUM=0

# University Building (floorplan) Environments
make xai-floorplan EXPERIMENT_NAME=base_allSG
make xai-floorplan EXPERIMENT_NAME=base_4SG SP_LIMIT_NUM=4
make xai-floorplan EXPERIMENT_NAME=base_0SG SP_LIMIT_NUM=0

# Results Plotting
make xai-process-results

(This can also be done by running ./run.sh)

These commands will build the Docker container, do nothing (since the results already exist), and then print out the results. GNU Make is clever: it recognizes that the plots for each experiment already exist in their respective locations and so does not rerun any code. To stay within the 100 MB size requirement, the results images for each experiment have been downsampled to thumbnail size. If you would like to reproduce any of our results, delete the plots of interest in the results folder and rerun the commands above; make will detect which plots have been deleted and regenerate them. All results plots can be found in their respective folders under ./data/results.
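
For example, to regenerate the maze results for one experiment from scratch (the subdirectory name under ./data/results is an assumption here; substitute the folder containing the plots you wish to reproduce):

# Delete the existing thumbnail-sized plots (path is illustrative)
rm ./data/results/maze/base_allSG/*.png
# Make detects the missing plots and reruns only what is needed to regenerate them
make xai-maze EXPERIMENT_NAME=base_allSG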

The make commands above can be run with trials in parallel by adding -jN (where N is the number of trials to run in parallel) to each command. On our NVIDIA RTX 2060 SUPER we are limited by GPU memory, so we limit ourselves to N=4. Running with higher N is possible, but our simulator occasionally tries to allocate more GPU memory than is available and crashes, requiring that the affected trial be rerun. It is in principle also possible to generate the data and train the learned planners from scratch, though (for now) this part of the pipeline has not been as extensively tested; data generation consumes roughly 1.5 TB of disk space, so be sure to have that space available if you wish to run that part of the pipeline. Even with 4 parallel trials, we estimate that running all of the above from scratch (including data generation, training, and evaluation) would take roughly two weeks, half of which is evaluation.
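
For example, to run the maze experiments with four trials in parallel:

make xai-maze EXPERIMENT_NAME=base_allSG -j4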

Code Organization

The src folder contains a number of Python packages necessary for this paper. Most of the algorithmic code reflecting our primary research contributions is spread across three files:

  • xai.planners.subgoal_planner The SubgoalPlanner class encapsulates much of the logic for deciding where the robot should go, including computing which action it should take and which action is the "next best". This class is the primary means by which the agent collects information and dispatches it elsewhere to make decisions.
  • xai.learning.models.exp_nav_vis_lsp The ExpVisNavLSP class defines the neural network along with the loss terms used to train it. Also critical are the functions included in this file and in xai.utils.data for "updating" the policies to reflect newly estimated subgoal properties, even after the network has been retrained. This class also includes the functionality for computing the delta subgoal properties that primarily define our counterfactual explanations. Virtually all of this functionality heavily leverages PyTorch, which makes it easy to compute the gradients of the expected cost for each policy.
  • xai.planners.explanation This file defines the Explanation class that stores the subgoal properties and their deltas (computed via ExpVisNavLSP) and composes these into a natural language explanation and a helpful visualization showing all the information necessary to understand the agent's decision-making process.
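
For reference, these modules most likely correspond to the following files under src (the exact paths are assumptions inferred from the package names above):

src/xai/planners/subgoal_planner.py
src/xai/learning/models/exp_nav_vis_lsp.py
src/xai/planners/explanation.py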
Owner

RAIL Group @ George Mason University: code for the Robotic Anticipatory Intelligence & Learning (RAIL) Group at George Mason University.