DI-smartcross - Decision Intelligence Platform for Traffic Crossing Signal Control

Overview

DI-smartcross is an application platform for traffic crossing signal control under OpenDILab.

Introduction

DI-smartcross is an open-source traffic crossing signal control platform. It provides training and evaluation of several Reinforcement Learning policies for traffic signal control on the provided road nets.

DI-smartcross uses DI-engine, a Reinforcement Learning platform, to build RL experiments, and the SUMO (Simulation of Urban MObility) traffic simulator to run signal control simulations.

DI-smartcross supports:

  • Single-Agent and Multi-Agent Reinforcement Learning
  • Synthetic and real road nets, in arterial and grid network shapes
  • Customizable observation, action and reward types
  • Easy multi-environment parallelism and actor-learner asynchronous parallelism when training with DI-engine

Installation

DI-smartcross supports SUMO version >= 1.6.0. Here is a simple guide for installing SUMO 1.8.0 on Linux.

Install SUMO

  1. install required libraries and dependencies
sudo apt-get install cmake python g++ libxerces-c-dev libfox-1.6-dev libgdal-dev libproj-dev libgl2ps-dev swig
  2. download and unzip the installation package
tar xzf sumo-src-1.8.0.tar.gz
cd sumo-1.8.0
pwd 
  3. compile SUMO
mkdir build/cmake-build
cd build/cmake-build
cmake ../..
make -j $(nproc)
  4. set environment variables
echo 'export PATH=$HOME/sumo-1.8.0/bin:$PATH
export SUMO_HOME=$HOME/sumo-1.8.0' | tee -a $HOME/.bashrc
source ~/.bashrc
  5. check the installation
sumo

If the installation succeeds, the following message will be shown in the shell.

Eclipse SUMO sumo Version 1.8.0
  Build features: Linux-3.10.0-957.el7.x86_64 x86_64 GNU 5.3.1 Release Proj GUI SWIG GDAL GL2PS
  Copyright (C) 2001-2020 German Aerospace Center (DLR) and others; https://sumo.dlr.de
  License EPL-2.0: Eclipse Public License Version 2 <https://eclipse.org/legal/epl-v20.html>
  Use --help to get the list of options.

Install DI-smartcross

To install DI-smartcross, simply run pip install in the root folder of this repository. This will automatically install DI-engine as well.

pip install -e . --user

Quick Start

Run training and evaluation

DI-smartcross supports the DQN, off-policy PPO and Rainbow DQN RL methods, with multi-discrete actions for each crossing. A set of default DI-engine configs is provided for each policy; an illustrative config shape is sketched below. You can check the DI-engine documentation for detailed instructions on these configs.
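
The following is a rough, hypothetical sketch of what such a DI-engine config looks like: nested dicts of env and policy settings. All field names and values below are assumptions for illustration; the actual defaults live under entry/config/ in this repository.

from easydict import EasyDict

# Hypothetical shape of a DQN config; check entry/config/ for the real defaults.
sumo_dqn_config = EasyDict(dict(
    exp_name='sumo_wj3_dqn',
    env=dict(
        collector_env_num=8,        # assumed: number of parallel collector envs
        evaluator_env_num=1,
        n_evaluator_episode=1,
    ),
    policy=dict(
        cuda=True,
        model=dict(
            obs_shape=380,          # depends on the chosen obs types (assumed value)
            action_shape=[4, 4, 4], # assumed: one discrete head per crossing
        ),
        discount_factor=0.99,
        learn=dict(batch_size=64, learning_rate=1e-3),
    ),
))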

  • train RL policies
usage: sumo_train [-h] -d DING_CFG -e ENV_CFG [-s SEED] [--dynamic-flow]
                  [-cn COLLECT_ENV_NUM] [-en EVALUATE_ENV_NUM]
                  [--exp-name EXP_NAME]

DI-smartcross training script

optional arguments:
  -h, --help            show this help message and exit
  -d DING_CFG, --ding-cfg DING_CFG
                        DI-engine configuration path
  -e ENV_CFG, --env-cfg ENV_CFG
                        sumo environment configuration path
  -s SEED, --seed SEED  random seed for sumo
  --dynamic-flow        use dynamic route flow
  -cn COLLECT_ENV_NUM, --collect-env-num COLLECT_ENV_NUM
                        collector sumo env num for training
  -en EVALUATE_ENV_NUM, --evaluate-env-num EVALUATE_ENV_NUM
                        evaluator sumo env num for training
  --exp-name EXP_NAME   experiment name to save log and ckpt

Example of training DQN in the wj3 env with the default config.

sumo_train -e smartcross/envs/sumo_arterial_wj3_default_config.yaml -d entry/config/sumo_wj3_dqn_default_config.py
  • evaluate existing policies
usage: sumo_eval [-h] [-d DING_CFG] -e ENV_CFG [-s SEED]
                 [-p {random,fix,dqn,rainbow,ppo}] [--dynamic-flow]
                 [-n ENV_NUM] [--gui] [-c CKPT_PATH]

DI-smartcross training script

optional arguments:
  -h, --help            show this help message and exit
  -d DING_CFG, --ding-cfg DING_CFG
                        DI-engine configuration path
  -e ENV_CFG, --env-cfg ENV_CFG
                        sumo environment configuration path
  -s SEED, --seed SEED  random seed for sumo
  -p {random,fix,dqn,rainbow,ppo}, --policy-type {random,fix,dqn,rainbow,ppo}
                        RL policy type
  --dynamic-flow        use dynamic route flow
  -n ENV_NUM, --env-num ENV_NUM
                        sumo env num for evaluation
  --gui                 open gui for visualize
  -c CKPT_PATH, --ckpt-path CKPT_PATH
                        model ckpt path

Example of evaluating a random policy in the wj3 env.

sumo_eval -p random -e smartcross/envs/sumo_arterial_wj3_default_config.yaml     

Environments

sumo env configuration

The configuration of the sumo env is stored in a .yaml config file. You can take a look at the default config file to see how to modify the env settings.

import yaml
from easydict import EasyDict
from smartcross.env import SumoEnv

with open('smartcross/envs/sumo_arterial_wj3_default_config.yaml') as f:
    cfg = yaml.safe_load(f)
cfg = EasyDict(cfg)
env = SumoEnv(config=cfg.env)
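
A minimal interaction sketch follows, assuming SumoEnv exposes DI-engine's usual reset/step interface and that an action is one phase index per crossing (the wj3 arterial is assumed to have three crossings). The exact action format depends on the action settings in the env config, and running it requires a working SUMO installation.

# Hypothetical rollout loop; names and action format are assumptions.
obs = env.reset()
for _ in range(10):
    action = [0, 0, 0]  # assumed: one phase index per crossing of the wj3 arterial
    timestep = env.step(action)  # DI-engine envs typically return (obs, reward, done, info)
    if timestep.done:
        break
env.close()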

The env configuration consists of a basic definition and observation/action/reward settings. The basic definition includes the sumo config file, the episode length and the light durations. The obs/action/reward sections define the detailed settings of each part.

env:
    sumocfg_path: 'arterial_wj3/rl_wj.sumocfg'
    max_episode_steps: 1500
    green_duration: 10
    yellow_duration: 3
    obs:
        ...
    action:
        ...
    reward:
        ...

Observation

We provide several types of observations of a traffic cross. If use_centralized_obs is set to True, the observations of all crosses are concatenated into one vector. The contents of the observation can be modified by setting obs_type. The following observation types are currently supported (a small sketch of the volume and queue formulas follows the list).

  • phase: One-hot phase vector of the current cross signal
  • lane_pos_vec: Lane occupancy at each grid position. The grid num can be set with lane_grid_num
  • traffic_volumn: Traffic volume of each lane, computed as vehicle num / lane length * volume ratio
  • queue_len: Vehicle waiting queue length of each lane, computed as waiting num / lane length * volume ratio
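
The volume and queue formulas above are simple enough to state directly. This sketch is illustrative only, with made-up function names and example numbers, not the repository's implementation.

def traffic_volume(vehicle_num, lane_length, volume_ratio=1.0):
    # traffic_volumn obs: vehicle num / lane length * volume ratio
    return vehicle_num / lane_length * volume_ratio

def queue_length(waiting_num, lane_length, volume_ratio=1.0):
    # queue_len obs: waiting num / lane length * volume ratio
    return waiting_num / lane_length * volume_ratio

# e.g. 12 vehicles on a 200 m lane -> volume 0.06; 5 of them waiting -> queue 0.025
print(traffic_volume(12, 200.0), queue_length(5, 200.0))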

Action

The sumo environment supports switching a cross signal to a target phase. The action space is set to multi-discrete, one discrete dimension per cross, to reduce the number of actions (see the sketch below).
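
As a rough illustration of why the multi-discrete factorization keeps the action space small (the numbers here are hypothetical, not taken from the provided road nets):

# For n_cross crossings with n_phase candidate phases each, a single joint
# discrete space needs n_phase ** n_cross actions, while a multi-discrete
# space only needs n_cross heads of n_phase actions each.
n_cross, n_phase = 3, 4
joint_actions = n_phase ** n_cross          # 64 joint actions
multi_discrete_heads = [n_phase] * n_cross  # 3 heads of 4 actions each
print(joint_actions, multi_discrete_heads)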

Reward

The reward can be set with reward_type. Rewards are calculated cross by cross. If use_centralized_obs is set to True, the rewards of all crosses are summed up (see the sketch after this list).

  • queue_len: Vehicle waiting queue length of each lane
  • wait_time: Wait time increment of vehicles in each lane
  • delay_time: Delay time of all vehicles in incoming and outgoing lanes
  • pressure: Pressure of a cross
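
A minimal sketch of the centralized reward aggregation, with a pressure term written in the common max-pressure style (difference between incoming and outgoing queues); this is an assumption for illustration and may differ from the repository's exact formula.

def pressure(incoming_queues, outgoing_queues):
    # assumed max-pressure style definition, not necessarily the repo's exact formula
    return abs(sum(incoming_queues) - sum(outgoing_queues))

# per-cross rewards (hypothetical values), summed when use_centralized_obs is True
per_cross_rewards = [-pressure([3, 5], [1, 2]), -pressure([2, 2], [2, 1])]
centralized_reward = sum(per_cross_rewards)
print(centralized_reward)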

Contributing

We appreciate all contributions to improve DI-smartcross, both algorithms and system designs.

License

DI-smartcross is released under the Apache 2.0 license.

Citation

@misc{smartcross,
    title={{DI-smartcross: OpenDILab} Decision Intelligence Platform for Traffic Crossing Signal Control},
    author={DI-smartcross Contributors},
    publisher={GitHub},
    howpublished={\url{https://github.com/opendilab/DI-smartcross}},
    year={2021},
}