
Unicorn (EuroSys 2022)

Unicorn can be used for performance analyses of highly configurable systems with causal reasoning. Users or developers can query Unicorn for a performance task.

Overview

(Overview figure)

Abstract

Modern computer systems are highly configurable, with the total variability space sometimes larger than the number of atoms in the universe. Understanding and reasoning about the performance behavior of highly configurable systems is challenging because of this vast variability space. State-of-the-art methods for performance modeling and analyses rely on predictive machine learning models; as a result, they (i) become unreliable in unseen environments (e.g., different hardware, workloads) and (ii) produce incorrect explanations. To this end, we propose a new method, called Unicorn, which (i) captures intricate interactions between configuration options across the software-hardware stack and (ii) describes how such interactions impact performance variations via causal inference. We evaluated Unicorn on six highly configurable systems, including three on-device machine learning systems, a video encoder, a database management system, and a data analytics pipeline. The experimental results indicate that Unicorn outperforms state-of-the-art performance optimization and debugging methods. Furthermore, unlike the existing methods, the learned causal performance models reliably predict performance for new environments.

Pre-requisites

  • Python 3.6
  • json
  • pandas
  • numpy
  • flask
  • causalgraphicalmodels
  • causalnex
  • graphviz
  • py-causal
  • causality

Please run the following commands to have your system ready to run Unicorn:

git clone https://github.com/softsys4ai/unicorn.git
cd unicorn
pip install pandas
pip install numpy
pip install flask
pip install causalgraphicalmodels
pip install causalnex
pip install graphviz
pip install py-causal
pip install causality
pip install tensorflow-gpu==1.15
pip install keras
pip install torch==1.4.0 torchvision==0.5.0
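
Since the repository targets Python 3.6 with TensorFlow 1.15 and PyTorch 1.4, installing everything into an isolated environment is recommended. A minimal sketch, assuming conda is available (the environment name unicorn-env is arbitrary); run the pip commands above inside this environment afterwards:

conda create -n unicorn-env python=3.6
conda activate unicorn-env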

How to use Unicorn

Unicorn can be used to perform different tasks such as performance optimization and performance debugging. Unicorn supports both offline and online modes. In the offline mode, Unicorn can be run on any device using previously measured configurations. In the online mode, measurements are performed directly on NVIDIA Jetson Xavier, NVIDIA Jetson TX2, and NVIDIA Jetson TX1 devices. To collect measurements from these devices, sudo privileges are required, because the device must be set to a new configuration before each measurement.
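
For example, an online-mode run uses the same flags as the offline examples below with -m online. A hedged sketch for debugging latency faults for Xception on a Jetson TX2, run with elevated privileges because Unicorn reconfigures the device before each measurement (the exact invocation may vary with your device setup):

sudo python unicorn_debugging.py -o inference_time -s Xception -k TX2 -m online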

Debugging (offline)

Unicorn supports debugging and fixing single-objective and multi-objective performance faults. It also supports root-cause analysis of these fixes, such as determining accuracy, computing gain, etc.

Single-objective debugging

To debug single-objective faults in the offline mode using Unicorn, please use the following command:

python unicorn_debugging.py  -o objective -s softwaresystem -k hardwaresystem -m mode

Example

To debug single-objective latency faults for Xception on Jetson TX2 in the offline mode using Unicorn, please use the following command:

python unicorn_debugging.py  -o inference_time -s Xception -k TX2 -m offline

To debug single-objective energy faults for Bert on Jetson Xavier in the offline mode using Unicorn, please use the following command:

python unicorn_debugging.py  -o total_energy_consumption -s Bert -k Xavier -m offline

Multi-objective debugging

To debug multi-objective faults in the offline mode using Unicorn, please use the following command:

python unicorn_debugging.py  -o objective1 -o objective2 -s softwaresystem -k hardwaresystem -m mode

Example

To debug multi-objective latency and energy faults for Deepspeech on Jetson TX2 in the offline mode using Unicorn, please use the following command:

python unicorn_debugging.py  -o inference_time -o total_energy_consumption -s Deepspeech  -k TX2 -m offline

Optimization (offline)

Unicorn supports single-objective and multi-objective optimization.

Single-objective optimization

To run single-objective optimization in the offline mode using Unicorn, please use the following command:

python unicorn_optimization.py  -o objective -s softwaresystem -k hardwaresystem -m mode

Example

To run single-objective latency optimization for Xception on Jetson TX2 in the offline mode using Unicorn, please use the following command:

python unicorn_optimization.py  -o inference_time -s Xception -k TX2 -m offline

To run single-objective energy optimization for Bert on Jetson Xavier in the offline mode using Unicorn, please use the following command:

python unicorn_optimization.py  -o total_energy_consumption -s Bert -k Xavier -m offline

Multi-objective optimization

To run multi-objective optimization in the offline mode using Unicorn, please use the following command:

python unicorn_optimization.py  -o objective1 -o objective2 -s softwaresystem -k hardwaresystem -m mode

Example

To run multi-objective latency and energy optimization for Deepspeech on Jetson TX2 in the offline mode using Unicorn, please use the following command:

python unicorn_optimization.py  -o inference_time -o total_energy_consumption -s Deepspeech  -k TX2 -m offline

Transferability

Unicorn supports both single-objective and multi-objective transferability. However, multi-objective transferability is not comprehensively investigated in this version. To determine the single-objective transferability of Unicorn, use the following command:

python unicorn_transferability.py  -o objective -s softwaresystem -k hardwaresystem

Example

To run single-objective latency transferability for Xception on Jetson TX2 in the offline mode using Unicorn, please use the following command:

python unicorn_transferability.py  -o inference_time -s Xception -k TX2 -m offline

To run single-objective energy transferability for Bert on Jetson Xavier in the offline mode using Unicorn, please use the following command:

python unicorn_transferability.py  -o total_energy_consumption -s Bert -k Xavier -m offline

Data generation

To run experiments on NVIDIA Jetson Xavier, NVIDIA Jetson TX2, and NVIDIA Jetson TX1 devices for a particular software system, a Flask app must be launched. Please use the following command to start the app on localhost:

python run_service.py softwaresystem

For example, to initialize the Flask app for the Xception software system, please use:

python run_service.py Image

Once the Flask app is running and the model server is ready, please use the following command to collect performance measurements for different configurations:

python run_params.py softwaresystem
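
For example, to collect measurements for the Xception software system served by the Flask app launched above (assuming the same system name, Image, is passed here):

python run_params.py Image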

Unicorn usage on a different dataset

To run Unicorn on a different dataset, you will only need unicorn_debugging.py and unicorn_optimization.py. In the online mode, to perform interventions using the recommended configurations, you need to develop your own utilities (similar to run_params.py). Additionally, you need to make some changes in etc/config.yml to use your configuration options and their values accordingly. The necessary steps are the following:

Step 1: Update init_dir in config.yml with the directory where the initial data is stored.

Step 2: Update bug_dir in config.yml with the directory where the bug data is stored.

Step 3: Update the output_dir variable in config.yml with the directory where you want to save the output data.

Step 4: Update hardware_columns in config.yml with the hardware configuration options you want to use.

Step 5: Update kernel_columns in config.yml with the kernel configuration options you want to use.

Step 6: Update perf_columns in config.yml with the events you want to track using perf. If you use any other monitoring tool, you need to update this accordingly.

Step 7: Update measurment_colums in config.yml based on the performance objectives you want to use for bug resolution.

Step 8: Update the is_intervenable variables in config.yml with the configuration options you want to use, and set their values to True or False depending on your application. True indicates that a configuration option can be intervened upon; False indicates that it cannot.

Step 9: Update the option_values variables in config.yml based on the allowable values your options can take. A sketch of the resulting file is shown after these steps.
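
Putting these steps together, the following is a minimal sketch of what etc/config.yml might look like. All paths, column names, and option values are placeholders that you must replace for your own system, and the exact structure may differ from the shipped config file, so treat it only as a reference for which keys to update (the key names follow the steps above):

init_dir: ./data/initial/                                        # Step 1: directory with the initial data
bug_dir: ./data/bugs/                                            # Step 2: directory with the bug data
output_dir: ./data/output/                                       # Step 3: directory for the output data
hardware_columns: [core_freq, gpu_freq, emc_freq]                # Step 4: hardware options (placeholders)
kernel_columns: [swappiness, dirty_ratio]                        # Step 5: kernel options (placeholders)
perf_columns: [cache-misses, context-switches]                   # Step 6: perf events to track (placeholders)
measurment_colums: [inference_time, total_energy_consumption]    # Step 7: performance objectives
is_intervenable:                                                 # Step 8: which options can be intervened upon
  core_freq: True
  gpu_freq: True
  swappiness: False
option_values:                                                   # Step 9: allowable values per option (placeholders)
  core_freq: [345600, 1190400, 2035200]
  swappiness: [10, 60, 100]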

At this stage, you can run unicorn_debugging.py and unicorn_optimization.py with your own specification. Please note that you also need to update the directories in the data directory according to your software and hardware names. If you change the names of the variables in the config file, or use a new config file, you need to make the corresponding changes in unicorn_debugging.py and unicorn_optimization.py.

How to cite

If you use Unicorn or the dataset in this repository in your research, please cite the following:

@article{iqbalcadet,
  title={CADET: A Systematic Method For Debugging Misconfigurations using Counterfactual Reasoning},
  author={Iqbal, Md Shahriar and Krishna, Rahul and Javidian, Mohammad Ali and Ray, Baishakhi and Jamshidi, Pooyan}
}

Contacts

Please feel free to contact us via email if you find any issues or have any feedback. Thank you for using Unicorn.

Name: Md Shahriar Iqbal
Email: [email protected]

πŸ“˜ License

Unicorn is released under the terms of the MIT License.

Comments
  • Evaluation of Source Environments

    Need to determine the transfer learning pipeline. Determine the following: how good is the source modeling? How much update is needed? Explainability (what are the changes across environments). Experiments with different source budgets.

    opened by iqbal128855
  • Structure Learning

    Enrich the causal models with a Functional Causal Model (FCM) using CGNN, and work on visualization for the FCM. Update the causal model with the Causal Interaction model and compare with CGNN. Compare CGNN, FCI (entropic calculation), and the Causal Interaction model. If we use CGNN, we need to find the correct strategy: how to find the initial skeleton?

    opened by iqbal128855
  • Run MLPerf benchmark with Facebook DLRM.

    Run the MLPerf benchmark with Facebook DLRM on different hardware (Jetson Xavier and TX2, possibly on GPU cloud). Change software (RMC1, RMC2, and RMC3) and change workload (single-stream, multi-stream, and offline, varying the number of queries for inference).

    opened by iqbal128855
  • Run scalability experiments with Facebook DLRM systems.

    • Performance analysis of the Facebook DLRM systems with different configurations. Show how difficult it is to debug misconfigurations in real-world production systems and discuss the challenges. Discuss the richness of the performance landscape (more complex behavior).
    • Run CAUPER, BugDoc, SMAC, DeltaDebugging, Encore, and CBI on the DLRM fault dataset and evaluate using the ground-truth dataset for both single- and multi-objective performance faults.
    • Show proof of scalability of CAUPER on the Facebook DLRM system with a high number of allowable values taken by different configuration options.
    • Write about the evaluation of the Facebook DLRM systems. Analyze by three slices: latency, energy, and heat.

    opened by iqbal128855
  • Update the ground truth datasets for each type of performance fault.

    Update the ground truth for each fault by using the configurations that provide 80% or more gain, and recompute accuracy, precision, and recall with a confidence interval.

    opened by iqbal128855
  • Update Causal Structure Learning Algorithm.

    • Use FCI with the entropic approach to resolve edges.
    • Break down the computation effort required for causal structure discovery, computing path causal effects, computing individual treatment effects, and measuring recommended configurations.

    opened by iqbal128855
  • More comparisons

    | Method     | Where?  | When | Link                                                       |
    |------------|---------|------|------------------------------------------------------------|
    | βˆ†LDA       | ECML    | 2007 | http://pages.cs.wisc.edu/~jerryzhu/ssl/pub/rlda.pdf        |
    | SmartConf  | ASPLOS  | 2018 | https://people.cs.uchicago.edu/~hankhoffmann/autoconf.pdf  |
    | BestConfig | SoCC    | 2017 | https://arxiv.org/pdf/1710.03439.pdf                       |
    | LEO        | SIGARCH | 2015 | https://dl.acm.org/doi/pdf/10.1145/2786763.2694373         |

    opened by onkfotocer
  • Real-world case study with a self-driving car system composition

    Use Fig. 3 from https://www.bdti.com/InsideDSP/2017/03/14/NVIDIA to explain a real-world scenario, and https://forums.developer.nvidia.com/t/cuda-performance-issue-on-tx2/50477 to show that it works.

    opened by onkfotocer
  • Policies for handling edge-type mismatches

    Resolution policies:

    • bi-directed & no-edge β†’ we get a confidence score; whichever edge direction has the highest confidence, use that direction.
    • un-directed edge & no-edge β†’ no edge
    • tail has a bubble and head has an arrow β†’ keep the directed edge and remove the bubble
    • no-edge & edge β†’ edge
    • no-edge & no-edge β†’ no-edge

    Edge-type semantics: a bubble/un-directed edge indicates selection variables; a bi-directed edge indicates hidden variables.

    When are the policies applied?

    1. Case 1: Greedy; apply the above rules at every step.
      • At each iteration there is a DAG (say DAG_t, DAG_t-1, ...).
      • If there are conflicts, keep counts of how many times an edge a->b, b->a, or a--b appears, and use the one with the max count.
    2. Case 2: Apply at the end.

    Experiment

    opened by rahlk
  • How to resolve bi-directed edges and cycles in the causal graph?

    • [ ] Randomly -- not an appropriate answer for the reviewer
    • [ ] Use FCI/FGS/PC (besides expert knowledge), which make much looser assumptions about causal sufficiency, to inform NOTEARS

    opened by rahlk
Releases: EuroSys2022

Owner: AISys Lab (Artificial Intelligence and Systems Laboratory)