Management Dashboard for Torchserve

Overview

Torchserve Dashboard using Streamlit

Related blog post

Demo

Usage

Additional requirement: torchserve (recommended: v0.5.2)

Simply run:

pip3 install torchserve-dashboard --user
# torchserve-dashboard [streamlit_options(optional)] -- [config_path(optional)] [model_store(optional)] [log_location(optional)] [metrics_location(optional)]
torchserve-dashboard
# OR change the port
torchserve-dashboard --server.port 8105 -- --config_path ./torchserve.properties
# OR provide a custom configuration
torchserve-dashboard -- --config_path ./torchserve.properties --model_store ./model_store

Keep in mind that if you change any of the --config_path, --model_store, --metrics_location, or --log_location options while a torchserve instance is already running before you start torchserve-dashboard, they won't take effect until you stop and restart torchserve. These options are used in place of their respective environment variables (TS_CONFIG_FILE, LOG_LOCATION, METRICS_LOCATION).
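
For reference, here is a rough Python sketch of what such a restart amounts to. It calls the torchserve CLI directly and is only illustrative; the helper name and default paths are assumptions, not the dashboard's actual code:

import subprocess

def restart_torchserve(config_path="./torchserve.properties", model_store="./model_store"):
    # Stop any torchserve instance that is already running, then start a fresh one
    # so that the new config/model_store locations actually take effect.
    subprocess.run(["torchserve", "--stop"], check=False)
    subprocess.run(
        ["torchserve", "--start", "--ncs",
         "--ts-config", config_path,
         "--model-store", model_store],
        check=True,
    )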

OR

git clone https://github.com/cceyda/torchserve-dashboard.git
streamlit run torchserve_dashboard/dash.py 
# OR
streamlit run torchserve_dashboard/dash.py --server.port 8105 -- --config_path ./torchserve.properties 

Example torchserve config:

inference_address=http://127.0.0.1:8443
management_address=http://127.0.0.1:8444
metrics_address=http://127.0.0.1:8445
grpc_inference_port=7070
grpc_management_port=7071
number_of_gpu=0
batch_size=1
model_store=./model_store

If the server doesn't start for some reason, check whether your ports are already in use!
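
If you want to check programmatically, a small standard-library sketch like the one below will tell you whether something is already listening (the port numbers are the ones from the example config above plus the default dashboard port, adjust as needed):

import socket

def port_in_use(port, host="127.0.0.1"):
    # connect_ex returns 0 when something is already listening on host:port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

for port in (8443, 8444, 8445, 8105):
    print(port, "in use" if port_in_use(port) else "free")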

Updates

[15-oct-2020] add scale workers tab

[16-feb-2021] (functionality) make logpath configurable, (functionality) remove model_name requirement, (UI) add cosmetic error messages

[10-may-2021] Update config & make it optional. Update streamlit. Auto-create folders.

[31-may-2021] Update to v0.4 (add workflow API). Refactor streamlit out of api.py.

[30-nov-2021] Update to v0.5, adding support for encrypted model serving (not tested). Update streamlit to v1+

FAQs

  • Does torchserve keep running in the background?

    The torchserve process is spawned using Popen and keeps running in the background even if you stop the dashboard (see the sketch after this FAQ list).

  • What about environment variables?

    These environment variables are passed to the torchserve command:

    ENVIRON_WHITELIST=["LD_LIBRARY_PATH","LC_CTYPE","LC_ALL","PATH","JAVA_HOME","PYTHONPATH","TS_CONFIG_FILE","LOG_LOCATION","METRICS_LOCATION","AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION"]

  • How to change the logging format of torchserve?

    You can set the location of your custom log4j2 config in your configuration file, as shown here:

    vmargs=-Dlog4j.configurationFile=file:///path/to/custom/log4j2.xml

  • What is the meaning behind the weird versioning?

    The minor version follows the compatible torchserve version; the patch version reflects the dashboard's own versioning.
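
As a rough illustration of the Popen and environment-whitelist answers above (not the dashboard's actual implementation; the torchserve command-line arguments shown are assumptions):

import os
import subprocess

ENVIRON_WHITELIST = ["LD_LIBRARY_PATH", "LC_CTYPE", "LC_ALL", "PATH", "JAVA_HOME",
                     "PYTHONPATH", "TS_CONFIG_FILE", "LOG_LOCATION", "METRICS_LOCATION",
                     "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION"]

# Only the whitelisted variables are copied into the torchserve subprocess environment.
env = {k: v for k, v in os.environ.items() if k in ENVIRON_WHITELIST}

# Popen does not wait for the child process, so torchserve keeps running in the
# background even after the dashboard process exits.
subprocess.Popen(["torchserve", "--start", "--model-store", "./model_store"], env=env)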

Help, Questions & Feedback

Open an issue

TODOs

  • Async?
  • Better logging
  • Remote only mode
Comments
  • Update to serve 0.4

    Update to serve 0.4

    I love your project and was hoping we could feature it more prominently in the main torchserve repo - I was wondering if you'd be OK with and interested in this. And if so, I was wondering if you could give me some feedback on the below.

    Installation instructions

    I tried to run torchserve-dashboard --server.port 8105 -- --config_path ./torchserve.properties --model_store ./model_store but the page never seems to load, regardless of whether I use the network or external URL that I have.

    I set up a config

    (torchservedashboard) [email protected]:~/torchserve-dashboard$ cat torchserve.properties 
    inference_address=http://127.0.0.1:8443
    management_address=http://127.0.0.1:8444
    metrics_address=http://127.0.0.1:8445
    grpc_inference_port=7070
    grpc_management_port=7071
    number_of_gpu=0
    batch_size=1
    model_store=model_store
    

    But perhaps it makes the most sense to just add a default one to the repo so things just work. I'm happy to make the PR, just let me know what you suggest. Ideally things just work with zero config and people can come back and change stuff once they feel more comfortable.

    Also on Ubuntu I had to type export PATH="$HOME/.local/bin:$PATH" so I could call torchserve-dashboard

    Features

    Also, there are some new features we're excited about which would be very interesting to see, like:

    1. Model interpretability with Captum https://github.com/pytorch/serve/blob/master/captum/Captum_visualization_for_bert.ipynb
    2. Workflow support coming in 0.4 which will allow much more configurable pipelines https://github.com/pytorch/serve/pull/1024/files

    In all cases please let me know if you think we're on the right track and how we can make torchserve more useful to you. I liked your suggestion on automatic doc generation and it's something I'm looking into, so please keep them coming!

    opened by msaroufim 5
  • Improvements of package setup logic

    Improvements of package setup logic

    This PR is related to #1. It improves the structure of the package setup: all package-related info is moved to torchserve_dashboard/__init__.py.

    Requirement files are added which are split up depending on the usage of the repo/package.

    All functions linked to setup are moved to torchserve_dashboard/setup_tools.py. The function parsing the requirements can handle commented requirements as well as references to GitHub, etc. (#egg included in the requirement).

    opened by FlorianMF 3
  • click >=8 possibly not compatible

    click >=8 possibly not compatible

    Couldn't run the dashboard initially

    Traceback (most recent call last):
      File "/Users/me/Desktop/pytorch-mnist/venv/bin/torchserve-dashboard", line 8, in <module>
        sys.exit(main())
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 1137, in __call__
        return self.main(*args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 1062, in main
        rv = self.invoke(ctx)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 763, in invoke
        return __callback(*args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
        return f(get_current_context(), *args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/torchserve_dashboard/cli.py", line 16, in main
        ctx.forward(streamlit.cli.main_run, target=filename, args=args, *kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 784, in forward
        return __self.invoke(__cmd, *args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 763, in invoke
        return __callback(*args, **kwargs)
    TypeError: main_run() got multiple values for argument 'target'
    

    After a bit of googling, I found this: https://github.com/rytilahti/python-eq3bt/issues/30

    The default install brought in click==8.0.1. I had to downgrade to 7.1.2 to get past the error.

    opened by jsphweid 1
  • better caching, init option, v0.6 update

    better caching, init option, v0.6 update

    • Better caching using @st.experimental_singleton

      • argument parsing and API class initialization should only happen once (across sessions) on initial load.
      • Should be way better compared to before, which ran those functions after each page refresh 😱 Might be optimized further later... need to refactor cli-param parsing/init logic.
    • Added --init option to initialize torchserve on start, as per this issue: https://github.com/cceyda/torchserve-dashboard/issues/16 Use it like: torchserve-dashboard --init. Although you still have to load the dashboard screen once for it to actually start!

    • Update to match changes in torchserve v0.6

      • there seems to be only one update to ManagementAPI in v0.6 https://github.com/pytorch/serve/pull/1421 which adds ?customized=true option to return custom_metadata in model details. Although the feature seems to be buggy for old .mar files not implementing it. (tested on: https://github.com/pytorch/serve/blob/master/frontend/archive/src/test/resources/models/noop-customized.mar)

      Anyway I added a checkbox (defaulted to False) to return custom_metadata if needed.

    opened by cceyda 0
  • update streamlit version to v1.11.1

    update streamlit version to v1.11.1

    Update the streamlit version to include the v1.11.1 security update, although torchserve-dashboard isn't using any custom components and is therefore not affected by it.

    opened by cceyda 0
  • Update to 0.5.0

    Update to 0.5.0

    update torchserve:

    • v0.5
    • add aws encrypted model feature
    • add log-config to options

    update streamlit:

    • v1.2.0 -> drop beta_ prefix
    • drop python 3.6

    https://github.com/cceyda/torchserve-dashboard/issues/13

    opened by cceyda 0
  • Update to v0.4

    Update to v0.4

    • Add workflow management endpoints (untested)
    • Add version check
    • Refactor api.py (remove streamlit)

    Closes: https://github.com/cceyda/torchserve-dashboard/issues/2

    opened by cceyda 0
  • Add workflow Management API

    Add workflow Management API

    Self-todo ETA: june 3rd https://github.com/pytorch/serve/tree/release_0.4.0/examples/Workflows https://github.com/pytorch/serve/blob/release_0.4.0/docs/workflow_management_api.md

    opened by cceyda 0
  • Can't start torchserve-dashboard

    Can't start torchserve-dashboard

    I'm getting this error while on the start

    Traceback (most recent call last):
      File "/home/kavan/.local/bin/torchserve-dashboard", line 5, in <module>
        from torchserve_dashboard.cli import main
      File "/home/kavan/.local/lib/python3.8/site-packages/torchserve_dashboard/cli.py", line 2, in <module>
        import streamlit.cli
      File "/home/kavan/.local/lib/python3.8/site-packages/streamlit/__init__.py", line 49, in <module>
        from streamlit.proto.RootContainer_pb2 import RootContainer
      File "/home/kavan/.local/lib/python3.8/site-packages/streamlit/proto/RootContainer_pb2.py", line 22, in <module>
        create_key=_descriptor._internal_create_key,
    AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
    

    Btw thanks for this awesome lib.

    opened by Kavan72 1
  • Explanations API

    Explanations API

    Mentioned in https://github.com/cceyda/torchserve-dashboard/issues/1 Model interpretability with Captum https://github.com/pytorch/serve/blob/master/captum/Captum_visualization_for_bert.ipynb

    This would be good to add if we end up adding an InferenceAPI

    opened by cceyda 0
  • Inference API

    Inference API

    Moving the discussion from https://github.com/cceyda/torchserve-dashboard/issues/1#issuecomment-863911194 to here

    Current challenges blocking this:

    • There is no way to know the format of the expected request/response, especially for custom handlers.

    (I prefer not to do model_name->type matching manually.)

    If a request/response schema is added to the returned OpenAPI definitions, I can probably auto-generate something like SwaggerUI.

    opened by cceyda 0
  • Docker container

    Docker container

    I think it would be great for users and for developers to be able to easily share their dashboard or run it in production without deploying via Streamlit. I could add a simple Dockerfile wrapping everything up into a container.

    Torchserve-Dashboard would be to Torchserve what MongoExpress is to MongoDB. Thoughts?

    opened by FlorianMF 3
Releases(v0.6.0)
  • v0.6.0(Aug 1, 2022)

    What's Changed

    • update streamlit version to v1.11.1 by @cceyda in https://github.com/cceyda/torchserve-dashboard/pull/18
    • Better caching using @st.experimental_singleton https://github.com/cceyda/torchserve-dashboard/pull/19
    • Added --init option to initialize torchserve on start https://github.com/cceyda/torchserve-dashboard/pull/19
    • Update to match changes in torchserve v0.6 by @cceyda in https://github.com/cceyda/torchserve-dashboard/pull/19

    Full Changelog: https://github.com/cceyda/torchserve-dashboard/compare/v0.5.0...v0.6.0

    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Nov 30, 2021)

    Update to v0.5, adding support for encrypted model serving (not tested). Update streamlit to v1+

    What's Changed

    • Improvements of package setup logic by @FlorianMF in https://github.com/cceyda/torchserve-dashboard/pull/5
    • WIP: Add type annotations by @FlorianMF in https://github.com/cceyda/torchserve-dashboard/pull/7
    • Update to 0.5.0 by @cceyda in https://github.com/cceyda/torchserve-dashboard/pull/15

    New Contributors

    • @FlorianMF made their first contribution in https://github.com/cceyda/torchserve-dashboard/pull/5

    Full Changelog: https://github.com/cceyda/torchserve-dashboard/compare/v0.4.0...v0.5.0

    Source code(tar.gz)
    Source code(zip)
  • v0.3.3(Jun 12, 2021)

  • v0.3.2(May 9, 2021)

  • v0.3.1(May 9, 2021)

  • v0.2.5(Feb 16, 2021)

  • v0.2.4(Feb 16, 2021)

  • v0.2.3(Oct 15, 2020)

  • v0.2.2(Oct 13, 2020)

  • v0.2.0(Oct 13, 2020)

Owner
Ceyda Cinarel
AI researcher & engineer~ all things NLP 🤖 generative models ★ like trying out new libraries & tools ♥ Python