
Inferoxy


What is it?

Inferoxy is a service for quickly deploying and using dockerized Computer Vision models. It is the core of EORA's Computer Vision platform, Vision Hub, which runs on top of AWS EKS.

Why use it?

You should use it if:

  • You want to simplify deploying Computer Vision models with the appropriate Data Science stack to production: all you need to do is build a Docker image of your model, including any pre- and post-processing steps, and push it to an accessible registry (see the sketch after this list)
  • You have only one machine or cluster for inference (CPU/GPU)
  • You want automatic batching for a multi-GPU/multi-node setup
  • You want model versioning
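
A minimal sketch of the first point, assuming a project directory with a Dockerfile that wraps your model and its pre-/post-processing steps (the image name and registry below are placeholders, not Inferoxy-specific values):

# Build the model image from the project directory containing your Dockerfile
docker build -t registry.example.com/models/my-model:v1 .
# Log in to your registry and push the image so Inferoxy can later pull it
docker login registry.example.com
docker push registry.example.com/models/my-model:v1

The resulting image address is what you later reference in models.yaml (see the local-run example below).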

Architecture


Inferoxy is built around a message broker pattern.

  • Roughly speaking, it accepts user requests through different interfaces which we call "bridges". Multiple bridges can run simultaneously; the currently supported bridges are REST API, gRPC, and ZeroMQ (a quick reachability check is sketched after this list)
  • The requests are carefully split into batches and processed on a single multi-GPU machine or a multi-node cluster
  • The models to be deployed are managed through the Model Manager, which communicates with Redis to store and retrieve model information such as the Docker image URL, the maximum batch size, etc.
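
Once Inferoxy is running (see the run command later in this README), a quick reachability check of a bridge is simply probing its published port. The sketch below assumes that port 8000 belongs to the REST bridge, which is an assumption on our part; the actual port-to-bridge mapping and request routes are described in the documentation:

# Probe the REST bridge port published by the run command below (assumption: 8000 is the REST bridge)
curl -i http://localhost:8000/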

Batching


One of Inferoxy's core features is its batching mechanism.

  • Batch processing takes into account that different models can use different batch sizes and that some models process a series of batches from a specific user, e.g. for video processing tasks. The latter are called "stateful" models, while models that don't depend on user state are called "stateless" models (a hypothetical stateful configuration is sketched after this list)
  • Multiple copies of the same model can run on different machines, while only one copy can run on a given GPU device. So, to increase model efficiency, it's recommended to set the batch size for models as high as possible
  • A user of a stateful model reserves a whole copy of the model and releases it when their task is finished
  • Users of stateless models can use the same copy of the model simultaneously
  • NumPy tensors of RGB images, together with their metadata, are sent to the models over ZeroMQ, and the results are read back from a ZeroMQ socket
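
The stateless/stateful distinction is configured per model via the stateless field of models.yaml (the file format is shown in the local-run example below). A stateful entry might look like the following; the model name, image address, and values are illustrative assumptions rather than a real model:

# Hypothetical stateful model (e.g. an object tracker for video); field names follow the models.yaml example below
tracker:
  address: registry.example.com/models/tracker:v1
  batch_size: 32
  run_on_gpu: True
  stateless: False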

Cluster management


Cluster management consists of keeping track of the running copies of the models, load analysis, health checking, and alerting.

Requirements

You can run Inferoxy locally on a single machine or on a k8s cluster. To run Inferoxy, you need at least 4 GB of RAM and a CPU or GPU device, depending on your speed/cost trade-off.

Basic commands

Local run

To run Inferoxy locally, use the Inferoxy Docker image. You can find the latest version here.

docker pull public.registry.visionhub.ru/inferoxy:v1.0.4
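
The run command at the end of this section references an INFEROXY_VERSION environment variable, so it is convenient to set it to the tag you just pulled:

export INFEROXY_VERSION=v1.0.4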

After the image is pulled, we need to create a basic configuration using a .env file:

# .env
CLOUD_CLIENT=docker
TASK_MANAGER_DOCKER_CONFIG_NETWORK=inferoxy
TASK_MANAGER_DOCKER_CONFIG_REGISTRY=
TASK_MANAGER_DOCKER_CONFIG_LOGIN=
TASK_MANAGER_DOCKER_CONFIG_PASSWORD=
MODEL_STORAGE_DATABASE_HOST=redis
MODEL_STORAGE_DATABASE_PORT=6379
MODEL_STORAGE_DATABASE_NUMBER=0
LOGGING_LEVEL=INFO

The next step is to create the inferoxy Docker network:

docker network create inferoxy

Now we should run Redis in this network. Redis is needed to store information about your models.

docker run --network inferoxy --name redis redis:latest 
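
The command above keeps Redis in the foreground; if you prefer to keep your terminal free, the same container can be started detached by adding the -d flag:

docker run -d --network inferoxy --name redis redis:latest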

Create a models.yaml file with a simple set of models. You can read more about models.yaml in the documentation:

stub:
  address: public.registry.visionhub.ru/models/stub:v5
  batch_size: 256
  run_on_gpu: False
  stateless: True

Now we can start Inferoxy:

docker run --env-file .env \
	-v /var/run/docker.sock:/var/run/docker.sock \
	-p 7787:7787 -p 7788:7788 -p 8000:8000 -p 8698:8698 \
	--name inferoxy --rm \
	--network inferoxy \
	-v $(pwd)/models.yaml:/etc/inferoxy/models.yaml \
	public.registry.visionhub.ru/inferoxy:${INFEROXY_VERSION}
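
Once the container is up, you can follow its logs to check the startup output (the container was named inferoxy above):

docker logs -f inferoxy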

Documentation

You can find the full documentation here.

Discord

Join our community on the Discord server to discuss Inferoxy usage and development.
