A multi-tenant, multi-client, scalable product categorization demo stack

Overview

Better Categories 4All: a multi-tenant, multi-client product categorization stack

The steps to reproduce training and inference are at the end of this file; apologies for the long explanation.

(Example workflow diagram)

Problem scope

We want to build a full product categorization stack serving multiple clients. For each client and each product, we want to find the 5 most suitable categories.

Project structure

The project is split into two layers:

  • ML layer: the Python package for training and serving models. It is a pipenv-based project: the Pipfile includes all required dependencies, and the environment generated by pipenv is used to run training/inference as well as the unit tests. The code is generic across all clients.
  • Orchestration layer: the Airflow DAGs for training and prediction. Each client has its own training DAG and its own prediction DAG. These DAGs use the Airflow BashOperator to execute training and prediction inside the pipenv environment.

(Diagram of the two layers: img_1.png)

Why one DAG per client instead of a single DAG for all clients?

We could have a single DAG that trains all clients, where each client has its own training task inside that DAG. I chose instead to build a separate DAG for each client, for several reasons:

  • In my past experience, individual clients sometimes have problems with their data, and a DAG per client is more practical for day-to-day monitoring.
  • New clients may arrive and others may leave; we would end up with a single DAG that keeps gaining and losing tasks, which goes against Airflow best practices.
  • It makes more sense to have one failed DAG and 99 successful DAGs than a single DAG that fails every day because one random client's training failed.

Training

In this part we will train a classification model for each client.

Training package

The package categories_classification includes a training function, train_model. It takes the following inputs:

  • client_id: the id of the client in the training dataset
  • features: a list of feature names to use in training
  • model_params: a dict of params to be passed to the model's Python class
  • training_date: the execution date of the training, used to track the training run

The chosen model is scikit-learn's random forest implementation, sklearn.ensemble.RandomForestClassifier. For the sake of simplicity, we did not fine-tune the model parameters, but optimal params can be set in the config.
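For illustration, this is how a model_params dict from the config maps onto the scikit-learn estimator; the parameter values below are assumptions, not the project's defaults:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical params as they could appear in a client's config
model_params = {"n_estimators": 50, "max_depth": 8, "random_state": 0}

# The dict is unpacked directly into the model class
clf = RandomForestClassifier(**model_params)
```
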

In addition to the train_model function, a CLI binary is provided to run training directly from the command line. The trainer command runs the training:

pipenv run python categories_classification_cli.py trainer --help

Usage: categories_classification_cli.py trainer [OPTIONS]

Options:
  --client_id TEXT      The id of the client.  [required]
  --features TEXT       The list of input features.  [required]
  --model_params TEXT   Params to be passed to model.  [required]
  --training_date TEXT  The training date.  [required]
  --help                Show this message and exit.

Data and model paths

All data is stored under a common base path read from the environment variable DATA_PREFIX (default: ./data). Given a client id, training data is loaded from $DATA_PREFIX/train/client_id=<client_id>/data_train.csv.gz.
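The path convention can be sketched as a small helper, assuming the Hive-style client_id=<client_id> partition layout described above:

```python
import os
from pathlib import Path

# Base path comes from the environment, falling back to ./data
DATA_PREFIX = os.environ.get("DATA_PREFIX", "./data")

def training_data_path(client_id: str) -> Path:
    # $DATA_PREFIX/train/client_id=<client_id>/data_train.csv.gz
    return Path(DATA_PREFIX) / "train" / f"client_id={client_id}" / "data_train.csv.gz"
```
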

Splitting data

Before training, the data is split into a training set and a test set. The training set is used to fit the model, while the test set is used to evaluate the model after training. The evaluation score is logged.
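A minimal sketch of this split-and-evaluate step, using scikit-learn's train_test_split on toy data (the data, split ratio, and params are illustrative):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = [[i % 2, i % 3] for i in range(60)]  # toy features
y = [i % 2 for i in range(60)]           # toy labels

# Hold out 20% of the data for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X_train, y_train)
score = clf.score(X_test, y_test)        # accuracy on the held-out test set
print(f"test accuracy: {score:.2f}")     # in the project this score is logged
```
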

Model tracking and versioning

The whole training event is tracked in Mlflow as a training run. Each client has its own experiment and its own model name following the convention "<client_id>_model". The tracking process also saves metrics and model parameters in the same run's metadata.

Finally, the model is saved in the Mlflow Registry under the name "<client_id>_model". Saving the model creates a new model version in Mlflow, as the same model may have multiple versions.
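A hedged sketch of the tracking step: the naming helper follows the "<client_id>_model" convention, while the commented-out calls are standard Mlflow APIs shown for orientation only (the run layout is illustrative):

```python
def model_name(client_id: str) -> str:
    # Naming convention used by this project: "<client_id>_model"
    return f"{client_id}_model"

# With mlflow installed, the tracking step looks roughly like:
# import mlflow, mlflow.sklearn
# mlflow.set_experiment(client_id)
# with mlflow.start_run(run_name=training_date):
#     mlflow.log_params(model_params)
#     mlflow.log_metric("test_score", score)
#     mlflow.sklearn.log_model(clf, "model",
#                              registered_model_name=model_name(client_id))
```
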

Prediction

In this part, we will predict product categories using the previously trained models.

Prediction package

The package categories_classification includes a prediction function, predict_categories. It takes the following inputs:

  • client_id: the id of the client in the inference dataset
  • inference_date: the inference execution date, used to version the output categories

The prediction is done with Spark so that it can scale to big datasets. The prediction dataset is loaded into a Spark DataFrame. We use Mlflow to find the latest model version and load that model. The model is then broadcast in Spark so that it is available on the Spark workers. To apply the model to the prediction dataset, I use a Spark 3.0 experimental feature called mapInPandas. This DataFrame method maps an iterator of batches (pandas DataFrames) through a prediction user-defined function that also outputs pandas DataFrames, relying on PyArrow's efficient data transfer between the Spark JVM and the Python pandas runtime.

Prediction function

The advantage of mapInPandas over a classic pandas_udf is that we can output more rows than we receive as input. Thus, for each product, we can output the 5 predicted categories with their probabilities, ranked from 0 to 4. The predicted labels are then persisted to the filesystem as a parquet dataset.
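As a sketch of the shape such a prediction function can take (the feature and column names here are illustrative, not the project's actual schema): it consumes an iterator of pandas DataFrames, as mapInPandas provides, and yields output batches with 5 rows per input product.

```python
import pandas as pd

FEATURES = ["f0", "f1"]  # illustrative feature names (assumption)
TOP_K = 5

def predict_top_categories(batches, model):
    """mapInPandas-style function: consumes an iterator of pandas DataFrames
    and yields DataFrames with TOP_K rows per product, each carrying a
    category, its probability, and a rank from 0 to TOP_K - 1.
    `model` is any classifier exposing predict_proba and classes_."""
    for pdf in batches:
        probs = model.predict_proba(pdf[FEATURES])
        rows = []
        for product_id, product_probs in zip(pdf["product_id"], probs):
            ranked = sorted(zip(model.classes_, product_probs),
                            key=lambda cp: cp[1], reverse=True)[:TOP_K]
            rows.extend({"product_id": product_id, "category": cat,
                         "probability": prob, "rank": rank}
                        for rank, (cat, prob) in enumerate(ranked))
        yield pd.DataFrame(rows)

# In Spark (sketch): sdf.mapInPandas(lambda it: predict_top_categories(it, bc_model.value), schema=...)
```
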

Model version retrieval

Before loading the model, we use Mlflow to get its latest version. In a production system, we would probably want to push the model to staging, verify its metrics, or validate it before promoting it to production. Assuming here that we always work within the same stage, we use MlflowClient to connect to the Mlflow Registry and get the latest model version. That version is then used to build the latest model URI.
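Mlflow model URIs follow the models:/<name>/<version> scheme, so the lookup can be sketched as follows; the registry calls are shown as comments, and the client name and version are illustrative:

```python
def latest_model_uri(name: str, version: int) -> str:
    # Mlflow Registry URIs follow the "models:/<name>/<version>" scheme
    return f"models:/{name}/{version}"

# With a live registry (sketch; assumes mlflow is installed):
# from mlflow.tracking import MlflowClient
# version = MlflowClient().get_latest_versions("client_1_model")[0].version
# model = mlflow.pyfunc.load_model(latest_model_uri("client_1_model", version))
```
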

Reproducing training and inference

Pipenv initialization

First, check that you have pipenv installed locally; otherwise install it with pip install pipenv.

Then you need to initialize the pipenv environment with the following command:

make init-pipenv

This may take some time as it installs all required dependencies. Once done, you can run the linter (pylint) and unit tests:

make lint
make unit-tests

Airflow/Mlflow initialization

You also need to initialize the local Airflow stack: build a custom Airflow docker image that includes the pipenv environment, build the mlflow image, and initialize the Airflow database.

make init-airflow

Generate DAGs

Airflow DAGs need to be generated from the config file conf/clients_config.yaml. It is already populated for the 10 example client datasets, but you can add new clients or change the existing configuration. For each client, you must include the list of features and optional model params.

Then, you can generate DAGs using the following command:

make generate-dags

This will call the script scripts/generate_dags.py, which will:

  • load the training and inference DAG templates from dags_templates; they are jinja2 templates
  • load the config from conf/clients_config.yaml
  • render a DAG for each client and each template
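The generation loop can be sketched roughly as follows. This uses the stdlib string.Template as a dependency-free stand-in for the project's jinja2 templates, and the config shape is assumed from the description above:

```python
from string import Template  # stand-in for jinja2, to keep the sketch dependency-free

# Shape assumed from conf/clients_config.yaml: features plus optional model params
clients_config = {
    "client_1": {"features": ["f0", "f1"], "model_params": {"n_estimators": 100}},
    "client_2": {"features": ["f0"], "model_params": {}},
}

# A stand-in for one of the templates in dags_templates
train_template = Template('dag = DAG(dag_id="${client_id}_training", ...)')

for client_id in clients_config:
    # One rendered file per client per template; the real script writes these
    # under the Airflow dags folder
    rendered = train_template.substitute(client_id=client_id)
    print(rendered)
```
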

Start local Airflow

You can start local Airflow with the following command:

make start-airflow

Once all services have started, you can open your browser and visit:

  • the Airflow UI at http://localhost:8080
  • the Mlflow UI at http://localhost:5000

Run training and inference

In Airflow, all DAGs are disabled by default. To run training for a client, enable its training DAG; this immediately triggers the training.

Once the model is in Mlflow, you can enable the inference DAG and it will immediately trigger a prediction.

Inspect result

To inspect the results, run a local Jupyter:

make run-jupyter

Then open the notebook inspect_inference_result.ipynb and run it to check the prediction output.
