Luminaire is a Python package that provides ML-driven solutions for monitoring time series data.

Overview

Luminaire

A hands-off Anomaly Detection Library




What is Luminaire

Luminaire is a Python package that provides ML-driven solutions for monitoring time series data. It offers several anomaly detection and forecasting capabilities that incorporate correlational and seasonal patterns as well as uncontrollable variations in the data over time.

Quick Start

Install Luminaire from PyPI using pip

pip install luminaire

Import the luminaire module in Python

import luminaire

Check out the Luminaire documentation for a detailed description of methods and usage.

Time Series Outlier Detection Workflow

Luminaire Flow

The Luminaire outlier detection workflow can be divided into three major components:

Data Preprocessing and Profiling Component

This component can be called to prepare a time series prior to training an anomaly detection model on it. This step applies a number of methods that make anomaly detection more accurate and reliable, including missing data imputation, identifying and removing recent outliers from training data, necessary mathematical transformations, and data truncation based on recent change points. It also generates profiling information (historical change points, trend changes, etc.) that are considered in the training process.

Profiling information for time series data can be used to monitor data drift and irregular long-term swings.
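For illustration, a minimal profiling call might look like the sketch below. The DataExploration class and its profile method are referenced elsewhere on this page; the fill_rate option and the CSV path are assumptions and may differ in your Luminaire version.

# A minimal sketch of the preprocessing/profiling step (assumptions noted above).
import pandas as pd
from luminaire.exploration.data_exploration import DataExploration

# A daily time series with a datetime index and a single value column (hypothetical file).
data = pd.read_csv("daily_metric.csv", index_col="index", parse_dates=True)

de_obj = DataExploration(freq='D', fill_rate=0.9)  # fill_rate is an assumed option
training_data, pre_prc = de_obj.profile(df=data)

# pre_prc holds the profiling information (change points, trend changes, success flag)
# that the downstream training step consumes.
print(pre_prc)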

Modeling Component

This component performs time series model training based on either a user-specified configuration or an optimized configuration (see Luminaire hyperparameter optimization). Luminaire model training is integrated with different structural time series models as well as filtering-based models. See Luminaire outlier detection for more information.

The Luminaire modeling step should be called after the data preprocessing and profiling step, which performs the necessary data preparation before training.
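As a concrete illustration, training a structural model on the profiled output could look roughly like the following sketch. The LADStructuralModel class path, the hyperparameter names and the train/score signatures shown here are assumptions based on the documented workflow rather than a verified example; consult the Luminaire outlier detection docs for the exact API.

# A rough sketch of the modeling step, run on the output of the profiling step above.
# The hyperparameter names and the train/score signatures are assumptions.
from luminaire.model.lad_structural import LADStructuralModel

hyper_params = {
    "include_holidays_exog": 0,
    "is_log_transformed": 0,
    "max_ft_freq": 2,
    "p": 5,
    "q": 1,
}

lad_struct_obj = LADStructuralModel(hyper_params=hyper_params, freq='D')

# training_data and pre_prc come from DataExploration.profile() in the previous sketch.
success, model_timestamp, trained_model = lad_struct_obj.train(data=training_data, **pre_prc)

# Score a new observation (value, timestamp) against the trained model.
print(trained_model.score(1023.0, '2021-06-01'))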

Configuration Optimization Component

Luminaire's integration with configuration optimization enables a hands-off anomaly detection process where the user needs to provide very minimal configuration for monitoring any type of time series data. This step can be combined with the preprocessing and modeling for any auto-configured anomaly detection use case. See fully automatic outlier detection for a detailed walkthrough.
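A fully auto-configured run might therefore look roughly like the sketch below. The luminaire.optimization module is referenced in this repository's own linter output, but the HyperparameterOptimization class name, its run() method and the return format are assumptions taken from the documented walkthrough.

# A hedged sketch of the auto-configuration path (class/method names are assumptions).
from luminaire.optimization.hyperparameter_optimization import HyperparameterOptimization

hopt_obj = HyperparameterOptimization(freq='D')
opt_config = hopt_obj.run(data=data)  # data: the same pandas time series used for profiling

# The optimized configuration can then replace a hand-written one when calling the
# preprocessing and modeling components.
print(opt_config)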

Anomaly Detection for High Frequency Time Series

Luminaire can also monitor a set of data points over windows of time instead of tracking individual data points. This approach is well-suited for streaming use cases where sustained fluctuations are of greater concern than individual fluctuations. See anomaly detection for streaming data for detailed information.
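A window-based setup could be sketched roughly as follows. WindowDensityModel and WindowDensityHyperParams appear elsewhere on this page, but the frequency value, the way the hyperparameters are passed, and the train/score signatures here are all assumptions; see the streaming documentation for the authoritative workflow.

# A hedged sketch of window-based monitoring for high-frequency/streaming data.
# Everything beyond the two class names is an assumption.
import pandas as pd
from luminaire.model.window_density import WindowDensityHyperParams, WindowDensityModel

streaming_data = pd.read_csv("streaming_metric.csv", index_col=0, parse_dates=True)  # hypothetical file

hyper_params = WindowDensityHyperParams(freq='10T')              # assumed: 10-minute windows
wdm_obj = WindowDensityModel(hyper_params=hyper_params._params)  # assumed accessor for the config dict

# Train over historical windows, then score the most recent window rather than
# individual points, so sustained shifts are flagged instead of single spikes.
success, training_end, model = wdm_obj.train(data=streaming_data)
recent_window = streaming_data.tail(144)                         # hypothetical most recent window
print(model.score(recent_window))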

Contributing

Want to help improve Luminaire? Check out our contributing documentation.

Citing

Please cite the following article if Luminaire is used for any research purpose or scientific publication:

Chakraborty, S., Shah, S., Soltani, K., Swigart, A., Yang, L., & Buckingham, K. (2020, December). Building an Automated and Self-Aware Anomaly Detection System. In 2020 IEEE International Conference on Big Data (Big Data) (pp. 1465-1475). IEEE. (arxiv link)

Other Useful Resources

  • Chakraborty, S., Shah, S., Soltani, K., & Swigart, A. (2019, December). Root Cause Detection Among Anomalous Time Series Using Temporal State Alignment. In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA) (pp. 523-528). IEEE. (arxiv link)

Blogs

Development Team

Luminaire is developed and maintained by Sayan Chakraborty, Smit Shah, Kiumars Soltani, Luyao Yang, Anna Swigart, Kyle Buckingham and many other contributors from the Zillow Group A.I. team.

Comments
  • bug #112: window size identification fixed for trend change detection

    The current approach for trend detection in the Data exploration module (/luminaire/exploration/data_exploration.py) was enabled only for daily ('D') and hourly ('H') time series. I added a fix to trigger the computation of window sizes for weekly ('W') frequency, using a value of 4. Also, I added a fix to support all the other frequencies.

    opened by papaemman 9
  • Unable to call score function, error: "setting an array element with a sequence"

    We ran into this problem when we tried to call the score function in the WindowDensity API (Luminaire library). The error message was "setting an array element with a sequence". We searched online and asked other practitioners for advice but still failed to solve it. Can anybody help us with it? Thanks in advance!

    Luminaire Reference: https://zillow.github.io/luminaire/_modules/luminaire/model/window_density.html#WindowDensityHyperParams


    opened by vickeywangvw 7
  • DataExploration.profile results in 'ErrorMessage': "unsupported operand type(s) for -: 'int' and 'NoneType'"

    Hey all!

    I'm trying to use the package but I'm getting that message.

    import luminaire
    import pandas as pd
    
    from luminaire.exploration.data_exploration import DataExploration
    
    past = pd.read_csv("dataset.csv").set_index("index")
    
    de = DataExploration(freq='D')
    
    past_prof, profile = de.profile(df=past)
    #(None,
    #{'success': False,
    # 'ErrorMessage': "unsupported operand type(s) for -: 'int' and 'NoneType'"})
    


    Is that anything data-related?

    Here are my infos:

    • Python 3.7.10
    • requirements.txt: see below, result from pip install -U jupyterlab numpy pandas matplotlib luminaire pip setuptools pyarrow

    Thanks!


    anyio==3.6.1
    appnope==0.1.3
    argon2-cffi==21.3.0
    argon2-cffi-bindings==21.2.0
    attrs==22.1.0
    Babel==2.10.3
    backcall==0.2.0
    beautifulsoup4==4.11.1
    bleach==5.0.1
    boto3==1.24.76
    botocore==1.27.76
    certifi==2022.9.14
    cffi==1.15.1
    changepy==0.3.1
    charset-normalizer==2.1.1
    cloudpickle==2.2.0
    cycler==0.11.0
    debugpy==1.6.3
    decorator==5.1.1
    defusedxml==0.7.1
    entrypoints==0.4
    fastjsonschema==2.16.2
    fonttools==4.37.2
    future==0.18.2
    hyperopt==0.2.7
    idna==3.4
    importlib-metadata==4.12.0
    importlib-resources==5.9.0
    ipykernel==6.15.3
    ipython==7.34.0
    ipython-genutils==0.2.0
    jedi==0.18.1
    Jinja2==3.1.2
    jmespath==1.0.1
    joblib==1.2.0
    json5==0.9.10
    jsonschema==4.16.0
    jupyter-core==4.11.1
    jupyter-server==1.18.1
    jupyter_client==7.3.5
    jupyterlab==3.4.7
    jupyterlab-pygments==0.2.2
    jupyterlab_server==2.15.1
    kiwisolver==1.4.4
    luminaire==0.4.0
    lxml==4.9.1
    MarkupSafe==2.1.1
    matplotlib==3.5.3
    matplotlib-inline==0.1.6
    mistune==2.0.4
    nbclassic==0.4.3
    nbclient==0.6.8
    nbconvert==7.0.0
    nbformat==5.5.0
    nest-asyncio==1.5.5
    networkx==2.6.3
    notebook==6.4.12
    notebook-shim==0.1.0
    numpy==1.21.6
    packaging==21.3
    pandas==1.3.5
    pandas-redshift==2.0.5
    pandocfilters==1.5.0
    parso==0.8.3
    patsy==0.5.2
    pexpect==4.8.0
    pickleshare==0.7.5
    Pillow==9.2.0
    pkgutil_resolve_name==1.3.10
    prometheus-client==0.14.1
    prompt-toolkit==3.0.31
    psutil==5.9.2
    psycopg2-binary==2.9.3
    ptyprocess==0.7.0
    py4j==0.10.9.7
    pyarrow==9.0.0
    pycparser==2.21
    Pygments==2.13.0
    pykalman==0.9.5
    pyparsing==3.0.9
    pyrsistent==0.18.1
    python-dateutil==2.8.2
    pytz==2022.2.1
    pyzmq==24.0.0
    requests==2.28.1
    s3transfer==0.6.0
    scikit-learn==1.0.2
    scipy==1.7.3
    Send2Trash==1.8.0
    six==1.16.0
    sniffio==1.3.0
    soupsieve==2.3.2.post1
    statsmodels==0.13.2
    terminado==0.15.0
    threadpoolctl==3.1.0
    tinycss2==1.1.1
    tomli==2.0.1
    tornado==6.2
    tqdm==4.64.1
    traitlets==5.4.0
    typing_extensions==4.3.0
    urllib3==1.26.12
    wcwidth==0.2.5
    webencodings==0.5.1
    websocket-client==1.4.1
    zipp==3.8.1
    
    opened by paulochf 5
  • Related to issue #112: Exploration failure for weekly data

    The current approach for detecting trend turning points was enabled only for daily and hourly time series. Added a quick fix to trigger the computation of window sizes for other frequency types.

    opened by sayanchk 5
  • Diff order fix

    Corrected an issue where the diff order was hard-coded as 2 in lad_filtering. Also added test_lad_filtering_scoring_diff_order to test_models, which uses the last data points, takes the appropriate diff, and then compares to the adjusted actual to make sure the appropriate diff order is applied.

    Related Issue: #120 @sayanchk for review

    opened by pdurham2 4
  • Failproof project setup

    I guess Python 3.7 and later are considered not supported (see https://github.com/zillow/luminaire/runs/1946332964)

    On Python 3.6 the pyramid-arima wheel build will fail without numpy installed (this does not affect installation of the dependency - it just produces log noise), but it looks like numpy is not actually required as a dependency - see https://github.com/zillow/luminaire/pull/74

    P.S. https://pip.pypa.io/en/latest/reference/pip_install/#controlling-setup-requires There is a warning about how dangerous it is to use this keyword, but I guess it's OK for such a simple case. It's also used in https://github.com/zillow/luminaire/pull/77/

    opened by Aristarhys 4
  • Switch to sphinx-material theme

    No actual content change in the documentation.

    • Replaced the incomplete sphinx theme with a more polished one, along with corresponding stylesheets
    • Fixed some indentation issues in the docs
    • Shuffled files around: removed dedicated TOC pages and added them all in the index instead

    Screenshot of the home page (image attached).

    Here's a second screenshot that shows syntax highlighting and the footer (closes #40); image attached.

    @sayanchk you might want to look into shortening the page titles for the API ref (or just name them after the modules)

    opened by snazzyfox 4
  • Unable to use data exploration: "The training data observed continuous missing data near the end. Require more stable data to train"

    I have tried to use simple data and it's giving this issue.

    Here is the notebook https://colab.research.google.com/drive/19muQTHoWxdh5fC1DQE2FpYu763fn-0zC?usp=sharing

    opened by eaglewarrior 3
  • Force linter to fail ci check

    exit 1 will be called only if the first flake8 invocation fails, returning a non-zero code from the script block immediately.

    Before, the last command of the script block was evaluated, and the second flake8 invocation always returned 0 because of the flag passed.

    bug meta 
    opened by Aristarhys 3
  • Add test runner and linter support for setup.py

    I think it would be worth having local means of running tests/lint even if CI is working perfectly (python setup.py test, python setup.py flake8). I used the config from https://github.com/zillow/luminaire/blob/master/.github/workflows/python-app.yml#L42 for flake8. Flake8 is going to remove this integration at some point, but I guess it's OK for now (we can use the last version without this warning - it's not that old).

    https://gitlab.com/pycqa/flake8/-/issues/544

    opened by Aristarhys 3
  • Missing data or second level

    Hi there,

    I have a question rather than a specific issue. Can this library work with missing points/dates during the training stage? And what about anomaly detection on second-level data? I would appreciate your response.

    question 
    opened by soroosh-rz 2
  • diff_order seems to be hard coded to be diff order of 2

    When diff_order is applied in lad_filtering.py, the value passed to np.diff is fixed as 2. Is this intended or should diff_order be passed instead?

    if diff_order:
      actual_previous_per_diff = [interpolated_actual_previous[-1]] \
          if diff_order == 1 else [interpolated_actual_previous[-1], np.diff(interpolated_actual_previous)[0]]
      seq_tail = interpolated_actual_previous + [interpolated_actual]
      interpolated_actual = np.diff(seq_tail, 2)[-1]
    
    bug 
    opened by pdurham2 2
  • Optimize _detect_window_size within DataExploration for weekly data

    _detect_window_size is currently not optimized to detect the most frequent periodic pattern in weekly time series data. This issue needs some investigation on that front. Reference: https://github.com/zillow/luminaire/pull/114

    Note: This method is a dependency for the Structural, Filtering and Window based models. Therefore, any change in this method requires testing on all existing supported (or rather optimized) time series data types (daily, hourly and even higher frequencies). Please refer to the datasets for testing.

    help wanted 
    opened by sayanchk 0
  • Fix the repo with all the linter-based warnings

    The repo has a linter running, but there are quite a few warnings which are not breaking but need to be resolved.

    Example pipeline: https://github.com/zillow/luminaire/runs/5104474237?check_suite_focus=true

    11    C901 'DataExploration._detrender' is too complex (13)
    7     E122 continuation line missing indentation or outdented
    12    E127 continuation line over-indented for visual indent
    29    E128 continuation line under-indented for visual indent
    2     E203 whitespace before ':'
    2     E225 missing whitespace around operator
    2     E231 missing whitespace after ','
    3     E266 too many leading '#' for block comment
    22    E302 expected 2 blank lines, found 1
    10    E303 too many blank lines (2)
    9     E501 line too long (134 > 127 characters)
    1     E714 test for object identity should be 'is not'
    2     E722 do not use bare 'except'
    16    F401 'luminaire.optimization' imported but unused
    5     F403 'from luminaire.exploration.data_exploration import *' used; unable to detect undefined names
    34    F405 'DataExploration' may be undefined, or defined from star imports: luminaire.exploration.data_exploration
    1     F841 local variable 'e' is assigned to but never used
    1     W291 trailing whitespace
    3     W292 no newline at end of file
    4     W293 blank line contains whitespace
    1     W391 blank line at end of file
    
    bug 
    opened by shahsmit14 0
  • Extracting time series components dataframe

    Hello!

    Is there any way to extract the dataframes containing the decomposition of the time series? That is, one column for the trend, another for the seasonality, etc.

    Thanks

    question 
    opened by lventosa 1
  • Unable to profile data

    Hello, I have the following data frame (screenshot attached).

    I am calling it using imputed_data, pre_prc = de_obj.profile(hourly, impute_only=True)

    and getting the following error: {'success': False, 'ErrorMessage': "ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''"}

    I have been trying to figure it out, but to no avail. Any help would be much appreciated. Thanks!

    wontfix 
    opened by grechasneak 1
Releases (v0.4.2)
  • v0.4.2(Nov 23, 2022)

  • v0.4.1(Oct 7, 2022)

    Changes:

    • Leveraging https://pypi.org/project/bayescd/ instead of explicitly installing that same repo's code for CI/CD, since it's now available on PyPI
    • Adding bayescd as a dependency so users don't have to install it manually
    • Support for weekly frequency for data exploration

    Closes issues:

    • https://github.com/zillow/luminaire/issues/112
    • https://github.com/zillow/luminaire/issues/115
    • https://github.com/zillow/luminaire/pull/118
  • v0.4.0(Jul 26, 2022)

    • Support up to Python 3.10
    • Support for the latest versions of scipy, statsmodels and bayesian-changepoint-detection
    • Minor bug fixes and improvements in data exploration
    • Minor bug fixes and improvements in the structural model
    • Ability to perform model validation for under-fitting added to the structural model
    • Holiday list updated

    Note: We had to remove the bayesian-changepoint-detection package from the requirements due to deployment issues on PyPI (the latest version of scipy is not supported by the bayesian-changepoint-detection 0.2.dev1 release available on PyPI). If you are planning to use luminaire v0.4.0, you have to manually install a compatible version of bayesian-changepoint-detection from GitHub (provided by the community but not yet made available on PyPI) using the following command:

    pip install git+https://github.com/hildensia/bayesian_changepoint_detection@2dd95f5c1d028116899a842ccb3baa173f9d5be9#egg=bayesian-changepoint-detection

  • 0.4.0.dev3(Mar 17, 2022)

    Luminaire CD fixes from dev2

    Release notes from dev1:

    • Support up to Python 3.10
    • Support for the latest versions of scipy, statsmodels and bayesian-changepoint-detection
    • Minor bug fixes and improvements in data exploration
    • Minor bug fixes and improvements in the structural model
    • Ability to perform model validation for under-fitting added to the structural model
    • Holiday list updated

    Please read: We had to remove the bayesian-changepoint-detection package from the requirements due to deployment issues on PyPI (the latest version of scipy is not supported by bayesian-changepoint-detection 0.2.dev1). If you are planning to use this dev release of luminaire, you have to manually install a compatible version of bayesian-changepoint-detection from GitHub using the following command:

    pip install git+https://github.com/hildensia/bayesian_changepoint_detection@2dd95f5c1d028116899a842ccb3baa173f9d5be9#egg=bayesian-changepoint-detection
    
  • v0.4.0.dev2(Mar 8, 2022)

    Luminaire CD fixes from dev1

    Release notes from dev1:

    • Support up to Python 3.10
    • Support for the latest versions of scipy, statsmodels and bayesian-changepoint-detection
    • Minor bug fixes and improvements in data exploration
    • Minor bug fixes and improvements in the structural model
    • Ability to perform model validation for under-fitting added to the structural model
    • Holiday list updated
  • v0.4.0.dev1(Mar 8, 2022)

    • Support up to Python 3.10
    • Support for the latest versions of scipy, statsmodels and bayesian-changepoint-detection
    • Minor bug fixes and improvements in data exploration
    • Minor bug fixes and improvements in the structural model
    • Ability to perform model validation for under-fitting added to the structural model
    • Holiday list updated
  • v0.3.0(Dec 2, 2021)

    Update the requirements file list:

    • The existing requirements file specifies hard version requirements, which is not helpful to the user
    • We ran the existing test cases to see which recent versions of dependent packages can be supported, and updated the requirements file list based on that
  • v0.3.0.dev1(Nov 22, 2021)

    Support for Python 3.7 for build and deploy

    • Major dependent packages are kept the same
    • Just making the code compatible with Python 3.7 as Python 3.6 is reaching End of Life

    Note: Major package upgrade is planned for Q1-2022.

  • v0.2.4(Oct 5, 2021)

  • v0.2.3(Aug 12, 2021)

    Bug fixes:

    • Data reindexing during imputation fixed in the presence of missing/invalid data

    Scoring logic updates:

    • Model uncertainty is taken into consideration when making stationarity adjustments while scoring with WindowDensityModel
  • v0.2.2(Jul 27, 2021)

  • v0.2.1(Jun 9, 2021)

  • v0.2.0(Feb 23, 2021)

    • WindowDensity model improvements for streaming and high-frequency time series
    • Full automation in training and scoring the window density model
    • Minor version upgrades for package dependencies (more on the way!)
    • Bugfixes
  • v0.2.0.dev1(Feb 12, 2021)

    Dev release for v0.2.0.

    This release includes the following:

    • Improved WindowDensity modeling for streaming use cases.
    • Bringing automation to configuring the window density model for streaming use cases.
  • v0.1.4(Nov 24, 2020)

  • v0.1.3(Aug 25, 2020)

  • v0.1.1(Aug 23, 2020)

  • v0.1.0(Aug 21, 2020)

    Making Luminaire available as a beta release.

    Details:

    • Core Luminaire code base
    • Documentation:
      • Readme https://github.com/zillow/luminaire/blob/master/README.md
      • Github pages https://zillow.github.io/luminaire/
    • CI/CD pipeline workflow for build, release and documentation
    • Improved code/files organization
  • v0.1.0.dev8(Aug 20, 2020)

  • v0.1.0.dev7(Aug 19, 2020)

  • v0.1.0.dev6.2(Aug 17, 2020)

  • v0.1.0.dev6.0(Aug 17, 2020)

  • v0.1.0.dev5.2(Aug 17, 2020)

  • v0.1.0.dev5(Aug 17, 2020)

  • v0.1.0.dev6.1(Aug 17, 2020)

  • v0.1.0.dev6(Aug 17, 2020)

  • v0.1.0.dev5.1(Aug 17, 2020)

  • v0.1.0.dev4(Aug 15, 2020)

  • v0.1.0.dev3(Aug 15, 2020)

  • v0.1.0.dev2(Aug 14, 2020)
