Python implementation of Weng-Lin Bayesian ranking, a better, license-free alternative to TrueSkill

Overview


This is a port of the amazing openskill.js package.

Installation

pip install openskill

Usage

>>> from openskill import Rating, rate
>>> a1 = Rating()
>>> a1
Rating(mu=25, sigma=8.333333333333334)
>>> a2 = Rating(mu=32.444, sigma=5.123)
>>> a2
Rating(mu=32.444, sigma=5.123)
>>> b1 = Rating(43.381, 2.421)
>>> b1
Rating(mu=43.381, sigma=2.421)
>>> b2 = Rating(mu=25.188, sigma=6.211)
>>> b2
Rating(mu=25.188, sigma=6.211)

If a1 and a2 are on a team and win against a team of b1 and b2, pass the match to rate:

>>> [[x1, x2], [y1, y2]] = rate([[a1, a2], [b1, b2]])
>>> x1, x2, y1, y2
([28.669648436582808, 8.071520788025197], [33.83086971107981, 5.062772998705765], [43.071274808241974, 2.4166900452721256], [23.149503312339064, 6.1378606973362135])

You can also create Rating objects by importing create_rating:

>>> from openskill import create_rating
>>> x1 = create_rating(x1)
>>> x1
Rating(mu=28.669648436582808, sigma=8.071520788025197)
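
Since rate returns plain [mu, sigma] pairs, a whole result can be turned back into Rating objects in one pass; a minimal sketch using only the functions shown above:

>>> result = rate([[a1, a2], [b1, b2]])
>>> new_teams = [[create_rating(player) for player in team] for team in result]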

Ranks

When displaying a rating, or sorting a list of ratings, you can use ordinal:

>>> from openskill import ordinal
>>> ordinal(mu=43.07, sigma=2.42)
35.81

By default, this returns mu - 3 * sigma, showing a rating for which there's a 99.7% likelihood the player's true rating is higher, so with early games, a player's ordinal rating will usually go up and could go up even if that player loses.
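
For example, a leaderboard can be sorted by ordinal directly; a minimal sketch, assuming ratings are kept as Rating objects (which expose mu and sigma attributes, as their repr suggests):

>>> from openskill import Rating, ordinal
>>> players = [Rating(mu=43.07, sigma=2.42), Rating(), Rating(mu=32.444, sigma=5.123)]
>>> leaderboard = sorted(players, key=lambda r: ordinal(mu=r.mu, sigma=r.sigma), reverse=True)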

Artificial Ranks

If your teams are listed in one order but your ranking of them is in a different order, for convenience you can pass a rank argument, such as:

>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2])
>>> result
[[[20.96265504062538, 8.083731307186588]], [[27.795084971874736, 8.263160757613477]], [[24.68943500312503, 8.083731307186588]], [[26.552824984374855, 8.179213704945203]]]

It's assumed that the lower ranks are better (wins), while higher ranks are worse (losses). You can provide a score instead, where lower is worse and higher is better. These can just be raw scores from the game, if you want.

Ties should have either equivalent rank or score.

>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], score=[37, 19, 37, 42])
>>> result
[[[24.68943500312503, 8.179213704945203]], [[22.826045021875203, 8.179213704945203]], [[24.68943500312503, 8.179213704945203]], [[27.795084971874736, 8.263160757613477]]]

Choosing Models

The default model is PlackettLuce. You can import alternate models from openskill.models like so:

>>> from openskill.models import BradleyTerryFull
>>> a1 = b1 = c1 = d1 = Rating()
>>> rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2], model=BradleyTerryFull)
[[[17.09430584957905, 7.5012190693964005]], [[32.90569415042095, 7.5012190693964005]], [[22.36476861652635, 7.5012190693964005]], [[27.63523138347365, 7.5012190693964005]]]

Predicting Winners

You can compare two or more teams to get the probabilities of each team winning.

>>> from openskill import predict_win
>>> a1 = Rating()
>>> a2 = Rating(mu=33.564, sigma=1.123)
>>> predictions = predict_win(teams=[[a1], [a2]])
>>> predictions
[0.45110901512761536, 0.5488909848723846]
>>> sum(predictions)
1.0
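
There is also a predict_draw function for estimating the probability of the match ending in a draw; a minimal sketch, assuming it is importable from openskill like predict_win and takes the same teams keyword:

>>> from openskill import predict_draw
>>> draw_probability = predict_draw(teams=[[a1], [a2]])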

Available Models

  • BradleyTerryFull: Full Pairing for Bradley-Terry
  • BradleyTerryPart: Partial Pairing for Bradley-Terry
  • PlackettLuce: Generalized Bradley-Terry
  • ThurstoneMostellerFull: Full Pairing for Thurstone-Mosteller
  • ThurstoneMostellerPart: Partial Pairing for Thurstone-Mosteller

Which Model Do I Want?

  • Bradley-Terry rating models follow a logistic distribution over a player's skill, similar to Glicko.
  • Thurstone-Mosteller rating models follow a Gaussian distribution, similar to TrueSkill. Gaussian CDF/PDF functions differ in implementation from system to system (they're all just Chebyshev approximations anyway). The accuracy of this model isn't usually as good either, but tuning it with an alternative gamma function can improve accuracy if you really want to get into it.
  • Full pairing should give more accurate ratings than partial pairing; however, in high-k games (like a 100+ person marathon race), the Bradley-Terry and Thurstone-Mosteller models need to compute a joint probability that involves a (k-1)-dimensional integration, which is computationally expensive. Use partial pairing in this case, where players are only updated based on their neighbors (see the sketch after this list).
  • Plackett-Luce (default) is a generalized Bradley-Terry model for k ≥ 3 teams. It scales best.
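
For instance, a large free-for-all can be rated with one of the partial-pairing models listed above instead of the default; a minimal sketch (the 100-team finishing order here is made up for illustration):

>>> from openskill import Rating, rate
>>> from openskill.models import BradleyTerryPart
>>> teams = [[Rating()] for _ in range(100)]
>>> results = rate(teams, rank=list(range(1, 101)), model=BradleyTerryPart)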

Implementations in other Languages

  • JavaScript: openskill.js (https://github.com/philihp/openskill.js)

Comments
  • Support for partial play/weighting/player performance

    Would it be possible to add some sort of system to weight player performance, like how the official trueskill module does it? I'm trying to create a system that weights players' overall performance compared to their teams' to get a more accurate skill rating.

    enhancement help wanted 
    opened by spookybear0 7
  • Support for Python 3.7+

    I've been using openskill in my local Python 3.9 environment for a while now. I've been liking it a lot and I wanted to add it to one of my projects for use in production. I was surprised to find that the latest version on pypi said it only supported 3.10+, which is pretty rough from a compatibility perspective. I wanted to check how much it actually depended on features in 3.10, so I switched to a 3.6 virtual env and tried running the tests. They failed, and indicated that isinstance(var, Union[T1, T2]) calls were an issue. This is a fairly easy thing to express in older pythons with: isinstance(var, (T1, T2)). I did a search and replace on these, and then the tests passed. That was the only incompatibility I could find.
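
    A minimal sketch of the incompatibility described above (the variable names are placeholders, not openskill's actual call sites):

    from typing import Union

    value = 3.14

    # Accepted on the newer Pythons this package targeted (per the report above),
    # but raises TypeError on older versions such as 3.6:
    isinstance(value, Union[int, float])

    # Equivalent check that works across Python versions:
    isinstance(value, (int, float))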

    If the maintainers are open to supporting older Pythons for a new release, that would be very helpful to me. I would also be OK with the minimum version being something like 3.7 or 3.8.

    I haven't used towncrier before, and it wasn't immediately obvious how to impute the changelog, so I figure I'll wait until a maintainer greenlights this before I continue any work on details like that.

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: Erotemic (PGP-signed)

    enhancement 
    opened by Erotemic 6
  • let score difference be reflected in rating

    When you enter scores into rate(), the difference between the scores has no effect on the rating; that is, rate([team1, team2], score=[1, 0]) == rate([team1, team2], score=[100, 0]) is true. Both produce exactly the same rating changes for team1 and team2.

    I don't know if it is mathematically possible or what it would look like. But it would be great if the difference could somehow be factored into the calculation, as it is (if your game has a score) quite an important datapoint for skill evaluation.

    enhancement 
    opened by jonathan-scholz 4
  • Are `predict_win` and `predict_draw` functions accidentally using Thurstone-Mosteller specific calculations?

    If I understand it correctly, those two functions seem to perform calculations using the equations numbered (65) in the paper. However, those equations seem to be specific to the Thurstone-Mosteller model, and as far as I can tell, the proper way to calculate probabilities for the Bradley-Terry model would be to use equations (48) and (51) (also seen as p_iq in equation (49)). Is this intended? Or am I misunderstanding either the paper or the code of these functions?
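
    For reference, a rough sketch of the two pairwise win-probability forms being contrasted, as I read the Weng-Lin paper (the beta default and helper names here are assumptions for illustration, not openskill's internals):

    import math

    BETA = 25 / 6  # assumed default: one half of the default sigma

    def win_probability_thurstone_mosteller(mu_i, sigma_i, mu_q, sigma_q, beta=BETA):
        # Gaussian (probit) form: Phi((mu_i - mu_q) / c_iq)
        c_iq = math.sqrt(sigma_i ** 2 + sigma_q ** 2 + 2 * beta ** 2)
        return 0.5 * (1 + math.erf((mu_i - mu_q) / (c_iq * math.sqrt(2))))

    def win_probability_bradley_terry(mu_i, sigma_i, mu_q, sigma_q, beta=BETA):
        # Logistic form: exp(mu_i / c_iq) / (exp(mu_i / c_iq) + exp(mu_q / c_iq))
        c_iq = math.sqrt(sigma_i ** 2 + sigma_q ** 2 + 2 * beta ** 2)
        return math.exp(mu_i / c_iq) / (math.exp(mu_i / c_iq) + math.exp(mu_q / c_iq))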

    question 
    opened by asyncth 2
  • Update link

    Description of Changes

    • [ ] Wrote at least one-line docstrings (for any new functions)
    • [ ] Added test(s) covering the changes (if testable)

    Updated link to repository

    Issue(s) Resolved

    Fixes #

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: bstummer

    opened by bstummer 2
  • Unable to install on Google Colab

    Describe the bug: I can't install openskill on Google Colab via pip. What should I do?


    Platform Information

    • Google Colab
    • Python Version: 3.7.13
    wontfix 
    opened by toshi71 2
  • Implement additive dynamics factor 'tau'.

    Description of Changes

    • [x] Wrote at least one-line docstrings (for any new functions)
    • [x] Added test(s) covering the changes (if testable)

    Ports https://github.com/philihp/openskill.js/pull/233

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: daegontaven

    opened by daegontaven 2
  • Faster runtime of predict_win and predict_draw

    Description of Changes

    • [x] Adds some unit tests that I have for the Javascript library, to prevent any regressions.
    • [x] Tells itertools.permutations to give permutation pairs, rather than all full permutations
    • [x] Additionally, the length of permutation_pairs is shorter, and that simplifies the denominator to just a triangular number.

    I noticed that for teams of ABCD, your code as written was finding all permutations (ABCD, ABDC, ACBD, ACDB, ...) and then only using the first two teams from each permutation, which for n >= 4 teams causes the same pairings to be calculated multiple times.

    This should reduce runtime from O(n^n) to O(n^2).
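
    A rough sketch of the change being described (the team labels are placeholders, not the library's internals):

    from itertools import permutations

    teams = ["A", "B", "C", "D"]

    # Before: every full ordering is generated and only its first two entries
    # are used, so the same ordered pair is recomputed many times for n >= 4.
    old_pairs = [p[:2] for p in permutations(teams)]

    # After: ask permutations() for pairs directly; exactly n * (n - 1) of them.
    new_pairs = list(permutations(teams, 2))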

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: Philihp Busby

    opened by philihp 2
  • Bump prompt-toolkit from 3.0.26 to 3.0.27

    Bumps prompt-toolkit from 3.0.26 to 3.0.27.

    Changelog

    Sourced from prompt-toolkit's changelog.

    3.0.27: 2022-02-07

    New features:

    • Support for cursor shapes. The cursor shape for prompts/applications can now be configured, either as a fixed cursor shape, or in case of Vi input mode, according to the current input mode.
    • Handle "cursor forward" command in ANSI formatted text. This makes it possible to render many kinds of generated ANSI art.
    • Accept align attribute in Label widget.
    • Added PlainTextOutput: an output implementation that doesn't render any ANSI escape sequences. This will be used by default when redirecting stdout to a file.
    • Added create_app_session_from_tty: a context manager that enforces input/output to go to the current TTY, even if stdin/stdout are attached to pipes.
    • Added to_plain_text utility for converting formatted text into plain text.

    Fixes:

    • Don't automatically use sys.stderr for output when sys.stdout is not a TTY, but sys.stderr is. The previous behavior was confusing, especially when rendering formatted text to the output, and we expect it to follow redirection.
    Commits
    • 6ac867a Release 3.0.27
    • 96ec6fb Removed unused imports.
    • b4d728e Added support for cursor shapes.
    • 4a66820 Added to_plain_text utility. A function to turn formatted text into a string.
    • 7dd8435 Added create_app_session_from_tty.
    • 2c96fe2 Stop preferring a TTY output when creating the default output.
    • 402b6a3 Added PlainTextOutput: an output that doesn't write ANSI escape sequences to ...
    • 57b42c4 Added ansi-art-and-textarea.py example.
    • a71de3c Accept 'align' attribute in Label widget.
    • 64d870a Added ptk-logo-ansi-art.py example.
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • Bump scipy from 1.7.3 to 1.8.0

    Bumps scipy from 1.7.3 to 1.8.0.

    Release notes

    Sourced from scipy's releases.

    SciPy 1.8.0 Release Notes

    SciPy 1.8.0 is the culmination of 6 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Before upgrading, we recommend that users check that their own code does not use deprecated SciPy functionality (to do so, run your code with python -Wd and check for DeprecationWarnings). Our development attention will now shift to bug-fix releases on the 1.8.x branch, and on adding new features on the master branch.

    This release requires Python 3.8+ and NumPy 1.17.3 or greater.

    For running on PyPy, PyPy3 6.0+ is required.

    Highlights of this release

    • A sparse array API has been added for early testing and feedback; this work is ongoing, and users should expect minor API refinements over the next few releases.
    • The sparse SVD library PROPACK is now vendored with SciPy, and an interface is exposed via scipy.sparse.svds with solver='PROPACK'. It is currently default-off due to potential issues on Windows that we aim to resolve in the next release, but can be optionally enabled at runtime for friendly testing with an environment variable setting of USE_PROPACK=1.
    • A new scipy.stats.sampling submodule that leverages the UNU.RAN C library to sample from arbitrary univariate non-uniform continuous and discrete distributions
    • All namespaces that were private but happened to miss underscores in their names have been deprecated.

    New features

    scipy.fft improvements

    Added an orthogonalize=None parameter to the real transforms in scipy.fft which controls whether the modified definition of DCT/DST is used without changing the overall scaling.

    scipy.fft backend registration is now smoother, operating with a single

    ... (truncated)

    Commits
    • b5d8bab REL: 1.8.0 release commit.
    • d84f731 Merge pull request #15521 from tylerjereddy/treddy_prep_180_final
    • 315dd53 DOC: update 1.8.0 relnotes.
    • b54b7ae MAINT: fix broken link and remove CI badges
    • 920e27b REL: 1.8.0 unreleased.
    • ea004bd REL: 1.8.0rc4 released.
    • 4f3969d Merge pull request #15479 from tylerjereddy/treddy_180rc4
    • 8ed6aa9 DOC: update 1.8.0 relnotes.
    • efe4ca5 MAINT: PR 15479 revisions
    • 1803913 MAINT: remove non-default settings (except shallow) in .gitmodules
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • Bump twine from 3.7.1 to 3.8.0

    Bumps twine from 3.7.1 to 3.8.0.

    Release notes

    Sourced from twine's releases.

    3.8.0

    https://pypi.org/project/twine/3.8.0/

    Changelog

    Sourced from twine's changelog.

    Twine 3.8.0 (2022-02-02)

    Features

    • Add --verbose logging for querying keyring credentials (#849: https://github.com/pypa/twine/issues/849)
    • Log all upload responses with --verbose (#859: https://github.com/pypa/twine/issues/859)
    • Show more helpful error message for invalid metadata (#861: https://github.com/pypa/twine/issues/861)

    Bugfixes

    • Require a recent version of urllib3 (#858: https://github.com/pypa/twine/issues/858)
    dependencies 
    opened by dependabot[bot] 2
  • Tournament Interface

    Is your feature request related to a problem? Please describe: Creating models of tournaments is hard, since you have to parse the data using another library (depending on the format) and then pass everything into rate and predict manually. It's a lot of effort to predict the entire outcome of, say, the 2022 FIFA World Cup.

    Describe the solution you'd like: It would be nice if there were a tournament class of some kind that allowed us to pass in rounds, which themselves contain matches. Then, using an exhaustive approach, predict winners and move them along each bracket/round. Especially now that #74 has landed, it would be easier to predict whole matches and in turn tournaments.

    The classes should be customizable to allow our own logic. For instance, allow using the Munkres algorithm and other such methods.

    Describe alternatives you've considered: I don't know of any other libraries that do this already.
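
    A minimal sketch of what a bracket simulation could look like with the current API, using predict_win to advance a favorite each round (the helper below is hypothetical, not part of openskill):

    from openskill import Rating, predict_win

    def simulate_single_elimination(teams):
        # teams: list of teams, each a list of Rating objects
        remaining = list(teams)
        while len(remaining) > 1:
            next_round = []
            for i in range(0, len(remaining), 2):
                pair = remaining[i:i + 2]
                if len(pair) == 1:  # odd team out gets a bye
                    next_round.append(pair[0])
                    continue
                probabilities = predict_win(teams=pair)
                next_round.append(pair[probabilities.index(max(probabilities))])
            remaining = next_round
        return remaining[0]

    champion = simulate_single_elimination([[Rating()], [Rating(mu=30)], [Rating(mu=28)], [Rating(mu=33, sigma=3)]])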

    enhancement help wanted 
    opened by daegontaven 0
  • Improve win predictions for 1v1 teams

    First of all, congrats and thanks for the great repo!

    In a scenario where Player A has 2x the rating of Player B, the predicted win probability is 60% vs 40%. This seems strange.

    >>> players = [[Rating(50)], [Rating(25)]]
    >>> predict_win(teams=players)
    [0.6002914159316424, 0.39970858406835763]

    If I use this function implementation, I get 97% vs 3%, which sounds more reasonable to me.

    Maybe the predict_win function has some flaw?

    enhancement 
    opened by nthypes 1
Releases: v4.0.0

Owner: Open Debates Project