A Powerful Serverless Analysis Toolkit That Takes Trial And Error Out of Machine Learning Projects

Overview


KXY: A Seamless API to 10x The Productivity of Machine Learning Engineers


Documentation

https://www.kxy.ai/reference/

Installation

From PyPI:

pip install kxy

From GitHub:

git clone https://github.com/kxytechnologies/kxy-python.git && cd ./kxy-python && pip install .

Authentication

All heavy-duty computations are run on our serverless infrastructure and require an API key. To configure the package with your API key, run

kxy configure

and follow the instructions. To get an API key, you need an account; you can sign up for a free trial on the KXY website. You'll then automatically be given an API key, which you can retrieve from your account.

KXY is free for academic use.

Docker

The Docker image kxytechnologies/kxy has been built for your convenience, and comes with anaconda, auto-sklearn, and the kxy package.

To start a Jupyter Notebook server from a sandboxed Docker environment, run

docker run -i -t -p 5555:8888 kxytechnologies/kxy:latest /bin/bash -c "kxy configure <YOUR API KEY> && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser --allow-root --NotebookApp.token=''"

where you should replace <YOUR API KEY> with your API key and navigate to http://localhost:5555 in your browser. This Docker environment comes with all the examples available on the documentation website.

To start a Jupyter Notebook server from an existing directory of notebooks, run

docker run -i -t --mount src=</path/to/your/local/dir>,target=/opt/notebooks,type=bind -p 5555:8888 kxytechnologies/kxy:latest /bin/bash -c "kxy configure <YOUR API KEY> && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser --allow-root --NotebookApp.token=''"

where you should replace </path/to/your/local/dir> with the path to your local notebook folder and navigate to http://localhost:5555 in your browser.

Other Programming Languages

We plan to release friendly API clients in more programming languages.

In the meantime, you can directly issue requests to our RESTful API using your favorite programming language.
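For example, here is a minimal sketch in Python using the requests library. The endpoint URL, header name, and payload fields below are illustrative placeholders, not the documented API specification; refer to the REST API reference for the actual routes and parameters.

# Hypothetical sketch of calling the KXY RESTful API directly with Python's
# requests library. The URL, header name, and payload fields below are
# placeholders, not the documented API; see https://www.kxy.ai/reference/.
import requests

API_KEY = "<YOUR API KEY>"                      # key obtained via `kxy configure`
URL = "https://api.kxy.ai/<endpoint>"           # placeholder endpoint

response = requests.post(
    URL,
    headers={"x-api-key": API_KEY},             # header name is an assumption
    json={"problem_type": "classification"},    # illustrative payload
    timeout=60,
)
response.raise_for_status()
print(response.json())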


Comments
  • error in import kxy

    Hi, after installing the kxy package and configuring the API key, import kxy raises the error below:

    .../python3.9/site-packages/kxy/pfs/pfs_selector.py in <module>
          6 import numpy as np
          7 
    ----> 8 import tensorflow as tf
          9 from tensorflow.keras.callbacks import EarlyStopping, TerminateOnNaN
         10 from tensorflow.keras.optimizers import Adam
    
    ModuleNotFoundError: No module named 'tensorflow'
    
    

    What version of tensorflow is needed for kxy to work?

    opened by zeydabadi 2
  • generate_features Documentation?

    Is there any documentation on how to use the generate_features function? It doesn't appear in the documentation and I can't find it on GitHub, e.g. how to use the entity column, how to format time-series data in advance for it, etc. Thanks!

    opened by ddofer 1
  • error kxy.data_valuation

    Hi, after running

    achievable_performance_df = X_train_reduced.kxy.data_valuation(target_column='state', problem_type='classification', include_mutual_information=True, anonymize=True)

    I get the following error and the function does not return anything:

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/lib/python3.9/asyncio/tasks.py", line 258, in __step
        result = coro.throw(exc)
      File "/home/lucy/Downloads/general/lib/python3.9/site-packages/tornado/websocket.py", line 1104, in wrapper
        raise WebSocketClosedError()
    tornado.websocket.WebSocketClosedError

    Task exception was never retrieved
    future: <Task finished name='Task-46004' coro=<WebSocketProtocol13.write_message..wrapper() done, defined at /home/lucy/Downloads/general/lib/python3.9/site-packages/tornado/websocket.py:1100> exception=WebSocketClosedError()>
    Traceback (most recent call last):
      File "/home/lucy/Downloads/general/lib/python3.9/site-packages/tornado/websocket.py", line 1102, in wrapper
        await fut
      File "/usr/lib/python3.9/asyncio/tasks.py", line 328, in __wakeup
        future.result()
    tornado.iostream.StreamClosedError: Stream is closed

    opened by zeydabadi 0
Releases(v1.4.10)
  • v1.4.10(Apr 25, 2022)

    Change Log

    v.1.4.10 Changes

    • Added a function to construct features derived from PFS mutual information estimation that should be expected to be linearly related to the target.
    • Fixed a global name conflict in kxy.learning.base_learners.

    v.1.4.9 Changes

    • Changed the activation function used by PFS from ReLU to Swish/SiLU.
    • Leaving it to the user to set the logging level.

    v.1.4.8 Changes

    • Froze the versions of all python packages in the docker file.

    v.1.4.7 Changes

    Changes related to optimizing Principal Feature Selection.

    • Made it easy to change PFS' default learning parameters.
    • Changed PFS' default learning parameters (learning rate is now 0.005 and epsilon 1e-04)
    • Adding a seed parameter to PFS' fit for reproducibility.

    To globally change the learning rate to 0.003, change Adam's epsilon to 1e-5, and the number of epochs to 25, do

    from kxy.misc.tf import set_default_parameter
    set_default_parameter('lr', 0.003)
    set_default_parameter('epsilon', 1e-5)
    set_default_parameter('epochs', 25)
    

    To change the number of epochs for a single iteration of PFS, use the epochs argument of the fit method of your PFS object. The fit method now also has a seed parameter you may use to make the PFS implementation deterministic.

    Example:

    from kxy.pfs import PFS
    selector = PFS()
    selector.fit(x, y, epochs=25, seed=123)
    

    Alternatively, you may also use the kxy.misc.tf.set_seed method to make PFS deterministic.
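    For instance, a minimal sketch of global seeding; the single-argument signature of set_seed is an assumption, as only the method name is documented above.

    from kxy.misc.tf import set_seed

    # Make subsequent PFS runs deterministic; assumes set_seed takes the seed value as its only argument.
    set_seed(123)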

    v.1.4.6 Changes

    Minor PFS improvements.

    • Adding more (robust) mutual information loss functions.
    • Exposing the learned total mutual information between principal features and target as an attribute of PFS.
    • Exposing the number of epochs as a parameter of PFS' fit.
  • v1.4.9(Apr 12, 2022)

    Change Log

    v.1.4.9 Changes

    • Changed the activation function used by PFS from ReLU to Swish/SiLU.
    • Leaving it to the user to set the logging level.

    v.1.4.8 Changes

    • Froze the versions of all python packages in the docker file.

    v.1.4.7 Changes

    Changes related to optimizing Principal Feature Selection.

    • Made it easy to change PFS' default learning parameters.
    • Changed PFS' default learning parameters (learning rate is now 0.005 and epsilon 1e-04)
    • Adding a seed parameter to PFS' fit for reproducibility.

    To globally change the learning rate to 0.003, change Adam's epsilon to 1e-5, and the number of epochs to 25, do

    from kxy.misc.tf import set_default_parameter
    set_default_parameter('lr', 0.003)
    set_default_parameter('epsilon', 1e-5)
    set_default_parameter('epochs', 25)
    

    To change the number of epochs for a single iteration of PFS, use the epochs argument of the fit method of your PFS object. The fit method now also has a seed parameter you may use to make the PFS implementation deterministic.

    Example:

    from kxy.pfs import PFS
    selector = PFS()
    selector.fit(x, y, epochs=25, seed=123)
    

    Alternatively, you may also use the kxy.misc.tf.set_seed method to make PFS deterministic.

    v.1.4.6 Changes

    Minor PFS improvements.

    • Adding more (robust) mutual information loss functions.
    • Exposing the learned total mutual information between principal features and target as an attribute of PFS.
    • Exposing the number of epochs as a parameter of PFS' fit.
  • v1.4.8(Apr 11, 2022)

    Change Log

    v.1.4.8 Changes

    • Froze the versions of all python packages in the docker file.

    v.1.4.7 Changes

    Changes related to optimizing Principal Feature Selection.

    • Made it easy to change PFS' default learning parameters.
    • Changed PFS' default learning parameters (learning rate is now 0.005 and epsilon 1e-04)
    • Adding a seed parameter to PFS' fit for reproducibility.

    To globally change the learning rate to 0.003, change Adam's epsilon to 1e-5, and the number of epochs to 25, do

    from kxy.misc.tf import set_default_parameter
    set_default_parameter('lr', 0.003)
    set_default_parameter('epsilon', 1e-5)
    set_default_parameter('epochs', 25)
    

    To change the number of epochs for a single iteration of PFS, use the epochs argument of the fit method of your PFS object. The fit method now also has a seed parameter you may use to make the PFS implementation deterministic.

    Example:

    from kxy.pfs import PFS
    selector = PFS()
    selector.fit(x, y, epochs=25, seed=123)
    

    Alternatively, you may also use the kxy.misc.tf.set_seed method to make PFS deterministic.

    v.1.4.6 Changes

    Minor PFS improvements.

    • Adding more (robust) mutual information loss functions.
    • Exposing the learned total mutual information between principal features and target as an attribute of PFS.
    • Exposing the number of epochs as a parameter of PFS' fit.
  • v1.4.7(Apr 10, 2022)

    Change Log

    v.1.4.7 Changes

    Changes related to optimizing Principal Feature Selection.

    • Made it easy to change PFS' default learning parameters.
    • Changed PFS' default learning parameters (learning rate is now 0.005 and epsilon 1e-04)
    • Adding a seed parameter to PFS' fit for reproducibility.

    To globally change the learning rate to 0.003, change Adam's epsilon to 1e-5, and the number of epochs to 25, do

    from kxy.misc.tf import set_default_parameter
    set_default_parameter('lr', 0.003)
    set_default_parameter('epsilon', 1e-5)
    set_default_parameter('epochs', 25)
    

    To change the number of epochs for a single iteration of PFS, use the epochs argument of the fit method of your PFS object. The fit method now also has a seed parameter you may use to make the PFS implementation deterministic.

    Example:

    from kxy.pfs import PFS
    selector = PFS()
    selector.fit(x, y, epochs=25, seed=123)
    

    Alternatively, you may also use the kxy.misc.tf.set_seed method to make PFS deterministic.

    v.1.4.6 Changes

    Minor PFS improvements.

    • Adding more (robust) mutual information loss functions.
    • Exposing the learned total mutual information between principal features and target as an attribute of PFS.
    • Exposing the number of epochs as a parameter of PFS' fit.
  • v1.4.6(Apr 10, 2022)

    Changes

    • Adding more (robust) mutual information loss functions.
    • Exposing the learned total mutual information between principal features and target as an attribute of PFS.
    • Exposing the number of epochs as a parameter of PFS' fit.
  • v1.4.5(Apr 9, 2022)

  • v1.4.4(Apr 8, 2022)

  • v0.3.2(Aug 14, 2020)

  • v0.3.0(Aug 3, 2020)

    Adding a maximum-entropy based classifier (kxy.MaxEntClassifier) and regressor (kxy.MaxEntRegressor) following the scikit-learn signature for fitting and predicting.

    These models estimate the posterior mean E[u_y|x] and the posterior standard deviation sqrt(Var[u_y|x]) for any specific value of x, where the copula-uniform representations (u_y, u_x) follow the maximum-entropy distribution.

    Predictions in the primal are derived from E[u_y|x].
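    As an illustration of the scikit-learn-style interface mentioned above, here is a minimal usage sketch; the no-argument constructor and the toy data are assumptions made for the example.

    import numpy as np
    import kxy

    # Toy data for illustration only.
    x_train = np.random.rand(200, 5)
    y_train = np.random.randint(0, 2, size=200)
    x_test = np.random.rand(20, 5)

    clf = kxy.MaxEntClassifier()    # constructor arguments assumed optional
    clf.fit(x_train, y_train)       # scikit-learn-style fit
    y_pred = clf.predict(x_test)    # scikit-learn-style predict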

  • v0.2.0(Jun 25, 2020)

    • Regression analyses now fully support categorical variables.
    • Foundations for multi-output regressions are laid.
    • Categorical variables are now systematically encoded and treated as continuous, consistent with what's done at the learning stage.
    • Regression and classification are further normalized; most of the compute for classification problems now takes place on the API side, which should make it considerably faster.
  • v0.0.18(May 26, 2020)

  • v0.0.16(May 18, 2020)

  • v0.0.15(May 18, 2020)

  • v0.0.14(May 18, 2020)

  • v0.0.13(May 16, 2020)

  • v0.0.11(May 13, 2020)

  • v0.0.10(May 11, 2020)

Owner

KXY Technologies, Inc.