Sparkit-learn

Gitter chat: https://gitter.im/lensacom/sparkit-learn

PySpark + Scikit-learn = Sparkit-learn

GitHub: https://github.com/lensacom/sparkit-learn

About

Sparkit-learn aims to provide scikit-learn functionality and API on PySpark. The main goal of the library is to create an API that stays close to sklearn's.

The driving principle was to "Think locally, execute distributively." To accommodate this concept, the basic data block is always an array or a (sparse) matrix, and the operations are executed at the block level.
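
A minimal sketch of this block principle (assuming a running SparkContext named sc; the block-wise map is an illustration of how splearn processes blocks internally, not documented API):

import numpy as np
from splearn.rdd import ArrayRDD

X = ArrayRDD(sc.parallelize(range(20), 2), bsize=5)  # 4 blocks of 5 elements
# each block is a plain numpy array, so ordinary NumPy code runs per block on the workers
squared = X.map(lambda block: block ** 2)
squared.collect()
# [array([ 0,  1,  4,  9, 16]), array([25, 36, 49, 64, 81]), ...]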

Requirements

  • Python 2.7.x or 3.4.x
  • Spark[>=1.3.0]
  • NumPy[>=1.9.0]
  • SciPy[>=0.14.0]
  • Scikit-learn[>=0.16]

Run IPython from the notebooks directory

PYTHONPATH=${PYTHONPATH}:.. IPYTHON_OPTS="notebook" ${SPARK_HOME}/bin/pyspark --master local\[4\] --driver-memory 2G

Run tests with

./runtests.sh

Quick start

Sparkit-learn introduces three important distributed data formats:

  • ArrayRDD:

    A numpy.array-like distributed array.

    from splearn.rdd import ArrayRDD
    
    data = range(20)
    # PySpark RDD with 2 partitions
    rdd = sc.parallelize(data, 2) # each partition with 10 elements
    # ArrayRDD
    # each partition will contain blocks with 5 elements
    X = ArrayRDD(rdd, bsize=5) # 4 blocks, 2 in each partition

    Basic operations:

    len(X) # 20 - number of elements in the whole dataset
    X.blocks # 4 - number of blocks
    X.shape # (20,) - the shape of the whole dataset
    
    X # returns an ArrayRDD
    # <class 'splearn.rdd.ArrayRDD'> from PythonRDD...
    
    X.dtype # returns the type of the blocks
    # numpy.ndarray
    
    X.collect() # get the dataset
    # [array([0, 1, 2, 3, 4]),
    #  array([5, 6, 7, 8, 9]),
    #  array([10, 11, 12, 13, 14]),
    #  array([15, 16, 17, 18, 19])]
    
    X[1].collect() # indexing
    # [array([5, 6, 7, 8, 9])]
    
    X[1] # also returns an ArrayRDD!
    
    X[1::2].collect() # slicing
    # [array([5, 6, 7, 8, 9]),
    #  array([15, 16, 17, 18, 19])]
    
    X[1::2] # returns an ArrayRDD as well
    
    X.tolist() # returns the dataset as a list
    # [0, 1, 2, ... 17, 18, 19]
    X.toarray() # returns the dataset as a numpy.array
    # array([ 0,  1,  2, ... 17, 18, 19])
    
    # pyspark.rdd operations will still work
    X.getNumPartitions() # 2 - number of partitions
  • SparseRDD:

    The sparse counterpart of the ArrayRDD; the main difference is that the blocks are sparse matrices. The reason behind this split is to follow the distinction between numpy.ndarrays and scipy.sparse matrices. Usually a SparseRDD is created by splearn's transformers, but one can instantiate one manually as well.

    # generate a SparseRDD from a text using SparkCountVectorizer
    from splearn.rdd import SparseRDD
    from sklearn.feature_extraction.tests.test_text import ALL_FOOD_DOCS
    ALL_FOOD_DOCS
    #(u'the pizza pizza beer copyright',
    # u'the pizza burger beer copyright',
    # u'the the pizza beer beer copyright',
    # u'the burger beer beer copyright',
    # u'the coke burger coke copyright',
    # u'the coke burger burger',
    # u'the salad celeri copyright',
    # u'the salad salad sparkling water copyright',
    # u'the the celeri celeri copyright',
    # u'the tomato tomato salad water',
    # u'the tomato salad water copyright')
    
    # ArrayRDD created from the raw data
    X = ArrayRDD(sc.parallelize(ALL_FOOD_DOCS, 4), 2)
    X.collect()
    # [array([u'the pizza pizza beer copyright',
    #         u'the pizza burger beer copyright'], dtype='<U31'),
    #  array([u'the the pizza beer beer copyright',
    #         u'the burger beer beer copyright'], dtype='<U33'),
    #  array([u'the coke burger coke copyright',
    #         u'the coke burger burger'], dtype='<U30'),
    #  array([u'the salad celeri copyright',
    #         u'the salad salad sparkling water copyright'], dtype='<U41'),
    #  array([u'the the celeri celeri copyright',
    #         u'the tomato tomato salad water'], dtype='<U31'),
    #  array([u'the tomato salad water copyright'], dtype='<U32')]
    
    # Feature extraction executed
    from splearn.feature_extraction.text import SparkCountVectorizer
    vect = SparkCountVectorizer()
    X = vect.fit_transform(X)
    # and we have a SparseRDD
    X
    # <class 'splearn.rdd.SparseRDD'> from PythonRDD...
    
    # its dtype is scipy.sparse's general parent class
    X.dtype
    # scipy.sparse.base.spmatrix
    
    # slicing works just like in ArrayRDDs
    X[2:4].collect()
    # [<2x11 sparse matrix of type '<type 'numpy.int64'>'
    #   with 7 stored elements in Compressed Sparse Row format>,
    #  <2x11 sparse matrix of type '<type 'numpy.int64'>'
    #   with 9 stored elements in Compressed Sparse Row format>]
    
    # general mathematical operations are available
    X.sum(), X.mean(), X.max(), X.min()
    # (55, 0.45454545454545453, 2, 0)
    
    # even with axis parameters provided
    X.sum(axis=1)
    # matrix([[5],
    #         [5],
    #         [6],
    #         [5],
    #         [5],
    #         [4],
    #         [4],
    #         [6],
    #         [5],
    #         [5],
    #         [5]])
    
    # It can be transformed to a dense ArrayRDD
    X.todense()
    # <class 'splearn.rdd.ArrayRDD'> from PythonRDD...
    X.todense().collect()
    # [array([[1, 0, 0, 0, 1, 2, 0, 0, 1, 0, 0],
    #         [1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0]]),
    #  array([[2, 0, 0, 0, 1, 1, 0, 0, 2, 0, 0],
    #         [2, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0]]),
    #  array([[0, 1, 0, 2, 1, 0, 0, 0, 1, 0, 0],
    #         [0, 2, 0, 1, 0, 0, 0, 0, 1, 0, 0]]),
    #  array([[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0],
    #         [0, 0, 0, 0, 1, 0, 2, 1, 1, 0, 1]]),
    #  array([[0, 0, 2, 0, 1, 0, 0, 0, 2, 0, 0],
    #         [0, 0, 0, 0, 0, 0, 1, 0, 1, 2, 1]]),
    #  array([[0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1]])]
    
    # One can instantiate a SparseRDD manually too:
    import numpy as np
    import scipy.sparse as sp
    
    sparse = sc.parallelize(np.array([sp.eye(2).tocsr()] * 20), 2)
    sparse = SparseRDD(sparse, bsize=5)
    sparse
    # <class 'splearn.rdd.SparseRDD'> from PythonRDD...
    
    sparse.collect()
    # [<10x2 sparse matrix of type '<type 'numpy.float64'>'
    #   with 10 stored elements in Compressed Sparse Row format>,
    #  <10x2 sparse matrix of type '<type 'numpy.float64'>'
    #   with 10 stored elements in Compressed Sparse Row format>,
    #  <10x2 sparse matrix of type '<type 'numpy.float64'>'
    #   with 10 stored elements in Compressed Sparse Row format>,
    #  <10x2 sparse matrix of type '<type 'numpy.float64'>'
    #   with 10 stored elements in Compressed Sparse Row format>]
  • DictRDD:

    A column-based data format, where each column has its own type.

    import numpy as np
    from splearn.rdd import DictRDD
    
    X = range(20)
    y = list(range(2)) * 10
    # PySpark RDD with 2 partitions
    X_rdd = sc.parallelize(X, 2) # each partition with 10 elements
    y_rdd = sc.parallelize(y, 2) # each partition with 10 elements
    # DictRDD
    # each partition will contain blocks with 5 elements
    Z = DictRDD((X_rdd, y_rdd),
                columns=('X', 'y'),
                bsize=5,
                dtype=[np.ndarray, np.ndarray]) # 4 blocks, 2/partition
    # if no dtype is provided, the type of the blocks will be determined
    # automatically
    
    # or:
    import numpy as np
    
    data = np.array([range(20), list(range(2))*10]).T
    rdd = sc.parallelize(data, 2)
    Z = DictRDD(rdd,
                columns=('X', 'y'),
                bsize=5,
                dtype=[np.ndarray, np.ndarray])

    Basic operations:

    len(Z) # 8 - number of blocks
    Z.columns # returns ('X', 'y')
    Z.dtype # returns the types in correct order
    # [numpy.ndarray, numpy.ndarray]
    
    Z # returns a DictRDD
    #<class 'splearn.rdd.DictRDD'> from PythonRDD...
    
    Z.collect()
    # [(array([0, 1, 2, 3, 4]), array([0, 1, 0, 1, 0])),
    #  (array([5, 6, 7, 8, 9]), array([1, 0, 1, 0, 1])),
    #  (array([10, 11, 12, 13, 14]), array([0, 1, 0, 1, 0])),
    #  (array([15, 16, 17, 18, 19]), array([1, 0, 1, 0, 1]))]
    
    Z[:, 'y'] # column select - returns an ArrayRDD
    Z[:, 'y'].collect()
    # [array([0, 1, 0, 1, 0]),
    #  array([1, 0, 1, 0, 1]),
    #  array([0, 1, 0, 1, 0]),
    #  array([1, 0, 1, 0, 1])]
    
    Z[:-1, ['X', 'y']] # slicing - DictRDD
    Z[:-1, ['X', 'y']].collect()
    # [(array([0, 1, 2, 3, 4]), array([0, 1, 0, 1, 0])),
    #  (array([5, 6, 7, 8, 9]), array([1, 0, 1, 0, 1])),
    #  (array([10, 11, 12, 13, 14]), array([0, 1, 0, 1, 0]))]

Basic workflow

With the use of the described data structures, the basic workflow is almost identical to sklearn's.

Distributed vectorizing of texts

SparkCountVectorizer

from splearn.rdd import ArrayRDD
from splearn.feature_extraction.text import SparkCountVectorizer
from sklearn.feature_extraction.text import CountVectorizer

X = [...]  # list of texts
X_rdd = ArrayRDD(sc.parallelize(X, 4))  # sc is SparkContext

local = CountVectorizer()
dist = SparkCountVectorizer()

result_local = local.fit_transform(X)
result_dist = dist.fit_transform(X_rdd)  # SparseRDD

SparkHashingVectorizer

from splearn.rdd import ArrayRDD
from splearn.feature_extraction.text import SparkHashingVectorizer
from sklearn.feature_extraction.text import HashingVectorizer

X = [...]  # list of texts
X_rdd = ArrayRDD(sc.parallelize(X, 4))  # sc is SparkContext

local = HashingVectorizer()
dist = SparkHashingVectorizer()

result_local = local.fit_transform(X)
result_dist = dist.fit_transform(X_rdd)  # SparseRDD

SparkTfidfTransformer

from splearn.rdd import ArrayRDD
from splearn.feature_extraction.text import SparkHashingVectorizer
from splearn.feature_extraction.text import SparkTfidfTransformer
from splearn.pipeline import SparkPipeline

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import Pipeline

X = [...]  # list of texts
X_rdd = ArrayRDD(sc.parallelize(X, 4))  # sc is SparkContext

local_pipeline = Pipeline((
    ('vect', HashingVectorizer()),
    ('tfidf', TfidfTransformer())
))
dist_pipeline = SparkPipeline((
    ('vect', SparkHashingVectorizer()),
    ('tfidf', SparkTfidfTransformer())
))

result_local = local_pipeline.fit_transform(X)
result_dist = dist_pipeline.fit_transform(X_rdd)  # SparseRDD
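
To sanity-check the distributed pipeline against the local one, the blocks of the resulting SparseRDD can be collected and stacked on the driver. A hedged sketch, assuming SparseRDD.collect() returns one scipy.sparse block per block, as in the Quick start above:

import numpy as np
import scipy.sparse as sp

dist_matrix = sp.vstack(result_dist.collect())  # stack the per-block sparse matrices
# the two pipelines are expected to agree up to floating point error
np.allclose(result_local.toarray(), dist_matrix.toarray())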

Distributed Classifiers

import numpy as np

from splearn.rdd import DictRDD
from splearn.feature_extraction.text import SparkHashingVectorizer
from splearn.feature_extraction.text import SparkTfidfTransformer
from splearn.svm import SparkLinearSVC
from splearn.pipeline import SparkPipeline

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline

X = [...]  # list of texts
y = [...]  # list of labels
X_rdd = sc.parallelize(X, 4)
y_rdd = sc.parallelize(y, 4)
Z = DictRDD((X_rdd, y_rdd),
            columns=('X', 'y'),
            dtype=[np.ndarray, np.ndarray])

local_pipeline = Pipeline((
    ('vect', HashingVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', LinearSVC())
))
dist_pipeline = SparkPipeline((
    ('vect', SparkHashingVectorizer()),
    ('tfidf', SparkTfidfTransformer()),
    ('clf', SparkLinearSVC())
))

local_pipeline.fit(X, y)
dist_pipeline.fit(Z, clf__classes=np.unique(y))

y_pred_local = local_pipeline.predict(X)
y_pred_dist = dist_pipeline.predict(Z[:, 'X'])
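
The distributed predictions come back as a block-based distributed array, so they can be collected for evaluation on the driver. A hedged sketch, assuming y_pred_dist behaves like the ArrayRDD shown in the Quick start (collect() returns one numpy array per block):

import numpy as np

y_pred_dist_collected = np.concatenate(y_pred_dist.collect())
accuracy = np.mean(y_pred_dist_collected == np.asarray(y))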

Distributed Model Selection

import numpy as np

from splearn.rdd import DictRDD
from splearn.grid_search import SparkGridSearchCV
from splearn.naive_bayes import SparkMultinomialNB

from sklearn.grid_search import GridSearchCV
from sklearn.naive_bayes import MultinomialNB

X = [...]
y = [...]
X_rdd = sc.parallelize(X, 4)
y_rdd = sc.parallelize(y, 4)
Z = DictRDD((X_rdd, y_rdd),
            columns=('X', 'y'),
            dtype=[np.ndarray, np.ndarray])

parameters = {'alpha': [0.1, 1, 10]}
fit_params = {'classes': np.unique(y)}

local_estimator = MultinomialNB()
local_grid = GridSearchCV(estimator=local_estimator,
                          param_grid=parameters)

estimator = SparkMultinomialNB()
grid = SparkGridSearchCV(estimator=estimator,
                         param_grid=parameters,
                         fit_params=fit_params)

local_grid.fit(X, y)
grid.fit(Z)
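
After fitting, the usual scikit-learn result attributes are assumed to be available, since SparkGridSearchCV builds on sklearn's GridSearchCV (an assumption, not something the README documents):

local_grid.best_params_, local_grid.best_score_
grid.best_params_, grid.best_score_  # assumed to mirror the sklearn attributes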

ROADMAP

  • [ ] Transparent API to support plain numpy and scipy objects (partially done in the transparent_api branch)
  • [ ] Update all dependencies
  • [ ] Use the MLlib and ML packages more extensively (as they become more mature)
  • [ ] Support Spark DataFrames

Special thanks

  • scikit-learn community
  • spylearn community
  • pyspark community

Similar Projects

Comments
  • DBSCAN Import Error

    I have been trying to run DBSCAN using Python from the command line, and I got this error:

    ImportError: cannot import name _get_unmangled_double_vector_rdd

    Can anyone help me with this?

    opened by Elbehery 6
  • Poor performances

    Hi to all! I have just started to dig into Spark machine learning, coming from scikit-learn. I tried to fit a linear SVC with both scikit-learn and sparkit-learn, and splearn remains slower than scikit-learn. How is this possible? (I am attaching my code and results.)

    import time as t
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import LinearSVC
    from splearn.svm import SparkLinearSVC
    from splearn.rdd import ArrayRDD, DictRDD
    import numpy as np
    
    X, y = make_classification(n_samples=20000, n_classes=2)
    print 'Dataset created. # of samples: ', X.shape[0]
    skstart = t.time()
    dt = DecisionTreeClassifier()
    local_clf = LinearSVC()
    local_clf.fit(X, y)
    
    sktime = t.time() - skstart
    print 'Scikit-learn fitting time: ', sktime
    
    spstart = t.time()
    X_rdd = sc.parallelize(X, 20)
    y_rdd = sc.parallelize(y, 20)
    Z = DictRDD((X_rdd, y_rdd), columns=('X', 'y'), dtype=[np.ndarray, np.ndarray])
    
    distr_clf = SparkLinearSVC()
    distr_clf.fit(Z, np.unique(y))
    sptime = t.time() - spstart
    print 'Spark time: ', sptime
    
    ============== RESULTS =================
    Dataset created. # of samples: 20000
    Scikit-learn fitting time: 3.03552293777
    Spark time: 3.919039011
    
    OR for fewer samples:
    Dataset created. # of samples: 2000
    Scikit-learn fitting time: 0.244801998138
    Spark time: 3.15833210945

    opened by orfi2017 3
  • Fixes

    Hello,

    Here are some fixes for issues I encountered using sparkit-learn. If you prefer one pull request per fix, I can provide them.

    • The check for numpy is often failing and does not seem very important
    • Some .persist() calls were very aggressive
    • SparkFeatureUnion does not handle more than two steps or DictRDD, and there was an issue with a return

    Best,

    opened by taynaud 3
  • few readme.md and setup.py corrections

    I'm not sure if this stuff was supposed to work or if it was an untested rough draft. I am creating a PR for a few things I've noticed; I'll dig more if you think I should.

    opened by vchollati 3
  • Validate Vocabulary issue

    First of all, thanks for the git. It is very helpful.

    When running splearn/feature_extraction/text.py in the pyspark shell, I am getting "AttributeError: 'SparkCountVectorizer' object has no attribute '_validate_vocabulary'" in the fit_transform method. (Does it need to be '_init_vocab' or something?)

    Python 2.7, Spark 1.2.0, scikit-learn 0.15.2, numpy 1.9.0

    Code I am using:
    
    vect = text.SparkCountVectorizer()
    result_dist = vect.fit_transform(docs).collect()

    If it is not appropriate to post the issue details here, please redirect me. Thanks in advance.

    question 
    opened by raghunittala 3
  • ImportError: No module named _common

    From AWS EMR 5.4, Spark 2.1.0, I can't import dbscan

    File "/usr/local/lib/python2.7/site-packages/splearn/cluster/dbscan.py", line 3, in <module>
      from pyspark.mllib._common import (_get_unmangled_double_vector_rdd,
    ImportError: No module named _common
    
    opened by nicerobot 2
  • Fix issue in bayes predict_proba

    predict_proba maps to scikit-learn's predict_proba, which calls self.predict_proba. Since sparkit-learn inherits from scikit-learn's naive Bayes classes, this calls sparkit-learn's predict_proba, which fails on a numpy or scipy array.

    opened by taynaud 2
  • Error in Creating DictRdd: Can only zip RDDs with same number of elements in each partition

    I am trying to create a DictRdd as follows:

    cleanedRdd=sc.sequenceFile(path="hdfs:///bdpilot/text_mining/sequence_input_with_target_v",minSplits=100)
    train_rdd,test_rdd = cleanedRdd.randomSplit([0.7,0.3])
    train_rdd.saveAsSequenceFile("hdfs:///bdpilot/text_mining/sequence_train_input")
    
    train_rdd = sc.sequenceFile(path="hdfs:///bdpilot3_h/text_mining/sequence_train_input",minSplits=100)
    
    train_y = train_rdd.map(lambda(x,y): int(y.split("~")[1]))
    train_text = train_rdd.map(lambda(x,y): y.split("~")[0])
    
    train_Z = DictRDD((train_text,train_y),columns=('X','y'),bsize=50)
    

    But I get the following error when I do:

    train_Z.first()
    
    org.apache.spark.SparkException: Can only zip RDDs with same number of
    elements in each partition
    

    I tried the following as well, but with no success:

    train_y = train_rdd.map(lambda(x,y): int(y.split("~")[1]),perservesPartitioning=True)
    train_text = train_rdd.map(lambda(x,y): y.split("~")[0],perservesPartitioning=True)
    train_Z = DictRDD((train_text,train_y),columns=('X','y'),bsize=50)
    
    opened by mrshanth 2
  • Spark version 1.2.1, error AttributeError: <class 'rdd.BlockRDD'> object has no attribute treeReduce

    countvectorizer = SparkCountVectorizer(tokenizer=tokenize_pre_process)
    count_vector
    # <class 'rdd.ArrayRDD'> from PythonRDD[22] at collect at rdd.py:168
    sel_vt = SparkVarianceThreshold()
    red_vt_vector = sel_vt.fit_transform(count_vector)
    
    Traceback (most recent call last):
      File "", line 1, in
      File "base.py", line 63, in fit_transform
        return self.fit(Z, **fit_params).transform(Z)
      File "feature_selection.py", line 72, in fit
        _, _, self.variances_ = X.map(mapper).treeReduce(reducer)
      File "rdd.py", line 179, in __getattr__
        self.__class__, attr))
    AttributeError: <class 'rdd.BlockRDD'> object has no attribute treeReduce

    I am using Spark 1.2.1, and I think RDD has the treeReduce method. Would you have any idea why this error could be popping out of the ArrayRDD extending BlockRDD?

    opened by vishalrajpal25 2
  • TypeError: 'Broadcast' object is unsubscriptable

    We are trying to create a count vector using SparkCountVectorizer. We are using Python 2.6.6, hence we replaced all the dict comprehensions in the code. We ran into the following error:

      File "base.py", line 19, in func_wrapper
        return func(*args, **kwargs)
      File "splearn_custom.py", line 176, in _count_vocab
        j_indices.append(vocabulary[feature])
    TypeError: 'Broadcast' object is unsubscriptable
    

    Here, splearn_custom.py refers to feature_extraction/text.py

    Thanks in advance

    opened by mrshanth 2
  • Syntax Errors

    I am getting syntax errors in

    • splearn/base.py :
      • for name in self.transient}, line 12
    • splearn/feature_extraction/text.py :
      • vocabulary = {t: i for i, t in enumerate(accum.value)}, line 154
    opened by mrshanth 2
  • Examples missing

    Hi!

    I looked into different subdirectories to find an end-to-end demonstration of the features provided by sparkit-learn, but couldn't find one. A directory named examples exists, but it seems empty. If you permit, can I work towards writing code-based tutorials for sparkit-learn?

    Thanks!

    opened by bhav09 0
  • docs: fix simple typo, unnunnecessary -> unnecessary

    There is a small typo in splearn/rdd.py.

    Should read unnecessary rather than unnunnecessary.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • ImportError: cannot import name _check_numpy_unicode_bug

    I got the following error when importing SparkLabelEncoder with scikit-learn version 0.19.1.

    >>> from splearn.preprocessing import SparkLabelEncoder
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/site-packages/splearn/preprocessing/__init__.py", line 1, in <module>
        from .label import SparkLabelEncoder
      File "/usr/lib/python2.7/site-packages/splearn/preprocessing/label.py", line 3, in <module>
        from sklearn.preprocessing.label import _check_numpy_unicode_bug
    ImportError: cannot import name _check_numpy_unicode_bug
    
    opened by dankiho 0
  • [Question] ArrayRDD to Pyspark Dataframe?

    Hi - thanks so much for this package!

    I came to this repo because I need to run a scikit-learn predictive model on Spark. It is easy to map the model with ArrayRDDs. However, my postprocessing assumes a PySpark DataFrame. Is there a way to convert an ArrayRDD to a DataFrame?

    I appreciate any help, thanks!

    opened by osimpson 1
  • Import error cannot import name "frombuffer_empty"

    I installed the latest version of sparkit-learn and I got the error

    from splearn.feature_extraction.text import SparkCountVectorizer
    Traceback (most recent call last):
      File "", line 1, in
      File "/home/gaurishk/anaconda3/lib/python3.6/site-packages/splearn/feature_extraction/__init__.py", line 3, in
        from .text import SparkCountVectorizer
      File "/home/gaurishk/anaconda3/lib/python3.6/site-packages/splearn/feature_extraction/text.py", line 16, in
        from sklearn.utils.fixes import frombuffer_empty
    ImportError: cannot import name 'frombuffer_empty'

    opened by thak123 2
  • What is the roadmap for this project: is it moribund?

    There has been little to no activity for over a year, and one of the issues recommends using spark ml/mllib instead. Would the owners please clarify whether this project is intended to be supported moving forward? I would like to know in order to decide whether to add some algorithms here or independently.

    opened by javadba 1
Releases: 0.2.5

Owner: Lensa