Apache (Py)Spark type annotations (stub files).

Overview

PySpark Stubs


A collection of Apache Spark stub files. These files were generated by stubgen and manually edited to include accurate type hints.

Tests and configuration files were originally contributed to the Typeshed project. Please refer to its contributors list and license for details.

Important

This project has been merged into the main Apache Spark repository (SPARK-32714). All further development for Spark 3.1 and later will continue there.

For Spark 2.4 and 3.0, development of this package will continue until their official deprecation.

  • If your problem is specific to these versions, feel free to create an issue or open a pull request here.
  • Otherwise, please check the official Spark JIRA and contributing guidelines. If you create a JIRA ticket or Spark PR related to type hints, please ping me with [~zero323] or @zero323, respectively. Thanks in advance.

Motivation

  • Static error detection (see SPARK-20631)

  • Improved autocompletion (syntax completion)
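
As a rough illustration of the first point, the snippet below contains a call that a type checker such as Mypy can reject once the stubs are installed; the message in the comment is only indicative of what a checker might report.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.range(10)

    df.toDF("renamed_id")   # OK: toDF takes column names as strings
    df.toDF(1)              # rejected statically, e.g. incompatible type "int"; expected "str"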

Installation and usage

Please note that the guidelines for distributing type information are still a work in progress (PEP 561 - Distributing and Packaging Type Information). Currently, the installation script overlays existing Spark installations (pyi stub files are copied next to their py counterparts in the PySpark installation directory). If this approach is not acceptable, you can add the stub files to the search path manually, as shown below.
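
For example, assuming the stubs are checked out locally (both the checkout path and the script name below are illustrative), Mypy can be pointed at the stub directory through its MYPYPATH environment variable:

    MYPYPATH=/path/to/pyspark-stubs/third_party/3 mypy my_spark_job.py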

According to PEP 484:

Third-party stub packages can use any location for stub storage. Type checkers should search for them using PYTHONPATH.

Moreover:

A default fallback directory that is always checked is shared/typehints/python3.5/ (or 3.6, etc.)

Please check usage before proceeding.

The package is available on PyPI:

pip install pyspark-stubs

and conda-forge:

conda install -c conda-forge pyspark-stubs

Depending on your environment you might also need a type checker, like Mypy or Pytype [1], and an autocompletion tool, like Jedi.

Editor                                     Type checking   Autocompletion   Notes
Atom                                       [2]             [3]              Through plugins.
IPython / Jupyter Notebook                 [4]             ✘
PyCharm                                    ✔               ✔
PyDev                                      [5]             ?
VIM / Neovim                               [6]             [7]              Through plugins.
Visual Studio Code                         [8]             [9]              Completion with plugin.
Environment independent / other editors    [10]            [11]             Through Mypy and Jedi.

This package is tested against the MyPy development branch and, in rare cases (primarily due to important upstream bugfixes), is not compatible with the preceding MyPy release.

PySpark Version Compatibility

Package versions follow PySpark versions, with the exception of maintenance releases - i.e. pyspark-stubs==2.3.0 should be compatible with pyspark>=2.3.0,<2.4.0. Maintenance releases (post1, post2, ..., postN) are reserved for internal annotation updates.
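
For example, a requirements file that keeps the two packages in the same minor-version range could look like this (the 2.3 line is only an illustration of the scheme above):

    pyspark>=2.3.0,<2.4.0
    pyspark-stubs>=2.3.0,<2.4.0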

API Coverage:

As of release 2.4.0, most of the public API is covered. For details, please check the API coverage document.

See also

Disclaimer

Apache Spark, Spark, PySpark, Apache, and the Spark logo are trademarks of The Apache Software Foundation. This project is not owned, endorsed, or sponsored by The Apache Software Foundation.

Footnotes

[1] Not supported or tested.
[2] Requires atom-mypy or equivalent.
[3] Requires autocomplete-python-jedi or equivalent.
[4] It is possible to use magics to type check directly in the notebook. In general though, you'll have to export whole notebook to .py file and run type checker on the result.
[5] Requires PyDev 7.0.3 or later.
[6] Using vim-mypy, syntastic or Neomake.
[7] With jedi-vim.
[8] With Mypy linter.
[9] With Python extension for Visual Studio Code.
[10] Just use your favorite checker directly, optionally combined with tool like entr.
[11] See Jedi editor plugins list.
Comments
  • Fix 2-argument math functions

    Fixes the binary math functions:

    • atan2 and hypot take two arguments, not one
    • pow supports taking a literal numeric value as its second argument in addition to a Column.
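
    A rough sketch of what the corrected signatures could look like in functions.pyi (illustrative only; the exact unions and overloads in the merged fix may differ):

    from typing import Union
    from pyspark.sql.column import Column

    ColumnOrName = Union[Column, str]  # alias for Union[Column, str], as used elsewhere in these stubs

    def atan2(col1: Union[ColumnOrName, float], col2: Union[ColumnOrName, float]) -> Column: ...
    def hypot(col1: Union[ColumnOrName, float], col2: Union[ColumnOrName, float]) -> Column: ...
    # pow additionally accepts a literal numeric value as its second argument
    def pow(col1: Union[ColumnOrName, float], col2: Union[ColumnOrName, float]) -> Column: ...
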
    bug 3.0 2.3 2.4 
    opened by harpaj 10
  • Jedi doesn't work with MLReaders

    It seems like there is some problem with Jedi compatibility. Some components seem to work pretty well. For example DataFrame without stubs:

    In [1]: import jedi                                                                                                                                                                                                
    
    In [2]: from pyspark.sql import SparkSession                                                                                                                                                                       
    
    In [3]: jedi.Interpreter("SparkSession.builder.getOrCreate().createDataFrame([]).", [globals()]).completions()                                                                                                     
    ---------------------------------------------------------------------------
    AttributeError   
    ...
    AttributeError: 'ModuleContext' object has no attribute 'py__path__'
    

    and with stubs:

    In [1]: from pyspark.sql import SparkSession                                                                                                                                                                       
    
    In [2]: import jedi                                                                                                                                                                                                
    
    In [3]: jedi.Interpreter("SparkSession.builder.getOrCreate().createDataFrame([]).", [globals()]).completions()                                                                                                     
    Out[3]: 
    [<Completion: agg>,
     <Completion: alias>,
     <Completion: approxQuantile>,
     <Completion: cache>,
     <Completion: checkpoint>,
     <Completion: coalesce>,
     <Completion: collect>,
     <Completion: colRegex>,
     <Completion: columns>,
     <Completion: corr>,
     <Completion: count>,
     <Completion: cov>,
    ...
     <Completion: __str__>]
    

    So far so good. However, if we take for example LinearRegressionModel.load, things don't work so well. Without stubs it provides no suggestions:

    In [1]: import jedi                                                                                                                                                                                                
    
    In [2]: from pyspark.ml.regression import LinearRegressionModel                                                                                                                                                    
    
    In [3]: jedi.Interpreter("LinearRegressionModel.load('foo').", [globals()]).completions()                                                                                                                          
    Out[3]: []
    

    but the ones provided with stubs

    In [1]: import jedi                                                                                                                                                                                                
    
    In [2]: from pyspark.ml.regression import LinearRegressionModel                                                                                                                                                    
    
    In [3]: jedi.Interpreter("LinearRegressionModel.load('foo').", [globals()]).completions()                                                                                                                          
    Out[3]: 
    [<Completion: load>,
     <Completion: read>,
     <Completion: __annotations__>,
     <Completion: __class__>,
     <Completion: __delattr__>,
     <Completion: __dict__>,
     <Completion: __dir__>,
     <Completion: __doc__>,
     <Completion: __eq__>,
     <Completion: __format__>,
     <Completion: __getattribute__>,
     <Completion: __hash__>,
     <Completion: __init__>,
     <Completion: __init_subclass__>,
     <Completion: __module__>,
     <Completion: __ne__>,
     <Completion: __new__>,
     <Completion: __reduce__>,
     <Completion: __reduce_ex__>,
     <Completion: __repr__>,
     <Completion: __setattr__>,
     <Completion: __sizeof__>,
     <Completion: __slots__>,
    

    don't make much sense. If the model is fitted:

    In [4]: from pyspark.ml.regression import LinearRegression                                                                                                                                                         
    
    In [5]: jedi.Interpreter("LinearRegression().fit(...).", [globals()]).completions()                                                                                                                                
    Out[5]: 
    [<Completion: aggregationDepth>,
     <Completion: append>,
     <Completion: clear>,
     <Completion: coefficients>,
     <Completion: copy>,
     <Completion: count>,
    ....
     <Completion: __str__>]
    

    A model which is explicitly annotated works fine, so it seems like there is something in MLReader or one of its subclasses that causes the failure.

    We already have data tests for this (as well as some test cases from apache/spark examples), and mypy seems to be fine with this.

    Since LinearRegression.fit works fine (and some toy tests confirm that), Generics are not sufficient to reproduce the problem. So it seems like the type parameter is not processed correctly somewhere along the path.

    Tested with:

    • jedi==0.15.2 and jedi==0.16.0 (0c56aa4).
    • pyspark-stubs==3.0.0.dev5
    • pyspark==3.0.0.dev0 (afe70b3)
    opened by zero323 7
  • DataFrameReader.load parameters incorrectly expected all to be strings

    Using 2.4.0.post6

    spark.read.load(folders, inferSchema=True, header=False)
    

    mypy reports Expected type 'str', got 'bool' instead for both inferSchema and header.

    Looks like the issue is in third_party/3/pyspark/sql/readwriter.pyi (line 23), where the definition for load() has **options: str. For csv support this needs to be **options: Optional[Union[bool, str, int]], but to handle the general case it probably needs to be **options: Any.
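
    A hedged sketch of the loosened signature discussed above (parameter names follow the public PySpark API and may differ slightly from the actual stub):

    from typing import Any, List, Optional, Union
    from pyspark.sql.dataframe import DataFrame
    from pyspark.sql.types import StructType

    class DataFrameReader:
        def load(self, path: Optional[Union[str, List[str]]] = ...,
                 format: Optional[str] = ...,
                 schema: Optional[Union[StructType, str]] = ...,
                 **options: Any) -> DataFrame: ...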

    enhancement 
    opened by ghost 7
  • Added contains to Column

    The contains method is missing from the stubs, causing mypy to raise the error: "Column" not callable.

    This PR adds the type hints to 2.4 specifically (the version we are using), but they should probably also be added to the other versions.
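
    For reference, a minimal sketch of the added hint (the parameter type in the actual PR may be narrower than Any):

    from typing import Any

    class Column:
        def contains(self, item: Any) -> "Column": ...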

    opened by Braamling 6
  • #394: Use Union[List[Column], List[str]] for Select

    Passing a List[str] to select raises a mypy warning, and similarly for List[Column]. We change the type from List[Union[Column, str]] to Union[List[Column], List[str]].

    Fixes #394.
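
    A sketch of the resulting overloads (illustrative; the surrounding class and alias are simplified):

    from typing import List, Union, overload
    from pyspark.sql.column import Column

    ColumnOrName = Union[Column, str]

    class DataFrame:
        @overload
        def select(self, *cols: ColumnOrName) -> "DataFrame": ...
        @overload
        def select(self, __cols: Union[List[Column], List[str]]) -> "DataFrame": ...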

    opened by jhereth 5
  • Update distinct() and repartition() definitions

    Update the repartition functions to allow a Column for the numPartitions parameter.

    Reference

    numPartitions – can be an int to specify the target number of partitions or a Column.
        If it is a Column, it will be used as the first partitioning column.
        If not specified, the default number of partitions is used.
    

    Also adds a stub for DataFrame.distinct().
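
    Roughly, the updated definitions could look like this (an illustrative sketch, not the exact diff):

    from typing import Union, overload
    from pyspark.sql.column import Column

    ColumnOrName = Union[Column, str]

    class DataFrame:
        @overload
        def repartition(self, numPartitions: int, *cols: ColumnOrName) -> "DataFrame": ...
        @overload
        def repartition(self, *cols: ColumnOrName) -> "DataFrame": ...
        def distinct(self) -> "DataFrame": ...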

    opened by zpencerq 5
  • Allow `Column` type for timezone argument in pyspark.sql.functions

    In the functions here: https://github.com/zero323/pyspark-stubs/blob/3c4684a224c1be4eea4577e475f8bb4d045edddd/third_party/3/pyspark/sql/functions.pyi#L100-L101 we currently have tz: str, but this can also be specified as a Column.

    Example:

    >>> from pyspark.sql import functions
    >>> df = spark.sql("SELECT CAST(0 AS TIMESTAMP) AS timestamp, 'Asia/Tokyo' AS tz")
    >>> df.select(functions.from_utc_timestamp(df.timestamp, df.tz)).collect()
    [Row(from_utc_timestamp(timestamp, tz)=datetime.datetime(1970, 1, 1, 18, 0))]
    

    I think this could be expanded to tz: ColumnOrName?
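
    Something along these lines, assuming the change is limited to the function shown in the example (a plain str is interpreted as a timezone string rather than a column name):

    from typing import Union
    from pyspark.sql.column import Column

    def from_utc_timestamp(timestamp: Union[Column, str], tz: Union[Column, str]) -> Column: ...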

    3.0 2.4 3.1 
    opened by charlietsai 4
  • Overload DataFrame.drop: sequences must be *str

    The method DataFrame.drop expects either one Column, one str, or an iterable of strings. This is only type-checked inside the function, though.

    Currently the type hints (and the actual API) allow passing multiple Columns, but doing so results in a runtime error. Personally, I'd like to have that caught earlier. But as this might be getting too close to the internals of the functions, I'd like to hear your opinion on whether or not the type hints should "look inside" to aid development.
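
    One possible shape for such overloads, under the assumption that only a single Column, a single str, or several strs should type-check:

    from typing import overload
    from pyspark.sql.column import Column

    class DataFrame:
        @overload
        def drop(self, cols: Column) -> "DataFrame": ...
        @overload
        def drop(self, *cols: str) -> "DataFrame": ...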

    opened by oliverw1 4
  • provide overloaded methods for sample

    The fraction is a required argument to the sample method. Anytime someone calls df.sample(.01), this is met in mypy with:

    Argument 1 to "sample" of "DataFrame" has incompatible type "float"; expected "Optional[bool]"

    In the PySpark API, the three arguments are in fact pure keyword arguments that are handled later to ensure that fraction is given. This is probably done to keep consistency with the Scala API.

    By overloading the methods, the issue is resolved.
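
    An illustrative sketch of such overloads (argument names follow the public PySpark API; the exact form in the stubs may differ):

    from typing import Optional, overload

    class DataFrame:
        @overload
        def sample(self, fraction: float, seed: Optional[int] = ...) -> "DataFrame": ...
        @overload
        def sample(self, withReplacement: Optional[bool], fraction: float,
                   seed: Optional[int] = ...) -> "DataFrame": ...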

    opened by oliverw1 4
  • Allow non-string load/save parameters

    Resolves #273

    Additional parameters to DataFrameReader.load() and DataFrameWriter.save()/.saveAsTable() are passed to the file-type specific reader or writer. These parameters can be of any type.

    opened by mark-oppenheim 4
  • Fix return type for DataFrame.groupBy / cube / rollup

    2.3 has these data types and I was erroneously getting errors for them.
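
    For reference, the return types in question (an illustrative sketch of the ported change):

    from typing import Union
    from pyspark.sql.column import Column
    from pyspark.sql.group import GroupedData

    ColumnOrName = Union[Column, str]

    class DataFrame:
        def groupBy(self, *cols: ColumnOrName) -> GroupedData: ...
        def cube(self, *cols: ColumnOrName) -> GroupedData: ...
        def rollup(self, *cols: ColumnOrName) -> GroupedData: ...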

    Note this is a port of e2d225f06ff36fcbf79e2123f1c18f380e862728

    I tried a cherry-pick, but it had some issues (not sure why).

    opened by dangercrow 4