Overview

RasgoQL

Write python locally, execute SQL in your data warehouse
« Read the Docs   ·   Join Our Slack »

RasgoQL is a Python package that enables you to easily query and transform tables in your Data Warehouse directly from a notebook.

You can quickly create new features, sample data, apply complex aggregates... all without having to write SQL!

Choose from our library of predefined transformations or make your own to streamline the feature engineering process.

RasgoQL 30-second demo

Why is this package useful?

Data scientists spend much of their time in pandas preparing data for modelling. When they are ready to deploy or scale, two pain points arise:

  1. pandas cannot handle larger volumes of data, forcing the use of VMs or code refactoring.
  2. Feature data must be added to the Enterprise Data Warehouse for future processing, requiring refactoring to SQL.

We created RasgoQL to solve these two pain points.

Learn more at https://docs.rasgoql.com.

How does it work?

Under the covers, RasgoQL sends all processing to your Data Warehouse, enabling the efficient transformation of massive datasets. RasgoQL only needs basic metadata to execute transforms, so your private data remains secure.
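
For example, a transform chain only builds SQL until you ask for results: sql() shows the query that will run in the warehouse, and to_df() executes it there and pulls back just the result set. Below is a minimal sketch using the same table and transform as the Quick Start further down; the credential values are placeholders.

import rasgoql

# Placeholder credentials: fill in your own warehouse details
creds = rasgoql.SnowflakeCredentials(
    account="", user="", password="", role="",
    warehouse="", database="", schema=""
)
rql = rasgoql.connect(creds)

# Registering a table reads only metadata; no rows are pulled yet
dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

# Build a transform lazily and inspect the SQL it would run
weekly_sales = dataset.datetrunc(dates={'ORDERDATE': 'week'})
print(weekly_sales.sql())

# Execute in the warehouse and return the result as a pandas DataFrame
df = weekly_sales.to_df()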

RasgoQL workflow diagram

RasgoQL does these things well:

  • Pulls existing Data Warehouse tables into pandas DataFrames for analysis
  • Constructs SQL queries using a syntax that feels like pandas
  • Creates views in your Data Warehouse to save transformed data
  • Exports runnable SQL as .sql files or dbt-compliant .yaml files (see the sketch after this list)
  • Offers dozens of free SQL transforms to use
  • Coming Soon: allows users to create & add custom transforms
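
To make the export bullet concrete, here is a minimal sketch; it assumes the agg_weekly_sales chain built in the Quick Start below, and the output directory is only an example.

# Assumes `agg_weekly_sales` is the SQLChain built in the Quick Start below.
# to_dbt() writes a dbt model .sql file (plus a schema .yml when one can be generated).
agg_weekly_sales.to_dbt(output_directory='models/rasgoql')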

Rasgo supports Snowflake, BigQuery, Postgres, and Amazon Redshift, with more Data Warehouses being added soon. If you'd like to suggest another database type, submit your idea to our GitHub Discussions page so that other community members can weigh in and show their support.
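
Snowflake credentials are shown in the Quick Start below; the credential class differs per warehouse. As a rough sketch, a BigQuery connection looks like this (parameter names taken from a user report further down this page; all values are placeholders):

import rasgoql

# Placeholder service-account path, project, and dataset
creds = rasgoql.BigQueryCredentials(
    json_filepath="/path/to/service_account.json",
    project="myproject",
    dataset="my_dataset"
)
rql = rasgoql.connect(creds)

# From here, use rql.dataset(...) exactly as in the Snowflake examples
ds = rql.dataset(fqtn="myproject.my_dataset.my_table")
ds.preview()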

Can RasgoQL help you?

  • If you use pandas to build features but are working on a dataset too large to fit in your machine's memory, RasgoQL can help!

  • If your organization uses dbt or another SQL tool to run production data flows but you prefer to build features in pandas, RasgoQL can help!

  • If you know pandas but not SQL and want to learn how your queries translate, RasgoQL can help!

Where to get it

Just run a simple pip install.

pip install rasgoql~=1.0

Report Bug · Suggest Improvement · Request Feature

Quick Start

pip install rasgoql --upgrade

import rasgoql

# Set up credentials for your data warehouse
creds = rasgoql.SnowflakeCredentials(
    account="",
    user="",
    password="",
    role="",
    warehouse="",
    database="",
    schema=""
)

# Connect to DW
rql = rasgoql.connect(creds)

# List available tables
rql.list_tables('ADVENTUREWORKS').head(10)

# Allow rasgoQL to interact with an existing Table in your Data Warehouse
dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

# Take a peek at the data
dataset.preview()

# Use the datetrunc transform to bucket ORDERDATE into weeks
weekly_sales = dataset.datetrunc(dates={'ORDERDATE':'week'})

# Aggregate to sum of sales for each week
agg_weekly_sales = weekly_sales.aggregate(
    group_by=['PRODUCTKEY', 'ORDERDATE_WEEK'],
    aggregations={'SALESAMOUNT': ['SUM']},
    )

# Quickly validate output
agg_weekly_sales.to_df()

# Print the SQL
print(agg_weekly_sales.sql())

Getting Started Tutorials

The best way to get familiar with the RasgoQL basics is by running through these notebooks in the tutorials folder.

Advanced Examples

Joins

Easily join tables together using the join transform.

sales_dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

sales_product_dataset = sales_dataset.join(
  join_table='DIM_PRODUCT',
  join_columns={'PRODUCTKEY': 'PRODUCTKEY'},
  join_type='LEFT',
  join_prefix='PRODUCT')

sales_product_dataset.sql()
sales_product_dataset.preview()

Rasgo Join Example

Chain transforms together

Create rolling aggregations and then drop unnecessary columns.

sales_agg_drop = sales_dataset.rolling_agg(
    aggregations={"SALESAMOUNT": ["MAX", "MIN", "SUM"]},
    order_by="ORDERDATE",
    offsets=[-7, 7],
    group_by=["PRODUCTKEY"],
).drop_columns(exclude_cols=["ORDERDATEKEY"])

sales_agg_drop.sql()
sales_agg_drop.preview()

Multiple rasgoql transforms

Transpose unique values with pivots

Quickly generate pivot tables of your data.

sales_by_product = sales_dataset.pivot(
    dimensions=['ORDERDATE'],
    pivot_column='SALESAMOUNT',
    value_column='PRODUCTKEY',
    agg_method='SUM',
    list_of_vals=['310', '345'],
)

sales_by_product.sql()
sales_by_product.preview()

Rasgoql pivot example

Does any of my data get collected?

Rasgo will not collect any personal information. We log execution of methods in transforms.py for success and failure so that we can more accurately track what's useful and what's problematic.

Where do I go for help?

If you have any questions, please check:

  1. RasgoQL Docs
  2. Slack
  3. GitHub Issues

How can I contribute?

Review the contributors guide

License

RasgoQL uses the GNU AGPL license, as found in the LICENSE file.

This project is sponsored by RasgoML. Find out more at https://www.rasgoml.com/

Comments
  • [BigQuery] fqtn is not valid if project name contains '-'

    Hello,

    The fqtn of the table I want to get follows this pattern: my-awesome-project.schema.table. I tried to get it using rql.dataset(fqtn="my-awesome-project.schema.table") but I get a ValueError: my-awesome-project.schema.table is not a well-formed fqtn. It seems that the validate_fqtn() function applies the regex \w+\.\w+\.\w+, which doesn't accept my GCP project name pattern. Is there a way to make this work without changing my GCP project name?
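
    A quick way to see why that pattern rejects the project name (assuming the validator requires a full match; \w covers only letters, digits, and underscores, so hyphens fail):

    import re

    PATTERN = r'\w+\.\w+\.\w+'
    print(re.fullmatch(PATTERN, 'PROJECT.SCHEMA.TABLE'))             # matches
    print(re.fullmatch(PATTERN, 'my-awesome-project.schema.table'))  # None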

    Thank you for this awesome package, I can't wait to try it ! ❤️ 🚀

    bug 
    opened by amirbtb 10
  • [BigQuery] `to_dbt()` raises IndexError

    Hi,

    I am trying to generate dbt files (.sql & .yml) from a SQLChain using to_dbt(). The source table is a regular table. I'm using rasgoql 1.0.2a2.

    Here is my code; I'm just trying to generate base SQL code for casting the table:

    import rasgoql
    from rasgoql import BigQueryCredentials
    
    PROJECT = "my-project"
    DATASET = "dataset"
    
    creds = BigQueryCredentials(
        json_filepath="/credentials/path",
        project=PROJECT,
        dataset=DATASET
    )
    rql = rasgoql.connect(creds)
    
    ds = rql.dataset(fqtn=f"{PROJECT}.{DATASET}.table")
    
    schema_dict = {column:data_type for column, data_type in ds.get_schema()}
    schema_dict
    
    ds_casted = ds.transform(
      transform_name='cast',
      casts=schema_dict
    )
    
    ds_casted.to_dbt('./test')
    
    

    I get the following error:

    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    /src/test.ipynb Cell [1]
    ----> 1 ds_casted.to_dbt('./test')

    File /usr/local/lib/python3.8/dist-packages/rasgoql/utils/decorators.py:40, in beta.<locals>.wrapper(*args, **kwargs)
         30 @functools.wraps(func)
         31 def wrapper(*args, **kwargs):
         32     logger.info(
         33         f'{func.__name__} is a beta feature. '
         34         'Its functionality and parameters may change in future versions and '
        (...)
         38         'or contact us directly on slack.'
         39     )
    ---> 40     return func(*args, **kwargs)

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/transforms.py:402, in SQLChain.to_dbt(self, output_directory, file_name, config_args, include_schema)
        394         chn_logger.warning(
        395             'Unexpected error generating the schema of this SQLChain. '
        396             'Your model.sql file will be generated without a schema.yml file. '
        (...)
        399             'your_chn.save() to update the view definition in your Data Warehouse.'
        400         )
        401     schema = []
    --> 402 return create_dbt_files(
        403     self.transforms,
        404     schema,
        405     output_directory,
        406     file_name,
        407     config_args,
        408     include_schema
        409 )

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:105, in create_dbt_files(transforms, schema, output_directory, file_name, config_args, include_schema)
        102 output_directory = output_directory or os.getcwd()
        103 file_name = file_name or f'{transforms[-1].output_alias}.sql'
        104 return save_model_file(
    --> 105     sql_definition=assemble_cte_chain(transforms),
        106     output_directory=output_directory,
        107     file_name=file_name,
        108     config_args=config_args,
        109     include_schema=include_schema,
        110     schema=schema
        111 )

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:36, in assemble_cte_chain(transforms, table_type)
         34     t = transforms[0]
         35     create_stmt = _set_create_statement(table_type, t.fqtn)
    ---> 36     final_select = generate_transform_sql(
         37         t.name,
         38         t.arguments,
         39         t.source_table,
         40         None,
         41         t._dw
         42     )
         43     return create_stmt + final_select
         45 # Handle multi-transform chains

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:124, in generate_transform_sql(name, arguments, source_table, running_sql, dw)
        120 """
        121 Returns the SQL for a Transform with applied arguments
        122 """
        123 templates = rtx.serve_rasgo_transform_templates(dw.dw_type)
    --> 124 udt: 'TransformTemplate' = [t for t in templates if t.name == name][0]
        125 if not udt:
        126     raise TransformRenderingError(f'Cannot find a transform named {name}')

    IndexError: list index out of range
    
    bug 
    opened by amirbtb 9
  • Support upstream snowflake connector

    Is your feature request related to a problem? Please describe.
    Need more of the snowflake connection options that are defined here: https://github.com/snowflakedb/snowflake-connector-python/blob/main/src/snowflake/connector/connection.py#L112

    Describe the solution you'd like
    The ability to directly use the snowflake connector, or all of its options

    enhancement 
    opened by pbarker 6
  • `ds.transform(name='cast',casts=cast_dict)` creates duplicate columns  | BigQuery

    Hi,

    The preview of ds.transform(name='cast', casts=cast_dict) shows a dataset with both the old and the new (casted) columns. I took a look at cast.sql and see that it starts with a SELECT *. Suggestion: I believe ds.transform(name='cast', casts=cast_dict) should cast the provided columns while keeping the other ones unchanged.

    Thank you 🙏

    enhancement 
    opened by amirbtb 6
  • Support fetching batches from Snowflake

    Is your feature request related to a problem? Please describe.
    Hey, love the tool. I am loading large datasets that won't fit into memory.

    Describe the solution you'd like
    Would like to use https://docs.snowflake.com/en/user-guide/python-connector-api.html#fetch_pandas_batches

    enhancement 
    opened by pbarker 5
  • `to_dbt()` creates a view in the schema of the source table  | BigQuery

    Hi,

    After I run to_dbt(), I noticed that a view is created in the BigQuery schema where the source table (a normal table) is located. The view has the same name as the .sql file created in the output path that I provided to to_dbt(), and its query is similar to the content of the .sql file output by to_dbt(). The ability to quickly create a view based on all the transformations performed via RasgoQL is very useful, but I'm not sure it should be a default output of to_dbt().

    Again, thank you for your work ! 🙏🏽

    bug 
    opened by amirbtb 5
  • Trouble with import

    Describe the bug
    I get this error when I try to pip install rasgoql[snowflake]: no matches found: rasgoql[snowflake]

    Prior to running this, I successfully downloaded the snowflake connector...

    To Reproduce
    Go to bash and type: pip install rasgoql[snowflake]

    Expected Behavior: successful import

    Actual Behavior: bash returns: no matches found: rasgoql[snowflake]

    Version Information
    rasgoql==1.1.1, rasgotransforms==1.1.3

    Additional context
    I'm trying to connect to a trial Snowflake account

    opened by mashhype 3
  • `ds.concat` doesn't accept the `name` argument | BigQuery

    Hello,

    I tried to use the ds.concat() method as shown in the example in the documentation. ds.concat(concat_list=['first_column',"'-'",'second_column'], name="both_columns") returns the following error:

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/transforms.py:54, in TransformableClass._create_aliased_function.<locals>.f(*arg, **kwargs)
         53 def f(*arg, **kwargs) -> 'SQLChain':
    ---> 54     return self.transform(name=transform.name, *arg, **kwargs)

    TypeError: transform() got multiple values for keyword argument 'name'
    

    When I don't provide the name argument, the function works.

    Thanks again !

    bug 
    opened by amirbtb 3
  • New DB Request | BigQuery

    Would like to use RasgoQL with BigQuery

    Open questions:

    • will transform templates need changes to support BigQuery SQL syntax?
    • may need to support Google OAuth login since creds are typically tied to a Google account
    enhancement 
    opened by cpdough 3
  • Nest Transform Arguments

    This PR allows a transform to accept a Dataset or SQLChain as an input argument. The new logic flattens the primitive to either a fqtn or a CTE wrapped in parentheses and nests it in the running CTE. I don't know why this works, but 10 Budweisers can't be wrong. JK! It was 11.

    opened by griffatrasgo 1
  • #47 RAS-2651 Adding Amazon Redshift

    Adding Amazon Redshift support.

    Test file attached here _test_demo_redshift.zip

    Need the following environment variables:

    REDSHIFT_USER="<dbuser>"
    REDSHIFT_PASSWORD="<dbpass>"
    REDSHIFT_DATABASE="dev"
    REDSHIFT_SCHEMA="public"
    REDSHIFT_HOST="<cluster-host>"
    REDSHIFT_PORT=5439
    REDSHIFT_DB_USER="<dbuser>"
    
    opened by ChrisGriffithRASGO 1
  • Document connecting to data warehouses using dictionary args

    What feature are you requesting?

    There aren't any docs on connecting to a data warehouse using a dictionary. The current credential classes are limited and gave the impression I wouldn't be able to connect to my warehouse

    Are you using a workaround to do it in or outside of the product today?

    Read the code and figured it out

    How important is this feature to your continued use of the package? Can you qualify the value / importance of this feature in any way?

    I think this is pretty important since it gave me the impression I couldn't use this product

    enhancement 
    opened by pbarker 2
Releases (1.6.4)
  • 1.6.4(Jul 5, 2022)

    Version 1.6.4 - 2022-07-05

    Changed

    • Changed default behavior of to_dbt function. Instead of always appending model details to the schema.yml file (which creates duplicate entries for existing models), rql will now check if a model entry already exists in the file and overwrite it. If the model does not exist, it will be appended.
    Source code(tar.gz)
    Source code(zip)
  • 1.6.3(Jun 28, 2022)

  • 1.6.2(Jun 27, 2022)

  • 1.6.1(Jun 27, 2022)

    Version 1.6.1 - 2022-06-27

    Fixed

    • Fixed a bug in the get_schema method of SQLAlchemy DW classes where users were being asked to enter an overwrite param they cannot access
    Source code(tar.gz)
    Source code(zip)
  • 1.6.0(Jun 23, 2022)

    Version 1.6.0 - 2022-06-23

    Changed

    • Changed the get_schema method on all DW classes to accept a single fqtn_or_sql variable
    • Changed the behavior of transform arguments: when a Dataset or SQLChain class is passed in as an argument to a transform, it is automatically flattened to its corresponding fqtn or CTE, then consumed in the transform (see the sketch after this release entry).
    Source code(tar.gz)
    Source code(zip)
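
    A hedged sketch of that flattening behavior, reusing the join transform from the examples above; whether a given argument accepts a primitive depends on the transform, and the table names are illustrative:

    # Assumes `rql` and `sales_dataset` from the Advanced Examples above
    products = rql.dataset('ADVENTUREWORKS.PUBLIC.DIM_PRODUCT')

    # As of 1.6.0, a Dataset (or SQLChain) passed as a transform argument is
    # flattened to its fqtn or CTE before the SQL is rendered
    sales_product = sales_dataset.join(
        join_table=products,
        join_columns={'PRODUCTKEY': 'PRODUCTKEY'},
        join_type='LEFT',
        join_prefix='PRODUCT',
    )
    print(sales_product.sql())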
  • 1.5.6(Jun 21, 2022)

    Version 1.5.6 - 2022-06-20

    Changed

    • Changed the get_schema method on Snowflake and BigQuery DW classes to get output columns without creating views
    Source code(tar.gz)
    Source code(zip)
  • 1.5.5(Jun 7, 2022)
