Write python locally, execute SQL in your data warehouse

Overview

RasgoQL

≪ Read the Docs   ·   Join Our Slack »

RasgoQL is a Python package that enables you to easily query and transform tables in your Data Warehouse directly from a notebook.

You can quickly create new features, sample data, apply complex aggregates... all without having to write SQL!

Choose from our library of predefined transformations or make your own to streamline the feature engineering process.

RasgoQL 30-second demo

Why is this package useful?

Data scientists spend much of their time in pandas preparing data for modelling. When they are ready to deploy or scale, two pain points arise:

  1. pandas cannot handle larger volumes of data, forcing the use of VMs or code refactoring.
  2. Feature data must be added to the Enterprise Data Warehouse for future processing, requiring the pandas code to be refactored into SQL.

We created RasgoQL to solve these two pain points.

Learn more at https://docs.rasgoql.com.

How does it work?

Under the covers, RasgoQL sends all processing to your Data Warehouse, enabling the efficient transformation of massive datasets. RasgoQL only needs basic metadata to execute transforms, so your private data remains secure.
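
A minimal sketch of that flow, reusing the Snowflake credentials and table from the Quick Start below (illustrative only; every method used here appears elsewhere on this page):

import rasgoql

# Only basic metadata (credentials, table and column names) is needed
creds = rasgoql.SnowflakeCredentials(
    account="", user="", password="", role="",
    warehouse="", database="", schema=""
)
rql = rasgoql.connect(creds)

# Wrapping a table does not copy any data locally
ds = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

# Chaining a transform composes SQL; nothing runs yet
weekly = ds.datetrunc(dates={'ORDERDATE': 'week'})
print(weekly.sql())   # inspect the query that will run in the warehouse

# Execution happens in the warehouse only when you ask for results
weekly.preview()      # peek at a few rows
df = weekly.to_df()   # pull the full result into a pandas DataFrame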

RasgoQL workflow diagram

RasgoQL does these things well:

  • Pulls existing Data Warehouse tables into pandas DataFrames for analysis
  • Constructs SQL queries using a syntax that feels like pandas
  • Creates views in your Data Warehouse to save transformed data
  • Exports runnable SQL as .sql files or dbt-compliant .yaml files
  • Offers dozens of free SQL transforms to use
  • Coming Soon: allows users to create & add custom transforms

Rasgo supports Snowflake, BigQuery, Postgres, and Amazon Redshift, with more Data Warehouses being added soon. If you'd like to suggest another database type, submit your idea to our GitHub Discussions page so that other community members can weigh in and show their support.

Can RasgoQL help you?

  • If you use pandas to build features but are working on a dataset too large to fit in your machine's memory, RasgoQL can help!

  • If your organization uses dbt or another SQL tool to run production data flows but you prefer to build features in pandas, RasgoQL can help!

  • If you know pandas but not SQL and want to learn how your queries will translate, RasgoQL can help!

Where to get it

Just run a simple pip install.

pip install rasgoql~=1.0

Report Bug · Suggest Improvement · Request Feature

Quick Start

pip install rasgoql --upgrade

import rasgoql

# Build credentials for your data warehouse
creds = rasgoql.SnowflakeCredentials(
    account="",
    user="",
    password="",
    role="",
    warehouse="",
    database="",
    schema=""
)

# Connect to DW
rql = rasgoql.connect(creds)

# List available tables
rql.list_tables('ADVENTUREWORKS').head(10)

# Allow rasgoQL to interact with an existing Table in your Data Warehouse
dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

# Take a peek at the data
dataset.preview()

# Use the datetrunc transform to truncate order dates to the week
weekly_sales = dataset.datetrunc(dates={'ORDERDATE':'week'})

# Aggregate to sum of sales for each week
agg_weekly_sales = weekly_sales.aggregate(
    group_by=['PRODUCTKEY', 'ORDERDATE_WEEK'],
    aggregations={'SALESAMOUNT': ['SUM']},
    )

# Quickly validate output
agg_weekly_sales.to_df()

# Print the SQL
print(agg_weekly_sales.sql())
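
To persist or hand off the result, the chain can also be saved as a view in your Data Warehouse or exported as dbt files. A minimal sketch continuing from the Quick Start above; the exact optional parameters of save() and to_dbt() may differ between versions, so check the docs:

# Save the transformed output as a view in your Data Warehouse
# (naming/materialization options may be available -- see the docs)
agg_weekly_sales.save()

# Export dbt-ready files; the argument is the output directory.
# Per the release notes below, this writes a model .sql file and
# maintains a matching entry in schema.yml.
agg_weekly_sales.to_dbt('./models')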

Getting Started Tutorials

The best way to get familiar with the RasgoQL basics is by running through these notebooks in the tutorials folder.

Advanced Examples

Joins

Easily join tables together using the join transform.

sales_dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

sales_product_dataset = sales_dataset.join(
  join_table='DIM_PRODUCT',
  join_columns={'PRODUCTKEY': 'PRODUCTKEY'},
  join_type='LEFT',
  join_prefix='PRODUCT')

sales_product_dataset.sql()
sales_product_dataset.preview()

Rasgo Join Example

Chain transforms together

Create a rolling aggregation and then drop unnecessary columns.

sales_agg_drop = sales_dataset.rolling_agg(
    aggregations={"SALESAMOUNT": ["MAX", "MIN", "SUM"]},
    order_by="ORDERDATE",
    offsets=[-7, 7],
    group_by=["PRODUCTKEY"],
).drop_columns(exclude_cols=["ORDERDATEKEY"])

sales_agg_drop.sql()
sales_agg_drop.preview()

Multiple rasgoql transforms

Transpose unique values with pivots

Quickly generate pivot tables of your data.

sales_by_product = sales_dataset.pivot(
    dimensions=['ORDERDATE'],
    pivot_column='SALESAMOUNT',
    value_column='PRODUCTKEY',
    agg_method='SUM',
    list_of_vals=['310', '345'],
)

sales_by_product.sql()
sales_by_product.preview()

Rasgoql pivot example

Does any of my data get collected?

Rasgo will not collect any personal information. We log execution of methods in transforms.py for success and failure so that we can more accurately track what's useful and what's problematic.

Where do I go for help?

If you have any questions, please check:

  1. RasgoQL Docs
  2. Slack
  3. GitHub Issues

How can I contribute?

Review the contributors guide

License

RasgoQL uses the GNU AGPL license, as found in the LICENSE file.

This project is sponsored by RasgoML. Find out more at https://www.rasgoml.com/

Comments
  • [BigQuery] fqtn is not valid if project name contains '-'

    Hello,

    The fqtn of the table I want to get follows this pattern: my-awesome-project.schema.table. I tried to get it using rql.dataset(fqtn="my-awesome-project.schema.table") but I get a ValueError: my-awesome-project.schema.table is not a well-formed fqtn. It seems that the validate_fqtn() function applies the regex \w+\.\w+\.\w+, which doesn't accept my GCP project name pattern. Is there a way to make this work without changing my GCP project name?
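
    For illustration, the mismatch can be reproduced with the regex alone (the exact matching call inside validate_fqtn() may differ):

    import re

    pattern = r'\w+\.\w+\.\w+'
    print(re.fullmatch(pattern, 'my_project.schema.table'))          # matches
    print(re.fullmatch(pattern, 'my-awesome-project.schema.table'))  # None: '-' is not matched by \w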

    Thank you for this awesome package, I can't wait to try it! ❤️ 🚀

    bug 
    opened by amirbtb 10
  • [BigQuery] `to_dbt()` raises IndexError

    Hi,

    I am trying to generate dbt files (.sql & .yml) from a SQLChain using to_dbt(). The source table is a regular table. I'm using rasgoql 1.0.2a2.

    Here is my code; I'm just trying to generate base SQL code for casting the table:

    import rasgoql
    from rasgoql import BigQueryCredentials
    
    PROJECT = "my-project"
    DATASET = "dataset"
    
    creds = BigQueryCredentials(
        json_filepath="/credentials/path",
        project=PROJECT,
        dataset=DATASET
    )
    rql = rasgoql.connect(creds)
    
    ds = rql.dataset(fqtn=f"{PROJECT}.{DATASET}.table")
    
    schema_dict = {column:data_type for column, data_type in ds.get_schema()}
    schema_dict
    
    ds_casted = ds.transform(
      transform_name='cast',
      casts=schema_dict
    )
    
    ds_casted.to_dbt('./test')
    
    

    I get the following error :

    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    /src/test.ipynb Cell [1]
    ----> 1 ds_casted.to_dbt('./test')

    File /usr/local/lib/python3.8/dist-packages/rasgoql/utils/decorators.py:40, in beta.<locals>.wrapper(*args, **kwargs)
         30 @functools.wraps(func)
         31 def wrapper(*args, **kwargs):
         32     logger.info(
         33         f'{func.__name__} is a beta feature. '
         34         'Its functionality and parameters may change in future versions and '
       (...)
         38         'or contact us directly on slack.'
         39     )
    ---> 40     return func(*args, **kwargs)

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/transforms.py:402, in SQLChain.to_dbt(self, output_directory, file_name, config_args, include_schema)
        394         chn_logger.warning(
        395             'Unexpected error generating the schema of this SQLChain. '
        396             'Your model.sql file will be generated without a schema.yml file. '
       (...)
        399             'your_chn.save() to update the view definition in your Data Warehouse.'
        400         )
        401     schema = []
    --> 402 return create_dbt_files(
        403     self.transforms,
        404     schema,
        405     output_directory,
        406     file_name,
        407     config_args,
        408     include_schema
        409 )

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:105, in create_dbt_files(transforms, schema, output_directory, file_name, config_args, include_schema)
        102 output_directory = output_directory or os.getcwd()
        103 file_name = file_name or f'{transforms[-1].output_alias}.sql'
        104 return save_model_file(
    --> 105     sql_definition=assemble_cte_chain(transforms),
        106     output_directory=output_directory,
        107     file_name=file_name,
        108     config_args=config_args,
        109     include_schema=include_schema,
        110     schema=schema
        111 )

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:36, in assemble_cte_chain(transforms, table_type)
         34     t = transforms[0]
         35     create_stmt = _set_create_statement(table_type, t.fqtn)
    ---> 36     final_select = generate_transform_sql(
         37         t.name,
         38         t.arguments,
         39         t.source_table,
         40         None,
         41         t._dw
         42     )
         43     return create_stmt + final_select
         45 # Handle multi-transform chains

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:124, in generate_transform_sql(name, arguments, source_table, running_sql, dw)
        120 """
        121 Returns the SQL for a Transform with applied arguments
        122 """
        123 templates = rtx.serve_rasgo_transform_templates(dw.dw_type)
    --> 124 udt: 'TransformTemplate' = [t for t in templates if t.name == name][0]
        125 if not udt:
        126     raise TransformRenderingError(f'Cannot find a transform named {name}')

    IndexError: list index out of range
    
    bug 
    opened by amirbtb 9
  • Support upstream snowflake connector

    Is your feature request related to a problem? Please describe. Need more of the snowflake connection options that are defined here https://github.com/snowflakedb/snowflake-connector-python/blob/main/src/snowflake/connector/connection.py#L112

    Describe the solution you'd like The ability to directly use the snowflake connector, or all of its options

    enhancement 
    opened by pbarker 6
  • `ds.transform(name='cast',casts=cast_dict)` creates duplicate columns  | BigQuery

    Hi,

    The preview of ds.transform(name='cast', casts=cast_dict) shows a dataset with both the old and the new (cast) columns. I took a look at cast.sql and I see that it starts with a SELECT *. Suggestion: I believe ds.transform(name='cast', casts=cast_dict) should cast the provided columns while keeping the other ones unchanged.

    Thank you 🙏

    enhancement 
    opened by amirbtb 6
  • Support fetching batches from Snowflake

    Is your feature request related to a problem? Please describe. Hey, love the tool, I am loading large datasets that won't fit into memory

    Describe the solution you'd like Would like to use https://docs.snowflake.com/en/user-guide/python-connector-api.html#fetch_pandas_batches

    enhancement 
    opened by pbarker 5
  • `to_dbt()` creates a view in the schema of the source table  | BigQuery

    Hi,

    After I run to_dbt(), I noticed that a view is created in the BigQuery schema where the source table (a normal table) is located. The view has the same name as the .sql file created in the output path that I provided to to_dbt(), and its query is similar to the content of the .sql file output by to_dbt(). The ability to quickly create a view based on all the transformations performed via RasgoQL is very useful, but I'm not sure it should be a default output of to_dbt().

    Again, thank you for your work ! 🙏🏽

    bug 
    opened by amirbtb 5
  • Trouble with import

    Describe the bug I get this error when I try to pip install rasgoql[snowflake]: no matches found: rasgoql[snowflake]

    Prior to running this, I successfully downloaded the snowflake connector...

    To Reproduce go to bash and type: pip install rasgoql[snowflake]

    Expected Behavior: successful import

    Actual Behavior: bash returns: no matches found: rasgoql[snowflake]

    Version Information (please complete the following information): rasgoql==1.1.1 rasgotransforms==1.1.3

    Additional context I'm trying to connect to a trial snowflake account

    opened by mashhype 3
  • `ds.concat` doesn't accept the `name` argument | BigQuery

    Hello,

    I tried to use the ds.concat() method as shown in the example in the documentation. ds.concat(concat_list=['first_column',"'-'",'second_column'], name="both_columns") returns the following error:

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/transforms.py:54, in TransformableClass._create_aliased_function.<locals>.f(*arg, **kwargs)
         53 def f(*arg, **kwargs) -> 'SQLChain':
    ---> 54     return self.transform(name=transform.name, *arg, **kwargs)

    TypeError: transform() got multiple values for keyword argument 'name'


    When I don't provide the name argument, the function works.

    Thanks again !

    bug 
    opened by amirbtb 3
  • New DB Request | BigQuery

    Would like to use RasgoQL with BigQuery

    Open questions:

    • will transform templates need changes to support BigQuery SQL syntax?
    • may need to support google oauth login since creds are typically tied to a Google account
    enhancement 
    opened by cpdough 3
  • Nest Transform Arguments

    This PR allows a transform to accept a Dataset or SQLChain as an input argument. The new logic flattens the primitive to either a fqtn or a CTE wrapped in parentheses and nests it in the running CTE. I don't know why this works, but 10 Budweisers can't be wrong. JK! It was 11.

    opened by griffatrasgo 1
  • #47 RAS-2651 Adding Amazon Redshift

    Adding Amazon Redshift support.

    Test file attached here _test_demo_redshift.zip

    Need the following environment variables:

    REDSHIFT_USER="<dbuser>"
    REDSHIFT_PASSWORD="<dbpass>"
    REDSHIFT_DATABASE="dev"
    REDSHIFT_SCHEMA="public"
    REDSHIFT_HOST="<cluster-host>"
    REDSHIFT_PORT=5439
    REDSHIFT_DB_USER="<dbuser>"
    
    opened by ChrisGriffithRASGO 1
  • Document connecting to data warehouses using dictionary args

    What feature are you requesting?

    There aren't any docs on connecting to a data warehouse using a dictionary. The current credential classes are limited and gave the impression I wouldn't be able to connect to my warehouse

    Are you using a workaround to do it in or outside of the product today?

    Read the code and figured it out

    How important is this feature to your continued use of the package? Can you qualify the value / importance of this feature in any way?

    I think this is pretty important since it gave me the impression I couldn't use this product

    enhancement 
    opened by pbarker 2
Releases(1.6.4)
  • 1.6.4(Jul 5, 2022)

    Version 1.6.4 - 2022-07-05

    Changed

    • Changed default behavior of to_dbt function. Instead of always appending model details to the schema.yml file (which creates duplicate entries for existing models), rql will now check if a model entry already exists in the file and overwrite it. If the model does not exist, it will be appended.
    Source code(tar.gz)
    Source code(zip)
  • 1.6.3(Jun 28, 2022)

  • 1.6.2(Jun 27, 2022)

  • 1.6.1(Jun 27, 2022)

    Version 1.6.1 - 2022-06-27

    Fixed

    • Fixed a bug in the get_schema method of SQLAlchemy DW classes where users were being asked to enter an overwrite param they cannot access
    Source code(tar.gz)
    Source code(zip)
  • 1.6.0(Jun 23, 2022)

    Version 1.6.0 - 2022-06-23

    Changed

    • Changed the get_schema method on all DW classes to accept a single fqtn_or_sql variable
    • Changed the behavior of transform arguments: when a Dataset or SQLChain class is passed in as an argument to a transform, it is automatically flattened to its corresponding fqtn or CTE then consumed in the transform.
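
    A hypothetical sketch of what this enables (the join arguments mirror the join example above; the DIMPRODUCT fqtn is made up for illustration, and rql is assumed to be a connected client as in the Quick Start):

    sales = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')
    products = rql.dataset('ADVENTUREWORKS.PUBLIC.DIMPRODUCT')  # hypothetical table

    # The Dataset passed as join_table is flattened to its fqtn automatically
    joined = sales.join(
        join_table=products,
        join_columns={'PRODUCTKEY': 'PRODUCTKEY'},
        join_type='LEFT',
        join_prefix='PRODUCT',
    )
    print(joined.sql())
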
    Source code(tar.gz)
    Source code(zip)
  • 1.5.6(Jun 21, 2022)

    Version 1.5.6 - 2022-06-20

    Changed

    • Changed the get_schema method on Snowflake and BigQuery DW classes to get output columns without creating views
    Source code(tar.gz)
    Source code(zip)
  • 1.5.5(Jun 7, 2022)
