Overview

Synthetic data need to preserve the statistical properties of real data in terms of their individual behavior and (inter-)dependences. Copulas and functional Principal Component Analysis (fPCA) are statistical models that allow these properties to be simulated (Joe 2014). As such, copula-generated data have shown potential to improve the generalization of machine learning (ML) emulators (Meyer et al. 2021) or to anonymize real datasets (Patki et al. 2016).

Synthia is an open source Python package to model univariate and multivariate data, parameterize data using empirical and parametric methods, and manipulate marginal distributions. It is designed to enable scientists and practitioners to handle labelled multivariate data typical of computational sciences. For example, given some vertical profiles of atmospheric temperature, we can use Synthia to generate new but statistically similar profiles in just three lines of code (Table 1).

Synthia supports three methods of multivariate data generation: (i) fPCA, (ii) parametric (Gaussian) copula, and (iii) vine copula models, for continuous (all methods), discrete (vine), and categorical (vine) variables. It has a simple and succinct API to natively handle xarray's labelled arrays and datasets. It uses a pure Python implementation for fPCA and the Gaussian copula, and relies on the well-tested C++ library vinecopulib, through pyvinecopulib's bindings, for fast and efficient computation of vines. For more information, please see the website at https://dmey.github.io/synthia.
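To give an intuition for method (ii), the Gaussian copula idea can be sketched in a few lines of plain NumPy/SciPy. This is a simplified, stand-alone illustration with toy data, not Synthia's implementation:

```python
# Stand-alone sketch of a Gaussian copula (illustrative only, not
# Synthia's implementation). The toy data below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Toy "real" data: two dependent, non-Gaussian columns.
x = rng.gamma(2.0, size=1000)
data = np.column_stack([x, x + rng.normal(scale=0.5, size=1000)])

# 1. Map each margin to normal scores via its empirical ranks.
u = stats.rankdata(data, axis=0) / (len(data) + 1)
z = stats.norm.ppf(u)

# 2. "Fit" the Gaussian copula: the correlation of the normal scores.
corr = np.corrcoef(z, rowvar=False)

# 3. Sample correlated normals, then map back through empirical
#    quantiles so the synthetic margins match the source margins.
samples = rng.multivariate_normal(np.zeros(2), corr, size=500)
u_new = stats.norm.cdf(samples)
synthetic = np.column_stack(
    [np.quantile(data[:, j], u_new[:, j]) for j in range(data.shape[1])]
)
```

The synthetic columns reproduce both the marginal distributions (through the quantile mapping) and the rank dependence (through the fitted correlation) of the source data; Synthia wraps this logic behind the generator classes shown in Table 1.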

Table 1. Example application of Gaussian and fPCA classes in Synthia. These are used to generate random profiles of atmospheric temperature similar to those included in the source data. The xarray dataset structure is maintained and returned by Synthia.

Source:

    ds = syn.util.load_dataset()

Synthetic with Gaussian copula:

    g = syn.CopulaDataGenerator()
    g.fit(ds, syn.GaussianCopula())
    g.generate(n_samples=500)

Synthetic with fPCA:

    g = syn.fPCADataGenerator()
    g.fit(ds)
    g.generate(n_samples=500)

[Figure: example temperature profiles from the source data and from the Gaussian copula and fPCA generators]

Documentation

For installation instructions, getting started guides and tutorials, background information, and API reference summaries, please see the website.

How to cite

If you are using Synthia, please cite the following two papers using their respective Digital Object Identifiers (DOIs). Citations may be generated automatically using Crosscite's DOI Citation Formatter or from the BibTeX entries below.

Synthia software DOI: 10.21105/joss.02863
Software application DOI: 10.5194/gmd-14-5205-2021
@article{Meyer_and_Nagler_2021,
  doi = {10.21105/joss.02863},
  url = {https://doi.org/10.21105/joss.02863},
  year = {2021},
  publisher = {The Open Journal},
  volume = {6},
  number = {65},
  pages = {2863},
  author = {David Meyer and Thomas Nagler},
  title = {Synthia: multidimensional synthetic data generation in Python},
  journal = {Journal of Open Source Software}
}

@article{Meyer_and_Nagler_and_Hogan_2021,
  doi = {10.5194/gmd-14-5205-2021},
  url = {https://doi.org/10.5194/gmd-14-5205-2021},
  year = {2021},
  publisher = {Copernicus {GmbH}},
  volume = {14},
  number = {8},
  pages = {5205--5215},
  author = {David Meyer and Thomas Nagler and Robin J. Hogan},
  title = {Copula-based synthetic data augmentation for machine-learning emulators},
  journal = {Geoscientific Model Development}
}

If needed, you may also cite the specific software version with its corresponding Zenodo DOI.

Contributing

If you are looking to contribute, please read our Contributors' guide for details.

Development notes

If you would like to know more about specific development guidelines, testing and deployment, please refer to our development notes.

Copyright and license

Copyright 2020 D. Meyer and T. Nagler. Licensed under MIT.

Acknowledgements

Special thanks to @letmaik for his suggestions and contributions to the project.

Comments
  • Explain how to run the test suite

    Describe the bug

    There is a test suite, but the documentation does not explain how to run it.

    Here is what works for me:

    1. Install pytest.
    2. Clone the source repository.
    3. Run pytest in the root directory of the repository.
    opened by khinsen 7
  • Review: Copula distribution usage and examples

    Your package offers support for simulating vine copulas. However, I don't see examples demonstrating how to simulate data from a vine copula given desired conditional dependency requirements.

    Is this possible with the current API? If not, how would I use the vine copula generator to achieve this?

    Otherwise, can examples show the difference between simulating Gaussian and vine copulas? I only see examples for the Gaussian copula.

    opened by mnarayan 5
  • fPCA documentation

    Describe the bug

    The documentation page on fPCA says:

    PCA can be used to generate synthetic data for the high-dimensional vector $X$. For every instance $X_i$ in the data set, we compute the principal component scores $a_{i, 1}, \dots, a_{i, K}$. Because the principal components $v_1, \dots, v_K$ are orthogonal, the scores are necessarily uncorrelated and we may treat them as independent.
    

    The claim that "because the principal components $v_1, \dots, v_K$ are orthogonal, the scores are necessarily uncorrelated" looks wrong to me. These scores are projections of the $X_i$ onto the elements of an orthonormal basis. That doesn't make them uncorrelated. There are lots of orthonormal bases one can project on, and for most of them the projections are not uncorrelated. You need some property of the distribution of $X$ to derive a zero correlation, for example a Gaussian distribution, for which the PCA basis yields approximately uncorrelated projections.

    opened by khinsen 3
  • Review: Clarify API

    It would be helpful to explain, somewhere in the introduction or usage section of the documentation, what the different classes (Data Generators, Parametrizers, Transformers) do and what each is supposed to do. If the API is similar to or inspired by the well-known API of a different package, please point to it.

    I think generators and transformers are obvious but I only sort of understand Parametrizers. It is also confusing in the sense that people might think this has something to do with parametric distributions when you mean it to be something different.

    Is this API for Parametrizers inspired by some convention elsewhere? If so, it would be helpful to point to that. For instance, the generators are very similar to statsmodels generators.

    opened by mnarayan 2
  • Small error in docs

    Hi, just letting you know I noticed a small error in the documentation.

    At the bottom of this page https://dmey.github.io/synthia/examples/fpca.html

    The error is in line [6] of the code, under "Plot the results".

    You have: plot_profiles(ds_true, 'temperature_fl')

    But I believe it should be: plot_profiles(ds_synth, 'temperature_fl')

    You want to plot the results here, not the original.

    Cheers & thanks for the cool project!

    opened by BigTuna08 1
  • Review: Comparisons to other common packages

    What are other packages people might use to simulate data (e.g. statsmodels comes to mind) and how is this package different? Your package supports generating data for multivariate copula distributions and via fPCA. I understand what this entails but I think this could use further elaboration.

    This package supports nonparametric distributions much more than the typical parametric data generators found in common packages and it would be useful to highlight these explicitly.

    opened by mnarayan 1
  • Support categorical data for pyvinecopulib

    During fitting, category values are reindexed as integers starting from 0 and transformed to one-hot vectors; the reverse mapping is applied during generation. Any data type works for categories, including strings.

    opened by letmaik 0
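    The one-hot round trip described in the issue above can be sketched as follows (a minimal illustration with hypothetical helper names, not Synthia's actual code):

    ```python
    # Hypothetical sketch of the categorical round trip: reindex category
    # values as integers starting from 0, one-hot encode, then invert
    # both steps when generating.
    import numpy as np

    def encode(values):
        # np.unique gives the sorted categories and each value's code.
        categories, codes = np.unique(values, return_inverse=True)
        one_hot = np.eye(len(categories))[codes]
        return one_hot, categories

    def decode(one_hot, categories):
        # argmax recovers the integer code; indexing restores the value.
        return categories[one_hot.argmax(axis=1)]

    labels = np.array(["rain", "sun", "rain", "snow"])
    one_hot, cats = encode(labels)
    restored = decode(one_hot, cats)
    ```

    Because only argmax positions matter on the way back, any data type works for the category values, as the issue notes.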
  • Add support for categorical data

    We can treat categorical data as discrete, but first we need to pre-process categorical values by one-hot encoding to remove the order. Re the API, we can change the current version from

    # Assuming an xarray dataset ds with X1 discrete and X2 categorical
    generator.fit(ds, copula=syn.VineCopula(controls=ctrl), is_discrete={'X1': True, 'X2': False})
    

    to something like

    # Assuming X3 is continuous
    g.fit(ds, copula=syn.VineCopula(controls=ctrl), types={'X1': 'disc', 'X2': 'cat', 'X3': 'cont'})
    
    opened by dmey 0
  • Add support for handling discrete quantities

    Introduces the option to specify and model discrete quantities as follows:

    # Assuming an xarray dataset ds with X1 discrete and X2 continuous
    generator.fit(ds, copula=syn.VineCopula(controls=ctrl), is_discrete={'X1': True, 'X2': False})
    

    This option is only supported for vine copulas.

    opened by dmey 0
Releases: 1.1.0