Vectorizers for a range of different data types

Overview

Vectorizers

There are a large number of machine learning tools for effectively exploring and working with data that is given as vectors (ideally with a defined notion of distance as well). There is also a large volume of data that does not come neatly packaged as vectors: text data, variable-length sequence data (numeric or categorical), dataframes of mixed data types, sets of point clouds, and more. Usually, one way or another, such data can be wrangled into vectors in a way that preserves some relevant properties of the original data. This library seeks to provide a broad suite of general-purpose techniques for such wrangling, making it easier and faster for users to get various kinds of unstructured sequence data into vector formats for exploration and machine learning.

Why use Vectorizers?

Data wrangling can be tedious, error-prone, and fragile to integrate into production pipelines. The vectorizers library aims to provide a set of easy-to-use tools for turning various kinds of unstructured sequence data into vectors. By following the scikit-learn transformer API, we ensure that any of the vectorizer classes can be trivially integrated into existing sklearn workflows or pipelines. By keeping the vectorization approaches as general as possible (as opposed to specialising in very specific data types), we aim to ensure that a very broad range of data can be handled efficiently. Finally, we aim to provide robust techniques with sound mathematical foundations, preferring them over potentially more powerful but black-box approaches, for greater transparency in data processing and transformation.
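
As an illustration of that drop-in compatibility, here is a minimal sketch of a vectorizer inside a standard sklearn pipeline (NgramVectorizer is part of this library; the toy documents and the TruncatedSVD step are illustrative choices):

from sklearn.pipeline import make_pipeline
from sklearn.decomposition import TruncatedSVD
import vectorizers

# pre-tokenized documents: each document is a sequence of tokens
docs = [["red", "green", "red"], ["green", "blue"]]

pipeline = make_pipeline(
    vectorizers.NgramVectorizer(),  # sparse n-gram count matrix
    TruncatedSVD(n_components=2),   # any downstream sklearn estimator
)
embedding = pipeline.fit_transform(docs)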

How to use Vectorizers

Quick start examples to be added soon ...
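
In the meantime, here is a minimal sketch of the basic fit/transform pattern (TokenCooccurrenceVectorizer and its window_radii parameter are part of this library; the toy corpus and the parameter value are illustrative):

import vectorizers

corpus = [["a", "b", "c"], ["b", "c", "d"]]  # documents as token sequences
word_vectors = vectorizers.TokenCooccurrenceVectorizer(
    window_radii=2,  # half-width of the co-occurrence window
).fit_transform(corpus)
# one row per distinct token, summarizing its co-occurrence pattern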

For further examples of using this library for text, we recommend the documentation written up in the EasyData reproducible data science framework by some of our colleagues over at: https://github.com/hackalog/vectorizers_playground

Installing

Vectorizers is designed to be easy to install: it is a pure Python module with relatively light requirements:

  • numpy
  • scipy
  • scikit-learn >= 0.22
  • numba >= 0.51

In the near future the package should be pip installable -- check back for updates:

pip install vectorizers

To manually install this package:

wget https://github.com/TutteInstitute/vectorizers/archive/master.zip
unzip master.zip
rm master.zip
cd vectorizers-master
python setup.py install

Help and Support

This project is still young and the documentation is still growing. In the meantime, please open an issue and we will try to provide any help and guidance we can. Please also check the docstrings on the code, which provide descriptions of the parameters.

Contributing

Contributions are more than welcome! There are lots of opportunities for potential projects, so please get in touch if you would like to help out. Everything from code to notebooks to examples and documentation is equally valuable, so please don't feel you can't contribute. We would greatly appreciate the contribution of tutorial notebooks applying vectorizer tools to diverse or interesting datasets. If you find vectorizers useful for your data, please consider contributing an example showing how it can apply to the kind of data you work with!

To contribute, please fork the project, make your changes, and submit a pull request. We will do our best to work through any issues with you and get your code merged into the main branch.

License

The vectorizers package is 2-clause BSD licensed.

Comments
  • Added SignatureVectorizer (iisignature)

    I've implemented SignatureVectorizer, which returns the path signatures for a collection of paths.

    This vectorizer essentially wraps the iisignature package so that it fits into the standard sklearn-style fit_transform pipeline. While it does require iisignature, the imports are written such that the rest of the library can still be used if the user does not have iisignature installed (a sketch of this optional-import pattern follows this item).

    For more details on the path signature technique, I've found this paper quite instructive: A Primer on the Signature Method in Machine Learning (Chevyrev, I. and Kormilitzin, A.)

    opened by jh83775 6
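
    The optional-import approach mentioned above is a common pattern; a minimal sketch of the general idea (the class body here is illustrative, not the library's actual code):

    try:
        import iisignature
    except ImportError:  # keep the rest of the library usable without it
        iisignature = None

    class SignatureVectorizer:
        def __init__(self, depth=2):
            if iisignature is None:
                raise ImportError("SignatureVectorizer requires the iisignature package")
            self.depth = depth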
  • Add Compression Vectorizers

    Add Lempel-Ziv and Byte Pair Encoding based vectorizers, allowing for vectorization of non-tokenized strings.

    Also includes a basic outline of a distribution vectorizer, but this may require something more powerful than pomegranate.

    opened by lmcinnes 3
  • SlidingWindowTransformer for working with time-series like data

    This is essentially just a Takens embedding, but with tools to work with classical time-series data. I tried to offer a reasonable range of options and flexibility in how you sample windows, but would welcome any suggestions for further options (a plain-numpy sketch of the sliding-window idea follows this item). In principle it might be possible to "learn" good window parameters from the input data (right now fit does nothing beyond verifying parameters), but I don't quite know what the right way to do that would be.

    opened by lmcinnes 3
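
    For intuition, the sliding-window idea can be sketched in a few lines of plain numpy (an illustration of the underlying Takens-style embedding, not the transformer's implementation):

    import numpy as np

    def sliding_windows(series, width, stride=1):
        # stack overlapping windows of a 1D series as rows of a 2D array
        n_windows = (len(series) - width) // stride + 1
        return np.stack(
            [series[i * stride : i * stride + width] for i in range(n_windows)]
        )

    ts = np.sin(np.linspace(0, 10, 100))
    point_cloud = sliding_windows(ts, width=5)  # shape (96, 5)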
  • Count feature compressor

    It turns out that our data-prep tricks applied before and after SVDs for count-based data are generically useful. I tried applying them to info-weighted bag-of-words on 20-newsgroups instead of just a straight SVD and ...

    I decided this is going to be too generically useful not to turn into a standard transformer. In due course we can potentially use this as part of a pipeline for word vectors instead of the reduce_dimension method we have now.

    opened by lmcinnes 3
  • [Question] Vectorizing Terabyte-order data

    Hello and thank you for a great package!

    I was wondering whether (and how) ApproximateWassersteinVectorizer would be able to scale up to terabyte-order data or whether you had any pointers for dealing with data of that scale.

    opened by cakiki 2
  • Add more tests; start testing distances

    Started testing most of the vectorizers at a very basic level. Began a testing module for the distances. I would welcome help writing extra tests for those.

    opened by lmcinnes 2
  • The big cooccurrence refactor

    The big refactoring is basically complete. The only drawback is that your kernel functions (if you are using multiple window sizes) all need to be the same; this is due to a numba issue with lists of functions. It will take some work to add this back in, I think... it can be another PR.

    opened by cjweir 1
  • Ensure contiguous arrays in optimal transport vectorizers

    If a user passes in 'F'-layout 2D arrays for vectors, it can cause an error from numba that is hard for a user to decode. Remedy this by simply ensuring all the arrays being added are 'C' layout (a numpy sketch of the remedy follows this item).

    opened by lmcinnes 1
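
    A numpy sketch of the remedy (illustrative; np.ascontiguousarray is a no-op for arrays that are already 'C' layout):

    import numpy as np

    fortran_arr = np.asfortranarray(np.ones((3, 4)))  # 'F' layout input
    c_arr = np.ascontiguousarray(fortran_arr)         # guaranteed 'C' layout
    assert c_arr.flags["C_CONTIGUOUS"]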
  • added EdgeListVectorizer

    Added a new class for vectorizing data in the form of (row_name, column_name, count) triples.
    Added some unit tests. Documentation and expanded functionality to come at a later date.

    opened by jc-healy 1
  • Add named function kernels for sliding windows

    For now, just a couple of simple test kernels: one for count time-series anomaly detection, and another for time-interval time-series anomaly detection, which is similar. Enough basics to test that the named-function code path works.

    opened by lmcinnes 1
  • SequentialDifferenceTransformer, function kernels

    Add a SequentialDifferenceTransformer as a simple-to-use approach to generating inter-arrival times or similar (a one-line numpy sketch of the idea follows this item).

    Allow the SlidingWindowTransformer to take function kernels (numba-jitted, obviously) for much greater flexibility in how kernels behave.

    opened by lmcinnes 1
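
    The inter-arrival computation being wrapped here is essentially successive differencing; a one-line numpy sketch (illustrative, not the transformer itself):

    import numpy as np

    arrival_times = np.array([0.0, 1.5, 2.0, 5.0])
    inter_arrival = np.diff(arrival_times)  # array([1.5, 0.5, 3. ])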
  • InformationWeightTransform hates empty columns

    We seem to crash kernels when we pass a sparse matrix with an empty column to InformationWeightTransformer's fit_transform.

    It comes up when we have a fixed token dictionary but our training data is missing some of our vocabulary:

    vect = NgramVectorizer(token_dictionary=token_dictionary).fit_transform(eventid_list)
    vect1 = InformationWeightTransformer().fit_transform(vect)
    
    opened by jc-healy 0
  • [BUG] Can't print TokenCooccurrenceVectorizer object

    Hello!

    Not sure this is actually a bug, but it seemed a bit odd so I figured I'd report it. I just trained a vectorizer using the following:

    %%time
    word_vectorizer = vectorizers.TokenCooccurrenceVectorizer(
        min_document_occurrences=2,
        window_radii=20,          
        window_functions='variable',
        kernel_functions='geometric',            
        n_iter = 3,
        normalize_windows=True,
    ).fit(subset['tokenized'])
    

    When I try to display or print it in a Jupyter Notebook cell I get the following error (actually repeated a few times):

    AttributeError: 'TokenCooccurrenceVectorizer' object has no attribute 'coo_initial_memory'
    
    bug 
    opened by cakiki 3
  • Contributions

    We would greatly appreciate the contribution of tutorial notebooks applying vectorizer tools to diverse or interesting datasets

    Hi! I was wondering what sort of contributions/tutorials you were looking for, or whether you had more concrete contribution wishes.

    opened by cakiki 2
  • Needed to downgrade `numpy` and reinstall `llvmlite` & `numba`

    I tried installing vectorizers on a fresh pyenv and I got this error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "vectorizers-master/vectorizers/__init__.py", line 1, in <module>
        from .token_cooccurrence_vectorizer import TokenCooccurrenceVectorizer
      File "vectorizers/token_cooccurrence_vectorizer.py", line 1, in <module>
        from .ngram_vectorizer import ngrams_of
      File "vectorizers/ngram_vectorizer.py", line 2, in <module>
        import numba
      File "numba-0.54.1-py3.9-macosx-11.3-x86_64.egg/numba/__init__.py", line 19, in <module>
        from numba.core import config
      File "numba-0.54.1-py3.9-macosx-11.3-x86_64.egg/numba/core/config.py", line 16, in <module>
        import llvmlite.binding as ll
      File "llvmlite-0.37.0-py3.9.egg/llvmlite/binding/__init__.py", line 4, in <module>
        from .dylib import *
      File "llvmlite-0.37.0-py3.9.egg/llvmlite/binding/dylib.py", line 3, in <module>
        from llvmlite.binding import ffi
      File "llvmlite-0.37.0-py3.9.egg/llvmlite/binding/ffi.py", line 191, in <module>
        raise OSError("Could not load shared object file: {}".format(_lib_name))
    OSError: Could not load shared object file: libllvmlite.dylib
    

    This was resolvable for me by uninstalling and reinstalling llvmlite & numba after ensuring numpy<1.21, e.g.

    pip install numpy==1.20
    pip uninstall llvmlite
    pip uninstall numba
    pip install llvmlite
    pip install numba
    

    This is obviously not a big deal, but in case others bump this, maybe a ref in the README can save them a google. 🤷

    opened by BBischof 2
  • Set up ``__getstate__`` and ``__setstate__`` and add serialization tests

    Most things should just pickle; however, numba-based attributes (functions and types such as numba.typed.List) need special handling via __getstate__ and __setstate__ methods to convert the numba bits and pieces (a sketch of the pattern follows this item).

    To aid in this, we should have tests checking that all classes can be pickled and unpickled successfully. This will help highlight any shortcomings.

    See also #62

    enhancement 
    opened by lmcinnes 0
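
    A minimal sketch of the proposed pattern (illustrative, not the library's actual methods): convert numba containers to plain Python objects on pickling and rebuild them on unpickling:

    import numba.typed

    class ExampleVectorizer:
        def __init__(self, tokens):
            # numba.typed.List instances need conversion before pickling
            self.tokens_ = numba.typed.List(tokens)

        def __getstate__(self):
            state = self.__dict__.copy()
            state["tokens_"] = list(state["tokens_"])  # plain list pickles fine
            return state

        def __setstate__(self, state):
            state["tokens_"] = numba.typed.List(state["tokens_"])
            self.__dict__.update(state)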
  • Serialization

    Have you thought about adding serialization/saving to disk? At the very least, would you point out what to store for, say, the ngram vectorizer?

    Thank you very much for the examples on training word/document vectors using this and comparing to USE! Giving it a try now on my data and it looks pretty good! Thank you!

    opened by stevemarin 3
Releases

  • v0.01

Owner

Tutte Institute for Mathematics and Computing