MinHash, LSH, LSH Forest, Weighted MinHash, HyperLogLog, HyperLogLog++, LSH Ensemble

Overview

datasketch: Big Data Looks Small

datasketch gives you probabilistic data structures that can process and search very large amounts of data extremely fast, with little loss of accuracy.

This package contains the following data sketches:

Data Sketch       Usage
MinHash           estimate Jaccard similarity and cardinality
Weighted MinHash  estimate weighted Jaccard similarity
HyperLogLog       estimate cardinality
HyperLogLog++     estimate cardinality
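
For a quick sense of the MinHash API listed above, here is a minimal sketch (mirroring the documented usage) that estimates the Jaccard similarity between two small token sets:

from datasketch import MinHash

data1 = ["minhash", "is", "a", "probabilistic", "data", "structure"]
data2 = ["minhash", "is", "a", "probability", "data", "structure"]

m1, m2 = MinHash(num_perm=128), MinHash(num_perm=128)
for d in data1:
    m1.update(d.encode("utf8"))
for d in data2:
    m2.update(d.encode("utf8"))

# Estimated Jaccard similarity between the two sets.
print("Estimated Jaccard:", m1.jaccard(m2))

# Exact value, for comparison.
s1, s2 = set(data1), set(data2)
print("Actual Jaccard:", len(s1 & s2) / len(s1 | s2))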

The following indexes for data sketches are provided to support sub-linear query time:

Index                 For Data Sketch            Supported Query Type
MinHash LSH           MinHash, Weighted MinHash  Jaccard Threshold
MinHash LSH Forest    MinHash, Weighted MinHash  Jaccard Top-K
MinHash LSH Ensemble  MinHash                    Containment Threshold

datasketch requires Python 2.7 or above and NumPy 1.11 or above. SciPy is optional, but with it LSH initialization can be much faster.

Note that MinHash LSH and MinHash LSH Ensemble also support Redis and Cassandra storage layer (see MinHash LSH at Scale).
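
As a rough sketch of the threshold-query workflow (the commented-out Redis config uses placeholder host/port values):

from datasketch import MinHash, MinHashLSH

m1, m2 = MinHash(num_perm=128), MinHash(num_perm=128)
for d in [b"a", b"b", b"c"]:
    m1.update(d)
for d in [b"a", b"b", b"d"]:
    m2.update(d)

# In-memory index by default; a storage_config switches it to Redis or Cassandra.
lsh = MinHashLSH(
    threshold=0.5, num_perm=128,
    # storage_config={'type': 'redis', 'redis': {'host': 'localhost', 'port': 6379}},
)
lsh.insert("m2", m2)
print(lsh.query(m1))  # keys whose estimated Jaccard similarity with m1 is >= 0.5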

Install

To install datasketch using pip:

pip install datasketch

This will also install NumPy as a dependency.

To install with Redis dependency:

pip install datasketch[redis]

To install with Cassandra dependency:

pip install datasketch[cassandra]

To install with SciPy for faster MinHashLSH initialization:

pip install datasketch[scipy]

Comments
  • Redis backend for datasketch LSH

    Redis backend for datasketch LSH

    Summary

    We have been using datasketch extensively in data-driven applications (primarily the MinHashLSH class). The existing implementation requires all data to be stored in memory, which is a severe limitation for very large datasets. We add support for a Redis backend. Default behavior remains in-memory storage.

    Changes

    • Redis backend for MinHashLSH
    • Insertion session for bulk insertion (reduces network calls when inserting with a Redis backend)
    • Functionality to view LSH bucket sizes (useful for diagnosis/debugging)
    • Accompanying docs and tests

    Examples

    For examples, see the additions to the documentation.
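
    Roughly, the insertion-session workflow looks like this (a sketch assuming a local Redis instance, not the PR's own example):

    from datasketch import MinHash, MinHashLSH

    # Assumes a Redis server at localhost:6379.
    lsh = MinHashLSH(
        threshold=0.5, num_perm=128,
        storage_config={'type': 'redis', 'redis': {'host': 'localhost', 'port': 6379}},
    )

    minhashes = {}  # e.g. {"doc1": <MinHash>, "doc2": <MinHash>, ...}
    with lsh.insertion_session() as session:
        for key, m in minhashes.items():
            session.insert(key, m)  # buffered; flushed when the session closes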

    opened by ae-foster 25
  • Add Cassandra storage layer

    Add Cassandra storage layer

    This is a storage layer that uses Cassandra as a backend. It is optimized to take into account Cassandra internals and its (intrinsic) limitations (see for example how all the keys are retrieved and how the list abstraction is implemented). I have also added a new batch of tests to both TestMinHashLSH and TestWeightedMinHashLSH. Since Cassandra is required (Cassandra is not mocked) they can be enabled by flipping the DO_TEST_CASSANDRA variable.

    opened by ostefano 18
  • Implement #123 arbitrary Mongo URL & collection name

    Implement #123 arbitrary Mongo URL & collection name

    I have also updated the documentation a bit.

    The purpose of the function AsyncIOMotorClient.get_default_database is to allow the Mongo URL to define a database. If the URL contains a database, that database will be used; otherwise, the calculated database name is used. As for get_collection, I changed it for consistency only.

    I wanted to update the unit tests, but they were skipped… and when I tried to un-skip them everything failed spectacularly. I'm not sure what to do about that.

    Cheers!

    opened by ekevoo 14
  • Easy computation of all duplicates?

    Easy computation of all duplicates?

    Thanks for this library!

    One question: is there an easy way to get all pairs from an LSH? In other words, instead of querying for a single MinHash, I need all pairs of records inside the same bucket.

    I tried to dive into the code, but I didn't understand very well how hashranges/hashtables work. What I need is similar to candidate_duplicates of this other implementation: https://mattilyra.github.io/2017/05/23/document-deduplication-with-lsh.html

    opened by fjsj 14
  • Performance improvements

    Performance improvements

    I've made some performance improvements in the MinHash class and added a helper method for bulk computation.

    1. Precast _mersenne_prime and _max_hash so they don't need to be cast multiple times.
    2. The update method is modified to accept a list of bytes so hash values can be computed in batches. This avoids Python-level for loops and packs the work into a single set of NumPy calls. The only caveat is that the matrix operations use more memory, so you should chunk the input when dealing with huge sets.
    3. The bulk helper removes unnecessary initialization overhead for the permutations and initial hash values: they are computed once and then passed on to new MinHash objects as they are produced.

    These changes give some pretty nice performance improvements, up to 3X for updates, and anywhere from 5-25X for bulk computing. See my benchmark gist here
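
    For reference, the batched update that came out of this work looks roughly like this in current datasketch (a sketch, not the benchmark code):

    from datasketch import MinHash

    tokens = [w.encode("utf8") for w in "the quick brown fox".split()]

    m = MinHash(num_perm=128)
    # Hashes all values in batched NumPy operations instead of a Python loop;
    # chunk the input yourself if the batch is very large.
    m.update_batch(tokens)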

    opened by Sinusoidal36 10
  • AsyncMinhashLSH doesn't use index key with mongodb backend, and hence is very slow.

    AsyncMinhashLSH doesn't use index key with mongodb backend, and hence is very slow.

    We'd struggled with AsyncMinhashLSH's query performance, eventually finding out it wasn't using the default collection index key ('_id') and was instead querying by its 'key' field. (see: here)

    This can be fixed either by inserting into the collections with primary key '_id', or by creating indices on the 'key' field for all collections in the database.

    For anyone looking for the quickfix solution, simply run:

    db.lsh_...._bucket_#.createIndex( { key: -1 } )

    on each of the bucket and key collections in the database. We found performance went from 11 seconds/query to >1 ms/query pretty much instantly.

    I believe this should be added to the documentation somewhere, or fixed in the code (although it'd break existing databases). I'd volunteer to do either!

    opened by oisincar 8
  • add ability to set storage basename

    add ability to set storage basename

    If you are connecting to a Redis database and you lose the reference to the connection, you lose the reference to all data in your hash and also the ability to remove it. By being able to set the base name, you can always reference the same keys, whether you create a new connection or reuse the old one.
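
    A sketch of what this looks like with the storage config (the basename and connection values are placeholders):

    from datasketch import MinHashLSH

    # With a fixed basename, a fresh connection maps to the same Redis keys,
    # so the existing index can be reopened (and its entries removed) later.
    lsh = MinHashLSH(
        threshold=0.5, num_perm=128,
        storage_config={
            'type': 'redis',
            'basename': b'my_lsh_index',   # placeholder name
            'redis': {'host': 'localhost', 'port': 6379},
        },
    )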

    opened by KevinMann13 8
  • how to restore minhashes from MinHashLSH with redis backend?

    how to restore minhashes from MinHashLSH with redis backend?

    I want to find duplicates, so I need to pop a hash and then compare it against the remaining hashes.

    Or should I make a copy of the original data in Redis? Or should I just not use the Redis storage, wait until the dataset is fully fed in, and then use MinHashLSH?

    opened by bung87 8
  • Persistent storage on MongoDB

    Persistent storage on MongoDB

    Hi guys,

    Question: I'd like to store a MinHashLSH on a MongoDB instance. I've seen you have Redis & Cassandra as storage types in the code; is there an easy way for me to store it on Mongo as well?

    Greets Tb

    opened by blah-crusader 7
  • Combining/Storing LSH with different thresholds

    Combining/Storing LSH with different thresholds

    Hi,

    First of all, thanks for this amazing package. I have two LSH indexes, one optimized with a threshold of 0.65 and the other with a threshold of 0.55. The results from both are relevant to me, so I ultimately take the union of their results to get my duplicate contents.

    To store both LSH indexes I'm using Redis, but later I have to load both, query them individually, and take the union of the results.

    Is there any way to optimize this so I could do the same with only one LSH, or some way to create a special type of index in Redis so that I don't have to query the LSH indexes individually? This would help me save memory and computation time.

    opened by thakur-nandan 7
  • Question: Setting Expiry Time for Entries in Redis LSH?

    Question: Setting Expiry Time for Entries in Redis LSH?

    Hi,

    First of all thanks for this great library :)

    I wanted to know if there is any way to set an expiry time for entries being added into an LSH connected to Redis? Or any way to keep a default expiry time for all keys?

    Should I instead set a maxmemory policy for my Redis with an allkeys-lru or allkeys-random eviction policy? Is this recommended or will it cause issues with the LSH?

    Regards, Spandan

    opened by SpandanThakur 7
  • Support numpy>=1.20.0

    Support numpy>=1.20.0

    numpy.int was only ever an alias for the built-in int, and is deprecated as of version 1.20.0.

    See: https://numpy.org/doc/stable/release/1.20.0-notes.html#using-the-aliases-of-builtin-types-like-np-int-is-deprecated

    opened by joehalliwell 0
  • Given a MinHash instance or MinHash LSH, how to find the most dissimilar instance in an existing MinHash LSH?

    Given a MinHash instance or MinHash LSH, how to find the most dissimilar instance in an existing MinHash LSH?

    From the API documentation, we can learn how to find approximate neighbours with Jaccard similarity above a threshold. But how can we find the most dissimilar instances instead? Concretely, given a MinHash LSH initialized with thousands of plain texts (set A), and another collection of thousands of plain texts (set B), how can we discover the texts in B that are most dissimilar to set A, or the texts in B whose Jaccard similarity with A is below a threshold? Is there a fast solution for this problem? Thanks!

    opened by loulianzhang 2
  • Getting `pymongo.errors.ExecutionTimeout` when using more than one instance of AsyncMinHashLSH

    Getting `pymongo.errors.ExecutionTimeout` when using more than one instance of AsyncMinHashLSH

    I have a use case where I have to store MinHashes for (n) different categories of files and then query them. For example, I have documents of category A, and I want to store all of them in one DB and then query them at a later point.

    I am initializing this in a microservice (FastAPI) as follows:

    @app.on_event("startup")
    async def startup_event():
        LSHService.lsh_js, LSHService.lsh_css, LSHService.lsh_html, = await asyncio.gather(
            *[
                await AsyncMinHashLSH(
                    storage_config={
                        **_storage,
                        "basename": "category-a".encode(),
                    },
                    threshold=0.9,
                    num_perm=256,
                ),
                await AsyncMinHashLSH(
                    storage_config={
                        **_storage,
                        "basename": "category-b".encode(),
                    },
                    threshold=0.9,
                    num_perm=256,
                ),
                await AsyncMinHashLSH(
                    storage_config={
                        **_storage,
                        "basename": "category-b".encode(),
                    },
                    threshold=0.9,
                    num_perm=256,
                ),
            ]
        )
    

    If I initialize like this I get a pymongo.errors.ExecutionTimeout error.

    However, if I just maintain one object then I don't get any error.

    What can I do to mitigate this? Thanks in advance.

    opened by RonaldRegan69 3
  • Question: Why is getting a MinHash for a single sample possible?

    Question: Why is getting a MinHash for a single sample possible?

    Say we have three documents A, B, and C. Each document might contain different words.

    According to the documentation of datasketch.MinHash, we can get a MinHash for A with:

    minhash = MinHash(num_perm=128)
    minhash.update(A.encode('utf-8'))
    vector = minhash.digest()

    But don't we need to create a vocabulary consisting of all words from A, B, and C before getting the vector?
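
    For reference, a minimal sketch of the usual per-document pattern; each word is hashed independently, so no shared vocabulary over A, B, and C is required:

    from datasketch import MinHash

    def minhash_of(text, num_perm=128):
        m = MinHash(num_perm=num_perm)
        for word in text.split():          # one update per word/shingle
            m.update(word.encode('utf-8'))
        return m

    vector = minhash_of("words of document A only").digest()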

    opened by bdeng3 2
Releases(v1.5.8)
  • v1.5.8(Aug 21, 2022)

    What's Changed

    • Add GitHub URL for PyPi by @andriyor in https://github.com/ekzhu/datasketch/pull/179
    • Support asyncio redis by @long2ice in https://github.com/ekzhu/datasketch/pull/185
    • Fix name construction for all values of b by @SenadI in https://github.com/ekzhu/datasketch/pull/190

    New Contributors

    • @andriyor made their first contribution in https://github.com/ekzhu/datasketch/pull/179
    • @long2ice made their first contribution in https://github.com/ekzhu/datasketch/pull/185
    • @SenadI made their first contribution in https://github.com/ekzhu/datasketch/pull/190

    Full Changelog: https://github.com/ekzhu/datasketch/compare/v1.5.7...v1.5.8

  • v1.5.7(Feb 4, 2022)

    What's Changed

    • Unable to create multiple lsh indices each one in its own keyspace - issue #171 by @ronassa in https://github.com/ekzhu/datasketch/pull/172

    New Contributors

    • @ronassa made their first contribution in https://github.com/ekzhu/datasketch/pull/172

    Full Changelog: https://github.com/ekzhu/datasketch/compare/v1.5.6...v1.5.7

  • v1.5.6(Dec 27, 2021)

  • v1.5.5(Dec 16, 2021)

    What's Changed

    • Adding minhash_many to WeightedMinHashGenerator. by @jroose-jv in https://github.com/ekzhu/datasketch/pull/165
    • Add query buffer by @hguhlich in https://github.com/ekzhu/datasketch/pull/167

    New Contributors

    • @jroose-jv made their first contribution in https://github.com/ekzhu/datasketch/pull/165
    • @hguhlich made their first contribution in https://github.com/ekzhu/datasketch/pull/167

    Full Changelog: https://github.com/ekzhu/datasketch/compare/v1.5.4...v1.5.5

  • v1.5.4(Dec 4, 2021)

    What's Changed

    • Fixes #146; MinhashLSH creates mongo index key. by @oisincar in https://github.com/ekzhu/datasketch/pull/148
    • Add redis_buffer configuration. by @QthCN in https://github.com/ekzhu/datasketch/pull/152
    • minhash: Get rid of deprecation warning by @xkubov in https://github.com/ekzhu/datasketch/pull/156

    New Contributors

    • @oisincar made their first contribution in https://github.com/ekzhu/datasketch/pull/148
    • @QthCN made their first contribution in https://github.com/ekzhu/datasketch/pull/152
    • @xkubov made their first contribution in https://github.com/ekzhu/datasketch/pull/156

    Full Changelog: https://github.com/ekzhu/datasketch/compare/1.5.2...v1.5.4

  • 1.5.2(Dec 15, 2020)

    • Performance improvement for MinHash's update method.
    • Make MinHash updates 4.5X faster by using the update_batch method for bulk updates on MinHash. See the API doc: http://ekzhu.com/datasketch/documentation.html#datasketch.MinHash.update_batch
    • Further performance gains from bulk generation of MinHash objects using MinHash.bulk or MinHash.generator. See the API doc and pull request.
    • Optional compression for the MinHash LSH index by hashing the bucket key produced by MinHashLSH._H. See the pull request. This reduces the memory/storage space used by the index.

    Thank you @Sinusoidal36!
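
    A rough sketch of the bulk APIs mentioned above (toy byte values, not the benchmark data):

    from datasketch import MinHash

    documents = [[b"a", b"b"], [b"b", b"c"], [b"c", b"d"]]

    # Bulk creation pays the permutation/initialization cost only once.
    minhashes = MinHash.bulk(documents, num_perm=128)

    # Generator variant for streaming over large inputs.
    for m in MinHash.generator(documents, num_perm=128):
        pass  # consume each MinHash as it is produced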

  • v1.5.0(Nov 26, 2019)

    • Minor bug fixes
    • Cassandra storage layer, thanks @ostefano! Now you can specify the Cassandra config just like the Redis one:
    from datasketch import MinHashLSH
    
    lsh = MinHashLSH(
        threshold=0.5, num_perm=128, storage_config={
            'type': 'cassandra',
            'cassandra': {
                'seeds': ['127.0.0.1'],
                'keyspace': 'lsh_test',
                'replication': {
                    'class': 'SimpleStrategy',
                    'replication_factor': '1',
                },
                'drop_keyspace': False,
                'drop_tables': False,
            }
        }
    )
    
  • v1.4.0(Jan 6, 2019)

    Now supports a hashfunc parameter for MinHash and HyperLogLog. The old parameter hashobj has been removed.

    # Let's use MurmurHash3.
    import mmh3

    # Import MinHash so the example is self-contained.
    from datasketch import MinHash

    # We need to define a new hash function that outputs an integer that
    # can be encoded in 32 bits.
    def _hash_func(d):
        return mmh3.hash32(d)

    # Use this function in the MinHash constructor.
    m = MinHash(hashfunc=_hash_func)
    
  • v.1.3.0(Dec 27, 2018)

  • v1.2.10(Nov 2, 2018)

    • Added batch removal functionality for Async MinHashLSH
    • Removed Redis support from Async MinHashLSH, because Redis does not support async operation

    For details see Pull #70. Thanks @aastafiev for the contribution.

  • v1.2.9(Oct 22, 2018)

  • v1.2.7(Jul 27, 2018)

    • Added Asynchronous MinHash LSH module. Thanks @aastafiev!
    • Added ability to set the base name in the storage config. The base name is used as the prefix for generating keys in the underlying storage (e.g., Redis). This change allows a client to "reconnect" to an existing LSH index in the storage through its base name.
  • v1.2.6(Jul 21, 2018)

  • v1.2.5(Nov 21, 2017)

  • v1.2.4(Nov 8, 2017)

  • v1.2.3(Sep 8, 2017)

  • v1.2.0(Mar 31, 2017)

  • v1.1.3(Mar 26, 2017)

    MinHash now uses Numpy's random number generator instead of Python's built-in random. This makes MinHash generate consistent hash values across different Python versions.

    The side effect is that MinHash objects created before version 1.1.3 won't work correctly (i.e., jaccard, merge, and union) with those created after.

  • v1.1.2(Mar 15, 2017)

    • LeanMinHash is a subclass of MinHash. It uses less memory and allows faster (de)serialization. See the documentation for details and the sketch after this list.
    • Removed serialize, deserialize, and bytesize methods from MinHash. These are supported in LeanMinHash instead.
    • MinHash objects serialized before this version will not be deserialized properly. To migrate see here.
    • Documentation now has its own website!
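
    A minimal (de)serialization sketch following the documented buffer-based LeanMinHash API:

    from datasketch import MinHash, LeanMinHash

    m = MinHash(num_perm=128)
    m.update(b"hello")

    lean = LeanMinHash(m)                  # smaller, faster to (de)serialize
    buf = bytearray(lean.bytesize())       # allocate exactly enough space
    lean.serialize(buf)

    restored = LeanMinHash.deserialize(buf)
    assert restored.jaccard(lean) == 1.0
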
  • v1.0.0(Feb 12, 2017)

    After nearly 2 years of working on this project on and off, the API is now stable, and the features of the MinHash-related sketches are complete.

    I will continue to add more data sketches and indexes.

  • v0.2.6(Jan 7, 2017)

    • MinHash LSH Forest implementation and benchmark using synthetic data
    • Improve existing MinHash LSH benchmark using synthetic data for more tunable data distributions
    • Improve MinHash and LSH performance
  • v0.2.4(Jul 10, 2016)

  • v0.2.3(Apr 12, 2016)

  • v0.2.1(Apr 12, 2016)
