SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking

Overview

SPLADE 🍴 + 🥄 = 🔎

This repository contains the weights for four models as well as the code for running inference for our two papers:

  • [v1]: SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking, Thibault Formal, Benjamin Piwowarski, and Stéphane Clinchant. SIGIR 2021 (short paper).
  • [v2]: SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval, Thibault Formal, Benjamin Piwowarski, Carlos Lassance, and Stéphane Clinchant. arXiv preprint.

We also provide some scripts to run evaluation on the BEIR benchmark in the beir_evaluation folder, as well as training code in the training_with_sentence_transformers folder.

TL;DR
Recently, dense retrieval with approximate nearest neighbor search based on BERT has demonstrated its strength for first-stage retrieval, questioning the competitiveness of traditional sparse models such as BM25. In this work, we propose SPLADE, a sparse model that revisits query/document expansion. Our approach relies on in-batch negatives, a logarithmic activation, and FLOPS regularization to learn effective and efficient sparse representations. SPLADE is an appealing candidate for first-stage retrieval: it rivals the latest state-of-the-art dense retrieval models, its training procedure is straightforward, and its efficiency (sparsity/FLOPS) can be controlled explicitly through the regularization, so that it can run on inverted indexes. Owing to its simplicity, SPLADE is a solid basis for further improvements in this line of research.

splade: a spork that is sharp along one edge or both edges, enabling it to be used as a knife, a fork and a spoon.

Updates

  • 24/09/2021: add the weights for v2 version of SPLADE (max pooling and margin-MSE distillation training) + add scripts to evaluate the model on the BEIR benchmark.
  • 16/11/2021: add code for training SPLADE using the Sentence Transformers framework + update LICENSE to properly include BEIR and Sentence Transformers.

SPLADE

We give a brief overview of the model architecture and the training strategy below. Please refer to the papers for further details! You can also have a look at our blog post for additional insights and examples! Also feel free to contact us via Twitter or by mail @ [email protected]!

The SPLADE architecture (see below) is rather simple: queries/documents are fed to BERT, and we rely on the MLM head used for pre-training to predict term importance in BERT's vocabulary space. Thus, the model implicitly learns expansion. We also added a log activation that greatly helped make the representations sparse. Relevance is computed via dot product.

SPLADE architecture
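
To make this concrete, below is a minimal sketch of the representation step, written directly against the Hugging Face transformers API rather than this repo's own classes. The backbone checkpoint name is a placeholder (the actual SPLADE weights ship with this repository), and the max pooling follows the v2 model (v1 instead sums over input positions):

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# placeholder backbone; the released SPLADE weights are distributed with this repo
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

def splade_rep(text: str) -> torch.Tensor:
    """Map `text` to a |V|-dimensional term-importance vector."""
    tokens = tokenizer(text, return_tensors="pt")
    logits = model(**tokens).logits                # (1, seq_len, |V|) MLM logits
    # log-saturation: positive weights, heavy tail squashed, encourages sparsity
    weights = torch.log1p(torch.relu(logits))
    mask = tokens["attention_mask"].unsqueeze(-1)  # ignore padding positions
    return (weights * mask).max(dim=1).values.squeeze(0)  # max pooling (v2)

q = splade_rep("what is a splade")
d = splade_rep("a splade is a spork sharpened along one edge")
score = torch.dot(q, d)  # relevance is the dot product of the two vectors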

The model thus represents queries and documents in the vocabulary space. To make these representations sparse (so that we can use an inverted index), we explicitly train the model with regularization on the query/document representations (L1 or FLOPS), as shown below:

splade training

SPLADE learns to balance effectiveness (via the ranking loss) against efficiency (via the regularization loss). By controlling lambda, we can adjust the trade-off, as sketched below.
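
As a sketch of the objective (under the paper's definitions; the variable names here are illustrative), the FLOPS regularizer penalizes the squared mean activation of each vocabulary term over a batch, which directly discourages long posting lists, while the L1 alternative penalizes all activations uniformly:

import torch

def flops_reg(reps: torch.Tensor) -> torch.Tensor:
    """FLOPS regularizer: sum_j (mean_i |w_ij|)^2 over a (batch, vocab) matrix."""
    return torch.sum(torch.mean(torch.abs(reps), dim=0) ** 2)

def l1_reg(reps: torch.Tensor) -> torch.Tensor:
    """L1 alternative: mean L1 norm of the representations."""
    return torch.mean(torch.sum(torch.abs(reps), dim=-1))

# total loss per batch; separate lambda_q / lambda_d weights for queries and
# documents set the effectiveness/efficiency trade-off:
# loss = ranking_loss + lambda_q * flops_reg(q_reps) + lambda_d * flops_reg(d_reps)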

How to use the code for inference

  • See inference_SPLADE.ipynb and beir_evaluation/splade_beir.ipynb

Training SPLADE

  • See training_with_sentence_transformers folder

Requirements

Requirements can be found in requirements.txt. To get the weights, be sure to have Git LFS installed.

Main Results on MS MARCO (dev set) and TREC DL 2019 passage ranking

  • Below is a table of results comparing SPLADE to several competing baselines:

main results table

  • One can adjust the regularization strength for SPLADE to reach the best trade-off between effectiveness and efficiency:

performance vs. FLOPS trade-off

Cite

Please cite our work as:

@inproceedings{Formal2021_splade,
 author = {Thibault Formal and Benjamin Piwowarski and Stéphane Clinchant},
 title = {{SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking}},
 booktitle = {Proc. of SIGIR},
 year = {2021},
}

License

SPLADE Copyright (c) 2021-present NAVER Corp.

SPLADE is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. (see license)

You should have received a copy of the license along with this work. If not, see http://creativecommons.org/licenses/by-nc-sa/4.0/ .

Comments
  • Evaluation on MSMARCO?

    Hi, thanks for your very interesting work.

    Could you share how you evaluated the model to get the results here? Did you use inverted indexing, or this code? I am trying the latter approach, but it is very slow on MS MARCO. Thank you

    opened by thongnt99 8
  • Cannot train SPLADEv2 to achieve the reported performance.

    opened by namespace-Pt 6
  • FLOPs calculation

    I recently read your SPLADE paper and I think it's quite interesting. I have a question concerning FLOPs calculation in the paper.

    I think computing FLOPs for an inverted index involves the lengths of the activated posting lists (the terms that overlap between the query and the document). For example, for a query a b c and a document c a e, since we must inspect the posting lists of the overlapping terms a and c, the FLOPs should be at least

    posting_length(a) + posting_length(c)
    

    because we perform a summation for each entry in the posting list. However, in the paper you compute FLOPs from the probability that a, b, c are activated in the query and c, a, e are activated in the document. I think this may underestimate the FLOPs of SPLADE, because the less sparse the documents, the longer the posting lists in the inverted index.

    opened by namespace-Pt 6
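
    For reference, the FLOPS metric discussed in the issue above (distinct from the FLOPS training regularizer) estimates the expected number of multiplications for one query-document pair from per-term activation probabilities, measured on samples of queries and documents. A minimal sketch, with illustrative array names:

    import numpy as np

    def flops_metric(q_reps: np.ndarray, d_reps: np.ndarray) -> float:
        """q_reps/d_reps: (n, vocab) weight matrices from sampled queries/documents."""
        p_q = (q_reps != 0).mean(axis=0)  # P(term j is activated in a query)
        p_d = (d_reps != 0).mean(axis=0)  # P(term j is activated in a document)
        return float(np.sum(p_q * p_d))   # expected overlapping terms per (q, d) pair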
  • move all source to splade/ module

    Hi,

    I'd like to build client code that depends on SPLADE. Would you please consider this PR, which moves all source code into a splade/ folder rather than a src/ folder? This appears to work satisfactorily for my use case.

    Craig

    opened by cmacdonald 2
  • configuration for splade++ results

    Hi-- thanks for the nice work.

    I'm trying to index+retrieve using the naver/splade-cocondenser-ensembledistil model. Following the readme, I've done:

    export SPLADE_CONFIG_FULLPATH="config_default.yaml"
    python3 -m src.index \
      init_dict.model_type_or_dir=naver/splade-cocondenser-ensembledistil \ # <--- (from readme, using the new model)
      config.pretrained_no_yamlconfig=true \
      config.index_dir=experiments/pre-trained/index \
      index=msmarco  # <--- added
    
    export SPLADE_CONFIG_FULLPATH="config_default.yaml"
    python3 -m src.retrieve \
      init_dict.model_type_or_dir=naver/splade-cocondenser-ensembledistil \ # <--- (from readme, using the new model)
      config.pretrained_no_yamlconfig=true \
      config.index_dir=experiments/pre-trained/index \
      config.out_dir=experiments/pre-trained/out-dl19 \
      index=msmarco \  # <--- added
      retrieve_evaluate=msmarco # <--- added
    

    Everything runs just fine, but I'm getting rather poor results in the end:

    MRR@10: 0.18084248646927734
    recall ==> {'recall_5': 0.2665353390639923, 'recall_10': 0.3298710601719197, 'recall_15': 0.3694364851957974, 'recall_20': 0.3951050620821394, 'recall_30': 0.4270654250238777, 'recall_100': 0.5166069723018146, 'recall_200': 0.5560768863419291, 'recall_500': 0.606984240687679, 'recall_1000': 0.6402578796561604}
    

    I suspect it's a configuration problem on my end, but since the indexing process takes a bit of time, I thought I'd just ask before diving too far into the weeds: Is there a configuration file to use for the splade++ results, and how do I use it?

    Thanks!

    opened by seanmacavaney 2
  • Training by dot product and evaluation via inverted index?

    Hey, I recently read your SPLADEv2 paper. That's so insightful! But I still have a few questions about it.

    1. Is the model trained with the dot-product similarity function in the contrastive loss?
    2. Is evaluation on MS MARCO performed via an inverted index backed by Anserini?
    3. Is evaluation on BEIR implemented with Sentence Transformers, hence also via dot product?
    4. How much can you guarantee the sparsity of the learned representations, since they are only softly regularized by the L1 and FLOPS losses? Did you use a tuned threshold to "zerofy" near-zero values?
    opened by jordane95 2
  • Equation (1) and (4)

    In your paper, you say equation (1) is equivalent to the MLM prediction, and E_j in equation (1) denotes the BERT input embedding for token j. If you use the default implementation of Hugging Face Transformers, E_j is not from the input layer but a separate embedding matrix, called the "decoder" in BertLMPredictionHead (if you use BERT). Did you manually set the "decoder" weights to the input embedding weights?

    My other question concerns equation (4). It computes the summation of the weights of the document/query terms. In the forward function of the Splade class (models.py), however, you use the torch.max function. Can you explain this discrepancy?

    opened by hguan6 2
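
    For reference on the first question: in the Hugging Face implementation, BERT's MLM decoder weights are tied to the input embedding matrix by default (tie_word_embeddings=True), so E_j does correspond to the input embedding of token j. A quick check, assuming a recent transformers version:

    from transformers import BertForMaskedLM

    model = BertForMaskedLM.from_pretrained("bert-base-uncased")
    # the MLM head's "decoder" shares its weight tensor with the input embeddings
    assert model.cls.predictions.decoder.weight is model.bert.embeddings.word_embeddings.weight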
  • When do you drop a term?

    I understand that the log-saturation function and the regularization loss suppress the weights of frequent terms. But when do you drop a term (setting its weight to zero)? Is it when the logit is less than or equal to zero, so that log(1 + ReLU(·)) outputs zero?

    opened by hguan6 2
  • Benchmark Performance After Re-ranking?

    I'm curious whether you've run your model with a second-stage re-ranker on the BEIR benchmark. Would you expect much benefit from this?

    Thank you, and excellent work!

    opened by mattare2 1
  • Initial pull request for efficient splade

    Initial pull request to add networks from https://dl.acm.org/doi/10.1145/3477495.3531833

    Networks are now available on Hugging Face as well:

    V) https://huggingface.co/naver/efficient-splade-V-large-doc https://huggingface.co/naver/efficient-splade-V-large-query

    VI) https://huggingface.co/naver/efficient-splade-VI-BT-large-doc https://huggingface.co/naver/efficient-splade-VI-BT-large-query

    We still need to add the links on the Naver Labs website for the small and medium networks

    opened by cadurosar 0
  • Instructions on Using PISA for SPLADE

    Firstly, thanks for your series of amazing papers and well-organized code implementations.

    The two papers Wacky Weights in Learned Sparse Representations and the Revenge of Score-at-a-Time Query Evaluation and From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective show that using PISA can make query retrieval much faster than using Anserini or the code from this repo for SPLADE.

    The folder efficient_splade_pisa/ in the repo contains instructions on using PISA for SPLADE, but the instructions only cover already-processed queries and indexes. If I only have a well-trained SPLADE model, how can I process its outputs (sparse vectors, or their quantized version for Anserini) to make them suitable for PISA? Could you provide more specific instructions on this?

    Best wishes

    opened by HansiZeng 1
  • Flops calculation

    Hello!

    I find that when I run the FLOPS computation, it always returns NaN.

    I see your last commit fixed "force new" and changed line 25 of transformer_evaluator.py to force_new=True, but at line 23 of inverted_index.py, it seems that self.n will be 0 if force_new is True.

    The FLOPS no longer return NaN after I remove force_new=True.

    Am I doing something wrong here? How should I get the correct FLOPS?

    Thank you! Allen

    opened by wolu0901 2
Releases (v0.1.1)
  • v0.1.1 (May 11, 2022)

  • v0.0.1 (May 10, 2022)

    Release v0.0.1

    This release includes our initial raw version of the code

    • inference notebook and weights available
    • training is done via Sentence Transformers
    • standalone evaluation is not available, though we provide evaluation on the BEIR benchmark
    • the code is not really practical, and every step is independent