Our CIKM 2021 Paper "Incorporating Query Reformulating Behavior into Web Search Evaluation"

Overview

Reformulation-Aware-Metrics


Introduction

This repository contains the Python implementation of our CIKM 2021 paper.

Requirements

  • python 2.7
  • sklearn
  • scipy

Data Preparation

Preprocess the two datasets, TianGong-SS-FSD and TianGong-Qref, into the following format (an illustrative line is given after the list):

[Reformulation Type][Click List][Usefulness List][Satisfaction Label]
  • Reformulation Type: A (Add), D (Delete), K (Keep), T (Transform or Change), O (Others), F (First Query).
  • Click List: 1 -- Clicked, 0 -- Not Clicked.
  • Usefulness List: usefulness or relevance grades; 4-point scale in TianGong-Qref, 5-point scale in TianGong-SS-FSD.
  • Satisfaction Label: 5-point scale for both datasets.
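
For illustration only, a single preprocessed query record might look like the line below. The whitespace separators and comma-separated lists are assumptions made for readability, not the repo's required delimiters; follow whatever format your preprocessing script emits.

A    1,0,1,0,0,0,0,0,0,0    3,0,2,0,0,0,0,0,0,0    4

Here "A" marks an Add reformulation, the second and third fields give per-document clicks and usefulness grades for the top documents, and "4" is the satisfaction label.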

Then, bootstrap them into N samples and put the bootstrapped data (directories) into ./data/bootstrap_fsd and ./data/bootstrap_qref (a minimal bootstrapping sketch is given below).
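
The bootstrapping script itself is not shown here; the sketch below is a minimal, hypothetical version that resamples preprocessed records with replacement and writes one directory per bootstrapped sample. The data.txt file name and per-id directory layout are assumptions, not the repo's actual conventions, and if one session spans several lines you should group those lines before resampling.

import os
import random

def bootstrap(in_path, out_root, n_samples, seed=0):
    # Draw len(lines) records with replacement for each bootstrapped sample
    # and write them under out_root/<sample_id>/data.txt (assumed layout).
    random.seed(seed)
    with open(in_path) as f:
        lines = f.readlines()
    for sample_id in range(n_samples):
        sample_dir = os.path.join(out_root, str(sample_id))
        if not os.path.exists(sample_dir):
            os.makedirs(sample_dir)
        resampled = [random.choice(lines) for _ in range(len(lines))]
        with open(os.path.join(sample_dir, 'data.txt'), 'w') as out:
            out.writelines(resampled)

# Example: bootstrap('qref_preprocessed.txt', './data/bootstrap_qref', n_samples=10)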

Results

The results for each metric are shown in the following table:

Metric    Qref-Spearman  Qref-Pearson  Qref-MSE  FSD-Spearman  FSD-Pearson  FSD-MSE
RBP       0.4375         0.4180        N/A       0.4898        0.5222       N/A
DCG       0.4434         0.4182        N/A       0.5022        0.5290       N/A
BPM       0.4552         0.3915        N/A       0.5801        0.6052       N/A
RBP sat   0.4389         0.4170        N/A       0.5165        0.5527       N/A
DCG sat   0.4446         0.4166        N/A       0.5047        0.5344       N/A
BPM sat   0.4622         0.3674        N/A       0.5960        0.6029       N/A
rrDBN     0.4123         0.3670        1.1508    0.5908        0.5602       1.0767
rrSDBN    0.4177         0.3713        1.1412    0.5991        0.5703       1.0524
uUBM      0.4812         0.4303        1.0607    0.6242        0.5775       0.8795
uPBM      0.4827         0.4369        1.0524    0.6210        0.5846       0.8644
uSDBN     0.4837         0.4375        1.1443    0.6290        0.6081       0.8840
uDBN      0.4928         0.4458        1.0801    0.6339        0.6207       0.8322

To reproduce the results of traditional metrics such as RBP, DCG, and BPM, we recommend using the cwl_eval repository. 🤗 A standard DCG sketch follows for reference.
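
The sketch below shows one common DCG formulation (linear gain with a log2 rank discount). The exact gain mapping and discount used by the baselines may differ, so treat this only as an illustration and use cwl_eval for faithful numbers.

import math

def dcg_at_k(usefulness, k):
    # Sum of graded usefulness discounted by log2(rank + 1), with ranks starting at 1.
    return sum(g / math.log(i + 2, 2) for i, g in enumerate(usefulness[:k]))

print(dcg_at_k([3, 0, 2, 1, 0], k=5))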

Quick Start

To train RAMs, run the script as follows:

python run.py --click_model DBN \
	--data qref \
	--id 0 \
	--metric_type expected_utility \
	--max_usefulness 3 \
	--k_num 6 \
	--max_dnum 10 \
	--iter_num 10000 \
	--alpha 0.01 \
	--alpha_decay 0.99 \
	--lamda 0.85 \
	--patience 5 \
	--use_knowledge True
  • click_model: options: ['DBN', 'SDBN', 'UBM', 'PBM']
  • data: options: ['fsd', 'qref']
  • metric_type: options: ['expected_utility', 'effort']
  • id: the bootstrapped sample id (see the driver sketch after this list for sweeping over all samples).
  • k_num: the number of user intent-shift types to consider; should be less than or equal to six.
  • max_dnum: the maximum number of top documents to be considered for a specific query.
  • use_knowledge: whether to use the transition probability from syntactic reformulation types to intent-level ones derived from the TianGong-Qref dataset.
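
To sweep all click models and bootstrapped samples in one go, a minimal driver sketch is shown below. The sample count N_SAMPLES and the fixed argument values are assumptions taken from the example command above; adjust them to your setup.

import subprocess

N_SAMPLES = 10  # assumed number of bootstrapped samples per dataset

for click_model in ['DBN', 'SDBN', 'UBM', 'PBM']:
    for sample_id in range(N_SAMPLES):
        # Launch one training run per (click model, bootstrapped sample) pair.
        subprocess.check_call([
            'python', 'run.py',
            '--click_model', click_model,
            '--data', 'qref',
            '--id', str(sample_id),
            '--metric_type', 'expected_utility',
            '--max_usefulness', '3',
            '--k_num', '6',
            '--max_dnum', '10',
            '--iter_num', '10000',
            '--alpha', '0.01',
            '--alpha_decay', '0.99',
            '--lamda', '0.85',
            '--patience', '5',
            '--use_knowledge', 'True',
        ])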

Citation

If you find the resources in this repo useful, please do not hesitate to star it and cite our work:

@inproceedings{chen2021incorporating,
  title={Incorporating Query Reformulating Behavior into Web Search Evaluation},
  author={Chen, Jia and Liu, Yiqun and Mao, Jiaxin and Zhang, Fan and Sakai, Tetsuya and Ma, Weizhi and Zhang, Min and Ma, Shaoping},
  booktitle={Proceedings of the 30th ACM International Conference on Information and Knowledge Management},
  year={2021},
  organization={ACM}
}

Contact

If you have any questions, please feel free to contact me via [email protected] or open an issue.

Owner

Jia Chen 陈佳 (xuanyuan14)