LANKA

This is the source code for the paper: Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases (ACL 2021, long paper).

Reference

If this repository helps you, please cite the following BibTeX entry:

@inproceedings{cao-etal-2021-knowledgeable,
    title = "Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases",
    author = "Cao, Boxi  and
      Lin, Hongyu  and
      Han, Xianpei  and
      Sun, Le  and
      Yan, Lingyong  and
      Liao, Meng  and
      Xue, Tong  and
      Xu, Jin",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.146",
    pages = "1860--1874",
}

Usage

To reproduce our results:

1. Create a conda environment and install the requirements

git clone https://github.com/c-box/LANKA.git
cd LANKA
conda create --name lanka python=3.7
conda activate lanka
pip install -r requirements.txt
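
Optionally, verify that PyTorch can see your GPU before running the experiments (this assumes the requirements install PyTorch, which the BERT-based scripts below rely on):

python -c "import torch; print(torch.cuda.is_available())"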

2. Download the data

3. Run the experiments

If your GPU has less than 24 GB of memory, reduce the batch size with the "--batch-size" parameter.
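
For example, on a GPU with roughly 12 GB of memory, a reduced batch size along these lines may fit (16 is only illustrative; the workable value depends on the model and your hardware):

python -m scripts.run_prompt_based --relation-type lama_original --model-name bert-large-cased --method evaluation --cuda-device 0 --batch-size 16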

3.1 Prompt-based Retrieval

  • Evaluate the precision on LAMA and WIKI-UNI using different prompts (a loop running all three prompt variants is sketched at the end of this section):

    • Manual prompts created by Petroni et al. (2019)

      python -m scripts.run_prompt_based --relation-type lama_original --model-name bert-large-cased --method evaluation --cuda-device [device] --batch-size [batch_size]
    • Mining-based prompts by Jiang et al. (2020b)

      python -m scripts.run_prompt_based --relation-type lama_mine --model-name bert-large-cased --method evaluation --cuda-device [device]
    • Automatically searched prompts from Shin et al. (2020)

      python -m scripts.run_prompt_based --relation-type lama_auto --model-name bert-large-cased --method evaluation --cuda-device [device]
  • Store various distributions needed for subsequent experiments:

    python -m scripts.run_prompt_based --model-name bert-large-cased --method store_all_distribution --cuda-device [device]
  • Calculate the average percentage of instances covered by the top-k answers or predictions (Table 1):

    python -m scripts.run_prompt_based --model-name bert-large-cased --method topk_cover --cuda-device [device]
  • Calculate the Pearson correlations of the prediction distributions on LAMA and WIKI-UNI (Figure 3; the figures will be stored in the 'pics' folder):

    python -m scripts.run_prompt_based --model-name bert-large-cased --method prediction_corr --cuda-device [device]
  • Calculate the Pearson correlations between the prompt-only distribution and prediction distribution on WIKI-UNI (Figure 4):

    python -m scripts.run_prompt_based --model-name bert-large-cased --method prompt_only_corr --cuda-device [device]
  • Calculate the KL divergence between the prompt-only distribution and the gold answer distribution of LAMA (Table 2):

    python -m scripts.run_prompt_based --relation-type [relation_type] --model-name bert-large-cased --method cal_prompt_only_div --cuda-device [device]
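
To sweep all three prompt sources in one pass, a small shell loop over the relation types listed above may be convenient (cuda device 0 is illustrative; adjust as needed):

for rt in lama_original lama_mine lama_auto; do
    python -m scripts.run_prompt_based --relation-type $rt --model-name bert-large-cased --method evaluation --cuda-device 0
done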

3.2 Case-based Analogy

  • Evaluate the case-based paradigm:

    python -m scripts.run_case_based --model-name bert-large-cased --task evaluate_analogy_reasoning --cuda-device [device]
  • Detailed comparison of the prompt-based and case-based paradigms (precision, type precision, type change, etc.) (Table 4):

    python -m scripts.run_case_based --model-name bert-large-cased --task type_precision --cuda-device [device]
  • Calculate the in-type rank change (Figure 6):

    python -m scripts.run_case_based --model-name bert-large-cased --task type_rank_change --cuda-device [device]

3.3 Context-based Inference

  • For explicit answer leakage (Tables 5 and 6):

    python -m scripts.run_context_based --model-name bert-large-cased --method explicit_leak --cuda-device [device]
  • For implicit answer leakage (Table 7):

    python -m scripts.run_context_based --model-name bert-large-cased --method implicit_leak --cuda-device [device]