Weakly Supervised Text-to-SQL Parsing through Question Decomposition

Overview

The official repository for the paper "Weakly Supervised Text-to-SQL Parsing through Question Decomposition" by Tomer Wolfson, Daniel Deutch and Jonathan Berant, accepted to Findings of NAACL 2022.

This repository contains the code and data used in our paper:

  1. Code for automatically synthesizing SQL queries from question decompositions + answers
  2. Code for the text-to-SQL and text-to-QDMR models used in our paper

Setup 🙌🏼

  1. Create the virtual environment
conda create -n [ENV_NAME] python=3.8
conda activate [ENV_NAME]
  2. Clone the repository
git clone https://github.com/tomerwolgithub/question-decomposition-to-sql
cd question-decomposition-to-sql
  3. Install the relevant requirements
pip install -r requirements.txt
python -m spacy download en_core_web_lg
  4. To train the QDMR parser model, set up a separate environment (due to different Hugging Face versions):
conda create -n qdmr_parser_env python=3.8
conda activate qdmr_parser_env
pip install -r requirements_qdmr_parser.txt
python -m spacy download en_core_web_lg

Download Resources 🗝️

1. QDMR Parsing Datasets:

2. Text-to-SQL Datasets:

3. Databases (schema & contents):

Convert the MySQL databases of Academic, IMDB, Yelp and GeoQuery to sqlite format using Jean-Luc Lacroix's mysql2sqlite tool:

./mysql2sqlite academic_mysql.sql | sqlite3 academic_sqlite.db
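
As an optional sanity check, the sketch below (Python standard library only) confirms that the converted file is a readable sqlite database by listing its tables; the file name matches the command above.

import sqlite3

# Open the converted database and list its tables.
conn = sqlite3.connect("academic_sqlite.db")
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # table names produced by the conversion
conn.close()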

Data Generation 🔨

Our SQL synthesis takes as input examples of <QDMR, database, answer> and automatically generates a SQL query that executes to the correct answer. The QDMR decompositions are either manually annotated or automatically predicted by a trained QDMR parser.
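
For intuition, here is a hypothetical input/output pair, written as a Python dict (illustrative only; the question is in the style of Spider's concert_singer database, and the field names are not the exact schema of our csv files):

# A hypothetical <QDMR, database, answer> example and a SQL query the
# synthesis could accept, since it executes to the given answer.
example = {
    "question": "How many singers do we have?",
    "qdmr": "return singers ;return number of #1",
    "database": "concert_singer",  # a Spider database
    "answer": [6],                 # the execution result used as supervision
}
synthesized_sql = "SELECT COUNT(*) FROM singer"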

Begin by copying all relevant sqlite databases to the data_generation directory.

mkdir data_generation/data
mkdir data_generation/data/spider_databases # copy Spider databases here
mkdir data_generation/data/other_databases # copy Academic, IMDB, Yelp and Geo databases here
  1. The SQL synthesis expects a formatted csv file, see example; a minimal sketch of producing such a file appears after the command below. Note that the SQL query in these files is only used to compute the answer.
  2. This may take several hours, as multiple candidate SQL queries are executed on their respective databases.
  3. To synthesize SQL from the <QDMR, database, answer> examples run:
python data_generation/main.py \
--input_file input_qdmr_examples.csv \
--output_file qdmr_grounded_sql.csv \
--json_steps True
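
Below is a minimal sketch of producing such an input csv. The column names are hypothetical; consult the linked example file for the exact schema expected by data_generation/main.py.

import csv

# Hypothetical input rows; the real column names may differ (see the example file).
rows = [{
    "question": "How many singers do we have?",
    "qdmr": "return singers ;return number of #1",
    "db_id": "concert_singer",
    "query": "SELECT COUNT(*) FROM singer",  # used only to compute the answer
}]

with open("input_qdmr_examples.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)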

Synthesized Data

The SQL synthesized using QDMR + answer supervision is available for each dataset in the data/sql_synthesis_results/ directory.

  • data/sql_synthesis_results/gold_qdmr_supervision: contains SQL synthesized using gold QDMRs that are manually annotated
  • data/sql_synthesis_results/predicted_qdmr_supervision: contains SQL synthesized using QDMRs predicted by a trained parser

Models 🗂️

QDMR Parser

The QDMR parser is a T5-large sequence-to-sequence model, fine-tuned to map questions to their QDMR decompositions. The model expects two csv files as input, serving as its train and dev sets. Use the files from the downloaded Break dataset to train the parser. Make sure that you are in the relevant python environment (requirements_qdmr_parser.txt).

To train the QDMR parser, configure the following parameters in train.py:

  • data_dir: the path to the directory containing the NL to QDMR datasets
  • training_set_file: name of the train set csv (e.g. break_train.csv)
  • dev_set_file: name of the dev set csv (e.g. break_dev.csv)
  • output_dir: the directory to store the trained model

After configuration, train the model as follows:

TOKENIZERS_PARALLELISM=false CUDA_VISIBLE_DEVICES=0 python src/qdmr_parser/train.py

To test a trained model and store its predictions, configure the following parameters in test.py:

  • checkpoint_path: path to the trained QDMR parser model to be evaluated
  • dev_set_file: name of the dev set csv to generate predictions for
  • predictions_output_file: the output file to store the parser's generated predictions

And run the following command:

TOKENIZERS_PARALLELISM=false CUDA_VISIBLE_DEVICES=0 python src/qdmr_parser/test.py
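
For orientation, here is a minimal inference sketch for a trained checkpoint, assuming the standard Hugging Face seq2seq setup; the checkpoint path and generation settings are assumptions, and src/qdmr_parser/test.py remains the authoritative entry point.

from transformers import T5ForConditionalGeneration, T5Tokenizer

checkpoint_path = "output_dir/best_checkpoint"  # hypothetical path
tokenizer = T5Tokenizer.from_pretrained(checkpoint_path)
model = T5ForConditionalGeneration.from_pretrained(checkpoint_path)

# Map a question to its predicted QDMR decomposition.
question = "How many singers do we have?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))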

Text-to-SQL

The text-to-SQL models are T5-large sequence-to-sequence models, fine-tuned to map questions to executable SQL queries. We compare models trained on gold SQL queries, annotated by experts, with models trained on our SQL synthesized from QDMR and answer supervision.

1. Setup directory

Setup the data for the text-to-SQL experiments as follows:

data
├── tables.json                         # Spider tables.json
├── databases
│   ├── academic
│   │   └── academic.sqlite             # Sqlite version of the populated Academic database (see downloads)
│   ├── geo
│   │   └── geo.sqlite                  # Sqlite version of the populated Geo database (see downloads)
│   ├── imdb
│   │   └── imdb.sqlite                 # Sqlite version of the populated IMDB database (see downloads)
│   ├── spider_databases                # Spider databases directory
│   │   ├── activity_1
│   │   │   └── activity_1.sqlite
│   │   └── ...
│   └── yelp
│       └── yelp.sqlite                 # Sqlite version of the populated Yelp database (see downloads)
└── queries
    ├── geo                             # See experiments data
    │   ├── geo_qdmr_train.json
    │   ├── geo_qdmr_predicted_train.json
    │   ├── geo_gold_train.json
    │   ├── geo_gold_dev.json
    │   ├── geo_gold_test.json
    │   ├── geo_gold_train.sql
    │   ├── geo_gold_dev.sql
    │   └── geo_gold_test.sql
    └── spider
        ├── spider_qdmr_train.json            # See experiments data
        ├── spider_qdmr_predicted_train.json  # See experiments data
        ├── spider_gold_train.json            # Spider training set
        ├── spider_gold_dev.json              # Spider dev set
        ├── spider_gold_train.sql             # Spider training set SQL queries
        └── spider_gold_dev.sql               # Spider dev set SQL queries

Database files are described in the downloads section. See the experiments section for the exact train and test files.
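
Before training, the optional sketch below checks that the key Spider files from the layout above are in place (assuming the data directory sits at ./data):

import os

# Paths taken from the directory layout above.
expected = [
    "data/tables.json",
    "data/databases/spider_databases",
    "data/queries/spider/spider_gold_train.json",
    "data/queries/spider/spider_gold_dev.json",
    "data/queries/spider/spider_gold_dev.sql",
]
for path in expected:
    status = "ok" if os.path.exists(path) else "missing"
    print(f"{status}: {path}")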

2. Train model

To train the text-to-SQL model, configure the following parameters in train.py:

  • dataset: either spider or geo
  • target_encoding: sql for gold sql and either qdmr_formula or qdmr_sql for the QDMR experiments
  • data_dir: path to the directory containing the experiments data
  • output_dir: the directory to store the trained model
  • db_dir: the directory containing the databases
  • training_set_file: training set file in the data directory e.g. spider/spider_gold_train.json
  • dev_set_file: dev set file in the data directory e.g. spider/spider_gold_dev.json
  • dev_set_sql: dev set SQL queries in the data directory e.g. spider/spider_gold_dev.sql

After configuration, train the model as follows:

CUDA_VISIBLE_DEVICES=0 python train.py 

3. Test model

To test the text-to-SQL model, first configure the relevant parameters and checkpoint_path in test.py. Then generate the trained model's predictions using:

CUDA_VISIBLE_DEVICES=0 python test.py 

Experiments ⚗️

Data

Gold SQL:

For the Spider experiments we use its original train and dev json and sql files. For Geo880, Academic, IMDB and Yelp we format the original datasets as json files, available here.

QDMR Synthesized SQL:

The QDMR text-to-SQL models are not trained directly on the synthesized SQL. Instead, we train on an encoded QDMR representation with its phrase-DB linking (from the SQL synthesis). This representation is automatically mapped to SQL to evaluate the models' execution accuracy. To generate these grounded QDMRs we use the output of the data generation phase. The function encoded_grounded_qdmr in src/data_generation/write_encoding.py receives the json file containing the synthesized SQL examples. It then encodes them as Lisp-style formulas of QDMR steps and their relevant phrase-DB linking.
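
A sketch of how this encoding step might be invoked is shown below; the exact signature of encoded_grounded_qdmr and the file names are assumptions, so treat it as orientation rather than a drop-in script.

# Hypothetical invocation; see src/data_generation/write_encoding.py for the
# actual signature and output handling.
from src.data_generation.write_encoding import encoded_grounded_qdmr

# Input: the json file of synthesized SQL examples from the data generation
# phase. Output: Lisp-style formulas of QDMR steps with phrase-DB linking.
encoded_examples = encoded_grounded_qdmr("qdmr_grounded_sql.json")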

For convenience, you can download the encoded QDMR training sets used in our experiments here. These include:

  • qdmr_ground_enc_spider_train.json: 5,349 examples, synthesized using gold QDMR + answer supervision
  • qdmr_ground_enc_predicted_spider_train_few_shot.json: 5,075 examples, synthesized using 700 gold QDMRs plus predicted QDMR + answer supervision
  • qdmr_ground_enc_predicted_spider_train_30_db.json: 1,129 examples, synthesized using predicted QDMR + answer supervision
  • qdmr_ground_enc_predicted_spider_train_40_db.json: 1,440 examples, synthesized using predicted QDMR + answer supervision
  • qdmr_ground_enc_predicted_spider_train_40_db_V2.json: 1,552 examples, synthesized using predicted QDMR + answer supervision
  • qdmr_ground_enc_geo880_train.json: 454 examples, synthesized using gold QDMR + answer supervision
  • qdmr_ground_enc_predicted_geo_train_zero_shot.json: 432 examples, synthesized using predicted QDMR + answer supervision

Configurations

Below are the configurations for training the text-to-SQL models on Spider. Other parameters are fixed in train.py.

SQL Gold (Spider):

{'dataset': 'spider',
'target_encoding': 'sql',
'db_dir': 'databases/spider_databases',
'training_set_file': 'queries/spider/spider_gold_train.json',
'dev_set_file': 'queries/spider/spider_gold_dev.json',
'dev_set_sql': 'queries/spider/spider_gold_dev.sql'}

QDMR Gold (Spider):

{'dataset': 'spider',
'target_encoding': 'qdmr_formula',
'db_dir': 'databases/spider_databases',
'training_set_file': 'queries/spider/spider_qdmr_train.json',
'dev_set_file': 'queries/spider/spider_gold_dev.json',
'dev_set_sql': 'queries/spider/spider_gold_dev.sql'}

QDMR Predicted (Spider):

{'dataset': 'spider',
'target_encoding': 'qdmr_formula',
'db_dir': 'databases/spider_databases',
'training_set_file': 'queries/spider/spider_qdmr_predicted_train.json',
'dev_set_file': 'queries/spider/spider_gold_dev.json',
'dev_set_sql': 'queries/spider/spider_gold_dev.sql'}

Below are the configurations for training the text-to-SQL models on Geo880.

SQL Gold (Geo):

{'dataset': 'geo',
'target_encoding': 'sql',
'db_dir': 'databases',
'training_set_file': 'queries/geo/geo_gold_train.json',
'dev_set_file': 'queries/geo/geo_gold_dev.json',
'dev_set_sql': 'queries/geo/geo_gold_dev.sql'}

QDMR Gold (Geo):

{'dataset': 'geo',
'target_encoding': 'qdmr_sql',
'db_dir': 'databases',
'training_set_file': 'queries/geo/geo_qdmr_train.json',
'dev_set_file': 'queries/geo/geo_gold_dev.json',
'dev_set_sql': 'queries/geo/geo_gold_dev.sql'}

QDMR Predicted (Geo):

{'dataset': 'geo',
'target_encoding': 'qdmr_sql',
'db_dir': 'databases',
'training_set_file': 'queries/geo/geo_qdmr_predicted_train.json',
'dev_set_file': 'queries/geo/geo_gold_dev.json',
'dev_set_sql': 'queries/geo/geo_gold_dev.sql'}

Evaluation

Text-to-SQL model performance is evaluated using SQL execution accuracy in src/text_to_sql/eval_spider.py. The script automatically converts encoded QDMR predictions to SQL before executing them on the target database.
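
For intuition, here is a minimal sketch of the execution-accuracy criterion (not the actual evaluation script): a predicted query counts as correct if it returns the same multiset of rows as the gold query on the target database.

import sqlite3
from collections import Counter

def execution_match(db_path: str, pred_sql: str, gold_sql: str) -> bool:
    """Compare the result multisets of a predicted and a gold SQL query."""
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # an un-executable prediction counts as incorrect
    finally:
        conn.close()
    return Counter(pred_rows) == Counter(gold_rows)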

Citation ✍🏽

@inproceedings{wolfson-etal-2022-weakly,
    title = "Weakly Supervised Text-to-SQL Parsing through Question Decomposition",
    author = "Wolfson, Tomer and Deutch, Daniel and Berant, Jonathan",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
    year = "2022",
}

License

This repository and its data are released under the MIT license.

For the licensing of the external datasets and databases used throughout our experiments, please refer to their respective sources.
