This project contains the source code for the ACL 2021 main conference paper [LGESQL: Line Graph Enhanced Text-to-SQL Model with Mixed Local and Non-Local Relations](https://aclanthology.org/2021.acl-long.198). If you find it useful, please cite our work.
```
@inproceedings{cao-etal-2021-lgesql,
    title = "{LGESQL}: Line Graph Enhanced Text-to-{SQL} Model with Mixed Local and Non-Local Relations",
    author = "Cao, Ruisheng and
      Chen, Lu and
      Chen, Zhi and
      Zhao, Yanbin and
      Zhu, Su and
      Yu, Kai",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.198",
    doi = "10.18653/v1/2021.acl-long.198",
    pages = "2541--2555",
}
```
The following commands are provided in `setup.sh`.

- Firstly, create conda environment `text2sql`:
  - In our experiments, we use `torch==1.6.0` and `dgl==0.5.3` with CUDA version 10.1.
  - We use one GeForce RTX 2080 Ti for GLOVE and base-series pre-trained language model (PLM) experiments, and one Tesla V100-PCIE-32GB for large-series PLM experiments.

  ```sh
  conda create -n text2sql python=3.6
  source activate text2sql
  pip install torch==1.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
  pip install -r requirements.txt
  ```
- Next, download dependencies:

  ```sh
  python -c "import stanza; stanza.download('en')"
  python -c "from embeddings import GloveEmbedding; emb = GloveEmbedding('common_crawl_48', d_emb=300)"
  python -c "import nltk; nltk.download('stopwords')"
  ```
- Download pre-trained language models from the [Hugging Face Model Hub](https://huggingface.co/models), such as `bert-large-uncased-whole-word-masking` and `electra-large-discriminator`, into the `pretrained_models` directory. The vocab file for `glove.42B.300d` is also pulled (please ensure that Git LFS is installed):

  ```sh
  mkdir -p pretrained_models && cd pretrained_models
  git lfs install
  git clone https://huggingface.co/bert-large-uncased-whole-word-masking
  git clone https://huggingface.co/google/electra-large-discriminator
  mkdir -p glove.42b.300d && cd glove.42b.300d
  wget -c http://nlp.stanford.edu/data/glove.42B.300d.zip && unzip glove.42B.300d.zip
  awk -v FS=' ' '{print $1}' glove.42B.300d.txt > vocab_glove.txt
  ```
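To verify the environment and downloads before moving on, a quick sanity check along these lines can be run from the repository root (this snippet is ours, not part of the repository; the paths match the clone commands above):

```python
import os

import dgl
import torch

# Versions used in our experiments: torch==1.6.0 (cu101) and dgl==0.5.3.
print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('dgl:', dgl.__version__)

# Directories created by the git clone commands above.
for name in ['bert-large-uncased-whole-word-masking', 'electra-large-discriminator']:
    path = os.path.join('pretrained_models', name)
    print(path, '->', 'ok' if os.path.isdir(path) else 'missing')

# GloVe vocab file extracted with awk above.
vocab = 'pretrained_models/glove.42b.300d/vocab_glove.txt'
print(vocab, '->', 'ok' if os.path.isfile(vocab) else 'missing')
```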
- Download, unzip and rename the spider.zip into the directory `data`.
- Merge the `data/train_spider.json` and `data/train_others.json` into one single dataset `data/train.json` (a minimal merging sketch is given after this list).
- Preprocess the train and dev dataset, including input normalization, schema linking, graph construction and output actions generation. (Our preprocessed dataset can be downloaded here.)

  ```sh
  ./run/run_preprocessing.sh
  ```
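Both training files are plain JSON lists of examples with the same format, so the merge step amounts to concatenating the two lists (a sketch of ours, not a repository script; paths are relative to the repo root):

```python
import json

# Load the two official Spider training splits.
with open('data/train_spider.json') as f:
    train_spider = json.load(f)
with open('data/train_others.json') as f:
    train_others = json.load(f)

# Concatenate them into the single training file expected by preprocessing.
with open('data/train.json', 'w') as f:
    json.dump(train_spider + train_others, f, ensure_ascii=False, indent=4)
```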
Training LGESQL models with GLOVE, BERT and ELECTRA respectively:

- msde: mixed static and dynamic embeddings (see the illustrative sketch after the commands below)
- mmc: multi-head multi-view concatenation

```sh
./run/run_lgesql_glove.sh [mmc|msde]
./run/run_lgesql_plm.sh [mmc|msde] bert-large-uncased-whole-word-masking
./run/run_lgesql_plm.sh [mmc|msde] electra-large-discriminator
```
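To give a rough sense of what mixing static and dynamic embeddings means: each relation (edge) has a static view from a fixed embedding table and a dynamic view that is updated through the line graph, and the two views are combined during message passing. The snippet below is our simplified illustration of that general idea under a gated-mixture assumption; it is not the repository's implementation, and all names are hypothetical:

```python
import torch
import torch.nn as nn

class MixedRelationEmbedding(nn.Module):
    """Hypothetical sketch: mix a static relation embedding with a
    dynamically updated edge state via a learned gate (simplified,
    not the paper's exact formulation)."""

    def __init__(self, num_relations: int, dim: int):
        super().__init__()
        self.static = nn.Embedding(num_relations, dim)  # fixed lookup per relation type
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, rel_ids: torch.Tensor, dyn_state: torch.Tensor) -> torch.Tensor:
        s = self.static(rel_ids)                          # static view
        g = torch.sigmoid(self.gate(torch.cat([s, dyn_state], dim=-1)))
        return g * s + (1.0 - g) * dyn_state              # gated mixture
```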
- Create the directory `saved_models`, and save the trained model and its configuration (at least containing `model.bin` and `params.json`) into a new directory under `saved_models`, e.g. `saved_models/electra-msde-75.1/` (see the loading sketch after this list).
- For evaluation, see `run/run_evaluation.sh` and `run/run_submission.sh` (eval from scratch) for reference.
- Model instances and submission scripts are available in codalab (plm) and google drive, including the submitted BERT and ELECTRA models. Codes and model for GLOVE are deprecated.
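For reference, a saved model directory can be read back roughly as follows (a hypothetical helper of ours; the actual loading logic lives in the `run/` scripts):

```python
import json
import os

import torch

def load_saved_model(model_dir: str = 'saved_models/electra-msde-75.1'):
    """Hypothetical helper: read the configuration and weights
    saved by a training run."""
    with open(os.path.join(model_dir, 'params.json')) as f:
        params = json.load(f)                    # training configuration
    state_dict = torch.load(os.path.join(model_dir, 'model.bin'),
                            map_location='cpu')  # trained parameters
    return params, state_dict
```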
Dev and test EXACT MATCH ACC in the official leaderboard, also provided in the `results` directory:

| model | dev acc | test acc |
| --- | --- | --- |
| LGESQL + GLOVE | 67.6 | 62.8 |
| LGESQL + BERT | 74.1 | 68.3 |
| LGESQL + ELECTRA | 75.1 | 72.0 |
We would like to thank Tao Yu, Yusen Zhang and Bo Pang for running evaluations on our submitted models. We are also grateful to the flexible semantic parser TranX, which inspired our work.