ValueNet: A Neural Text-to-SQL Architecture Incorporating Values


Building natural language interfaces for databases has been a long-standing challenge for several decades. The major advantage of these so-called text-to-SQL systems is that end-users can query complex databases without the need to know SQL or the underlying database schema. Due to significant advancements in machine learning, the recent focus of research has been on neural networks to tackle this challenge on complex datasets like Spider. Several recent text-to-SQL systems achieve promising results on this dataset. However, none of them extracts and incorporates values from the user questions for generating SQL statements. Thus, the practical use of these systems in a real-world scenario has not been sufficiently demonstrated yet.

In this paper we propose ValueNet light and ValueNet -- the first end-to-end text-to-SQL systems incorporating values on the challenging Spider dataset. The main idea of our approach is to use not only metadata about the underlying database but also information from the base data as input to our neural network architecture. In particular, we propose a novel architecture sketch to extract values from a user question and to derive possible value candidates that are not explicitly mentioned in the question. We then use a neural model based on an encoder-decoder architecture to synthesize the SQL query. Finally, we evaluate our model on the Spider challenge using the Execution Accuracy metric, which is more difficult than the metric used by most participants of the challenge. Our experimental evaluation demonstrates that ValueNet light and ValueNet reach state-of-the-art results of 64% and 60% accuracy, respectively, for translating from text to SQL, even under this stricter metric.
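For intuition, the value-candidate idea from the abstract can be illustrated with a deliberately naive heuristic. This is only a sketch and not ValueNet's actual extractor, which relies on NER and lookups against the base data:

```python
import re

def extract_value_candidates(question: str) -> list[str]:
    """Heuristic sketch: collect literal value candidates from a user question.

    Illustration only -- ValueNet's real pipeline uses NER and matches
    candidates against the database content.
    """
    candidates = []
    # Quoted strings are almost certainly literal values.
    quoted = re.findall(r"'([^']+)'|\"([^\"]+)\"", question)
    candidates += [a or b for a, b in quoted]
    # Standalone numbers are likely comparison values (e.g. age > 30).
    candidates += re.findall(r"\b\d+(?:\.\d+)?\b", question)
    # Capitalized spans often name entities stored in the data
    # (naive: this also picks up the first word of the sentence).
    candidates += re.findall(r"\b(?:[A-Z][a-z]+(?:\s[A-Z][a-z]+)*)\b", question)
    return candidates
```

A candidate list like this would then be ranked and matched against the base data before being placed into the synthesized SQL query.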


To run this code you need the Spider dataset. If you plan to do inference only, you will further need a trained model.


Disclaimer: this code is largely based on the IRNet repository. By now, though, it has diverged considerably from the original code base, which is why we publish it as a non-fork. We thank the IRNet team for their great work, which made this repository possible in the first place.


You can either install the dependencies with pip (pip install -r requirements.txt) or with pipenv (pipenv install). After installing, you can run the tasks either from the command line or in PyCharm. To run them in PyCharm, simply import the run configurations from the .run folder.

You just wanna play around?

Use the manual inference mode, where you specify a database and can then just ask questions. Run src/manual_inference/ --model_to_load=path/to/ --database=cre_Theme_park. For all configuration options, see the file.

IMPORTANT: the manual inference mode uses the Google Entities API for NER. You need to obtain a valid API key and add it to the file. The Google API is free up to a certain number of requests.

Image of Manual Inference


Simply run python src/ For all configuration options, see the file. You might, for example, want to increase the batch_size.

After each epoch the evaluation task is executed. Evaluation will use the Spider Evaluation script to execute SQL queries on the Spider databases.



