A modular framework for evaluating natural language to SQL systems
Supports execution-based metrics, multiple ground truths, LLM-as-judge, and rich error analysis.
Text-to-SQL systems aim to translate natural language questions into executable SQL queries, enabling intuitive access to structured data. While recent advances in large language models have significantly improved generation quality, evaluating these systems remains a complex and critical challenge.
Why an Evaluation Toolkit?
Assessing the correctness and utility of generated SQL queries requires more than simple string matching; it demands a flexible, rigorous framework that accounts for semantic equivalence, execution behavior, and real-world variability in query formulation. This toolkit was designed to address key limitations in existing approaches and provide a comprehensive solution for researchers and developers.
- SQL Semantics Are Subtle: Minor syntactic differences can lead to major semantic changes, or none at all. Traditional string-based metrics often fail to capture this nuance.
- Execution-Based Validation Is Inconsistent: Existing implementations often expect exact column sets, even when the question allows flexibility. For example, a query asking for a list of customers might return IDs, names, or both; all are valid answers, yet not always recognized as such.
- Lack of Unified Standards: There is no widely adopted, unified implementation for execution-based evaluation, leading to fragmented and sometimes misleading results.
- Multiple Valid SQLs: Many questions have more than one correct SQL formulation. This toolkit supports multiple ground truth queries to reflect that reality.
- LLM-as-Judge Support: In cases where ground truth is unavailable or traditional metrics fall short, the toolkit enables using LLMs as judges, a surprisingly reliable method for assessing query correctness and relevance.
- Error Analysis and Visualization: Understanding why a model fails is just as important as knowing that it fails. The toolkit includes scripts and notebooks for summarizing results, visualizing performance, and performing detailed error analysis.
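The flexible, execution-based comparison described above can be sketched in a few lines. The following is a minimal illustration only, not the toolkit's actual implementation: a prediction counts as correct if its executed result matches any of several ground-truth results, comparing rows as multisets (so row order does not matter) and allowing the prediction to return extra columns.

```python
from collections import Counter
from itertools import permutations

def rows_match(pred_rows, gold_rows):
    """Compare result sets as multisets of rows, trying every selection and
    ordering of the predicted columns so column order does not matter and
    extra predicted columns are tolerated."""
    if not gold_rows:
        return not pred_rows
    if not pred_rows:
        return False
    n_gold = len(gold_rows[0])
    if len(pred_rows[0]) < n_gold:
        return False  # prediction is missing columns
    gold_counter = Counter(gold_rows)
    for perm in permutations(range(len(pred_rows[0])), n_gold):
        projected = [tuple(row[i] for i in perm) for row in pred_rows]
        if Counter(projected) == gold_counter:
            return True
    return False

def execution_accuracy(pred_rows, gold_results):
    """A prediction is correct if it matches ANY ground-truth result."""
    return any(rows_match(pred_rows, gold) for gold in gold_results)

# Example: "list the customers" -- IDs, names, or both are all acceptable.
gold_ids   = [(1,), (2,)]
gold_names = [("Ada",), ("Bob",)]
pred       = [(1, "Ada"), (2, "Bob")]   # prediction returns both columns
print(execution_accuracy(pred, [gold_ids, gold_names]))  # True
```

The prediction returning both ID and name matches either single-column ground truth, which is exactly the kind of case that exact-column-set matching rejects.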
- Modular components for inference, execution, evaluation, profiling, and analysis
- Support for multiple benchmarks and flexible configurations
- Extensible metrics and evaluation strategies
- Tools for debugging and improving model performance
- A practical framework for both academic research and production-level benchmarking
Whether you're building new models, comparing existing ones, or diagnosing performance bottlenecks, this toolkit provides the resources needed to evaluate text-to-SQL systems with rigor and clarity.
- Evaluation (`scripts/evaluation/src/evaluation.py`): Includes a script and library for evaluating text-to-SQL model outputs using various metrics. See `scripts/evaluation/README.md` for details.
- Text-to-SQL Inference (`scripts/inference`): Provides a simple LLM inference pipeline for generating SQL queries from natural language inputs. Run `python scripts/inference/run_inference.py -h` for more information.
- SQL Execution (`scripts/execution`): Runs the ground truth and predicted SQLs for a given benchmark and saves the dataframes for evaluation and error analysis. Run `python scripts/execution/run_execution.py -h` for more information.
- Results and Error Analysis (`scripts/analysis`): Contains scripts and utilities for analyzing evaluation results, identifying common error patterns, and generating summary statistics and visualizations. Useful for debugging and improving model performance.
- SQL Profiling (`scripts/profiling`): Tools for profiling SQL queries to gather query characteristics, enabling better analysis of results and errors.
- Python 3.11 or higher
- UV package manager (recommended) or pip/conda
UV is a fast Python package manager that makes setup much quicker:
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or using pip
pip install uv
# Or using Homebrew (macOS)
brew install uv

# Clone the repository
git clone https://github.com/IBM/text2sql-eval-toolkit.git
cd text2sql-eval-toolkit
# Create virtual environment and install dependencies
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install the package in editable mode
uv pip install -e .
# Optional: Install with database-specific dependencies
uv pip install -e ".[mysql,presto,db2]"

# Create conda environment (or use venv)
conda create -n evaltoolkit python=3.12
conda activate evaltoolkit
# Clone and install
git clone https://github.com/IBM/text2sql-eval-toolkit.git
cd text2sql-eval-toolkit
pip install -e .

To use features that rely on LLM inference, you need to set the required environment variables for the relevant services. For watsonx.ai, this includes setting the following environment variables: WATSONX_APIKEY, WATSONX_API_BASE, and WATSONX_PROJECTID. See env.example for all available configuration options.
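For example, with watsonx.ai the variables can be exported in the shell before running the scripts. The values below are placeholders, not real credentials or a guaranteed endpoint:

```shell
# Placeholder values; substitute your own credentials and endpoint.
export WATSONX_APIKEY="your-api-key"
export WATSONX_API_BASE="https://us-south.ml.cloud.ibm.com"
export WATSONX_PROJECTID="your-project-id"
```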
The toolkit comes with pre-defined public benchmarks including BIRD-SQL, Spider, Beaver, and Archer. These benchmarks require initial setup such as downloading databases or configuring connection strings. The toolkit also includes smaller test benchmarks (subsets of the full benchmarks) for quick validation and CI/CD pipelines.
See data/benchmarks/README.md for complete benchmark details, setup instructions, and configuration.
Single Benchmark:
python scripts/run_experiment.py [benchmark_name]

This will initialize a pipeline, perform inference to generate SQLs and output dataframes, and run the evaluation script to gather accuracy metrics. Benchmark names and configurations are defined in data/benchmarks.json (for full benchmarks) or data/test-benchmarks.json (for test benchmarks). See data/benchmarks/README.md for more details. Run python scripts/run_experiment.py -h for details on how to set different input arguments for the script.
Multiple Benchmarks:
# Run all test benchmarks (recommended for quick validation)
python scripts/run_all_benchmarks.py --test
# Run all full benchmarks
python scripts/run_all_benchmarks.py
# Run with all baselines (standard + 6 agentic variants)
python scripts/run_all_benchmarks.py --test --run_all_baselines

The run_all_benchmarks.py script supports separate model configurations for standard and agentic baselines to optimize runtime. See data/benchmarks/README.md for detailed configuration options.
To run evaluation only:
python scripts/evaluation/run_evaluation.py [-h] [--output_file OUTPUT_FILE] [--summary_file SUMMARY_FILE] [--csv_summary_file CSV_SUMMARY_FILE] [--use_llm_judge] input_file
See scripts/evaluation/README.md for details.
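The --use_llm_judge option corresponds to the LLM-as-judge idea described earlier. As a rough, hypothetical sketch of how such a judge can be structured (the prompt wording, the judge_sql function, and the call_llm callable are all assumptions for illustration, not the toolkit's actual implementation):

```python
import json

# Hypothetical judge prompt; doubled braces escape the literal JSON example.
JUDGE_PROMPT = """You are a SQL expert. Given a database schema, a question,
and a candidate SQL query, answer with a JSON object
{{"correct": true/false, "reason": "..."}}.

Schema: {schema}
Question: {question}
Candidate SQL: {sql}"""

def judge_sql(question, schema, sql, call_llm):
    """Ask an LLM (any callable: prompt -> response string) to judge a query."""
    prompt = JUDGE_PROMPT.format(schema=schema, question=question, sql=sql)
    verdict = json.loads(call_llm(prompt))
    return bool(verdict["correct"]), verdict.get("reason", "")

# A stub "LLM" so the sketch runs without credentials; a real setup would
# call a hosted model instead.
def stub_llm(prompt):
    return '{"correct": true, "reason": "query selects the requested columns"}'

ok, why = judge_sql("List all customers", "customers(id, name)",
                    "SELECT name FROM customers", stub_llm)
print(ok, why)
```

Keeping the judge behind a plain callable makes it easy to swap in different model backends or to unit-test the parsing logic with a stub, as shown.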
To run inference only:
# Standard baseline inference
python scripts/inference/run_inference.py benchmark_id [--model_names MODEL1 MODEL2 ...]
# Agentic pipeline inference with default models
python scripts/inference/run_inference.py benchmark_id --pipeline_type agentic
# Agentic pipeline with custom models
python scripts/inference/run_inference.py benchmark_id --pipeline_type agentic \
--agentic_models "wxai:openai/gpt-oss-120b" "wxai:meta-llama/llama-3-3-70b-instruct"
# Additional options
python scripts/inference/run_inference.py benchmark_id \
--pipeline_type agentic \
--agentic_version v2 \
--max_attempts 5 \
--max_new_tokens 1024

Like run_experiment.py and run_all_benchmarks.py, the inference script supports separate model configurations via --model_names (for standard baseline) and --agentic_models (for agentic pipeline). Run python scripts/inference/run_inference.py -h for all options.
To run execution only:
python scripts/execution/run_execution.py [-h] benchmark_id
To produce the results dashboard along with results summary and error analysis Markdown files for all the benchmarks defined in the data/benchmarks.json file, run:
python scripts/analysis/make_summary_report.py
The output will be written to data/results/README.md.
text2sql-eval-toolkit
├── notebooks/                  # Jupyter notebooks showcasing the use of the toolkit functions
├── data/                       # Benchmark datasets and evaluation results
│   ├── benchmarks/             # Benchmark data and schema files
│   │   ├── README.md           # Detailed benchmark documentation
│   │   ├── test_benchmarks/    # Test benchmark data files
│   │   └── dbs/                # Database files and setup instructions
│   ├── benchmarks.json         # Full benchmarks configuration
│   ├── test-benchmarks.json    # Test benchmarks configuration (smaller datasets)
│   └── results/                # Evaluation outputs
│       └── README.md           # Summary of results
├── scripts/                    # Scripts for running experiments
│   ├── analysis/               # Results and error analysis scripts and utilities
│   ├── curation/               # Benchmark data curation and preprocessing scripts
│   ├── evaluation/             # Evaluation pipeline script
│   ├── execution/              # Scripts for SQL execution
│   ├── inference/              # Inference pipeline (baseline) script
│   └── profiling/              # SQL query profiling script
├── src/                        # Source code for the toolkit
│   └── text2sql_eval_toolkit/  # Core modules
│       ├── analysis/           # Results and error analysis module
│       ├── evaluation/         # Evaluation module
│       ├── execution/          # SQL execution module
│       ├── inference/          # LLM inference (baseline) module
│       └── profiling/          # SQL profiling module
├── pyproject.toml              # Build system and project metadata
├── README.md                   # Project documentation
├── requirements.txt            # Python dependencies
└── LICENSE                     # License file
We welcome external contributions! If you have an itch, please feel free to scratch it.
To contribute code or documentation, please submit a pull request.
A good way to familiarize yourself with the codebase and contribution process is to look for and tackle low-hanging fruit in the issue tracker.
Note: We appreciate your effort, and want to avoid a situation where a contribution requires extensive rework (by you or by us), sits in backlog for a long time, or cannot be accepted at all!
For detailed contribution guidelines, please see CONTRIBUTING.md.
- Fork the repo
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Added some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create a new Pull Request
This project is licensed under the Apache License 2.0. See the LICENSE file for more details.
- CODE_OF_CONDUCT.md - Community guidelines
- SECURITY.md - Security policy and vulnerability reporting
- MAINTAINERS.md - Project maintainers
- CHANGELOG.md - Project changelog
