A Transformer model trained on the WikiSQL dataset that accepts a natural-language question as input and returns the corresponding SQL query as output.
Try it yourself here
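For context, WikiSQL queries are single-table SELECT statements built from an optional aggregation, a selected column, and WHERE conditions joined with AND. A minimal sketch of that output space (the function name and table/column names are illustrative, not taken from this repo):

```python
# WikiSQL's aggregation and comparison operators (from the dataset spec).
AGG_OPS = ["", "MAX", "MIN", "COUNT", "SUM", "AVG"]
COND_OPS = ["=", ">", "<"]

def build_wikisql_query(table, select_col, agg_idx, conditions):
    """Assemble a WikiSQL-style query string.

    conditions: list of (column, op_idx, value) tuples, joined with AND.
    """
    agg = AGG_OPS[agg_idx]
    select = f"{agg}({select_col})" if agg else select_col
    query = f"SELECT {select} FROM {table}"
    if conditions:
        clauses = [f"{col} {COND_OPS[op]} {val!r}" for col, op, val in conditions]
        query += " WHERE " + " AND ".join(clauses)
    return query

# e.g. "How many games were played in 2008?"
print(build_wikisql_query("games", "Game", 3, [("Year", 0, "2008")]))
# -> SELECT COUNT(Game) FROM games WHERE Year = '2008'
```

The model's job is to predict the pieces of such a query (aggregation, column, conditions) from the question alone.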
```
├── LICENSE
│
├── README.md          <- Documentation to get more information about the project.
│
├── saved_models       <- Trained and serialized models.
│
├── requirements.txt   <- The requirements file for reproducing the environment.
│
├── src                <- Source code for use in this project.
│   ├── __init__.py         <- Makes src a Python module.
│   │
│   ├── train_tokenizer.py  <- Script to train a SentencePiece tokenizer on the dataset.
│   │
│   ├── dataset.py          <- Script to load and preprocess the dataset.
│   │
│   ├── model.py            <- Script that defines the Transformer model.
│   │
│   ├── config.py           <- Contains all the basic parameters for training.
│   │
│   └── train.py            <- Script to train the model.
│
├── utils
│   ├── examples.csv   <- CSV file with a few example predictions.
│   │
│   └── examples.png   <- An image with a few example predictions.
│
├── engine.py          <- Script to perform inference with the trained model.
│
└── ui.py              <- Script to build the Streamlit web application.
```
- Python 3 is required. Install the dependencies with:

```
pip install -r requirements.txt
```

- `src/train.py` is used to train the model.
- `engine.py` is used to perform inference.
- `ui.py` is used to build the Streamlit web application.