Evaluation scripts for OKBQA-TGM

This is a simple prototype of the sQA Evaluator, a library for subdivided, semantic, and systematic evaluation of semantic QA systems. The scripts aim to evaluate the quality of SPARQL templates generated by Template Generation Modules (TGMs), one of the core modules of the Open Knowledge Base and Question-Answering (OKBQA) framework.

Requirements

The scripts are designed to work with Python 3 (3.5 or later).

Installation

All dependencies can be installed with a single command:

$ python3 setup.py install

Usage

Run eval_tgm.py with the Python 3 interpreter:

$ python3 eval_tgm.py {json file} ...
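
For example, to evaluate two formatted dataset files at once (the file names below are only illustrative):

$ python3 eval_tgm.py data/qald-7-train.json data/qald-7-test.json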

Supported Datasets

Quick preparation

Run the following commands:

$ git clone https://github.com/ag-sc/QALD
$ python3 tools/format_qald_data.py ./QALD

The input JSON files for eval_tgm.py will then be placed in the data directory.

Format

Input JSON files have to contain pairs of a natural-language question (annotated with a language tag) and a SPARQL query. Here's a minimal sample:

{
    "questions": [
        {
            "question": [
                {
                    "language": "en",
                    "string": "What is the capital of Japan?"
                }
            ],
            "query": {
                "sparql": "SELECT ?city WHERE { res:Japan onto:capital ?city . }"
            }
        }
    ]
}
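
For illustration, the snippet below shows one way to sanity-check that a file follows this format before passing it to eval_tgm.py. It is a minimal sketch, not part of the evaluator itself; the field names are taken directly from the sample above.

import json
import sys

def check_input_file(path):
    # Every entry needs at least one language-annotated question string
    # and a SPARQL query, as in the sample above.
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    for i, q in enumerate(data.get("questions", [])):
        texts = q.get("question", [])
        if not any("language" in t and "string" in t for t in texts):
            print("question {}: missing language-annotated string".format(i))
        if "sparql" not in q.get("query", {}):
            print("question {}: missing SPARQL query".format(i))

if __name__ == "__main__":
    for path in sys.argv[1:]:
        check_input_file(path)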

Useful datasets

The following datasets can be used for the evaluation:

Formatters for these datasets are provided in the tools directory.
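
As a rough idea of what such a formatter does, the sketch below reduces a dataset file to the fields shown in the Format section. It is illustrative only: it assumes the source file already uses QALD-style "question" and "query" entries, and it is not the actual tools/format_qald_data.py.

import json
import sys

def format_dataset(src_path, dst_path):
    # Keep only the language-tagged question strings and the SPARQL query,
    # producing the minimal input format described above.
    with open(src_path, encoding="utf-8") as f:
        src = json.load(f)
    questions = []
    for q in src.get("questions", []):
        questions.append({
            "question": [
                {"language": t["language"], "string": t["string"]}
                for t in q.get("question", [])
                if "language" in t and "string" in t
            ],
            "query": {"sparql": q.get("query", {}).get("sparql", "")},
        })
    with open(dst_path, "w", encoding="utf-8") as f:
        json.dump({"questions": questions}, f, ensure_ascii=False, indent=4)

if __name__ == "__main__":
    format_dataset(sys.argv[1], sys.argv[2])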

License

This program is released under the MIT license.


Takuto ASAKURA