Error Annotations of our Human Evaluation

This repo contains the error annotation guidelines and sample annotations from our human evaluation experiment for our paper, Improving Factual Accuracy of Neural Table-to-Text Output by Addressing Input Problems in ToTTo.

To understand hallucination in neural model outputs at a more granular level, we adopted the manual error annotation methodology of Thomson and Reiter (2020) and included specific error categories relevant to outputs in the ToTTo Politics domain. Definitions of the error categories for this dataset and the annotation guidelines are available here. Example annotations from our human evaluation experiment are available here.