In our EMNLP paper "Hallucination Detection for Grounded Instruction Generation", we investigate the problem of generating instructions that guide humans navigating in simulated residential environments. A major issue with current models is hallucination: they generate references to actions or objects that are inconsistent with what a human follower would perform or encounter along the described path. We develop a model that detects these hallucinated references by adapting a model pre-trained on a large corpus of image-text pairs, fine-tuning it with a contrastive loss that separates correct instructions from instructions containing synthesized hallucinations. Our final model outperforms several baselines, including using word probabilities estimated by the instruction-generation model, and supervised models based on LSTMs and Transformers.
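
For intuition, below is a minimal sketch of what a contrastive objective of this kind can look like. It is not the paper's actual implementation (which is not yet released); the function name, tensor shapes, and temperature parameter are illustrative assumptions. It scores a path embedding against the correct instruction and against hallucinated variants, and applies a softmax-style contrastive loss that favors the correct one.

```python
import torch
import torch.nn.functional as F

def contrastive_hallucination_loss(path_emb, pos_instr_emb, neg_instr_embs, temperature=0.1):
    """Illustrative contrastive loss (not the released implementation).

    path_emb:       (d,)    embedding of the followed path
    pos_instr_emb:  (d,)    embedding of the correct instruction
    neg_instr_embs: (k, d)  embeddings of k instructions with synthesized hallucinations
    """
    # Normalize so dot products act as cosine similarities.
    path_emb = F.normalize(path_emb, dim=-1)
    pos = F.normalize(pos_instr_emb, dim=-1)
    negs = F.normalize(neg_instr_embs, dim=-1)

    # Similarity of the path to the correct instruction and to each hallucinated variant.
    logits = torch.cat([
        (path_emb * pos).sum(-1, keepdim=True),  # shape (1,)
        negs @ path_emb,                         # shape (k,)
    ]) / temperature

    # The correct instruction is at index 0; cross-entropy pulls its score up
    # and pushes the hallucinated variants down.
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(logits.unsqueeze(0), target)
```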

Code and pretrained models to appear soon.