This repository contains the results and experiment code for *Large Language Models Are Partially Primed in Pronoun Interpretation* (Findings of ACL 2023).
The data for running the experiments are in `priming_data/`.
`GPT_incontext.py` is for the in-context learning experiments with GPT, and `GPT_zeroinference.py` is for the zero-shot inference experiments with GPT.
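A minimal sketch of what these two scripts do, assuming the OpenAI chat completions API; the model name, prompt wording, and example sentences below are illustrative placeholders, not the exact materials or calls used in the paper:

```python
# Illustrative sketch only: model name, prompt wording, and sentences are
# placeholders, not the exact experimental materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Target item: a pronoun whose referent the model must resolve.
target = (
    "Sentence: Mary handed the cup to Sue. She dropped it.\n"
    "Question: Who does 'She' refer to?\n"
    "Answer:"
)

# In-context (primed) condition: a resolved demonstration precedes the target.
prime = (
    "Sentence: John passed the book to Bill. He opened it.\n"
    "Question: Who does 'He' refer to?\n"
    "Answer: Bill\n\n"
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask(prime + target))  # in-context learning (GPT_incontext.py)
print(ask(target))          # zero-shot inference (GPT_zeroinference.py)
```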
`UL2_incontext.py` is for the in-context learning experiments with Flan-UL2.
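For Flan-UL2, a comparable sketch using the public Hugging Face checkpoint `google/flan-ul2` might look as follows; the prompt is again an illustrative placeholder:

```python
# Illustrative sketch only: the prompt is a placeholder, not the exact
# experimental material. Flan-UL2 is a ~20B-parameter model, so this
# requires substantial GPU memory.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-ul2", torch_dtype=torch.bfloat16, device_map="auto"
)

# One primed demonstration followed by the target item.
prompt = (
    "Sentence: John passed the book to Bill. He opened it.\n"
    "Question: Who does 'He' refer to?\n"
    "Answer: Bill\n\n"
    "Sentence: Mary handed the cup to Sue. She dropped it.\n"
    "Question: Who does 'She' refer to?\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```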
To cite our paper, please use the following BibTeX:
```bibtex
@inproceedings{lam-etal-2023-replicate,
    title = "Large Language Models Are Partially Primed in Pronoun Interpretation",
    author = "Lam, Suet-Ying and
      Zeng, Qingcheng and
      Zhang, Kexun and
      You, Chenyu and
      Voigt, Rob",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    year = "2023",
}
```