zkx06111/llm_priming
LLM Priming

This repository contains the code and results for the paper "Large Language Models Are Partially Primed in Pronoun Interpretation" (Findings of ACL 2023).

The data for running the experiments is in priming_data/.

GPT_incontext.py is for in-context learning experiments with GPT.

GPT_zeroinference.py is for zero-shot inference experiments with GPT.

UL2_incontext.py is for in-context learning experiments with Flan-UL2.
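To illustrate the difference between the two settings, here is a minimal sketch of how a zero-shot prompt and an in-context (primed) prompt might be assembled. The example sentences, question wording, and function names are hypothetical, not taken from priming_data/ or the scripts above:

```python
# Hypothetical sketch of the two prompting setups.
# The actual stimuli and prompt templates live in priming_data/
# and the GPT_*/UL2_* scripts.

def zero_shot_prompt(sentence: str, question: str) -> str:
    # Zero-shot inference: the model sees only the test item.
    return f"{sentence}\n{question}\nAnswer:"

def in_context_prompt(demos: list[tuple[str, str, str]],
                      sentence: str, question: str) -> str:
    # In-context learning: priming demonstrations (sentence,
    # question, answer) precede the test item.
    shots = "\n\n".join(f"{s}\n{q}\nAnswer: {a}" for s, q, a in demos)
    return f"{shots}\n\n{zero_shot_prompt(sentence, question)}"

demos = [("John thanked Bill because he helped a lot.",
          "Who does 'he' refer to?", "Bill")]
print(in_context_prompt(demos,
                        "Mary called Sue because she needed advice.",
                        "Who does 'she' refer to?"))
```

The completed prompt string would then be sent to the model (GPT or Flan-UL2) for completion.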

To cite our paper, please use the following BibTeX entry:

@inproceedings{lam-etal-2023-replicate,
    title = "*Large Language Models Are Partially Primed in Pronoun Interpretation",
    author = "Lam, Suet-Ying  and
      Zeng, Qingcheng  and
      Zhang, Kexun  and
      You, Chenyu  and
      Voigt, Rob",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    year = "2023",
}
