
nikotang/RD-UU-MPLT


Additional Paraphrase Training Drives Language Models Closer to Human Behaviour on Natural Language Inference

A Research and Development project.

Refer to the Jupyter notebook for now.

About

Mitigating a language model's overconfidence in NLI predictions on Multi-NLI hypotheses with randomized word order, using PAWS (paraphrase) and Winogrande (anaphora) as additional training data.
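
The sketch below illustrates the kind of overconfidence probe described above (it is not the repository's notebook code): compare an MNLI-finetuned model's prediction confidence on a premise-hypothesis pair before and after the hypothesis words are shuffled. The checkpoint name and example sentences are assumptions for illustration only.

```python
# Minimal sketch: probe NLI overconfidence on word-order-shuffled hypotheses.
# The checkpoint below is an assumption, not necessarily the model used in this project.
import random

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumed MNLI-finetuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def nli_confidence(premise: str, hypothesis: str) -> float:
    """Return the model's maximum softmax probability for the pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1).max().item()


def shuffle_words(sentence: str, seed: int = 0) -> str:
    """Randomly permute the word order of a sentence."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)


# Hypothetical example pair; any Multi-NLI item could be used instead.
premise = "A man is playing a guitar on stage."
hypothesis = "A musician is performing."

print("original:", nli_confidence(premise, hypothesis))
print("shuffled:", nli_confidence(premise, shuffle_words(hypothesis)))
# An overconfident model stays nearly as confident on the scrambled,
# largely meaningless hypothesis as on the original one.
```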
