amore-upf/syntactic-uncertainty-LMs

 
 


The language model understood the prompt was ambiguous: probing syntactic uncertainty through generation

Code and data for the experiments in "The language model understood the prompt was ambiguous: probing syntactic uncertainty through generation", to appear in Proceedings of BlackboxNLP 2021.

Content

Instructions

To generate from prompts, e.g., with GPT2 on NP/S data:

python generate_from_prompts.py --lm GPT2 --ambiguity NP-S 

By default, this uses standard sampling. To generate with beam search and to explore parameter settings for stochastic decoding:

python generate_from_prompts.py --lm GPT2 --ambiguity NP-S --all --search
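For reference, here is a minimal sketch of what the two decoding modes look like with GPT-2 through the Hugging Face transformers API. This illustrates standard sampling versus beam search only; it is not the implementation in generate_from_prompts.py, and the prompt and parameter values are placeholders.

# Illustrative sketch only: standard sampling vs. beam search with GPT-2
# via Hugging Face transformers. Not the repository's implementation.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The language model understood the prompt"  # placeholder NP/S-style prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Standard (stochastic) sampling, with nucleus/temperature/repetition parameters
sampled = model.generate(
    **inputs,
    do_sample=True,
    top_p=1.0,
    temperature=1.0,
    repetition_penalty=1.0,
    max_new_tokens=20,
)

# Beam search (deterministic decoding)
beamed = model.generate(
    **inputs,
    do_sample=False,
    num_beams=5,
    max_new_tokens=20,
)

print(tokenizer.decode(sampled[0], skip_special_tokens=True))
print(tokenizer.decode(beamed[0], skip_special_tokens=True))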

To assign interpretations to the generated sentences:

python parse_sentences.py --lm GPT2 --ambiguity NP-S
python extract_interpretations.py --lm GPT2 --ambiguity NP-S

  • --lm: GPT2, LSTM
  • --ambiguity: NP-S, NP-Z, N-V
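Since the output directories (parses_allennlp/, interpretations_allennlp) indicate that an AllenNLP parser is used, here is a minimal sketch of constituency parsing with AllenNLP. The model archive URL is an assumption made for illustration; parse_sentences.py may load a different model.

# Illustrative sketch only: constituency parsing with AllenNLP.
# The model archive URL is an assumption; parse_sentences.py may use a different parser.
from allennlp.predictors.predictor import Predictor
import allennlp_models.structured_prediction  # registers the constituency-parser predictor

predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/elmo-constituency-parser-2020.02.10.tar.gz"
)
output = predictor.predict(sentence="The language model understood the prompt was ambiguous.")
print(output["trees"])     # bracketed constituency parse
print(output["pos_tags"])  # PoS labels, as stored under parses_allennlp/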

Running these commands writes the generated sentences, their parses, and their interpretations to a directory such as generated/NP-S/GPT2/GPT2/sampling-p_p1_temperature1_repetition1, containing the following (a short sketch for inspecting these files follows the list):

  • for each prompt, files with generated sentences, e.g., NP-S_1_nocue.tsv
  • parses_allennlp/: for each prompt, parses and PoS labels of the generated sentences
  • interpretations_allennlp: for each prompt type, all the generated sentences with their associated interpretations
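One quick way to inspect these outputs is to load the TSV files with pandas. The sketch below only reads and previews a file, since the exact column schema is not documented here.

# Illustrative sketch only: peek at one of the generated-sentence TSV files.
import pandas as pd

path = "generated/NP-S/GPT2/GPT2/sampling-p_p1_temperature1_repetition1/NP-S_1_nocue.tsv"
df = pd.read_csv(path, sep="\t")
print(df.columns.tolist())  # check the actual column names before further analysis
print(df.head())            # preview the generated continuations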

To use and run the LSTM model from Gulordava et al. (2018):
