This repository contains the code for CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument Extraction.
Download the RAMS and WikiEvents datasets into the ./data folder.
Our trained checkpoints are available here. Download them into ./experiments.
Evaluate performance on RAMS:
sh script/test_RAMS.sh
Evaluate performance on WikiEvents:
sh script/test_Wikievents.sh
In the training stage we use document-level AMR graphs, so the data must be preprocessed first.
Follow the instructions in this repository to train an AMR parser.
Then use ./data/amr_parser.py to parse the sentences of both datasets, e.g. with a command like the one sketched below.
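The actual command-line interface is defined by the argparse options inside ./data/amr_parser.py; the flags below are illustrative assumptions only, shown for the WikiEvents training split:

# Hypothetical flags -- check ./data/amr_parser.py for the real interface
python ./data/amr_parser.py --input_dir=./data/wikievents/informative/train.jsonl --output_dir=./data/WikiEvents/amrs/train.amr.txt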
We use the off-the-shelf coreference resolution tool available here.
Process the training data into the same format as data/WikiEvents/amrs/train.amr.txt and data/WikiEvents/corefered.json (an illustrative layout for the latter is sketched below).
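We have not pinned down the schema of corefered.json here; as a labeled assumption, a plausible layout maps each doc_key to its coreference clusters of token spans. Inspect the real data/WikiEvents/corefered.json before relying on this sketch:

import json

# ASSUMED schema: doc_key -> list of clusters, each cluster a list of
# [start, end] token spans that corefer. Verify against the actual file.
corefered = {
    "doc_key_1": [
        [[3, 4], [17, 17]],    # cluster 1: two coreferring mentions
        [[25, 26], [40, 41]],  # cluster 2
    ],
}
with open("./data/WikiEvents/corefered.json", "w") as f:
    json.dump(corefered, f)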
For RAMS:
python ./data/preprocess.py --train_dir=./data/RAMS/train.jsonlines --coref_dir=./data/RAMS/corefered.json --output_dir=./data/RAMS/RAMSwithcore/train.jsonl
For WikiEvents:
python ./data/preprocess.py --train_dir=./data/wikievents/informative/train.jsonl --coref_dir=./data/wikievents/corefered.json --output_dir=./data/wikievents/WikiwCoref/informative/train.jsonl
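As a quick sanity check after preprocessing, you can load the first merged record and inspect its fields; the field names depend on what ./data/preprocess.py emits, so nothing here assumes a particular schema:

import json

# Print the top-level fields of the first preprocessed RAMS record.
with open("./data/RAMS/RAMSwithcore/train.jsonl") as f:
    first = json.loads(f.readline())
print(sorted(first.keys()))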
For full-data training, store all doc_keys in f'./{args.data_path}/doc_keys.jsonl'.
For few-shot training, first select a fixed ratio of the training samples, then store their doc_keys in f'./{args.data_path}/doc_keys.jsonl' (see the sketch below).
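A minimal sketch of the few-shot selection, assuming each training line carries a "doc_key" field as in the RAMS jsonlines format (setting ratio = 1.0 reproduces the full-data file):

import json
import random

ratio = 0.1  # fraction of training documents to keep
random.seed(42)

# Collect every doc_key from the training file (RAMS format shown).
with open("./data/RAMS/train.jsonlines") as f:
    doc_keys = [json.loads(line)["doc_key"] for line in f]

# Sample the few-shot subset and write one doc_key per line,
# mirroring f'./{args.data_path}/doc_keys.jsonl'.
subset = random.sample(doc_keys, max(1, int(len(doc_keys) * ratio)))
with open("./data/RAMS/doc_keys.jsonl", "w") as f:
    for key in subset:
        f.write(json.dumps(key) + "\n")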
Train on RAMS:
sh scripts/train_RAMS.sh
Train on WikiEvents:
sh scripts/train_WikiEvents.sh
Please cite our work if this repository inspires you.