junjiechen-chris/Syn-Sem_Dependency_Correlation_Mixture_Model

This is the code repository for reproducing the results of our ACL 2022 paper Modeling Syntactic-Semantic Dependency Correlations in Semantic Role Labeling Using Mixture Models. The code is built on the Linguistically-Informed Self-Attention (LISA) codebase, with a token-based batching component added from Tensor2Tensor.

Requirements

  • Tensorflow 1.15
  • Python 3.6
  • h5py (for ELMo and BERT models)
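A minimal requirements file matching the list above might look like the following; the exact pins are assumptions rather than taken from the repository, and the Python 3.6 interpreter must be provided separately (e.g. via a virtual environment):

```text
tensorflow==1.15    # or tensorflow-gpu==1.15 on GPU machines
h5py                # only needed for the ELMo and BERT models
```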

Quick start

Data setup

Obtaining data

  • You need to obtain the CoNLL-2009 dataset. See this site for reference.

Embeddings


We have packed the pipeline into several scripts in conll09-all_langs To prepare the data, you need to do the following in the directory of a specific language:

  • put the train, dev, and test files into their respective directories
  • execute the following commands, substituting train, dev, or test for section:
make rename_as_conll05 section=$(train/dev/test)
make all_parse section=$(train/dev/test)
make gather_all_info
make correct_synt_idx
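Put together, the steps above can be sketched as a small shell helper; the conll09-all_langs/eng path in the usage comment is a hypothetical example of a language directory, not a path confirmed by the repository:

```shell
# Run the four make targets above inside one language directory.
# Assumes the train/dev/test files are already in place (first step above).
preprocess_language() {
    cd "$1" || return 1
    for section in train dev test; do
        make rename_as_conll05 "section=$section"
        make all_parse "section=$section"
    done
    make gather_all_info
    make correct_synt_idx
}

# Example (run from the repository root):
# preprocess_language conll09-all_langs/eng
```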

Running experiments

Please download the saved model files here. Then run the command stored in the "eval.cmd" file inside each model directory. For example, conll-eng-mm5-fasttext.zip contains the following command:

bin/evaluate-exported.sh config/llisa/e2e/fasttext/conll09-eng-sa-small-dep_prior-par_inp-bilinear-gp-ll.conf --save_dir <path_to_model>/best_checkpoint --num_gpus 1 --hparams  mixture_model=5
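As a sketch, evaluating a downloaded archive could be scripted as below. The directory layout inside the zip (a folder named after the archive, containing eval.cmd and best_checkpoint) is an assumption based on the example above; eval.cmd may also contain a placeholder path that must be filled in before running:

```shell
# Unpack a saved-model archive and run the evaluation command it ships with.
evaluate_saved_model() {
    local archive="$1"                 # e.g. conll-eng-mm5-fasttext.zip
    local model_dir="${archive%.zip}"  # assumed directory name inside the zip
    unzip -o "$archive" -d models/
    # Each model directory stores its exact evaluation command in eval.cmd.
    bash "models/$model_dir/eval.cmd"
}

# Example:
# evaluate_saved_model conll-eng-mm5-fasttext.zip
```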
