LEAMR (Linguistically Enriched AMR, pronounced "lemur") Alignments is a data release of alignments between AMRs and English text, for better parsing and for probing many different linguistic phenomena. We also include the code for the LEAMR aligner. For more details, see our paper:
Austin Blodgett and Nathan Schneider. 2021. Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of AMR Alignments. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).
For other useful resources for AMR research, also take a look at AMR-utils and the AMR Bibliography.
pip install -r requirements.txt
git clone https://github.com/ablodge/amr-utils
pip install ./amr-utils
We release alignment data for AMR Release 3.0 and Little Prince comprising ~60,000 sentences, as well as 350 sentences with gold alignments in leamr_test.txt and leamr_dev.txt.
We release 4 layers of alignments: subgraph, duplicate subgraph, relation, and reentrancy alignments.
For AMR Release 3.0 and Little Prince, as well as our gold test and dev data we release:
<corpus>.subgraph_alignments.json
: Each subgraph alignment maps a DAG-shaped subgraph to a single span. This layer also includes duplicate subgraph alignments, with the alignment type "dupl-subgraph". Some AMRs duplicate part of the graph to represent ellipsis and other phenomena where part of the meaning is unpronounced; duplicate subgraph alignments cover these cases.

<corpus>.relation_alignments.json
: Each relation alignment maps a span to a collection of external edges, where each edge connects two subgraphs aligned in the previous layer. These alignments include argument structures (gave => :ARG0, :ARG1, :ARG2) and single relation alignments (when => :time).

<corpus>.reentrancy_alignments.json
: Each reentrancy alignment maps a reentrant edge to the span that "triggers" that reentrancy, and is classified with a reentrancy type to account for phenomena like coreference, control, and coordination.
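As a rough illustration of how the three layers relate tokens and graph elements, here is a toy sketch for the sentence "He wants to leave". The field names and data structures are invented for exposition and are not the actual JSON schema of the released files:

```python
# Toy sketch of the three alignment layers for "He wants to leave",
# whose AMR reentrantly shares the "he" node between want-01's :ARG0
# and leave-01's :ARG0 (control). Illustrative only, not the real schema.

subgraph_alignments = [
    {"tokens": [0], "nodes": ["h"]},   # He    -> (h / he)
    {"tokens": [1], "nodes": ["w"]},   # wants -> (w / want-01)
    {"tokens": [3], "nodes": ["l"]},   # leave -> (l / leave-01)
]

relation_alignments = [
    # "wants" triggers want-01's argument structure
    {"tokens": [1], "edges": [("w", ":ARG0", "h"), ("w", ":ARG1", "l")]},
    {"tokens": [3], "edges": [("l", ":ARG0", "h")]},
]

reentrancy_alignments = [
    # the reentrant edge l :ARG0 h is triggered by "wants" (control)
    {"tokens": [1], "edge": ("l", ":ARG0", "h"), "type": "control"},
]

# Sanity check: every reentrancy trigger span is itself aligned
aligned_spans = {tuple(a["tokens"]) for a in subgraph_alignments}
for r in reentrancy_alignments:
    assert tuple(r["tokens"]) in aligned_spans
```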
We also release <corpus>.spans.json, which specifies the spans for each sentence, grouping together tokens that form named entities or multiword expressions.
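For intuition, spans let the aligner treat multiword units as single alignment targets. A minimal sketch with toy data (not the exact format of the spans file):

```python
# Toy sketch: spans group together tokens that form named entities or
# multiword expressions, so alignments can target them as a unit.
# The data structure here is illustrative, not the released schema.
tokens = ["The", "United", "States", "signed", "on", "April", "1"]
spans = [[0], [1, 2], [3], [4], [5, 6]]  # "United States" and "April 1" each form one span

span_texts = [" ".join(tokens[i] for i in span) for span in spans]
print(span_texts)  # ['The', 'United States', 'signed', 'on', 'April 1']
```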
Alignments are released as JSON files.
To read alignments from a JSON file:

from amr_utils.amr_readers import AMR_Reader

reader = AMR_Reader()
alignments = reader.load_alignments_from_json(alignments_file)
Anonymized alignments are stored in the folder data-release/alignments. To interpret them, you will need the associated AMR data.
You will first need to obtain AMR Release 3.0 from LDC: https://catalog.ldc.upenn.edu/LDC2020T02. Afterwards, you can run the following commands to unpack the remainder of the data, specifying <LDC parent dir> as the parent directory of your AMR Release 3.0 data.
wget https://amr.isi.edu/download/amr-bank-struct-v3.0.txt -O data-release/amrs/little_prince.txt
python build_data.py <LDC parent dir>
python unanonymize_alignments.py
You will need to download the spaCy and Stanza models for English:
python3 -m spacy download en_core_web_sm
python3 -c "import stanza; stanza.download('en')"
First, make sure the parameter files have been downloaded completely:
wget https://github.com/ablodge/leamr/raw/master/ldc%2Blittle_prince.subgraph_params.pkl -O ldc+little_prince.subgraph_params.pkl
wget https://github.com/ablodge/leamr/raw/master/ldc%2Blittle_prince.relation_params.pkl -O ldc+little_prince.relation_params.pkl
wget https://github.com/ablodge/leamr/raw/master/ldc%2Blittle_prince.reentrancy_params.pkl -O ldc+little_prince.reentrancy_params.pkl
For a file of unaligned AMRs for English <unaligned amr file>, you can create alignments by running the following commands. The script nlp_data.py performs the necessary preprocessing and may take several hours to run on a large dataset.
python nlp_data.py <unaligned amr file>
python align_with_pretrained_model.py -t <unaligned amr file> --subgraph-model ldc+little_prince.subgraph_params.pkl --relation-model ldc+little_prince.relation_params.pkl --reentrancy-model ldc+little_prince.reentrancy_params.pkl
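The two steps above can also be driven from Python, e.g. with subprocess. This is only a sketch; the input file name below is a placeholder for <unaligned amr file>:

```python
# Sketch: build the preprocessing and alignment commands programmatically.
# "my_amrs.txt" is a placeholder for <unaligned amr file>.
import subprocess

amr_file = "my_amrs.txt"
models = {
    "--subgraph-model": "ldc+little_prince.subgraph_params.pkl",
    "--relation-model": "ldc+little_prince.relation_params.pkl",
    "--reentrancy-model": "ldc+little_prince.reentrancy_params.pkl",
}

preprocess_cmd = ["python", "nlp_data.py", amr_file]
align_cmd = ["python", "align_with_pretrained_model.py", "-t", amr_file]
for flag, path in models.items():
    align_cmd += [flag, path]

# Uncomment to actually run (nlp_data.py may take hours on large data):
# subprocess.run(preprocess_cmd, check=True)
# subprocess.run(align_cmd, check=True)
print(" ".join(align_cmd))
```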
You can set <train file> to 'data-release/amrs/ldc+little_prince' or another AMR file. The script nlp_data.py performs the necessary preprocessing and may take several hours to run on a large dataset.
python nlp_data.py <train file>.txt
python train_subgraph_aligner.py -T <train file>.txt --save-model <model name>.subgraph_params.pkl
python train_relation_aligner.py -T <train file>.txt --save-model <model name>.relation_params.pkl
python train_reentrancy_aligner.py -T <train file>.txt --save-model <model name>.reentrancy_params.pkl
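Since the three training scripts share the same interface, the commands above can be generated in a loop. A sketch, where the train file and model name values are placeholders for <train file> and <model name>:

```python
# Sketch: generate the three training commands for the subgraph,
# relation, and reentrancy aligners. The paths below are placeholders.
train_file = "data-release/amrs/ldc+little_prince"  # placeholder for <train file>
model_name = "my_model"                             # placeholder for <model name>

commands = []
for layer in ("subgraph", "relation", "reentrancy"):
    commands.append([
        "python", f"train_{layer}_aligner.py",
        "-T", f"{train_file}.txt",
        "--save-model", f"{model_name}.{layer}_params.pkl",
    ])

for cmd in commands:
    print(" ".join(cmd))
```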
@inproceedings{blodgett-schneider-2021-probabilistic,
title = "Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of {AMR} Alignments",
author = "Blodgett, Austin and
Schneider, Nathan",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.257",
doi = "10.18653/v1/2021.acl-long.257",
pages = "3310--3321"
}