This is an overview of the code and data associated with my dissertation.
- Smatch++, π: optimal and standardized Smatch
- Smaragd, π: fast approximate Smatch via a neural network
- S2match, π: Smatch variant that matches node labels via word embeddings
- WWLK, π: Weisfeiler-Leman MR kernel with embeddings for contextualized matching and many-to-many node alignment
- WWLK-aligner, π: Easy-to-use Wasserstein AMR-text alignment
- Explainable sentence embeddings (S3BERT), π: Infusing fine-grained similarity into a state-of-the-art sentence embedding model for explainable and decomposable embeddings
- Explainable NLG evaluation, π: Evaluation of NLG systems in meaning space
- AMR4NLI, π: Asymmetric and unsupervised application of MR metrics for transparent entailment rating
- Textual argument similarity, π
- Penman-informed CNN, π: Simple and efficient neural encoding of graphs such as AMRs, dependency trees, etc.
- NLI AMR, π: >1 million silver AMR pairs from five NLI datasets
- ParsEval, π: 800 parsed AMRs with human quality annotations (domains: The Little Prince, AMR3)
- Textual Similarity, π: Silver AMRs of text similarity benchmarks (e.g., STS) that are annotated with human textual similarity ratings
- Textual argument similarity, π: Silver AMRs of textual argument similarity data that are annotated with human textual similarity ratings
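Most of the metrics above build on the Smatch idea of scoring two AMRs by their triple overlap under a node alignment. A minimal illustrative sketch (not the Smatch++ implementation) that assumes the alignment is already given; the actual tools additionally search for the best alignment, which Smatch++ solves optimally via ILP:

```python
def smatch_f1(cand, gold, mapping):
    """Triple-overlap F1 between two AMRs under a FIXED node alignment.

    `cand` and `gold` are lists of (source, relation, target) triples;
    `mapping` renames candidate variables into gold variables.
    """
    mapped = {(mapping.get(s, s), r, mapping.get(o, o)) for s, r, o in cand}
    matched = len(mapped & set(gold))
    p = matched / len(cand) if cand else 0.0
    r = matched / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Same AMR fragment ("the boy wants ...") with two variable namings:
cand = [("w", ":instance", "want-01"), ("b", ":instance", "boy"), ("w", ":ARG0", "b")]
gold = [("x", ":instance", "want-01"), ("y", ":instance", "boy"), ("x", ":ARG0", "y")]
print(smatch_f1(cand, gold, {"w": "x", "b": "y"}))  # → 1.0
```

S2match and WWLK relax the exact label match used here: node labels are compared via (contextualized) word embeddings, so near-synonymous concepts receive partial credit instead of zero.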