diff --git a/README.md b/README.md
index 64ddd03c..a8a3d4a8 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@ Transition-based Neural Parser
State-of-the-Art Abstract Meaning Representation (AMR) parsing, see [papers
with code](https://paperswithcode.com/task/amr-parsing). Models both the
distribution over graphs and alignments with a transition-based approach. Parser
-supports any other graph formalism as long as it is expressed in [Penman
+supports generic text-to-graph as long as it is expressed in [Penman
notation](https://penman.readthedocs.io/en/latest/notation.html).
Some of the main features
@@ -29,7 +29,7 @@ all scripts source a `set_environment.sh` script that you can use to activate
your virtual environment as above and set environment variables. If not used,
just create an empty version
-```
+```bash
# or e.g. put inside conda activate ./cenv_x86
touch set_environment.sh
```
@@ -44,7 +44,7 @@ installation instructions.
(Please install the CPU version of torch-scatter; note that model training is not fully supported here.)
-```
+```bash
pip install transition-neural-parser
# for linux users
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.1+cu117.html
@@ -54,7 +54,7 @@ pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.1+cu117.html
If you plan to edit the code, clone and install instead
-```
+```bash
# clone this repo (see link above), then
cd transition-neural-parser
pip install --editable .
@@ -63,7 +63,7 @@ pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.1+cu117.html
If you want to train a document-level AMR parser you will also need
-```
+```bash
git clone https://github.com/IBM/docAMR.git
cd docAMR
pip install .
@@ -185,8 +185,7 @@ This table shows you available pretrained model names to download;
2 Smatch on AMR3.0 Multi-Sentence dataset
-we also provide the trained `ibm-neural-aligner` under names
-`AMR2.0_ibm_neural_aligner.zip` and `AMR3.0_ibm_neural_aligner.zip`. For the
+contact the authors to obtain the trained `ibm-neural-aligner`. For the
ensemble we provide the three seeds. Following fairseq conventions, to run the
ensemble just give the three checkpoint paths joined by `:` to the normal
checkpoint argument `-c`. Note that the checkpoints were trained with the
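As a sketch of the ensemble convention described above (the seed directory and checkpoint names below are assumptions; substitute the checkpoints you actually downloaded or trained):

```shell
# Hypothetical checkpoint paths from three training seeds (names are
# assumptions; adjust to your local layout)
CKPT1=models/seed42/checkpoint_best.pt
CKPT2=models/seed43/checkpoint_best.pt
CKPT3=models/seed44/checkpoint_best.pt

# Fairseq convention: join the three checkpoints with ':' and pass the
# result to the normal checkpoint argument -c
ENSEMBLE="$CKPT1:$CKPT2:$CKPT3"
echo "$ENSEMBLE"

# then run the parser with the joined paths, e.g.:
#   amr-parse -c "$ENSEMBLE" -i sentences.txt -o out.amr
```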
@@ -198,7 +197,6 @@ individual models. A fast way to test models standalone is
bash tests/standalone.sh configs/.sh
-
## Training a model
You first need to pre-process and align the data. For AMR2.0 do