Use Facebook FAIR's BPE Transformer model submitted to the WMT19 translation task. Evaluation is performed on the WMT19 test set using the sacreBLEU tool.
Evaluation of this model is described here.
The aim is to perform a sequence-to-sequence adversarial attack: a perturbation of the source-language input that increases the positive sentiment of the predicted sequence in the target language. The source language is Russian and the target language is English; the sentiment score of the English output is measured with a pre-trained RoBERTa-base model.
The following types of adversarial attacks are considered:
- an importance-based synonym substitution attack, where the N most important words are substituted with synonyms.
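The attack above can be sketched with toy stand-ins: rank each word by how much the score changes when that word is deleted, then replace the top-N words with the synonym that most increases the score. Everything here is illustrative (the real attack scores sentences with the translation and sentiment models, not a hand-written lexicon):

```python
def score(words):
    """Stand-in for the model-based objective (e.g. target-side
    positive sentiment). Here: a dummy lexicon-based score."""
    lexicon = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}
    return sum(lexicon.get(w, 0.0) for w in words)

def importance(words):
    """Importance of each word = change in score when it is deleted."""
    base = score(words)
    return [base - score(words[:i] + words[i + 1:]) for i in range(len(words))]

def substitution_attack(sentence, synonyms, n=1):
    """Substitute the n most important words, picking for each the
    synonym that maximises the score of the resulting sentence."""
    words = sentence.split()
    imps = importance(words)
    ranked = sorted(range(len(words)), key=lambda i: abs(imps[i]), reverse=True)
    for i in ranked[:n]:
        candidates = synonyms.get(words[i], [])
        if candidates:
            words[i] = max(candidates,
                           key=lambda c: score(words[:i] + [c] + words[i + 1:]))
    return " ".join(words)

syns = {"bad": ["poor", "good", "great"]}
print(substitution_attack("the film was bad", syns, n=1))
# -> "the film was great"
```

In the real setup, score() would translate the Russian candidate with the WMT19 model and return the RoBERTa sentiment probability of the English output, and the synonym sets would come from wiki-ru-wordnet.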
Python 3.6 or above
pip install torch transformers scipy
pip install nltk wiki-ru-wordnet
Use odenet. To install, clone the repository and then run pip install . from within the repo. Further install: pip install networkx matplotlib