Hello, I have a problem with ParroT inference.
On the WMT22 en-de test set, I loaded the LLaMA-7b parameters for inference and got a BLEU score of only 6.9808. After additionally loading the fine-tuned ParroT-Hint-7b-lora parameters you provided (running without any hint), BLEU still did not improve. How can I improve inference performance? Thank you!
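In case it helps diagnose the problem, this is roughly how I construct the plain (no-hint) prompt before generation. The template wording below is an assumption paraphrased from the Alpaca-style instruction format that ParroT builds on, so the exact text may differ from the repo's template; a mismatched prompt template alone could explain a large BLEU drop.

```python
def build_parrot_prompt(src: str, src_lang: str = "English", tgt_lang: str = "German") -> str:
    """Build a plain (no-hint) instruction prompt in the Alpaca/ParroT style.

    NOTE: this template is my assumption, paraphrased from the paper;
    please correct me if the repo uses different wording.
    """
    instruction = f"Translate the following sentences from {src_lang} to {tgt_lang}."
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{src}\n\n"
        "### Response:"
    )


if __name__ == "__main__":
    # Example: prompt for one WMT22 en-de source sentence (sentence is illustrative).
    print(build_parrot_prompt("The weather is nice today."))
```

If the low score comes from the prompt format rather than the weights, fixing the template should recover most of the gap; otherwise the issue may be in how the LoRA weights are merged with the base model.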