Bug in evaluate #4
Comments
The <checkpoint>.pt is the saved checkpoint of the training process.
So you have trained your own model? Does it work with the pretrained checkpoints we released?
Oh, sorry, it works well after I re-downloaded bart-large. It is strange.
Hello, I believe I am experiencing the same issue; I am also hitting Wangpeiyi9979's error:
Since Wangpeiyi9979 mentioned this issue could come from BART-large, I deleted all the model cache, tried again, and hit the same error. Additional information:
I would like to apologise if this error comes from a poor use of your code or an improper installation. Thank you very much for your work on AMR, and thanks in advance for your response!
Can you try to redownload the pretrained weights? A few months back (I think) we pruned a few params in the checkpoint that did not play well with the current code.
Thank you for your answer! I re-downloaded the 3.0 parsing weights and the same issue arose...
Hello again, if you have the time I would greatly appreciate some help; my issue still hasn't been resolved... thanks in advance!
Try this checkpoint: https://drive.google.com/file/d/1p7oyQPacWSF-WTXapaA55TPuRP_pJ-Rc/view?usp=sharing Does it work?
Thank you for taking the time to send me this checkpoint; I'm sorry to say I still have the same error...
Clone the repository again and create a new env from scratch. It worked for me! As a last resort, try patching the checkpoint with
Hello again, I created a new conda env (with Python 3.7), installed the requirements,
and observed the usual result:
I am as puzzled as you are; I really don't see why it would work differently on our different machines...
FYI, I tried the same thing on a different machine and encountered the same error :( |
You are missing two arguments:
Thank you very much, this seems to have been my issue all along, embarrassingly enough. This issue is solved. I'll now be looking into whether it is possible to run this with the latest version of transformers (I'm telling you in case you have insight into that matter); I'll let you know if it works out! ^^
Thanks a lot! In the meantime I'll close the issue.
Hi, when I run

```shell
python bin/predict_amrs.py \
    --datasets <AMR-ROOT>/data/amrs/split/test/*.txt \
    --gold-path data/tmp/amr2.0/gold.amr.txt \
    --pred-path data/tmp/amr2.0/pred.amr.txt \
    --checkpoint runs/<checkpoint>.pt \
    --beam-size 5 \
    --batch-size 500 \
    --device cuda \
    --penman-linearization --use-pointer-tokens
```

I get the following error:

```
RuntimeError: Error(s) in loading state_dict for AMRBartForConditionalGeneration:
size mismatch for final_logits_bias: copying a param with shape torch.Size([1, 53587]) from checkpoint, the shape in current model is torch.Size([1, 53075])
```

Could you help me?