Cannot run inference on PubMedQA-Large #23
Comments
You probably will need to run the preprocess_large.sh script first.
I also have the same error, even after running preprocess_large.sh.
Did you download and extract the trained checkpoint tgz file into the required directory? If not, you need to do that first.
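The download-and-extract step above can be sketched as follows. This is a minimal illustration, not the project's own script: the demo builds a tiny stand-in tarball instead of fetching the real QA-PubMedQA-BioGPT-Large.tgz (whose download URL is given in the BioGPT README), and the file names inside it are illustrative.

```python
import pathlib
import tarfile
import tempfile

def extract_checkpoint(tgz_path, checkpoints_dir):
    """Extract a checkpoint tarball into the checkpoints directory."""
    checkpoints_dir = pathlib.Path(checkpoints_dir)
    checkpoints_dir.mkdir(parents=True, exist_ok=True)
    with tarfile.open(tgz_path, "r:gz") as tar:
        tar.extractall(checkpoints_dir)
    return checkpoints_dir

# Demo with a tiny stand-in tarball; in practice you would first download
# QA-PubMedQA-BioGPT-Large.tgz from the <checkpoint-url> listed in the README.
tmp = pathlib.Path(tempfile.mkdtemp())
src = tmp / "QA-PubMedQA-BioGPT-Large"
src.mkdir()
(src / "checkpoint_avg.pt").write_bytes(b"fake weights")  # stand-in checkpoint
tgz = tmp / "QA-PubMedQA-BioGPT-Large.tgz"
with tarfile.open(tgz, "w:gz") as tar:
    tar.add(src, arcname="QA-PubMedQA-BioGPT-Large")

# Extract into <BioGPT>/checkpoints, mirroring the expected layout.
dest = extract_checkpoint(tgz, tmp / "BioGPT" / "checkpoints")
extracted = dest / "QA-PubMedQA-BioGPT-Large" / "checkpoint_avg.pt"
```

The key point is that after extraction the model directory must sit directly under the checkpoints directory.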
I downloaded and extracted QA-PubMedQA-BioGPT-Large.tgz into the checkpoints directory and ran preprocess_large.sh. The error still occurs.
Did you put the checkpoints directory inside the BioGPT directory? The paths it uses are relative, so all the necessary directories have to be inside the BioGPT folder. From your error, it seems that it is not able to find the checkpoint.
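The relative-path requirement above can be checked with a short sketch. This is an illustration under my own assumptions (the directory and file names are examples, not taken from the repo's scripts): it simply verifies that a .pt checkpoint exists at the layout the scripts expect, relative to the BioGPT root.

```python
import pathlib
import tempfile

def find_checkpoint(biogpt_root, model_dir="QA-PubMedQA-BioGPT-Large"):
    """Return the first .pt file under <root>/checkpoints/<model_dir>, else None."""
    ckpt_dir = pathlib.Path(biogpt_root) / "checkpoints" / model_dir
    if not ckpt_dir.is_dir():
        return None
    pts = sorted(ckpt_dir.glob("*.pt"))
    return pts[0] if pts else None

# Demo layout in a temp directory (file names are illustrative).
tmp = pathlib.Path(tempfile.mkdtemp())
model = tmp / "BioGPT" / "checkpoints" / "QA-PubMedQA-BioGPT-Large"
model.mkdir(parents=True)
(model / "checkpoint_avg.pt").write_bytes(b"")

found = find_checkpoint(tmp / "BioGPT")          # correct layout: found
missing = find_checkpoint(tmp / "somewhere_else")  # wrong root: not found
```

If this kind of check returns None from where you launch the script, the relative paths in the shipped scripts will fail the same way.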
I am not sure I understand your response. Isn't …
In my case, MODEL_DIR is present under checkpoints, and it only contains one file (…).
In fact, inference will not run if the output file is already there.
The code appears to be failing in TransformerLanguageModelPrompt.from_pretrained; it never creates the output file.
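The "inference will not run if the output file is there" symptom suggests clearing any stale output before re-running. A minimal sketch, with an illustrative output file name (the real name depends on the infer_large.sh script):

```python
import pathlib
import tempfile

def clear_stale_output(path):
    """Delete a leftover generation output file so inference can be re-run."""
    p = pathlib.Path(path)
    if p.exists():
        p.unlink()
    return not p.exists()

# Demo: a stale output file left behind by a previous (failed) run.
tmp = pathlib.Path(tempfile.mkdtemp())
out = tmp / "generate_checkpoint_avg.pt"  # illustrative output file name
out.write_text("stale results")
cleared = clear_stale_output(out)
```

Note this only clears the symptom; if from_pretrained itself is failing, the run will still stop before producing a new output file.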
The issue is not that the output file cannot be found; that error is happening because …
It looks like FairSeq cannot find the "_name" key in MODEL_DATACLASS_REGISTRY. My FairSeq version is 0.12.0, per your recommendation. The problem seems to be with FairSeq, though I don't yet see where it is coming from. It fails at this assertion: …
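The failure mode described above can be illustrated with a toy stand-in for the registry lookup. This is not fairseq's actual code, and the model names here are examples: the point is that a checkpoint saved under a model `_name` that is no longer registered (the rename bug mentioned below) trips exactly this kind of assertion.

```python
# Toy stand-in for a fairseq-style model dataclass registry (names illustrative).
MODEL_DATACLASS_REGISTRY = {
    "transformer_lm_prompt": object,  # registered under the current name
}

def resolve_model_dataclass(cfg):
    """Look up the dataclass for the model name saved in a checkpoint's config."""
    name = cfg.get("_name")
    # Mirrors the pattern of asserting the name is registered before use.
    assert name in MODEL_DATACLASS_REGISTRY, f"{name!r} not in MODEL_DATACLASS_REGISTRY"
    return MODEL_DATACLASS_REGISTRY[name]

# A checkpoint saved under the current name resolves fine:
ok = resolve_model_dataclass({"_name": "transformer_lm_prompt"})

# A checkpoint saved under an old, renamed model name trips the assertion:
try:
    resolve_model_dataclass({"_name": "transformer_lm_prompt_old"})
    failed = False
except AssertionError:
    failed = True
```

This is why re-downloading the checkpoint (saved under the new name) resolves the error even though the traceback points into fairseq.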
Hi @VisionaryMind, this is due to a rename bug. We have fixed it now. Please pull the latest code and re-download the QA-PubMedQA-BioGPT-Large.tgz checkpoint.
I pulled the latest version from GitHub and re-downloaded the checkpoint file. I ended up getting the same error as before, but the temporary fix here #17 (comment) still resolved the issue.
@renqianluo I pulled down the latest repository, re-downloaded the QA-PubMedQA-BioGPT-Large.tgz checkpoint, and implemented the fix listed above by @rpolicastro, and I still encounter the exact same error message. Neither solution works for me.
Ditto, I did the exact steps (including running …).
Probably because your script didn't generate an averaged checkpoint. Use the best checkpoint instead.
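For context on the averaged-vs-best distinction: fairseq provides a checkpoint-averaging utility (scripts/average_checkpoints.py), which takes the element-wise mean of parameters across several saved checkpoints. A minimal sketch of the idea, using plain floats instead of torch tensors:

```python
def average_state_dicts(state_dicts):
    """Element-wise mean of parameter values across checkpoints.

    The real fairseq utility averages torch tensors loaded from .pt files;
    plain floats are used here to keep the sketch self-contained.
    """
    n = len(state_dicts)
    keys = state_dicts[0].keys()
    return {k: sum(sd[k] for sd in state_dicts) / n for k in keys}

# Averaging two toy "checkpoints" with parameters w and b:
avg = average_state_dicts([{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 2.0}])
```

If the training run never produced an averaged checkpoint file, pointing the inference script at the best single checkpoint (e.g. checkpoint_best.pt) is the usual workaround.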
I do have a question: how did you download BioGPT-Large? Using the URL gives me an error that it is unable to load the parameters from the checkpoint. Did you use something else to download it?
Using your pre-trained model, the infer_large.sh script is failing as follows: …
Please let me know if you have any suggestions to get it working. There seems to be a problem generating the output file.