Issue with evaluating model with beam search #36
Hello,

Thank you for providing this well-written and useful repository. After training a model, I am trying to evaluate the saved checkpoint with beam search, using a command similar to the one in the README:

python core/train.py --config configs/baseline.yaml --mode eval --restore experiments/finals-baseline/checkpoint.pt --expname eval-baseline --beam-size 10

However, I am getting an error with a stack trace like this:

It seems to me that this error actually makes sense, since we are trying to index a tensor (attnOut) with a tensor of floats (prevK). Here is the code chunk from beam.py for reference:

Am I doing something wrong here? Thanks.
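For illustration, the failure described above can be reproduced in isolation. The snippet below is a minimal sketch with made-up values, not the repository's actual beam.py: in recent PyTorch versions, `/` on integer tensors performs true division and returns a float tensor, and using that result to index another tensor raises an error.

```python
# Minimal sketch (illustrative values, not the repository's beam.py): "/" on integer
# tensors performs true division in recent PyTorch and returns a float tensor, which
# then cannot be used to index another tensor.
import torch

numWords = 5
bestScoresId = torch.tensor([7, 12, 3])   # flat indices over (beam, vocab)
attnOut = torch.randn(3, 4)               # one attention row per hypothesis

prevK = bestScoresId / numWords           # float tensor on recent PyTorch
print(prevK.dtype)                        # torch.float32

try:
    _ = attnOut[prevK]                    # fails: index tensors must be integer-typed
except (IndexError, TypeError) as exc:
    print("indexing failed:", exc)
```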
Comments

Hi @pavlos2094, thanks for your interest in this work. This problem is caused by how division is treated in different versions of PyTorch (line …).
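A likely shape of the fix being suggested here, sketched with the names used in the issue (prevK, bestScoresId, numWords); this is an assumption about the surrounding code, not a verbatim patch from the repository: keep the beam-index division an integer division so that prevK stays a long tensor.

```python
# Sketch of the kind of one-line change suggested above: make the beam-index division
# an integer division so that prevK remains a long tensor usable for indexing.
# Names follow the issue and are assumptions, not the repository's exact code.
import torch

numWords = 5
bestScoresId = torch.tensor([7, 12, 3])   # flat indices over (beam, vocab)

# Instead of: prevK = bestScoresId / numWords   (float tensor on recent PyTorch)
prevK = torch.div(bestScoresId, numWords, rounding_mode="floor")   # PyTorch 1.8+
# or, equivalently: prevK = bestScoresId // numWords

print(prevK.dtype)   # torch.int64, safe to use as an index into attnOut
```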
@qibinc thank you for your immediate response!

@qibinc I followed your suggestion to "modify the line …"
Hi @ssexuejinwei, we have rewritten this codebase with the latest versions of DL frameworks and significantly improved the installation, code quality, reproducibility, and visualization. We have also replaced beam search with nucleus sampling, a method shown to be better suited to text generation than beam search because it avoids repetition. You can find how to use it at https://github.com/THUDM/KOBE#evaluating-kobe. We also added BERTScore for evaluation, a metric that aligns better with human judgement than BLEU. Here is a screenshot of KOBE-v2's training progress, in case you are still interested:
Best,
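Since nucleus sampling is mentioned above as the replacement for beam search, here is a short, generic sketch of top-p (nucleus) sampling for a single decoding step; it is not taken from the KOBE codebase, and the function name and default top_p value are illustrative.

```python
# Generic top-p (nucleus) sampling for a single decoding step; illustrative only,
# not the KOBE implementation.
import torch
import torch.nn.functional as F

def nucleus_sample(logits: torch.Tensor, top_p: float = 0.9) -> int:
    """Sample a token id from the smallest set of tokens whose cumulative
    probability mass reaches top_p."""
    probs = F.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)

    # Keep every token whose preceding cumulative mass is still below top_p;
    # this always keeps at least the most probable token.
    keep = (cumulative - sorted_probs) < top_p
    kept_probs = sorted_probs[keep]
    kept_ids = sorted_ids[keep]

    # Renormalize within the nucleus and sample.
    choice = torch.multinomial(kept_probs / kept_probs.sum(), num_samples=1)
    return kept_ids[choice].item()

# Example: draw the next token from random logits over a 1000-token vocabulary.
next_token = nucleus_sample(torch.randn(1000), top_p=0.9)
```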