beam size problem #3
Comments
Hi @maryawwm , you can try replacing "model.done_beams" with "model.module.done_beams".
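For anyone hitting the same error: `torch.nn.DataParallel` stores the wrapped model as `.module` and only forwards calls, not custom attributes, which is why `done_beams` must be read through `.module`. A minimal pure-Python sketch of that behavior (`DataParallelLike` and `Model` are simplified stand-ins for illustration, not the real torch classes):

```python
class DataParallelLike:
    # Simplified stand-in for torch.nn.DataParallel: it stores the
    # wrapped model as .module and forwards calls, not attributes.
    def __init__(self, module):
        self.module = module

    def __call__(self, *args, **kwargs):
        return self.module(*args, **kwargs)


class Model:
    # Simplified stand-in for the captioning model, which fills
    # done_beams during beam search.
    def __init__(self):
        self.done_beams = [[{"seq": "a generated caption"}]]


dp_model = DataParallelLike(Model())

print(hasattr(dp_model, "done_beams"))          # False: the wrapper has no such attribute
print(dp_model.module.done_beams[0][0]["seq"])  # reachable through .module
```

The same lookup failure is what the real `DataParallel` reports as `'DataParallel' object has no attribute 'done_beams'`.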
Thanks @YuanEZhou! It works.
Hi again. After changing the beam size, I get a new error when I run the second training stage: iter 330103 (epoch 29), avg_reward = 0.000, time/batch = 0.975 Terminating BlobFetcher
Hi @maryawwm , we usually set the beam size to 1 during training and to 3 during testing. This setting works well, and a beam size of 3 during training is not really necessary. For that reason, I did not write the code to support a beam size greater than 1 during the second training stage.
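The recommended setting can be sketched as below. These option names are assumed for illustration, mirroring the repo's `eval_kwargs` style; the exact names in the code may differ:

```python
# Hypothetical sketch of the recommended decoding settings;
# beam search is only enabled at test time.
train_eval_kwargs = {"beam_size": 1}  # cheap, greedy-like decoding during training
test_eval_kwargs = {"beam_size": 3}   # beam search for final evaluation

print(train_eval_kwargs["beam_size"], test_eval_kwargs["beam_size"])
```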
That's right. Thank you!
Hi,
I trained the code with a beam size of 1 and it worked well. Now I want to try other values, but when I set beam size 3 in the train script I get this error:
```
iter 2999 (epoch 0), train_loss = 0.770, time/batch = 0.202
250.90925693511963 ms needed to decode one sentence under batch size 10 and beam size 3
Traceback (most recent call last):
  File "train.py", line 325, in <module>
    train(opt)
  File "train.py", line 273, in train
    dp_model, lw_model.crit, loader, eval_kwargs)
  File "/mnt/f/satic/eval_utils.py", line 138, in eval_split
    sents_list = [utils.decode_sequence(loader.get_vocab(), _['seq'].unsqueeze(0))[0] for _ in model.done_beams[i]]
  File "/home/maryam/anaconda3/envs/satic/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'done_beams'
```
Can you help me fix this? (You reported results with different beam sizes in your paper, so I assume the code should support it.)