Predict mode with --prob option throws RuntimeError #7

Closed
RyosukeMitani opened this issue Aug 1, 2022 · 2 comments

Comments

RyosukeMitani commented Aug 1, 2022

Hello, @yzhangcs.
Thank you for sharing your great work!
After training a model, I'm trying to get the probabilities of each answer in "predict" mode, but I get the error shown below, raised from File "/crfsrl/crfsrl/parser.py", line 272, in _predict.
I tried casting the lens tensor to another type, but that doesn't work.
Would it be possible to get any advice on how to fix this?

preds['probs'].extend([prob[1:i, :i].cpu() for i, prob in zip(lens.softmax(-1).unbind())])
RuntimeError: "host_softmax" not implemented for 'Long' 
RyosukeMitani (Author) commented

I noticed that the main branch was updated, so I switched to the HEAD of main.
However, a similar error still prevents the probabilities from being calculated properly.

batch.probs = [prob[1:i, :i].cpu() for i, prob in zip(lens.softmax(-1).unbind())]
RuntimeError: "softmax_lastdim_kernel_impl" not implemented for 'Long'

yzhangcs added a commit that referenced this issue Aug 9, 2022
yzhangcs (Owner) commented Aug 9, 2022

@RyosukeMitani Hi, thank you for reporting this bug (also sorry for my super late reply :-().
I have pushed the fix to the main branch; please check it again.
You can get the values via the following code:

>>> from crfsrl import CRFSemanticRoleLabelingParser
>>> parser = CRFSemanticRoleLabelingParser.load(<path>)
>>> sent = parser.predict([['She', 'enjoys', 'playing', 'tennis', '.']], prob=True, verbose=False)[0]
>>> sent
1       She     _       _       _       _       _       _       2:B-A0|3:B-A0   _
2       enjoys  _       _       _       _       _       _       0:[prd] _
3       playing _       _       _       _       _       _       2:B-A1|0:[prd]  _
4       tennis  _       _       _       _       _       _       2:I-A1|3:B-A1   _
5       .       _       _       _       _       _       _       _       _

>>> s_edge, s_role = sent.probs
>>> s_edge.shape
torch.Size([6, 6])
>>> s_role.shape
torch.Size([6, 6, 55])

These are actually unnormalized scores.
For CRF2o, second-order sibling scores are also returned.
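If you need normalized probabilities, one option (only a sketch; it assumes the last dimension of s_role indexes the role labels, which is an assumption on my part) is to apply a softmax yourself:

>>> import torch
>>> # sketch only: normalize the role scores over the (assumed) label dimension
>>> p_role = s_role.softmax(-1)
>>> p_role.shape
torch.Size([6, 6, 55])
>>> # each [i, j] slice now sums to 1 over the 55 labels
>>> p_role.sum(-1).allclose(torch.ones(6, 6))
True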
