Error when using the PSA module #83
Comments
I have the same problem and don't know how to solve it.
You can't keep the conv branches in a plain Python list; change it to `nn.ModuleList`.
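For context, a minimal sketch of why the original error appears and why `nn.ModuleList` fixes it (class names and channel sizes here are illustrative, not copied from PSA.py): layers stored in a plain Python list are not registered as submodules, so `.cuda()` never moves their weights, and a CUDA input then meets CPU weights.

```python
import torch
import torch.nn as nn

class BranchesBroken(nn.Module):
    """Convs kept in a plain Python list are NOT registered as submodules,
    so .cuda()/.to() leaves their weights on the CPU."""
    def __init__(self, channels=64):
        super().__init__()
        self.convs = [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(4)]

    def forward(self, x):
        return sum(conv(x) for conv in self.convs)

class BranchesFixed(nn.Module):
    """The same convs wrapped in nn.ModuleList are registered, show up in
    .parameters(), and move to the GPU together with the module."""
    def __init__(self, channels=64):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(4)
        )

    def forward(self, x):
        return sum(conv(x) for conv in self.convs)

if torch.cuda.is_available():
    x = torch.randn(2, 64, 20, 20, device="cuda")
    # BranchesBroken().cuda()(x)   # RuntimeError: Input type (torch.cuda.FloatTensor)
    #                              # and weight type (torch.FloatTensor) should be the same
    y = BranchesFixed().cuda()(x)  # runs: the conv weights followed .cuda()
```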
Someone in another issue already fixed this, and solved the in-place operation problem along the way. I can't find that issue anymore, so I'm reposting the fix here.
Have you solved this problem?
It still doesn't seem to work.
This still doesn't work for me; `from attention.SpaceAtt import SpatialAttention` fails to import.
After changing only this to `nn.ModuleList`, I do hit the problem you mentioned: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 64, 20, 20]], which is output 0 of ReluBackward1, is at version 5; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
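For reference, a sketch of the kind of in-place pattern that triggers this autograd error, with an out-of-place rewrite. The function and variable names are illustrative; the assumption is that PSA.py's forward writes each branch result back into a view of the previous layer's output in a similar way.

```python
import torch

def split_conv_inplace(x, convs, S=4):
    """Pattern that breaks autograd: writing each branch result back into a
    view of x (the previous ReLU output) is an in-place modification."""
    b, c, h, w = x.shape
    spc = x.view(b, S, c // S, h, w)
    for i, conv in enumerate(convs):
        spc[:, i] = conv(spc[:, i])   # in-place slice assignment -> RuntimeError in backward
    return spc

def split_conv_out_of_place(x, convs, S=4):
    """Fix: collect the branch outputs in a list and stack them into a new tensor."""
    b, c, h, w = x.shape
    spc = x.view(b, S, c // S, h, w)
    outs = [conv(spc[:, i]) for i, conv in enumerate(convs)]
    return torch.stack(outs, dim=1)   # (b, S, c // S, h, w); nothing is modified in place
```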
I solved it by adding `conv = conv.to(x.device)` at line 48 and `se = se.to(x.device)` at line 54, so everything is moved onto the same device as the input x.
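A hypothetical helper showing where those two `.to(x.device)` calls fit (the line numbers refer to that commenter's copy of PSA.py, and the real forward is structured differently); moving the unregistered layers inside forward is an alternative workaround to the `nn.ModuleList` fix above.

```python
import torch
import torch.nn as nn

def forward_branch(conv: nn.Module, se: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Illustrative stand-in for one PSA branch, not the repo's actual code."""
    conv = conv.to(x.device)  # the commenter's edit at line 48: bring the conv to x's device
    out = conv(x)
    se = se.to(x.device)      # the commenter's edit at line 54: bring the SE block to x's device
    return out * se(out)      # SE attention weights rescale the branch output
```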
Have you solved this problem? I'm running into it now as well.
Hi, when I use the PSA.py module I keep getting RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same. I have already called `.cuda()` on the module, but it still errors. Could you help me fix this?