
an illegal memory access was encountered #3

Closed
sshan-zhao opened this issue Aug 23, 2018 · 4 comments

@sshan-zhao

Hi, when I use your code with PyTorch 0.4.1 and CUDA 9.0, the following error occurs:
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1532571933727/work/aten/src/THC/THCCachingHostAllocator.cpp line=271 error=77 : an illegal memory access was encountered
If I do not use the correlation layer, everything works fine.
Do you have any idea? Thank you!

@ClementPinard
Owner

Yes, there have been some problems with the last version (0.0.7): a typo made this error occur when H and W were not the same (I did not run tests thoroughly enough, I'm so sorry! 😕 )

I uploaded a new version (0.0.8) this morning that should solve it, so you should upgrade your correlation sampler.

@sshan-zhao
Author

sshan-zhao commented Aug 24, 2018 via email

@sshan-zhao
Author

Hi, I have solved this problem. There is indeed no bug in your code! Because the output of forward in the correlation layer is a 5D tensor, I reshape it to a 4D one in forward in spatial_correlation_sampler.py. However, I forgot to reshape the grad_output (a 4D tensor) back to a 5D one in backward. Actually, I don't know why the missing transformation from a 4D tensor (n x (ph*pw) x h x w) to a 5D one (n x ph x pw x h x w) could cause the problem, since the order of the elements does not change at all. Anyway, the problem has been solved. Thank you again.
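For anyone hitting the same issue: the pattern above can be sketched with a small custom autograd Function that flattens the two patch dimensions in forward and restores them for the incoming gradient in backward. This is a minimal illustration of the fix, not the library's actual code; the class name `FlattenPatchDims` and the shapes are made up for the example.

```python
import torch

class FlattenPatchDims(torch.autograd.Function):
    """Reshape a 5D correlation output (n, ph, pw, h, w) to 4D
    (n, ph*pw, h, w) in forward, and restore the 5D shape for the
    incoming gradient in backward -- the step that was missing."""

    @staticmethod
    def forward(ctx, out5d):
        ctx.shape5d = out5d.shape
        n, ph, pw, h, w = out5d.shape
        # clone so the output is not a view of the input, which
        # custom Functions treat specially
        return out5d.view(n, ph * pw, h, w).clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Without this view back to 5D, a backward kernel expecting
        # (n, ph, pw, h, w) would index the tensor with the wrong shape.
        return grad_output.view(ctx.shape5d)

x = torch.randn(2, 3, 3, 4, 5, requires_grad=True)
y = FlattenPatchDims.apply(x)
y.sum().backward()
```

Here `x.grad` comes back with the original 5D shape `(2, 3, 3, 4, 5)`, because backward undoes the flattening before returning the gradient.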

@ClementPinard
Owner

That's weird, the view operation is supposed to handle this automatically; see here how I did it for my implementation of FlowNetC.
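A quick sanity check of that claim (the shapes here are arbitrary): `view` on a contiguous tensor changes only the shape metadata, not the storage or element order, so flattening `(n, ph, pw, h, w)` to `(n, ph*pw, h, w)` and back is lossless and copy-free.

```python
import torch

n, ph, pw, h, w = 2, 3, 3, 4, 5
corr = torch.arange(n * ph * pw * h * w, dtype=torch.float32).view(n, ph, pw, h, w)

# Flatten the two patch dimensions into one channel dimension.
flat = corr.view(n, ph * pw, h, w)

# Same storage, same element order: round-tripping recovers the tensor,
# and no copy was made.
assert torch.equal(flat.view(n, ph, pw, h, w), corr)
assert flat.data_ptr() == corr.data_ptr()
```

So the reshape itself cannot reorder elements; the illegal access must have come from the backward kernel seeing a gradient with an unexpected shape.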

Anyway, I'm glad it worked for you in the end!
