When GPU device id > 0, error: cudaSuccess (77 vs. 0) an illegal memory access was encountered #19
I found that this error occurs whenever the GPU device id is > 0. @oh233, why do I get this error when I set the GPU device id > 0? I tried different GPUs, different machines, and different environments, and I still hit the same error. It only works when I select device id 0. Please help!
To reproduce, just run:
Any updates on this issue? We would REALLY like to parallelize our training runs. Thank you.
This is caused by an incorrect setting in demo.py: cfg.GPU_ID is never reset in your case, so the NMS outputs incorrect values, which leads to the illegal memory access. It does not cause errors in train_net/test_net, since cfg.GPU_ID is set accordingly in those scripts. This is fixed in bc9a269.
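The fix pattern can be illustrated with a minimal, self-contained sketch. The `cfg` object, `gpu_nms`, and `demo_nms` below are stand-ins for the project's real config and NMS wrapper (not its actual API): the point is that the demo script must copy the requested device id into the global config before any GPU NMS call, the same way train_net/test_net already do.

```python
from types import SimpleNamespace

# Stand-in for the project's global config object (hypothetical; in the
# real code it defaults GPU_ID to 0 unless a script overwrites it).
cfg = SimpleNamespace(GPU_ID=0)

def gpu_nms(dets, thresh, device_id):
    """Stand-in for the CUDA NMS wrapper. The real kernel launches on
    `device_id`, so it must match the device the detections live on."""
    # ... kernel launch on device_id elided in this sketch ...
    return list(range(len(dets)))

def demo_nms(dets, thresh):
    # The bug: the demo called the NMS wrapper with cfg.GPU_ID still at
    # its default of 0, regardless of which GPU the caller selected.
    return gpu_nms(dets, thresh, device_id=cfg.GPU_ID)

# The fix: propagate the chosen device into the config first.
requested_gpu = 1          # e.g. parsed from a --gpu command-line flag
cfg.GPU_ID = requested_gpu
keep = demo_nms(dets=[[0, 0, 10, 10, 0.9]], thresh=0.3)
```

With `cfg.GPU_ID` set up front, every downstream NMS call launches on the same device as the rest of the pipeline, which is exactly what the linked commit restores for the demo script.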
@oh233 Hi, I wonder why cfg.GPU_ID influences the values output by the NMS?
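A plausible reading of the maintainer's answer (sketched here with toy classes, not the project's real code): GPU memory pointers are only valid on the device where they were allocated, and the GPU NMS kernel is launched on whatever device `cfg.GPU_ID` names. If the boxes were prepared on GPU 1 but the kernel runs on GPU 0, it dereferences foreign pointers, which on real hardware surfaces as garbage overlap values or cudaError 77 (illegal memory access).

```python
class DeviceBuffer:
    """Toy model of GPU memory (hypothetical, for illustration only):
    a pointer is only valid on the device where it was allocated."""
    def __init__(self, data, device_id):
        self.data = data
        self.device_id = device_id

def gpu_nms(boxes, device_id):
    # A CUDA kernel launched on `device_id` cannot safely dereference a
    # pointer belonging to another device's address space; here we model
    # that as an error, mirroring cudaError 77 on real hardware.
    if boxes.device_id != device_id:
        raise RuntimeError("an illegal memory access was encountered (77)")
    return list(range(len(boxes.data)))

boxes = DeviceBuffer([[0, 0, 10, 10, 0.9]], device_id=1)  # data on GPU 1

try:
    gpu_nms(boxes, device_id=0)   # cfg.GPU_ID left at its default of 0
    crashed = False
except RuntimeError:
    crashed = True                 # mismatch -> illegal access

keep = gpu_nms(boxes, device_id=1)  # ids agree -> valid result
```

So cfg.GPU_ID does not change the NMS math itself; it selects which device the kernel runs on, and a mismatch with the data's device is what corrupts the output.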
This is also the case for Affordance Net. There, under
After following all the instructions and running the demo, I get:
I tried with cuDNN on and off. I'm using a Titan X. Any advice?