I find that some CUDA programs set stack sizes. Please suggest which SYCL functions could set device resource limits.
cudaDeviceGetLimit(&limit, cudaLimitStackSize);
cudaDeviceSetLimit(cudaLimitStackSize, limit * 12);
Without adjusting the stack size, the CUDA program will report illegal memory accesses.
Thanks.
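For context, here is a minimal sketch of the stack-size adjustment above as a complete host program, with error checking added. It assumes a CUDA-capable device and the CUDA runtime; the 12x growth factor simply mirrors the snippet and is not a recommended value.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t limit = 0;
    // Query the current per-thread device stack size (bytes).
    cudaError_t err = cudaDeviceGetLimit(&limit, cudaLimitStackSize);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaDeviceGetLimit: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // Grow the stack 12x, matching the factor used in the snippet above;
    // without a larger stack, deep device-side call chains can overflow
    // and surface as illegal memory accesses.
    err = cudaDeviceSetLimit(cudaLimitStackSize, limit * 12);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaDeviceSetLimit: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("per-thread stack size set to %zu bytes\n", limit * 12);
    return 0;
}
```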
@jinz2014 If you target an NVIDIA device, could you please try the environment variable SYCL_PI_CUDA_MAX_LOCAL_MEM_SIZE?
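A sketch of how the variable would be set, assuming it takes a size in bytes and is read by the DPC++ CUDA backend at startup (the value and the application name are illustrative only):

```shell
# Raise the CUDA backend's local memory cap to 64 KiB (example value).
export SYCL_PI_CUDA_MAX_LOCAL_MEM_SIZE=65536

# Then run the SYCL application in the same shell, e.g.:
# ./my_sycl_app   (hypothetical binary name)
```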
Thank you for your reply. The name might be confusing, though; in SYCL, "local memory" may mean shared local memory rather than per-thread stack space.
The CUDA program is https://github.com/cuhk-eda/CULS