
Fixes misplaced LLAMA_NO_HOST_ACC_WARNING. #17

Merged: 1 commit merged into alpaka-group:develop, Nov 27, 2018

Conversation

tdd11235813
Contributor

No description provided.

@bertwesarg
Collaborator

+1

@ax3l requested a review from theZiz on November 26, 2018 at 22:04
@theZiz
Collaborator

theZiz commented Nov 27, 2018

How did you find this? This never threw an error for me. 😮

@bertwesarg
Collaborator

I think it was nvcc from CUDA 10, which started to trigger this.

@theZiz
Collaborator

theZiz commented Nov 27, 2018

Wow, some bugs they're fixing! 😛

@theZiz theZiz merged commit cfef36e into alpaka-group:develop Nov 27, 2018
@psychocoderHPC psychocoderHPC added the bug Something isn't working label Nov 28, 2018
@bertwesarg
Collaborator

> Wow, some bugs they're fixing! 😛

Does this mean that, to support both CUDA 9 and 10, we would need to have the macro in both places and define it depending on the CUDA version?

@ax3l
Member

ax3l commented Nov 30, 2018

No, iirc from a similar bug in alpaka, the placement just became more strict/well-defined now: https://github.com/ComputationalRadiationPhysics/alpaka/pull/533
