Modify sum_to cuda kernel to not need atomic adds in backwards #367

Merged 2 commits into coreylowman:main on Jan 17, 2023

Conversation

@nkoppel (Contributor) commented on Jan 16, 2023

Changes the sum_to CUDA kernel to avoid atomic adds by spawning exactly one thread for each physical element of input or grad_input, and multiplying each added value by the number of threads that would previously have added that value. Resolves #351.
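
The description maps to a small kernel. Below is a minimal illustrative sketch of the idea, not the PR's actual source: the signature and the names `sum_to_backward`, `elems_per_thread`, and `get_strided_index` are assumptions. Each thread owns one physical slot of `grad_inp`, locates the matching `grad_out` element through strides that are zero along the summed axes, and scales it by the number of threads that would previously have atomicAdd-ed the same value into that slot.

```cuda
// Illustrative sketch only; names and signature are assumptions, not
// the PR's actual kernel.
__device__ size_t get_strided_index(
    size_t idx, const size_t num_dims, const size_t *dims, const size_t *strides
) {
    // Map a flat logical index to a physical offset. Axes with stride 0
    // (the summed axes of grad_out) do not advance the offset, so every
    // logical element reduced into the same output shares one offset.
    size_t strided = 0;
    for (size_t d = num_dims; d-- > 0;) {
        strided += strides[d] * (idx % dims[d]);
        idx /= dims[d];
    }
    return strided;
}

extern "C" __global__ void sum_to_backward(
    const size_t numel,           // physical elements in grad_inp
    const size_t num_dims,
    const float elems_per_thread, // threads that previously atomicAdd-ed here
    const size_t *dims,
    const size_t *out_strides,    // 0 along the summed axes
    float *grad_inp,
    const float *grad_out
) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numel) {
        return;
    }
    size_t out_i = get_strided_index(i, num_dims, dims, out_strides);
    // Exactly one thread writes grad_inp[i], so a plain read-modify-write
    // replaces the old atomicAdd.
    grad_inp[i] += grad_out[out_i] * elems_per_thread;
}
```

Since each `grad_inp` slot now has exactly one writer, the unsynchronized `+=` is safe, and the kernel avoids the contention atomicAdd incurs when many threads reduce into the same location.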

@nkoppel changed the title from "Modify sum_to cuda kernel to not need atomic adds" to "Modify sum_to cuda kernel to not need atomic adds in backwards" on Jan 16, 2023
@coreylowman (Owner) left a comment:

nice changes! 🚀

@coreylowman merged commit 6ffa87d into coreylowman:main on Jan 17, 2023
Development

Successfully merging this pull request may close these issues:

- Do sum_to need to use atomicAdd in backward? (#351)