
More numerically stable lerp #18871

Closed · wants to merge 1 commit
Conversation

@mkolod (Contributor) commented Apr 4, 2019

The C++ and CUDA implementations of lerp are not numerically stable. This is discussed on Wikipedia here: https://en.wikipedia.org/wiki/Linear_interpolation#Programming_language_support. I checked the GPU SASS output and there's no overhead from using the more precise implementation, from Kepler all the way to Turing. I haven't looked at the CPU ASM, though.
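For context, a minimal C++ sketch (illustrative only, not the PR's actual diff) contrasting the naive formulation with the form discussed on the Wikipedia page; the values in `main` are chosen so that single-precision rounding makes the naive form miss the endpoint at t = 1:

```cpp
#include <cstdio>

// Naive form: a + t * (b - a).
// Rounding in (b - a) means lerp(a, b, 1) is not guaranteed to equal b.
template <typename T>
T lerp_naive(T a, T b, T t) {
  return a + t * (b - a);
}

// Form discussed on the Wikipedia page: (1 - t) * a + t * b.
// Exact at t == 0 and t == 1.
template <typename T>
T lerp_precise(T a, T b, T t) {
  return (T(1) - t) * a + t * b;
}

int main() {
  // b - a = 32000001 is not exactly representable in float, so the
  // naive form cannot land exactly on b at t == 1.
  float a = -16000000.0f, b = 16000001.0f;
  printf("naive:   %.1f\n", lerp_naive(a, b, 1.0f));    // off by ~1 from b
  printf("precise: %.1f\n", lerp_precise(a, b, 1.0f));  // exactly 16000001.0
}
```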

@ssnl (Collaborator) commented Apr 4, 2019

The method proposed in https://math.stackexchange.com/a/1798323 could be better.

@mkolod (Contributor, Author) commented Apr 4, 2019

@ssnl I rewrote it based on the proposal you shared. However, I'm worried that this may not perform well. If t is fixed, there's no warp divergence; but if t depends on the data, as it does in, say, bilinear upsampling, I wonder whether that may cause warp divergence and therefore hurt performance. Of course, the counter-argument is that this would be a stall of only a few instructions, for a problem that's bandwidth-bound anyway.
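For illustration, a rough C++ sketch of the branch-based formulation being discussed (one reading of the linked math.stackexchange answer, not necessarily the exact code in this PR); the data-dependent branch on the weight is what raises the warp-divergence question above:

```cpp
// Branch-selected lerp (assumed form, following the linked answer):
// anchor the computation at whichever endpoint is closer, so the result
// is exact at both t == 0 and t == 1.
template <typename T>
T lerp_stable(T a, T b, T t) {
  // The branch depends on t. On a GPU, if t varies across the threads of
  // a warp (e.g. data-dependent weights in bilinear upsampling), both
  // sides may have to execute, which is the divergence concern above.
  return (t < T(0.5)) ? a + t * (b - a)
                      : b - (b - a) * (T(1) - t);
}
```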

@ezyang (Contributor) commented Apr 5, 2019

@mkolod Maybe do a quick benchmark, if you're concerned? :)

@ezyang added the module: numerical-stability label ("Problems related to numerical stability of operations") on Apr 5, 2019
@facebook-github-bot (Contributor) left a comment


@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mkolod (Contributor, Author) commented Apr 5, 2019

@ezyang It's fine. It looks like one CI run out of 76 is having issues, but it seems to have nothing to do with the commit (it's a multiprocessing/gloo issue). I hope that's not a blocker for merging, since the affected test isn't the lerp test and I didn't change anything else in the code.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Apr 5, 2019
Summary:
The C++ and CUDA implementations of the lerp are not numerically stable. This is discussed on Wikipedia [here](https://en.wikipedia.org/wiki/Linear_interpolation#Programming_language_support). I checked the GPU SASS output and there's no overhead from using the more precise implementation, from Kepler all the way to Turing. I haven't looked at CPU ASM though.
Pull Request resolved: pytorch/pytorch#18871

Differential Revision: D14793438

Pulled By: ezyang

fbshipit-source-id: 2ddc2e026c5285466cae7d1b4101174253100445
@facebook-github-bot (Contributor) commented:

@ezyang merged this pull request in c1790fa.

@mkolod mkolod deleted the stable_lerp branch April 5, 2019 22:26
zhangguanheng66 pushed a commit to zhangguanheng66/pytorch that referenced this pull request May 6, 2019
Labels: module: numerical-stability (Problems related to numerical stability of operations), open source
4 participants