
Conversation

@jamesr66a
Collaborator

Stacked on #18815 and #18811.

This makes the fusion kernel emit a higher-precision literal for float values and assign it to a double variable. This prevents us from losing precision for values such as pi, and with the previous fixes the value will still be downcast to float if downstream operations require it. Therefore, we should not lose performance because of implicit promotions.
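
For illustration, here is a minimal sketch of the idea (not the actual fuser codegen; the helper name `emitScalarConstant` is made up): print enough digits to round-trip a double and declare the emitted variable as double, so constants like pi keep their full value until a downstream op forces a narrowing to float.

```cpp
#include <iomanip>
#include <limits>
#include <sstream>
#include <string>

// Hypothetical helper: render a scalar constant for the generated fusion
// kernel. Printing max_digits10 digits and declaring the variable as double
// keeps the full value of constants like pi; the generated code can still
// narrow it to float later if the consuming op is float-typed.
std::string emitScalarConstant(const std::string& name, double value) {
  std::ostringstream out;
  out << std::setprecision(std::numeric_limits<double>::max_digits10)
      << "double " << name << " = " << value << ";";
  return out.str();
}

// emitScalarConstant("pi", 3.141592653589793)
//   -> "double pi = 3.1415926535897931;"
```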

@jamesr66a jamesr66a requested review from suo and zdevito April 3, 2019 21:23
@facebook-github-bot facebook-github-bot added the oncall: jit label Apr 3, 2019
@jamesr66a jamesr66a force-pushed the double_const branch 7 times, most recently from 2c7d20b to 07a25e8 on April 4, 2019 21:56
Contributor

@zdevito zdevito left a comment


Looks good. See comment about variableType, which I think has a pre-existing bug.

Contributor


variableType is wrong. It should return 'int64_t' for IntType and 'double' for FloatType. There should be no need for the if statement here.
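
A sketch of the mapping being suggested (self-contained stand-in; the real fuser works off the JIT type hierarchy, not this enum):

```cpp
#include <stdexcept>
#include <string>

// Hypothetical stand-in for the fuser's scalar type tag.
enum class ScalarKind { Int, Float, Bool };

// Maps a scalar kind to the C type used in the generated kernel source:
// IntType -> int64_t and FloatType -> double, so callers need no extra
// if statement to special-case floating-point scalars.
inline std::string variableType(ScalarKind kind) {
  switch (kind) {
    case ScalarKind::Int:
      return "int64_t";
    case ScalarKind::Float:
      return "double";
    case ScalarKind::Bool:
      return "bool";
  }
  throw std::runtime_error("unsupported scalar kind in fused kernel");
}
```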

Collaborator Author


Somebody made the interface to fused kernels take scalar floating-point arguments as float, probably because the fusion compiler is biased toward GPU code. I can change that interface, but it would involve mucking with the GPU fuser.
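
Roughly what that constraint looks like from the caller's side (the launcher name and signature here are invented for illustration): the kernel's scalar parameter is declared float, so a double constant is implicitly narrowed when it crosses the boundary.

```cpp
// Illustrative only; real fused kernels are generated by the fuser and this
// launcher signature is made up.
void launchFusedKernel(const float* x, float* out, int n, float alpha) {
  for (int i = 0; i < n; ++i) {
    out[i] = x[i] * alpha;  // alpha was already narrowed to float by the caller
  }
}

// double pi = 3.141592653589793;
// launchFusedKernel(x, out, n, pi);  // implicit double -> float at this boundary
```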

@jamesr66a jamesr66a force-pushed the double_const branch 7 times, most recently from e272678 to 12846be on April 6, 2019 07:02
Contributor

@facebook-github-bot facebook-github-bot left a comment


@jamesr66a has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@jamesr66a merged this pull request in 9b69f21.

zhangguanheng66 pushed a commit to zhangguanheng66/pytorch that referenced this pull request May 6, 2019
Summary:
Stacked on pytorch#18815 and pytorch#18811.

This makes it so that we emit a higher-precision literal for float values in the fusion kernel, as well as assign that to a `double` variable. This prevents us from losing precision for values such as `pi`, but with the previous fixes this will also get downcast to `float` if downstream operations require it. Therefore, we should not lose performance because of implicit promotions.
Pull Request resolved: pytorch#18817

Differential Revision: D14820842

Pulled By: jamesr66a

fbshipit-source-id: 519671c6ca5e7adac746a4c4c72760a6d91e332f