Conversation

jamesr66a
Collaborator

Stacked on #18811

This makes it so that we only emit the *f variants of math functions if the output value's type is FloatTensor; otherwise we call the double variants to prevent loss of precision. This fixes more numerical issues.
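
A minimal sketch of the idea, assuming a hypothetical helper name (this is not the actual fuser codegen): the emitted math call is chosen from the output's scalar type, so FloatTensor outputs get the *f intrinsics and everything else gets the double versions.

#include <cstdio>
#include <string>

enum class ScalarType { Float, Double };

// Hypothetical helper: pick the single- or double-precision spelling of a
// math call when building the generated kernel source.
std::string emitExp(const std::string& arg, ScalarType out_type) {
  return out_type == ScalarType::Float ? "expf(" + arg + ")"
                                       : "exp(" + arg + ")";
}

int main() {
  std::printf("%s\n", emitExp("x", ScalarType::Float).c_str());   // expf(x)
  std::printf("%s\n", emitExp("x", ScalarType::Double).c_str());  // exp(x)
  return 0;
}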

@facebook-github-bot facebook-github-bot added the oncall: jit label (Add this issue/PR to JIT oncall triage queue) on Apr 3, 2019
@jamesr66a jamesr66a requested review from suo and zdevito on April 3, 2019 at 21:21
@jamesr66a jamesr66a force-pushed the double_fuser branch 7 times, most recently from 49d2db4 to 5a12bd3 on April 4, 2019 at 21:55
Contributor

@zdevito zdevito left a comment

In person: can we put all the templates into a single list and use an object to track the different templates for the float, double, and vector cases? Also resolve cast_0 vs 0.

Contributor

Refactor this into a function please.

Contributor

struct Template {
  Template(const char* for_float, const char* for_double);
  const char* for_float;
  const char* for_double;
};

Then the lines can be much shorter:

{aten::sigmoid, {"1.f / (1.f + expf(-${0}))", "1. / (1. + exp(-${0}))"}},
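
As a rough, hypothetical illustration of the suggestion (not the actual fuser source; the map key is written as a plain string rather than the real aten::sigmoid symbol), the struct holds both spellings and the emitter picks one depending on whether the fused computation runs in float:

#include <cstdio>
#include <string>
#include <unordered_map>

struct Template {
  Template(const char* for_float, const char* for_double)
      : for_float(for_float), for_double(for_double) {}
  const char* for_float;
  const char* for_double;
};

// One entry per op, carrying both the float and the double code template.
static const std::unordered_map<std::string, Template> simple_map_ops = {
    {"aten::sigmoid",
     Template("1.f / (1.f + expf(-${0}))", "1. / (1. + exp(-${0}))")},
};

int main() {
  bool use_float = false;  // e.g. the output is not a FloatTensor
  const Template& t = simple_map_ops.at("aten::sigmoid");
  std::printf("%s\n", use_float ? t.for_float : t.for_double);
  return 0;
}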

@jamesr66a jamesr66a force-pushed the double_fuser branch 6 times, most recently from 5be3a9a to 35a54fa on April 5, 2019 at 19:15
Contributor

@facebook-github-bot facebook-github-bot left a comment

@jamesr66a has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Member

@suo suo left a comment

Looks good but add #documentation

Member

This is unused

Member

Can you add a comment explaining what this struct is for?

test/test_jit.py Outdated
Member

Can you add a brief comment explaining what the tests are supposed to check?

@jamesr66a jamesr66a force-pushed the double_fuser branch 2 times, most recently from 43fbbac to 696a4a0 on April 6, 2019 at 00:24
Contributor

@facebook-github-bot facebook-github-bot left a comment

@jamesr66a is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@jamesr66a merged this pull request in 34382e4.

facebook-github-bot pushed a commit that referenced this pull request Apr 7, 2019
Summary:
Stacked on #18815 and #18811.

This makes it so that we emit a higher-precision literal for float values in the fusion kernel, as well as assign that to a `double` variable. This prevents us from losing precision for values such as `pi`, but with the previous fixes this will also get downcast to `float` if downstream operations require it. Therefore, we should not lose performance because of implicit promotions (see the sketch after this commit message).
Pull Request resolved: #18817

Differential Revision: D14820842

Pulled By: jamesr66a

fbshipit-source-id: 519671c6ca5e7adac746a4c4c72760a6d91e332f
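
A minimal, hypothetical illustration of the precision point in the commit message above (plain C++, not the generated CUDA): a single-precision literal for pi is already rounded to about seven significant digits, whereas a double literal keeps full precision until some downstream operation actually needs float.

#include <cstdio>

int main() {
  // Literal emitted in single precision: rounded before any computation.
  float pi_f = 3.14159265358979323846f;
  // The same literal kept in double precision, as the kernel would now emit it.
  double pi_d = 3.14159265358979323846;

  std::printf("float  literal: %.17g\n", (double)pi_f);  // ~3.1415927410125732
  std::printf("double literal: %.17g\n", pi_d);          // 3.1415926535897931
  return 0;
}
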
zhangguanheng66 pushed a commit to zhangguanheng66/pytorch that referenced this pull request May 6, 2019
Summary:
Stacked on pytorch#18811

This makes it so that we only emit the *f variants of math functions if the output value's type is FloatTensor; otherwise we call the double variants to prevent loss of precision. This fixes more numerical issues.
Pull Request resolved: pytorch#18815

Differential Revision: D14816965

Pulled By: jamesr66a

fbshipit-source-id: 464be644168875ede987142281fb2168f4041e81