[inductor] Fix nan-handling of max and min reductions #99881

Closed
wants to merge 11 commits

Conversation

peterbell10 (Collaborator) commented Apr 24, 2023

pytorch-bot bot commented Apr 24, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/99881

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ No Failures

As of commit 22e315e:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

This adds helpers that replace Triton's `minimum`, `maximum`, `min` and
`max` with correct NaN propagation. I also removed
`ops.int_minimum` in favor of `ops.minimum`, because we can just omit
the NaN checks by checking the dtype.
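
For illustration, a minimal sketch of what such a NaN-propagating helper can look like in Triton (the helper names below are illustrative, not necessarily the exact code in this PR):

    import triton
    import triton.language as tl

    @triton.jit
    def promote_to_tensor(x):
        # Adding a zero-valued tensor promotes Python scalars to a Triton
        # tensor without changing the value, so `.dtype` can be inspected.
        return x + tl.zeros((1,), tl.int1)

    @triton.jit
    def minimum(a, b):
        mask = a < b
        if promote_to_tensor(a).dtype.is_floating():
            # NaN compares unequal to itself, so `a != a` is True exactly
            # when `a` is NaN; or-ing it in makes the NaN win the select.
            mask |= a != a
        return tl.where(mask, a, b)

For integer dtypes the `is_floating()` branch is not taken, so no NaN check is emitted, which is why a separate integer-only op is unnecessary.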

[ghstack-poisoned]
peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Apr 24, 2023

ghstack-source-id: b3428393e95b4fdbd64ddda22f0d50fc01cf0dcf
Pull Request resolved: pytorch#99881
peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Apr 24, 2023

ghstack-source-id: b6e1fd2ca0e3ec5ad21966a93885c87e3f2904b7
Pull Request resolved: pytorch#99881
peterbell10 marked this pull request as ready for review April 25, 2023 12:39
peterbell10 requested a review from ngimel April 25, 2023 12:39
peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Apr 25, 2023

ghstack-source-id: 2a79ad0d0989ac5338e25e4f6bfcefbcb87ef7f9
Pull Request resolved: pytorch#99881
@triton.jit
def is_floating(x):
    # Addition to promote scalars to tensor
    x += tl.zeros((1,), tl.int1)
Collaborator

Does this change the generated code or increase overhead? The minimum/maximum functions are used pretty often (including in register-sensitive contexts, e.g. when fusing relu into a matmul), so we should avoid increasing register pressure.

Collaborator Author

If you mean this exact line, then it will get DCE'd. Or do you mean the NaN checks?

Collaborator

Yeah, this line. If it's DCE'd, that's great.

Collaborator Author

Just to substantiate this a bit more, here is the Triton IR generated for `triton_helper.min`:

  tt.func private @"min__fp32S1_4S__1cconstexpr[1]"(%arg0: tensor<1x4xf32>) -> tensor<1xf32> {
    %0 = "tt.reduce"(%arg0) ({
    ^bb0(%arg1: f32, %arg2: f32):
      %1 = tt.call @minimum__fp32_fp32__(%arg1, %arg2) : (f32, f32) -> f32
      tt.reduce.return %1 : f32
    }) {axis = 1 : i32} : (tensor<1x4xf32>) -> tensor<1xf32>
    tt.return %0 : tensor<1xf32>
  }
  tt.func private @minimum__fp32_fp32__(%arg0: f32, %arg1: f32) -> f32 {
    %0 = arith.cmpf olt, %arg0, %arg1 : f32
    %1 = tt.call @is_floating__fp32__(%arg0) : (f32) -> i1
    %2 = scf.if %1 -> (i1) {
      %4 = arith.cmpf une, %arg0, %arg0 : f32
      %5 = arith.ori %0, %4 : i1
      scf.yield %5 : i1
    } else {
      scf.yield %0 : i1
    }
    %3 = arith.select %2, %arg0, %arg1 : f32
    tt.return %3 : f32
  }
  tt.func private @is_floating__fp32__(%arg0: f32) -> i1 {
    %0 = tt.call @"zeros____0cconstexpr[(constexpr[1],)]_1cconstexpr[int1]"() : () -> tensor<1xi1>
    %1 = tt.splat %arg0 : (f32) -> tensor<1xf32>
    %2 = arith.uitofp %0 : tensor<1xi1> to tensor<1xf32>
    %3 = arith.addf %1, %2 : tensor<1xf32>
    %true = arith.constant true
    tt.return %true : i1
  }

Admittedly, it's pretty ugly. However, it is massively simplified after just the inlining pass, which is the very first pass in Triton's optimizer.

    %11 = "tt.reduce"(%10) ({
    ^bb0(%arg5: f32, %arg6: f32):
      %16 = arith.cmpf olt, %arg5, %arg6 : f32
      %17 = arith.cmpf une, %arg5, %arg5 : f32
      %18 = arith.ori %16, %17 : i1
      %19 = arith.select %18, %arg5, %arg6 : f32
      tt.reduce.return %19 : f32
    }) {axis = 1 : i32} : (tensor<1x4xf32>) -> tensor<1xf32>

You can see it removed the branch on `is_floating` and all of the associated code.
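
For completeness, the reduction itself is just a thin wrapper around `tl.reduce` with the NaN-propagating helper as the combine function, roughly like this sketch (illustrative names, assuming the `minimum` helper sketched above):

    @triton.jit
    def min_reduce(x, axis: tl.constexpr):
        # tl.reduce applies the @triton.jit combine function pairwise along
        # `axis`; after inlining, its body becomes the tt.reduce region above.
        return tl.reduce(x, axis, minimum)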

peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Apr 27, 2023

ghstack-source-id: cf09703d4ce7a6bf3e1083e088005b1f1f8ca077
Pull Request resolved: pytorch#99881
peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Apr 27, 2023

ghstack-source-id: 21300ac256d53b03c957ec821a0b2bfe53a324d4
Pull Request resolved: pytorch#99881
ezyang mentioned this pull request May 3, 2023
ezyang added a commit that referenced this pull request May 3, 2023
Revert "[inductor] Stop using `x + tl.zeros(...)` in generated triton (#100163)"

This reverts commit 5b98910.

Revert "[inductor] Fix argmin/max with duplicate values (#99920)"

This reverts commit 659dcc5.

Revert "[inductor] Fix nan-handling of max and min reductions (#99881)"

This reverts commit f9c3fcd.

[ghstack-poisoned]
ezyang added a commit that referenced this pull request May 3, 2023

ghstack-source-id: 85531baedfb245e48512be97c0ed90eba1685664
Pull Request resolved: #100517
peterbell10 added a commit to peterbell10/pytorch that referenced this pull request May 3, 2023

ghstack-source-id: 21300ac256d53b03c957ec821a0b2bfe53a324d4
Pull Request resolved: pytorch#99881
peterbell10 reopened this May 3, 2023
peterbell10 closed this May 3, 2023
facebook-github-bot deleted the gh/peterbell10/541/head branch June 8, 2023 18:27