inductor: align baddbmm behavior with eager mode for beta=0 and input has nan value #96087
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/96087
✅ No Failures as of commit 307aca0 (Dr. CI).
Merged. Approved by: https://github.com/jansel, https://github.com/ngimel, https://github.com/jgong5
Stack from ghstack (oldest at bottom):
For `torch.baddbmm(input, mat1, mat2, beta=0)`, when `beta` is zero, eager mode skips the `input * beta` term entirely (it always contributes zero; see https://pytorch.org/docs/stable/generated/torch.baddbmm.html#torch.baddbmm), but the inductor does not: it computes a different value when `input` contains a `nan` or `inf`:

```python
def fn_test(input, mat1, mat2):
    return torch.baddbmm(input, mat1, mat2, beta=0.0)

opt_fn = torch._dynamo.optimize("inductor")(fn_test)

a, b, c = [torch.rand((3, 2, 2)) for _ in range(3)]
real_out = fn_test(a, b, c)
a[:] = torch.nan
compiled_out = opt_fn(a, b, c)
print(compiled_out)
print(real_out)
```

Before this PR, the output looked like this:

```
tensor([[[0.4272, 0.6037],
         [0.4279, 0.4219]],

        [[0.0838, 0.4873],
         [0.1210, 0.5516]],

        [[   nan,    nan],
         [   nan,    nan]]])
tensor([[[0.4272, 0.6037],
         [0.4279, 0.4219]],

        [[0.0838, 0.4873],
         [0.1210, 0.5516]],

        [[0.4985, 0.1072],
         [0.0857, 0.0186]]])
```
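The root cause is a general IEEE 754 fact: `0.0 * nan` is `nan`, not `0.0`, so a lowering that multiplies through by `beta` unconditionally lets `nan`/`inf` in `input` leak into the result. A minimal scalar sketch of why the `beta == 0` guard is needed (pure Python; the function names here are hypothetical models of the semantics, not the real inductor kernel):

```python
import math

def baddbmm_guarded(input_val, prod, beta=1.0, alpha=1.0):
    # Eager-mode semantics: when beta == 0 the input term is skipped
    # entirely, so nan/inf in `input` cannot propagate to the output.
    if beta == 0:
        return alpha * prod
    return beta * input_val + alpha * prod

def baddbmm_naive(input_val, prod, beta=1.0, alpha=1.0):
    # Multiplying through unconditionally propagates nan: 0.0 * nan == nan.
    return beta * input_val + alpha * prod

print(baddbmm_guarded(math.nan, 2.0, beta=0.0))  # 2.0
print(baddbmm_naive(math.nan, 2.0, beta=0.0))    # nan
```

With the guard, the two paths agree whenever `input` is finite, and the guarded path matches the documented eager behavior when it is not.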
cc @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire