[ONNX] Ignore print(Tensor) during tracing #86223
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/86223
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 0b7850a.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
I think Bowen has a PR #86180 (some race condition happening 😅 )
comment in line
Force-pushed from 9b90d86 to e5ba4e2
I am not working on PyTorch these days, please don't ask me to review.
Closing #86180 since it is a duplicate; let's focus on this one. For

    # 'tolist' has side effect calling 'resolve_conj' and 'resolve_neg'.
    # Annotation added to pass torch script.
    _: List[float] = x.tolist()
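For context, a minimal sketch of the kind of module this test exercises; everything beyond x_firsts, the print call, and the tolist workaround quoted above is an assumption, not the PR's exact code:

    import torch
    from typing import List

    class PrintTensorOnMyModel(torch.nn.Module):
        # Hypothetical module reconstructed from the test name and snippets in this thread.
        def forward(self, x):
            x_firsts = x[:, 0]  # assumed slicing, for illustration only
            print(x_firsts)  # the print() that tracing should ignore
            # 'tolist' has a side effect of calling 'resolve_conj' and 'resolve_neg';
            # the annotation keeps TorchScript happy.
            _: List[float] = x.tolist()
            return x_firsts

    x = torch.randn(2, 3)
    PrintTensorOnMyModel().eval()(x)  # eager run; ONNX export exercises the new symbolics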
Force-pushed from e5ba4e2 to e4c48a8
I believe all your comments were addressed. See what you think.
Force-pushed from be26818 to 18b818a
    self.run_test(
        m,
        x,
    )
Suggested change: replace

    self.run_test(
        m,
        x,
    )

with

    self.run_test(PrintTensorOnMyModel(), x)

for simplicity and consistency.
Actually lintrunner f did this. I just let the linter do the linting for me :)
That's because there is a trailing comma after x. Try removing the comma and running it again?
        _: List[float] = x.tolist()
        return x_firsts

    m = PrintTensorOnMyModel().eval()
    m = PrintTensorOnMyModel().eval()
m is needed, otherwise the model is not instantiated.
For simplicity and consistency with other tests, I would just call PrintTensorOnMyModel() in run_test (comment above).
Each test has its own specificities; it is subjective to expect "consistency" in their implementation. The reasoning for keeping the eval() was that it came from the bug report.
Thanks for the context! If a test needs different treatment, I recommend adding a comment explaining why, to help future readers.
FYI I ran lintrunner locally on the file and got this result
Just for future reference: this is a rather minor detail we can always fix in some other change in the future. The trick is to remove the trailing comma and let black add it back when it sees the need.
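To illustrate the magic-trailing-comma behavior mentioned above (a generic sketch; run_test here is only a stand-in helper, not the real test method):

    def run_test(model, inputs):
        # Stand-in helper so the snippet runs on its own.
        print(model, inputs)

    # With a trailing comma after the last argument, black keeps the call
    # exploded across multiple lines:
    run_test(
        "model",
        "inputs",
    )

    # Without the trailing comma, black collapses the call onto one line and
    # only re-adds the comma when the line has to be split again:
    run_test("model", "inputs")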
    if input_dtype == torch.cfloat:
        with self.assertRaises(RuntimeError):
            self.run_test(
                m,
                x,
            )
I would expect no errors now? Maybe have separate tests for _conj and conj_physical?
As mentioned on a previous review, an exception is also raised for complex when shape inference is enabled (and it is enabled by default).
No need to create a test that will execute the exact same code as the existing ones; they share the implementation.
I wonder why it errors, is it because of a bug in shape inference? (keep open for reference)
Several JIT passes do not expect complex numbers. For optimization-only passes, it would be safe to just return from the pass without making any change, preventing the failure. On the other hand, we are not interested in having ONNX graphs without shape inference (IIRC it is not part of the public API); we only disable it for specific debugging scenarios.
LGTM
Not sure why many seemingly unrelated jobs were failing, I'll try with a rebase.
@pytorchbot rebase
@pytorchbot successfully started a rebase job. Check the current status here
Successfully rebased |
Force-pushed from df4d38c to 883db6e
@thiagocrepaldi the CI errors are legit,
Adding
Force-pushed from 883db6e to 0b7850a
@pytorchbot merge -f "Ignoring unrelated 'Build should have OpenMP enabled, but torch.backends.openmp.is_available() is False'"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Fixes #73619
Fixes microsoft/onnxruntime#11812
This PR adds new symbolics: `aten::_conj`, `aten::conj_physical`, `aten::resolve_conj`, and `aten::resolve_neg`.
While the last two are always a NO-OP by definition (they do not change nodes), the first two raise an exception, as they are not supported by ONNX yet.
Pull Request resolved: #86223
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
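For illustration, a rough sketch of the behavior these symbolics implement. It uses the public torch.onnx.register_custom_op_symbolic API purely as an assumption for the example; the PR itself registers the symbolics inside the exporter:

    import torch
    import torch.onnx

    def resolve_conj(g, input):
        # No-op by definition: forward the input value unchanged.
        return input

    def resolve_neg(g, input):
        # Also a no-op.
        return input

    def _conj(g, input):
        # Complex conjugation has no ONNX representation yet, so export must fail.
        raise RuntimeError("aten::_conj is not supported by ONNX export yet")

    def conj_physical(g, input):
        raise RuntimeError("aten::conj_physical is not supported by ONNX export yet")

    # Hypothetical registration, for illustration only.
    for name, fn in [
        ("aten::resolve_conj", resolve_conj),
        ("aten::resolve_neg", resolve_neg),
        ("aten::_conj", _conj),
        ("aten::conj_physical", conj_physical),
    ]:
        torch.onnx.register_custom_op_symbolic(name, fn, opset_version=9)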
Hey @thiagocrepaldi.