Save graph prints as lazy strings instead of eager #137700
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/137700
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure as of commit 1915f6f with merge base 839d356. The following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
        ),
    )
-   joint_graph_str = fx_g.print_readable(
+   joint_graph_str_fn = lambda: fx_g.print_readable(
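The diff swaps an eagerly built string for a zero-argument callable, so the expensive `print_readable` call only runs if the log line is actually consumed. A minimal sketch of the pattern, with a hypothetical `expensive_repr` standing in for `fx_g.print_readable`:

```python
def expensive_repr() -> str:
    # Stand-in for fx_g.print_readable(): pretend this is costly.
    return "graph(" + ", ".join(f"n{i}" for i in range(3)) + ")"

# Eager: the string is built immediately, whether or not anyone reads it.
joint_graph_str = expensive_repr()

# Lazy: building is deferred until the callable is invoked.
joint_graph_str_fn = lambda: expensive_repr()

# Only pay the cost when a consumer actually wants the text.
print(joint_graph_str_fn())
```

When no consumer ever invokes the callable, the cost of rendering the graph is never paid, which is the whole point of the change.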
we have a LazyString class we use in a few other places to lazily materialize strings for logging - maybe we should use it here for consistency (to be fair it is a pretty tiny wrapper) https://github.com/pytorch/pytorch/blob/main/torch/_logging/_internal.py#L1083
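A wrapper of the kind this comment describes is indeed tiny: it holds a callable plus its arguments and only materializes the string in `__str__`, so `%s`-style log formatting never forces the work early. A sketch under that assumption (not the exact PyTorch implementation; see the linked source for the real one):

```python
class LazyString:
    """Defers string construction until __str__ is called."""

    def __init__(self, func, *args, **kwargs):
        self.func = func
        self.args = args
        self.kwargs = kwargs

    def __str__(self):
        return self.func(*self.args, **self.kwargs)


calls = []  # track when the builder actually runs

def build(tag):
    calls.append(tag)
    return f"graph[{tag}]"

s = LazyString(build, "joint")
assert calls == []                 # nothing built yet
assert str(s) == "graph[joint]"    # materialized on demand
```

Because `logging` only calls `str()` on arguments when a record is actually emitted, passing a `LazyString` to `log.debug("%s", s)` defers the cost until the handler fires.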
Yeah, in this case, since we're passing a lambda anyway, I figured making it a LazyString would literally be more overhead
You could technically just pass a functools.partial :P
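As the comment notes, `functools.partial` is interchangeable with a lambda here: both produce a zero-argument callable that defers the call. A quick illustration with a hypothetical `print_readable` stand-in:

```python
import functools

def print_readable(verbose=False):
    # Stand-in for fx_g.print_readable(...).
    return "verbose graph" if verbose else "graph"

# Two equivalent zero-argument callables:
via_lambda = lambda: print_readable(verbose=True)
via_partial = functools.partial(print_readable, verbose=True)

assert via_lambda() == via_partial() == "verbose graph"
```

`partial` has the minor advantages of being picklable and carrying an inspectable `.func`/`.keywords`, while a lambda is often simply more readable inline; either works for this purpose.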
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 mandatory check(s) failed. The first few are: Dig deeper by viewing the failures on hud
if those effect
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as |
Stack from ghstack (oldest at bottom):
My previous PR made these graph prints eager instead of lazy, which regressed some small PR-time benchmarks that don't use structured trace. Making them lazy again by putting them behind a function call fixes the issue. Production jobs should not be affected either way, because structured logs are turned on and evaluated there.
Without this diff:
With this diff:
Specifically, aotdispatcher_training_nosubclass_cpu and aotdispatcher_training_subclass_cpu look better.
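The performance argument above can be made concrete: when structured trace is off, the callable is never invoked and the string is never built, so benchmark runs without tracing skip the cost entirely. A toy sketch (the flag and names here are hypothetical, not PyTorch's actual trace machinery):

```python
structured_trace_enabled = False  # benchmark path: tracing off

evaluated = []  # records whether the string was ever built

def make_graph_str():
    evaluated.append(True)
    return "joint graph ..."

joint_graph_str_fn = make_graph_str  # lazy: just a reference, no call

# The logging site only invokes the function when tracing is on.
if structured_trace_enabled:
    payload = joint_graph_str_fn()
```

On the production path the flag is on, the function runs once at emit time, and the output is identical to the eager version, which is why the change is behavior-neutral there.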