Nicer logs for dynamic shapes #99277
Conversation
ghstack-source-id: 1f54111b681f46459d96358c50aa7460b1ff8d45 Pull Request resolved: #99277
torch/_dynamo/variables/builder.py
Outdated
@@ -1176,18 +1180,34 @@ def wrap_to_fake_tensor_and_record(
     # If there is no entry for this source, add the tensor to frame state with its current static size.
     # E.g., {} -> {"x": [2, 4]}
     curr_sizes = list(e.size())
+    log.debug("Registered static shapes %s for %s", curr_sizes, name)
This one gets very noisy, so maybe we keep this one at debug and make the others info? Open to bikeshedding. @ezyang, opine please.
It is hard to say without actually using the logs. Keeping everything at debug to start is safe, assuming this doesn't trigger too much (but I don't think it does, since we already debug-trace every opcode in dynamo). Note that you can selectively raise logging for ONLY this module with TORCH_LOGS=+torch._dynamo.variables.builder
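For reference, the per-module logging knob mentioned above is set via the TORCH_LOGS environment variable; a minimal sketch of its use (the script name `train.py` is a hypothetical stand-in, not part of this PR):

```shell
# TORCH_LOGS is PyTorch's per-module logging control; a "+" prefix raises the
# named module's log level to DEBUG without touching any other module.
# Invocation would look like:
#   TORCH_LOGS="+torch._dynamo.variables.builder" python train.py
# (train.py is a hypothetical user script.)
export TORCH_LOGS="+torch._dynamo.variables.builder"
echo "$TORCH_LOGS"
```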
I would bikeshed the message here though; "registered" isn't very descriptive. In the log message, I think I probably want the words frame_state in it, maybe `frame_state[name] = curr_sizes`. But in that case, why don't we move the log message down to where we actually do the setter, and then delete all the other branches?
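The suggestion above amounts to logging once at the single assignment site rather than in every branch. A minimal sketch of that shape, with hypothetical names mirroring the discussion (`update_frame_state` is not a real function in this PR):

```python
import logging

log = logging.getLogger(__name__)

def update_frame_state(frame_state, name, curr_sizes):
    # ... branch logic above would decide curr_sizes (all-static sizes,
    # or None entries for dims that changed between frames) ...
    # Log exactly where the setter happens, so every branch is covered
    # by one message and the per-branch log calls can be deleted.
    frame_state[name] = curr_sizes
    log.debug("frame_state[%s] = %s", name, curr_sizes)
    return frame_state
```

For example, `update_frame_state({}, "x", [2, 4])` yields `{"x": [2, 4]}`, matching the `{} -> {"x": [2, 4]}` transition in the comment above.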
cc soumith penguinwu anijain2305 EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx desertfire [ghstack-poisoned]
ghstack-source-id: 48665ec78aa53fcb7434b2f92b0625baf7fb5ab3 Pull Request resolved: #99277 Better little none check
@@ -1945,10 +1945,10 @@ def hint():
         msg = f"  {len(error_msgs) + 1}. {msg()}"
         error_msgs.append(msg)
     if len(error_msgs) > 0:
-        log.warning("Warning only constraints violated %s", warn_msgs)
+        log.info("Warning only constraints violated %s", warn_msgs)
Wait, this looks like a botched merge, we shouldn't warn here at all, we're going to raise an error!
         raise ConstraintViolationError(f"Constraints violated!\n{error_msgs}")
     elif len(warn_msgs) > 0:
-        log.warning("%s Warning only constraints violated", len(warn_msgs))
+        log.info("%s Warning only constraints violated", len(warn_msgs))
This probably suppresses too much, but we can figure out a better strategy later (maybe only warn once per frame).
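Putting the two review points together: hard violations should raise, and warn-only violations get one summary line. A hedged sketch of that control flow, with a stand-in exception class and a hypothetical function name (the real `ConstraintViolationError` lives inside PyTorch):

```python
import logging

log = logging.getLogger(__name__)

class ConstraintViolationError(RuntimeError):
    """Stand-in for PyTorch's internal exception class."""

def report_constraint_violations(error_msgs, warn_msgs):
    # Hard constraint violations are errors: raise, never just warn.
    if len(error_msgs) > 0:
        raise ConstraintViolationError(f"Constraints violated!\n{error_msgs}")
    # Warn-only violations: emit a single summary line per call rather
    # than one message per violation.
    elif len(warn_msgs) > 0:
        log.info("%s Warning only constraints violated", len(warn_msgs))
```

This mirrors the diff above after the botched-merge fix; whether the summary line stays at info or goes back to warning is the open question in the thread.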
@pytorchbot rebase
@pytorchbot successfully started a rebase job. Check the current status here.
Rebase failed due to Command
Raised by https://github.com/pytorch/pytorch/actions/runs/4783046149
cc soumith penguinwu anijain2305 EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx desertfire [ghstack-poisoned]
ghstack-source-id: e62c51a2ad28a568699180f57613c165bdea7820 Pull Request resolved: #99277 Better little none check
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
cc @soumith @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire