[ONNX] Adjust `is_train` flag for onnx pass deduplicate initializers (#74247)
Conversation
The previous logic did not handle the `TrainingMode.PRESERVE` case. A more direct approach is to check `model.training`, which reflects the model's actual training mode as set by `exporter_context(model, training)`.
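The distinction can be illustrated with a minimal sketch. Everything below is hypothetical scaffolding, not the actual PyTorch source: `TrainingMode` mirrors the values of `torch.onnx.TrainingMode`, `DummyModule` stands in for an `nn.Module`'s `training` flag, and `is_train_old`/`is_train_new` are illustrative names for the two ways of deriving the flag.

```python
from enum import Enum

class TrainingMode(Enum):
    # Mirrors the values of torch.onnx.TrainingMode for illustration
    EVAL = 0
    PRESERVE = 1
    TRAINING = 2

class DummyModule:
    """Hypothetical stand-in for torch.nn.Module's `training` flag."""
    def __init__(self):
        self.training = False

    def train(self):
        self.training = True

def is_train_old(model, training):
    # Old logic (sketch): decides from the export argument alone,
    # so PRESERVE is never treated as training.
    return training == TrainingMode.TRAINING

def is_train_new(model, training):
    # New logic (sketch): exporter_context(model, training) has already
    # switched the model into its effective mode before the pass runs,
    # so model.training is the accurate answer.
    return model.training

model = DummyModule()
model.train()  # the user's model is actually in training mode

# With PRESERVE, the old check misreports eval; the new one is correct:
print(is_train_old(model, TrainingMode.PRESERVE))  # False
print(is_train_new(model, TrainingMode.PRESERVE))  # True
```

The point of the fix: `PRESERVE` means "keep whatever mode the model is already in", so any check that looks only at the export argument cannot know the effective mode; reading `model.training` after the exporter context has been entered always can.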
I will fix the CI issue before merging.
@pytorchbot merge this
…74247)

Summary: Previous logic didn't consider the case for TrainingMode.PRESERVE. A more direct way is to check `model.training`, which is the accurate training mode, set by `exporter_context(model, training)`.

Pull Request resolved: #74247
Approved by: https://github.com/garymm
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/144b7de9dd5f5fea03330d5447c888be6ecd08cf
Reviewed By: malfet
Differential Revision: D35065521
fbshipit-source-id: 05eb5797763a30e647711f8af6a415122c404a98