fixing call_module on subscripting into generator #81258
Conversation
Dr. CI: ✅ No failures (0 pending) as of commit c0f077e. 💚 Looks good so far! There are no failures yet. (This comment was automatically generated by Dr. CI.)
inp = torch.randn(2, 3, 4, 5).to(dtype=dtype, device=device)
m = Model().to(dtype=dtype, device=device)

traced = symbolic_trace(m)
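For context, `Model`, `dtype`, and `device` come from test scaffolding that is elided from this diff. A minimal, purely hypothetical stand-in that makes the snippet runnable might look like this (these definitions are assumptions, not the test's actual code):

```python
import torch
from torch.fx import symbolic_trace

dtype, device = torch.float32, "cpu"  # assumed; the test parametrizes these

class Model(torch.nn.Module):
    # Hypothetical stand-in for the elided test module.
    def forward(self, x):
        return torch.relu(x) + 1.0
```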
This symbolic_trace will result in an FX graph with torch ops; you can print(traced) to check this. For such a graph, .compile() would not take effect.
To get an FX graph with ATen ops (which the partitioner is based on), you need to use make_fx. See https://github.com/pytorch/pytorch/pull/81311/files#diff-de76da6bb7d45d8a243d50773b91340bc6feb177ad734ff4bdfba6def8e4c801R162 for an example.
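A minimal sketch of that approach, reusing the `m` and `inp` from the snippet above (this is illustrative, not the linked PR's exact code; `make_fx` lives in `torch.fx.experimental.proxy_tensor`):

```python
from torch.fx.experimental.proxy_tensor import make_fx

# Tracing with make_fx decomposes the module down to ATen ops,
# which is the level the partitioner operates on.
aten_gm = make_fx(m)(inp)
print(aten_gm.graph)  # node targets are now aten.* overloads
```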
Yeah, I understand that compiling this graph would not give us any fusion.
But currently this script fails and triggers a runtime error. I don't think that is proper behavior for compiling a valid FX graph with call_module.
I see...
Can you add an explicit comment saying that this symbolic trace will not result in any fusion, but that it should pass the compiler without error?
LGTM, with a minor comment.
@pytorchbot merge
@pytorchbot successfully started a merge job. Check the current status here.
Hey @jjsjann123. |
Summary: named_modules() returns a generator, which is not subscriptable and causes the node support query to fail.
Pull Request resolved: #81258
Approved by: https://github.com/SherlockNoMad
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/cc67a92e749d076f4594410f2b23caed76aba80c
Reviewed By: DanilBaibak
Differential Revision: D37876448
Pulled By: DanilBaibak
fbshipit-source-id: 15c39ac0ac7d0f742a7d148cca82ac1633f39d94
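A small illustration of the failure mode and the kind of fix applied, without assuming anything about the partitioner's exact call site (the module and name below are hypothetical):

```python
import torch

m = torch.nn.Sequential(torch.nn.Linear(4, 4))

gen = m.named_modules()
# gen["0"]  # TypeError: 'generator' object is not subscriptable

# Materializing the generator into a dict first makes lookup by
# qualified submodule name work as intended.
submodules = dict(m.named_modules())
linear = submodules["0"]  # the nn.Linear submodule
```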