[FX] intermediate types of empty lists/dicts not preserved during torch.fx tracing #49935
Comments
After doing a fair bit of research, it looks like this isn't something we can change from the FX side. However, there is a fix on the frontend: we need to make it so that the frontend doesn't automatically type empty lists as `List[Tensor]`.
Could you fill in some details on why this isn't viable to change on the FX side? I think James mentioned something about preserving annotations; is that infeasible?
@gmagogsfm It's a lot harder than you'd think to preserve existing annotations in this case. In Python, functions, methods, modules, and class objects store some of their annotations, which can be retrieved at runtime. The first thing I thought of doing was walking back up the stack, getting the calling frame, and examining the context of the code responsible for that frame. I ran into a lot of problems with this, though; it required me to make some uncomfortable assumptions about the code. I eventually came up with a solution that involved AST rewrites and a custom tracer. (I can explain my design in more detail if you're interested.) Unfortunately, it had an awful time complexity. James and Zach discussed the issue, and we eventually came to the conclusion that this is not a feature we should pursue from the FX side.
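To illustrate the point about where Python stores annotations, here is a small stdlib-only sketch (the names `f` and `ys` are made up for illustration): annotations on parameters and return types live on the function object and are retrievable at runtime, but annotations on local variables are evaluated statically and leave no runtime trace, which is why recovering them requires tricks like stack walking or AST rewrites.

```python
import typing
from typing import List

def f(x: int) -> List[str]:
    ys: List[str] = []  # local annotation: checked statically, not stored at runtime
    ys.append(str(x))
    return ys

# Parameter and return annotations are stored on the function object:
print(typing.get_type_hints(f))   # {'x': <class 'int'>, 'return': typing.List[str]}

# ...but the local variable 'ys' has no retrievable annotation:
print("ys" in f.__annotations__)  # False
```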
A potential solution for this is to improve JIT type inference to make smarter decisions about types of lists and dicts.
If I remember correctly, @ansley is working on this. Should it be moved out of "in discussion"?
🐛 Bug
On occasion we may want to pass, for example, an empty list to a leaf function. In TorchScript, these empty lists are assumed to have type `List[torch.Tensor]`, because TorchScript defaults most unannotated types to `Tensor`. To get around this defaulting behavior, we can either:

- use an explicit variable annotation: `var1: List[str] = []`, or
- use `torch.jit.annotate` to notify the TorchScript compiler: `var1 = torch.jit.annotate(List[str], [])`
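As a concrete sketch of the two workarounds (assuming a recent PyTorch build; the function name `collect` is made up for illustration):

```python
import torch
from typing import List

@torch.jit.script
def collect() -> List[str]:
    # Workaround 1: explicit variable annotation
    xs: List[str] = []
    # Workaround 2: torch.jit.annotate
    ys = torch.jit.annotate(List[str], [])
    xs.append("a")
    ys.append("b")
    return xs + ys
```

Without either annotation, TorchScript would infer `List[Tensor]` for the empty literals and reject the `str` appends.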
However, during `torch.fx` tracing, neither of these methods annotates the variables in the resulting `GraphModule`. Thus, TorchScript will still assume that these are `List[Tensor]` and may fail to compile the resulting module.

To Reproduce
Suppose that `my_identity` is a custom op with the following signature. Then use this in a module and trace it,
yielding:
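The original snippets did not survive here, but the setup can be sketched roughly as follows. Assume `my_identity` takes a tensor plus a list of strings (the exact signature is an assumption), and mark it with `torch.fx.wrap` so tracing records the call as a leaf instead of inlining it:

```python
import torch
import torch.fx
from typing import List

# Stand-in for the custom op; torch.fx.wrap keeps it a leaf call in the traced graph.
torch.fx.wrap("my_identity")

def my_identity(x: torch.Tensor, names: List[str]) -> torch.Tensor:
    return x

class MyModule(torch.nn.Module):
    def forward(self, x):
        names: List[str] = []  # this annotation is lost during fx tracing
        return my_identity(x, names)

gm = torch.fx.symbolic_trace(MyModule())
# The generated code calls my_identity with a bare [] literal, with no
# List[str] annotation attached, so scripting later infers List[Tensor].
print(gm.code)
```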
Finally, run the graph module through TorchScript via `torch.jit.script(graph_module)`.

Expected behavior
TorchScript should ideally compile fine.
Actual behavior
The TorchScript compiler complains about the empty list: