Hi,
Even for a module with just a single Conv2d layer, shape inference produces a result type whose dimensions are all question marks (unknown):
func @forward(%arg0: !torch.vtensor<[1,1,64,64],f32>) -> !torch.vtensor<[?,?,?,?],f32>
even though @annotate_args was used to specify the shape of the input tensor.
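For reference, the concrete output shape that shape inference should be able to recover is fully determined by the convolution parameters. Below is a minimal sketch of the standard Conv2d output-size arithmetic; the kernel size, stride, and padding values are illustrative assumptions, not taken from the attached test:

```python
def conv2d_out_size(in_size, kernel, stride=1, padding=0, dilation=1):
    # Standard PyTorch Conv2d output-size formula for one spatial dimension:
    # floor((in + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1
    return (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# Assuming (hypothetically) a 3x3 kernel with default stride=1, padding=0,
# a 64x64 input gives a 62x62 output, so the result type could in principle
# be refined to !torch.vtensor<[1,C_out,62,62],f32>.
print(conv2d_out_size(64, 3))
```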
Is there a pass available that can resolve this? Currently I am using the torchscript-module-to-torch-backend-pipeline.
The python code for the small test is attached.
Additionally, which individual passes make up the torchscript-module-to-torch-backend-pipeline? Running symbol-dce,torch-prepare-for-globalize-object-graph,torch-globalize-object-graph,symbol-dce,inline,torch-adjust-calling-conventions,torch-inline-global-slots does not produce the same output.
Thanks
Kristof
cnv1TorchMLIR.py.txt