
Unsupported FX Nodes: {'call_function': ['aten.quantized_gru.input', 'quantized.linear_dynamic.default']} #2074

Open
Sukriti-Mehrotra opened this issue Feb 24, 2025 · 4 comments
Labels
module: torchlib Related to the torch/aten function lib in development

Comments

@Sukriti-Mehrotra

Hello,

I am trying to convert a torch.ao-quantized deep learning model (consisting of Linear, GRU, and other layers) to ONNX, but I am running into the error: Unsupported FX nodes: {'call_function': ['aten.quantized_gru.input', 'quantized.linear_dynamic.default']}.

Post-Training Quantization (using torch.ao.quantization.quantize_fx)

The quantization method is post-training dynamic int8 quantization (weights-only) in FX mode. The snippet below quantizes the model and saves it as a .pth file:

import torch
from torch.ao.quantization import QConfigMapping, quantize_fx

# Example input matching the model's expected input shape
input_tensor = torch.randn(batch_size, 3840)

# Weights-only dynamic int8 quantization applied globally
qconfig_mapping = QConfigMapping().set_global(torch.ao.quantization.default_dynamic_qconfig)
model_prepared = quantize_fx.prepare_fx(model_to_quantize, qconfig_mapping, (input_tensor,))
model_quantized = quantize_fx.convert_fx(model_prepared)
torch.save(model_quantized, "fx_quant.pth")
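For reference, the same flow can be exercised end-to-end on a toy model to confirm that dynamic quantization itself succeeds before attempting export. This is a minimal sketch only; the Sequential stack and layer sizes below are illustrative stand-ins, not the model from this issue:

```python
import torch
from torch.ao.quantization import QConfigMapping, default_dynamic_qconfig, quantize_fx

# Toy stand-in for the real model (layer sizes are illustrative only)
model_to_quantize = torch.nn.Sequential(
    torch.nn.Linear(3840, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),
).eval()

input_tensor = torch.randn(1, 3840)

# Same flow as the snippet above: weights-only dynamic int8 in FX mode
qconfig_mapping = QConfigMapping().set_global(default_dynamic_qconfig)
model_prepared = quantize_fx.prepare_fx(model_to_quantize, qconfig_mapping, (input_tensor,))
model_quantized = quantize_fx.convert_fx(model_prepared)

# Weights-only int8 should stay close to the float output on a model this small
with torch.no_grad():
    float_out = model_to_quantize(input_tensor)
    quant_out = model_quantized(input_tensor)
```

If this toy model quantizes and runs, the quantization step itself is not the problem, which points at the export step as in this issue.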

Conversion to ONNX

Upon converting the quantized model to ONNX:

onnx_program = torch.onnx.dynamo_export(model_quantized, input_tensor)

I run into the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/_exporter_legacy.py", line 1222, in dynamo_export
    ).export()
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/_exporter_legacy.py", line 976, in export
    graph_module = self.options.fx_tracer.generate_fx(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 217, in generate_fx
    return self.pre_export_passes(options, model, graph_module, updated_model_args)  # type: ignore[return-value]
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 226, in pre_export_passes
    return _exporter_legacy.common_pre_export_passes(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/_exporter_legacy.py", line 1275, in common_pre_export_passes
    ).analyze(infra.levels.ERROR)
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/analysis/unsupported_nodes.py", line 85, in analyze
    self._lint(analysis_result, diagnostic_level)
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/analysis/unsupported_nodes.py", line 37, in _lint
    self.diagnostic_context.log_and_raise_if_error(diagnostic)
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 356, in log_and_raise_if_error
    raise RuntimeErrorWithDiagnostic(diagnostic)
torch.onnx._internal.diagnostics.infra.context.RuntimeErrorWithDiagnostic: Unsupported FX nodes: {'call_function': ['aten.quantized_gru.input', 'quantized.linear_dynamic.default']}.

Note

  1. The conversion to ONNX of the original model (without quantization) succeeds.
  2. I can find support for the op aten_quantized_gru_cell. Is it possible to make use of this, and if so, how?

report_dynamo_export.sarif

{
 "runs":[
  {
   "tool":{
    "driver":{
     "name":"torch.onnx.dynamo_export",
     "contents":[
      "localizedData",
      "nonLocalizedData"
     ],
     "language":"en-US",
     "rules":[
      {
       "id":"FXE0012",
       "fullDescription":{
        "text":"Result from FX graph analysis to reveal unsupported FX nodes.",
        "markdown":"This error indicates that an FX graph contains one or more unsupported nodes. The error message\nis typically accompanied by a list of the unsupported nodes found during analysis.\n\nTo resolve this error, you can try resolving each individual unsupported node error by following\nthe suggestions by its diagnostic. Typically, options include:\n\n- If exists, apply the auto-fix suggested by the diagnostic. TODO: this part is not available yet.\n- Rewrite the model using only supported PyTorch operators or functions.\n- Follow this [guide](https://pytorch.org/docs/stable/onnx.html#onnx-script-functions) to write and\n  register a custom symbolic function for the unsupported call_function FX node.\n"
       },
       "name":"unsupported-fx-node-analysis",
       "shortDescription":{
        "text":"Result from FX graph analysis to reveal unsupported FX nodes."
       }
      },
      {
       "id":"FXE0015",
       "fullDescription":{
        "text":"Determine if type promotion is required for the FX node. Insert cast nodes if needed.",
        "markdown":"This diagnostic monitors the node-level type promotion insertion process. In PyTorch, there is an automatic process called implicit type promotion,\nwhere the input types of an operator are promoted to a common type. The determination of the common type is based on the type promotion rule specific to each operator.\nTo learn more about PyTorch's type promotion rules, refer to the [elementwise_dtypes doc](https://github.com/pytorch/pytorch/blob/f044613f78df713fb57f70c608483c9f10ad332e/torch/_prims_common/__init__.py#L1252-L1335)\nand [torch._refs ops](https://github.com/pytorch/pytorch/blob/a475ea4542dfe961c9d097e33ab5041f61c8c17f/torch/_refs/__init__.py#L484).\n\nHowever, implicit type promotion is not supported in ONNX. Therefore, to replicate the PyTorch behavior, we need to explicitly insert cast nodes.\nThis diagnostic tracks the process of node-level type promotion insertion.\n\nThe type promotion rules used by this process can be found in `torch/onnx/_internal/fx/passes/type_promotion.py.`\nTo update or add new type promotion rules, please refer to the [Note: Update type promotion rule] section.\n"
       },
       "name":"fx-node-insert-type-promotion",
       "shortDescription":{
        "text":"Determine if type promotion is required for the FX node. Insert cast nodes if needed."
       }
      },
      {
       "id":"FXE0010",
       "fullDescription":{
        "text":"FX graph transformation during ONNX export before converting from FX IR to ONNX IR.",
        "markdown":"This diagnostic tracks the FX passes executed during the ONNX export process prior\nto converting from FX IR (Intermediate Representation) to ONNX IR.\n\nUnder the scope of ONNX export, an FX pass refers to a specific transformation applied to the FX GraphModule.\nThe primary aim of these passes is to streamline the graph into a format that aligns more with the ONNX IR.\nMoreover, these passes work to substitute unsupported FX IR features with those recognized and endorsed by\nONNX IR. Common transformations include, but aren't limited to, decomposition, functionalization and\ntype promotion.\n\nFor those who are interested in a comprehensive log detailing the modifications made during these passes,\nthere are a couple of options:\n\n- Set DiagnosticOptions.verbosity_level to logging.DEBUG.\n- Activate the environment variable TORCH_LOGS='onnx_diagnostics'.\n\nHowever, it's noteworthy that by default, such detailed logging is turned off. The primary reason being\nits considerable impact on performance.\n\nFor an in-depth understanding of each specific pass, please refer to the directory: torch/onnx/_internal/fx/passes.\n"
       },
       "name":"fx-pass",
       "shortDescription":{
        "text":"FX graph transformation during ONNX export before converting from FX IR to ONNX IR."
       }
      }
     ],
     "version":"2.5.1+cu124"
    }
   },
   "language":"en-US",
   "newlineSequences":[
    "\r\n",
    "\n"
   ],
   "results":[
    {
     "message":{
      "markdown":"Running Decompose pass. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature Transform.run\n- self: <class 'torch.onnx._internal.fx.passes.decomp.Decompose'>\n- args: Tuple[length=1](\nTensor(f32[1, 3840]),\n)\nFor detailed logging of graph modifications by this pass, either set `DiagnosticOptions.verbosity_level` to `logging.DEBUG` or use the environment variable `TORCH_LOGS='onnx_diagnostics'`.\n## Return values\ntorch.fx.GraphModule(<lambda>)",
      "text":"Running Decompose pass. "
     },
     "codeFlows":[
      {
       "threadFlows":[
        {
         "locations":[]
        }
       ]
      }
     ],
     "graphs":[],
     "kind":"informational",
     "level":"none",
     "locations":[
      {
       "message":{
        "text":"Transform.run"
       },
       "physicalLocation":{
        "artifactLocation":{
         "uri":"/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/_pass.py"
        },
        "region":{
         "snippet":{
          "text":"@diagnostics.diagnose_call("
         },
         "startLine":240
        }
       }
      }
     ],
     "properties":{
      "tags":[]
     },
     "ruleId":"FXE0010",
     "stacks":[]
    },
    {
     "message":{
      "markdown":"Running Functionalize pass. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature Transform.run\n- self: <class 'torch.onnx._internal.fx.passes.functionalization.Functionalize'>\n- args: Tuple[length=1](\nTensor(f32[1, 3840]),\n)\nFor detailed logging of graph modifications by this pass, either set `DiagnosticOptions.verbosity_level` to `logging.DEBUG` or use the environment variable `TORCH_LOGS='onnx_diagnostics'`.\n## Return values\ntorch.fx.GraphModule(<lambda>)",
      "text":"Running Functionalize pass. "
     },
     "codeFlows":[
      {
       "threadFlows":[
        {
         "locations":[]
        }
       ]
      }
     ],
     "graphs":[],
     "kind":"informational",
     "level":"none",
     "locations":[
      {
       "message":{
        "text":"Transform.run"
       },
       "physicalLocation":{
        "artifactLocation":{
         "uri":"/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/_pass.py"
        },
        "region":{
         "snippet":{
          "text":"@diagnostics.diagnose_call("
         },
         "startLine":240
        }
       }
      }
     ],
     "properties":{
      "tags":[]
     },
     "ruleId":"FXE0010",
     "stacks":[]
    },
    {
     "message":{
      "markdown":"Running RemoveInputMutation pass. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature Transform.run\n- self: <class 'torch.onnx._internal.fx.passes.functionalization.RemoveInputMutation'>\n- args: Tuple[length=1](\nTensor(f32[1, 3840]),\n)\nFor detailed logging of graph modifications by this pass, either set `DiagnosticOptions.verbosity_level` to `logging.DEBUG` or use the environment variable `TORCH_LOGS='onnx_diagnostics'`.\n## Return values\ntorch.fx.GraphModule(<lambda>)",
      "text":"Running RemoveInputMutation pass. "
     },
     "codeFlows":[
      {
       "threadFlows":[
        {
         "locations":[]
        }
       ]
      }
     ],
     "graphs":[],
     "kind":"informational",
     "level":"none",
     "locations":[
      {
       "message":{
        "text":"Transform.run"
       },
       "physicalLocation":{
        "artifactLocation":{
         "uri":"/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/_pass.py"
        },
        "region":{
         "snippet":{
          "text":"@diagnostics.diagnose_call("
         },
         "startLine":240
        }
       }
      }
     ],
     "properties":{
      "tags":[]
     },
     "ruleId":"FXE0010",
     "stacks":[]
    },
    {
     "message":{
      "markdown":"Skipped l_x_: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: <class 'torch.onnx._internal.fx.passes.type_promotion._TypePromotionInterpreter'>\n- node: fx.Node(arg0)[placeholder]:Tensor(f32[1, 3840])\n## Return values\nTensor(f32[1, 3840])",
      "text":"Skipped l_x_: not a call_function."
     },
     "codeFlows":[
      {
       "threadFlows":[
        {
         "locations":[]
        }
       ]
      }
     ],
     "graphs":[],
     "kind":"informational",
     "level":"none",
     "locations":[
      {
       "message":{
        "text":"_TypePromotionInterpreter.run_node"
       },
       "physicalLocation":{
        "artifactLocation":{
         "uri":"/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/passes/type_promotion.py"
        },
        "region":{
         "snippet":{
          "text":"@diagnostics.diagnose_call("
         },
         "startLine":1607
        }
       }
      }
     ],
     "properties":{
      "tags":[]
     },
     "ruleId":"FXE0015",
     "stacks":[]
    },
    {
     "message":{
      "markdown":"Skipped for fx.Node(aten.unsqueeze.default)[call_function]:Tensor(f32[1, 1, 3840]): Cannot find type promotion rule for op: aten.unsqueeze.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: <class 'torch.onnx._internal.fx.passes.type_promotion._TypePromotionInterpreter'>\n- node: fx.Node(aten.unsqueeze.default)[call_function]:Tensor(f32[1, 1, 3840])\n## Return values\nTensor(f32[1, 1, 3840])",
      "text":"Skipped for fx.Node(aten.unsqueeze.default)[call_function]:Tensor(f32[1, 1, 3840]): Cannot find type promotion rule for op: aten.unsqueeze.default"
     },
     "codeFlows":[
      {
       "threadFlows":[
        {
         "locations":[]
        }
       ]
      }
     ],
     "graphs":[],
     "kind":"informational",
     "level":"none",
     "locations":[
      {
       "message":{
        "text":"_TypePromotionInterpreter.run_node"
       },
       "physicalLocation":{
        "artifactLocation":{
         "uri":"/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/passes/type_promotion.py"
        },
        "region":{
         "snippet":{
          "text":"@diagnostics.diagnose_call("
         },
         "startLine":1607
        }
       }
      }
     ],
     "properties":{
      "tags":[]
     },
     "ruleId":"FXE0015",
     "stacks":[]
    },
    {
     "message":{
      "markdown":"Skipped _param_constant0: not a call_function.\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: <class 'torch.onnx._internal.fx.passes.type_promotion._TypePromotionInterpreter'>\n- node: fx.Node(_param_constant0)[get_attr]:None\n## Return values\nParameter(Tensor(f32[1280, 1, 960]))",
      "text":"Skipped _param_constant0: not a call_function."
     },
     "codeFlows":[
      {
       "threadFlows":[
        {
         "locations":[]
        }
       ]
      }
     ],
     "graphs":[],
     "kind":"informational",
     "level":"none",
     "locations":[
      {
       "message":{
        "text":"_TypePromotionInterpreter.run_node"
       },
       "physicalLocation":{
        "artifactLocation":{
         "uri":"/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/passes/type_promotion.py"
        },
        "region":{
         "snippet":{
          "text":"@diagnostics.diagnose_call("
         },
         "startLine":1607
        }
       }
      }
     ],
     "properties":{
      "tags":[]
     },
     "ruleId":"FXE0015",
     "stacks":[]
    },
    {
     "message":{
      "markdown":"Skipped for fx.Node(aten.convolution.default)[call_function]:Tensor(f32[1, 1280, 7]): Cannot find type promotion rule for op: aten.convolution.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: <class 'torch.onnx._internal.fx.passes.type_promotion._TypePromotionInterpreter'>\n- node: fx.Node(aten.convolution.default)[call_function]:Tensor(f32[1, 1280, 7])\n## Return values\nTensor(f32[1, 1280, 7])",
      "text":"Skipped for fx.Node(aten.convolution.default)[call_function]:Tensor(f32[1, 1280, 7]): Cannot find type promotion rule for op: aten.convolution.default"
     },
     "codeFlows":[
      {
       "threadFlows":[
        {
         "locations":[]
        }
       ]
      }
     ],
     "graphs":[],
     "kind":"informational",
     "level":"none",
     "locations":[
      {
       "message":{
        "text":"_TypePromotionInterpreter.run_node"
       },
       "physicalLocation":{
        "artifactLocation":{
         "uri":"/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/passes/type_promotion.py"
        },
        "region":{
         "snippet":{
          "text":"@diagnostics.diagnose_call("
         },
         "startLine":1607
        }
       }
      }
     ],
     "properties":{
      "tags":[]
     },
     "ruleId":"FXE0015",
     "stacks":[]
    },
    {
     "message":{
      "markdown":"Type promotion not needed for relu. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: <class 'torch.onnx._internal.fx.passes.type_promotion._TypePromotionInterpreter'>\n- node: fx.Node(aten.relu.default)[call_function]:Tensor(f32[1, 1280, 7])\nFound type promotion rule: ElementwiseTypePromotionRule('aten', 'relu', [0], [], ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT)\nArgument convolution is not promoted. Already torch.float32.\n## Return values\nTensor(f32[1, 1280, 7])",
      "text":"Type promotion not needed for relu. "
     },
     "codeFlows":[
      {
       "threadFlows":[
        {
         "locations":[]
        }
       ]
      }
     ],
     "graphs":[],
     "kind":"informational",
     "level":"none",
     "locations":[
      {
       "message":{
        "text":"_TypePromotionInterpreter.run_node"
       },
       "physicalLocation":{
        "artifactLocation":{
         "uri":"/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/passes/type_promotion.py"
        },
        "region":{
         "snippet":{
          "text":"@diagnostics.diagnose_call("
         },
         "startLine":1607
        }
       }
      }
     ],
     "properties":{
      "tags":[]
     },
     "ruleId":"FXE0015",
     "stacks":[]
    },
    {
     "message":{
      "markdown":"Skipped for fx.Node(aten.detach.default)[call_function]:Tensor(f32[1, 1280, 7]): Cannot find type promotion rule for op: aten.detach.default\n\n## Additional Message:\n\n## Function Signature\n### Function Signature _TypePromotionInterpreter.run_node\n- self: <class 'torch.onnx._internal.fx.passes.type_promotion._TypePromotionInterpreter'>\n- node: fx.Node(aten.detach.default)[call_function]:Tensor(f32[1, 1280, 7])\n## Return values\nTensor(f32[1, 1280, 7])",
      "text":"Skipped for fx.Node(aten.detach.default)[call_function]:Tensor(f32[1, 1280, 7]): Cannot find type promotion rule for op: aten.detach.default"
     },
 ................................(too long to paste here)

Is there any guideline on how to solve this problem and implement support for the aforementioned operations?
Thank you, and sorry for the long post.

@justinchuby
Collaborator

I think they can be implemented. Before implementing the functions, we need to know what the current behavior is:

Please test with torch.onnx.export(..., dynamo=True, report=True) using the latest torch-nightly. Attach the generated report if there is an error. Thanks!

@justinchuby justinchuby added the module: torchlib Related to the torch/aten function lib in development label Feb 24, 2025
@justinchuby
Collaborator

Please also refer to https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html and https://github.com/pytorch/ao/blob/2a3fbffc461f30751552006c864c57a80b297ca6/tutorials/developer_api_guide/export_to_executorch.py#L79-L80 for quantization in PyTorch 2.

@justinchuby
Collaborator

One thing to note is that GRU and other RNN layers are currently unsupported.
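One possible interim workaround (a sketch only, assuming the Linear layers are the main quantization target) is to exclude GRU from the qconfig mapping, so the GRU stays in float and no quantized GRU op reaches the exporter. `TinyModel` below is a hypothetical stand-in for the real Linear + GRU model:

```python
import torch
from torch.ao.quantization import QConfigMapping, default_dynamic_qconfig, quantize_fx

class TinyModel(torch.nn.Module):
    # Hypothetical stand-in for the real Linear + GRU model
    def __init__(self):
        super().__init__()
        self.gru = torch.nn.GRU(16, 16, batch_first=True)
        self.fc = torch.nn.Linear(16, 4)

    def forward(self, x):
        y = self.gru(x)[0]
        return self.fc(y)

model = TinyModel().eval()
example = (torch.randn(2, 5, 16),)

# Quantize everything dynamically, but leave nn.GRU in float precision
qconfig_mapping = (
    QConfigMapping()
    .set_global(default_dynamic_qconfig)
    .set_object_type(torch.nn.GRU, None)  # None means: do not quantize this type
)
prepared = quantize_fx.prepare_fx(model, qconfig_mapping, example)
quantized = quantize_fx.convert_fx(prepared)

# The GRU stays a float nn.GRU, so no aten.quantized_gru node is produced
out = quantized(*example)
```

This trades away the GRU's quantization but keeps the dynamically quantized Linear layers, which may be enough to unblock export while GRU support is missing.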

@Sukriti-Mehrotra
Author

Sukriti-Mehrotra commented Feb 25, 2025

I think they can be implemented. Before implementing the functions, we need to know what the current behavior is:

Please test with torch.onnx.export(..., dynamo=True, report=True) using the latest torch-nightly. Attach the generated report if there is an error. Thanks!

I ran the command torch.onnx.export(model, input_tensor, dynamo=True, report=True), which generated the following report:

PyTorch ONNX Conversion Error Report

❌ Obtain model graph with `torch.export.export(..., strict=False)`
❌ Obtain model graph with `torch.export.export(..., strict=True)`
❌ Obtain model graph with `torch.jit.trace`
⚪ Decompose operators for ONNX compatibility
⚪ Translate the graph into ONNX
⚪ Run `onnx.checker` on the ONNX model
⚪ Execute the model with ONNX Runtime
⚪ Validate model output accuracy


Error message:

# ⚠️ Errors from strategy 'TorchExportNonStrictStrategy': -----------------------

Traceback (most recent call last):

  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 110, in __call__
    exported_program = self._capture(model, args, kwargs, dynamic_shapes)

  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 186, in _capture
    return torch.export.export(

  File "/usr/local/lib/python3.10/dist-packages/torch/export/__init__.py", line 368, in export
    return _export(

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1035, in wrapper
    raise e

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1008, in wrapper
    ep = fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/export/exported_program.py", line 128, in wrapper
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1970, in _export
    return _export_for_training(

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1035, in wrapper
    raise e

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1008, in wrapper
    ep = fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/export/exported_program.py", line 128, in wrapper
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1834, in _export_for_training
    export_artifact = export_func(  # type: ignore[operator]

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1756, in _non_strict_export
    with _fakify_script_objects(mod, fake_args, fake_kwargs, fake_mode) as (

  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)

  File "/usr/local/lib/python3.10/dist-packages/torch/_export/non_strict_utils.py", line 498, in _fakify_script_objects
    fake_script_obj = _maybe_fakify_obj(obj)

  File "/usr/local/lib/python3.10/dist-packages/torch/_export/non_strict_utils.py", line 482, in _maybe_fakify_obj
    fake_obj = torch._library.fake_class_registry.maybe_to_fake_obj(fake_mode, obj)

  File "/usr/local/lib/python3.10/dist-packages/torch/_library/fake_class_registry.py", line 142, in maybe_to_fake_obj
    flat_x = x.__obj_flatten__()  # type: ignore[attr-defined]

AttributeError: __torch__.torch.classes.rnn.CellParamsBase (of Python compilation unit at: 0) does not have a field with name '__obj_flatten__'


# ⚠️ Errors from strategy 'TorchExportStrategy': -----------------------

Traceback (most recent call last):

  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 110, in __call__
    exported_program = self._capture(model, args, kwargs, dynamic_shapes)

  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 145, in _capture
    return torch.export.export(

  File "/usr/local/lib/python3.10/dist-packages/torch/export/__init__.py", line 368, in export
    return _export(

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1035, in wrapper
    raise e

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1008, in wrapper
    ep = fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/export/exported_program.py", line 128, in wrapper
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1970, in _export
    return _export_for_training(

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1035, in wrapper
    raise e

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1008, in wrapper
    ep = fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/export/exported_program.py", line 128, in wrapper
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1834, in _export_for_training
    export_artifact = export_func(  # type: ignore[operator]

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1371, in _strict_export_lower_to_aten_ir
    aten_export_artifact = lower_to_aten_callback(

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1564, in _export_to_aten_ir_make_fx
    gm, graph_signature = transform(_make_fx_helper)(

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1485, in _make_fx_helper
    gm = make_fx(

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2196, in wrapped
    return make_fx_tracer.trace(f, *args)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2134, in trace
    return self._trace_inner(f, *args)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 2105, in _trace_inner
    t = dispatch_trace(

  File "/usr/local/lib/python3.10/dist-packages/torch/_compile.py", line 32, in inner
    return disable_fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1138, in dispatch_trace
    graph = tracer.trace(root, concrete_args)  # type: ignore[arg-type]

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1694, in trace
    res = super().trace(root, concrete_args)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/_symbolic_trace.py", line 843, in trace
    (self.create_arg(fn(*args)),),

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1193, in wrapped
    out = f(*tensors)  # type:ignore[call-arg]

  File "<string>", line 1, in <lambda>

  File "/usr/local/lib/python3.10/dist-packages/torch/export/_trace.py", line 1469, in wrapped_fn
    return tuple(flat_fn(*args))

  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
    tree_out = fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 875, in functional_call
    out = PropagateUnbackedSymInts(mod).run(

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 167, in run
    self.env[node] = self.run_node(node)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/symbolic_shapes.py", line 6779, in run_node
    result = super().run_node(n)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 230, in run_node
    return getattr(self, n.op)(n.target, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 359, in call_module
    return submod(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1764, in call_module
    return Tracer.call_module(self, m, forward, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/_symbolic_trace.py", line 539, in call_module
    ret_val = forward(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/_symbolic_trace.py", line 814, in forward
    return _orig_module_call(mod, *args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py", line 921, in forward
    return self.forward_tensor(input, hx)

  File "/usr/local/lib/python3.10/dist-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py", line 892, in forward_tensor
    output, hidden = self.forward_impl(

  File "/usr/local/lib/python3.10/dist-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py", line 855, in forward_impl
    result = torch.quantized_gru(

  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 1123, in __call__
    return self._op(*args, **(kwargs or {}))

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1241, in __torch_function__
    return func(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 1123, in __call__
    return self._op(*args, **(kwargs or {}))

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1288, in __torch_function__
    return func(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 1123, in __call__
    return self._op(*args, **(kwargs or {}))

  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 840, in handler
    return torch._library.utils.handle_dispatch_mode(

  File "/usr/local/lib/python3.10/dist-packages/torch/_library/utils.py", line 295, in handle_dispatch_mode
    return curr_mode.__torch_dispatch__(op_overload, overload_types, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_stats.py", line 21, in wrapper
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1343, in __torch_dispatch__
    return proxy_call(self, func, self.pre_dispatch, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 912, in proxy_call
    out = func(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 723, in __call__
    return self._op(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_stats.py", line 21, in wrapper
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1276, in __torch_dispatch__
    return self.dispatch(func, types, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1816, in dispatch
    return self._cached_dispatch_impl(func, types, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1367, in _cached_dispatch_impl
    entry = cache.get(key, None)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1027, in __eq__
    return isinstance(other, _DispatchCacheKey) and self.key == other.key

NotImplementedError: '__eq__' is not implemented for __torch__.torch.classes.rnn.CellParamsBase

While executing %l__self___feature_gru1 : [num_users=1] = call_module[target=L__self___feature_gru1](args = (%permute,), kwargs = {})
Original traceback:
  File "<eval_with_key>.1 from <eval_with_key>.0:10 in forward", line 12, in forward
    feature_gru1 = self.feature_gru1(permute);  permute = None



# ⚠️ Errors from strategy 'JitTraceConvertStrategy': -----------------------

Traceback (most recent call last):

  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 110, in __call__
    exported_program = self._capture(model, args, kwargs, dynamic_shapes)

  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 292, in _capture
    return _torchscript_converter.TS2EPConverter(

  File "/usr/local/lib/python3.10/dist-packages/torch/_export/converter.py", line 1400, in __init__
    self.ts_graph, self.params, _, _ = _create_jit_graph(ts_model, sample_args)

  File "/usr/local/lib/python3.10/dist-packages/torch/_export/converter.py", line 92, in _create_jit_graph
    in_vars, _ = torch.jit._flatten(args_params)

RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: torch._C.ScriptObject


# ⚠️ Errors from strategy 'LegacyDynamoStrategy': -----------------------

Traceback (most recent call last):

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 2384, in _dispatch_impl
    r = func(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 723, in __call__
    return self._op(*args, **kwargs)

NotImplementedError: aten::quantized_gru.input: attempted to run this operator with Meta tensors, but there was no fake impl or Meta kernel registered. You may have run into this message while using an operator with PT2 compilation APIs (torch.compile/torch.export); in order to use this operator with those APIs you'll need to add a fake impl. Please see the following for next steps:  https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html


During handling of the above exception, another exception occurred:


Traceback (most recent call last):

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 2591, in run_node
    return nnmodule(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py", line 921, in forward
    return self.forward_tensor(input, hx)

  File "/usr/local/lib/python3.10/dist-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py", line 892, in forward_tensor
    output, hidden = self.forward_impl(

  File "/usr/local/lib/python3.10/dist-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py", line 855, in forward_impl
    result = torch.quantized_gru(

  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 1123, in __call__
    return self._op(*args, **(kwargs or {}))

  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_stats.py", line 21, in wrapper
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1276, in __torch_dispatch__
    return self.dispatch(func, types, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1816, in dispatch
    return self._cached_dispatch_impl(func, types, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1386, in _cached_dispatch_impl
    output = self._dispatch_impl(func, types, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 2386, in _dispatch_impl
    return maybe_run_unsafe_fallback(not_implemented_error)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 2368, in maybe_run_unsafe_fallback
    raise UnsupportedOperatorException(func)

torch._subclasses.fake_tensor.UnsupportedOperatorException: aten.quantized_gru.input


The above exception was the direct cause of the following exception:


Traceback (most recent call last):

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 2471, in get_fake_value
    ret_val = wrap_fake_exception(

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 2017, in wrap_fake_exception
    return fn()

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 2472, in <lambda>
    lambda: run_node(tx.output, node, args, kwargs, nnmodule)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 2604, in run_node
    raise RuntimeError(make_error_message(e)).with_traceback(

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 2591, in run_node
    return nnmodule(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py", line 921, in forward
    return self.forward_tensor(input, hx)

  File "/usr/local/lib/python3.10/dist-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py", line 892, in forward_tensor
    output, hidden = self.forward_impl(

  File "/usr/local/lib/python3.10/dist-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py", line 855, in forward_impl
    result = torch.quantized_gru(

  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 1123, in __call__
    return self._op(*args, **(kwargs or {}))

  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_stats.py", line 21, in wrapper
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1276, in __torch_dispatch__
    return self.dispatch(func, types, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1816, in dispatch
    return self._cached_dispatch_impl(func, types, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1386, in _cached_dispatch_impl
    output = self._dispatch_impl(func, types, args, kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 2386, in _dispatch_impl
    return maybe_run_unsafe_fallback(not_implemented_error)

  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 2368, in maybe_run_unsafe_fallback
    raise UnsupportedOperatorException(func)

RuntimeError: Failed running call_module L__self___feature_gru1(*(FakeTensor(..., size=(1, ((s0//480)) - 1, 1024), grad_fn=<PermuteBackward0>),), **{}):
aten.quantized_gru.input


During handling of the above exception, another exception occurred:


Traceback (most recent call last):

  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 110, in __call__
    exported_program = self._capture(model, args, kwargs, dynamic_shapes)

  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 325, in _capture
    graph_module, _ = torch._dynamo.export(

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 1569, in inner
    result_traced = opt_f(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 822, in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 400, in __call__
    raise e

  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 387, in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
    return self._torchdynamo_orig_callable(

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
    return _compile(

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
    return _compile_inner(code, one_graph, hooks, transform)

  File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 95, in wrapper_function
    return function(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
    out_code = transform_code_object(code, transform)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
    transformations(instructions, code_options)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
    return fn(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 662, in transform
    tracer.run()

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
    super().run()

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
    while self.step():

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
    self.dispatch_table[inst.opcode](self, inst)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
    return inner_fn(self, inst)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1658, in CALL_FUNCTION
    self.call_function(fn, args, {})

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
    self.push(fn.call_function(self, args, kwargs))  # type: ignore[arg-type]

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py", line 415, in call_function
    return wrap_fx_proxy(

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 2153, in wrap_fx_proxy
    return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 2219, in wrap_fx_proxy_cls
    return _wrap_fx_proxy(

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 2315, in _wrap_fx_proxy
    example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 2518, in get_fake_value
    unimplemented(

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/exc.py", line 317, in unimplemented
    raise Unsupported(msg, case_name=case_name)

torch._dynamo.exc.Unsupported: unsupported operator: aten.quantized_gru.input (see https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0 for how to fix)

from user code:
   File "<eval_with_key>.1 from <eval_with_key>.0:10 in forward", line 12, in forward
    feature_gru1 = self.feature_gru1(permute);  permute = None

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

