Unsupported FX Nodes: {'call_function': ['aten.quantized_gru.input', 'quantized.linear_dynamic.default']} #2074
Comments
I think they can be implemented. Before implementing the functions, we need to know what the current behavior is: please test with
Please also refer to https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html and https://github.com/pytorch/ao/blob/2a3fbffc461f30751552006c864c57a80b297ca6/tutorials/developer_api_guide/export_to_executorch.py#L79-L80 for quantization in PyTorch 2.
One thing to note is that GRU and other RNN layers are currently unsupported.
I ran the below command and got the attached PyTorch ONNX Conversion Error Report.
Hello,
I am trying to convert a torchao-quantized deep learning model (consisting of Linear, GRU layers, etc.) to ONNX, but I am running into the error: Unsupported FX nodes: {'call_function': ['aten.quantized_gru.input', 'quantized.linear_dynamic.default']}.
Post-Training Quantization (using torch.ao.quantization.quantize_fx)
The quantization method used is Post-Training Dynamic Int8 Quantization (weights-only) in FX mode.
Adding the snippet that quantizes the model and saves it as .pth below:
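The snippet itself did not survive the page scrape. As a stand-in, here is a minimal sketch of FX-mode post-training dynamic (weights-only) quantization of a GRU + Linear model — the model definition and layer sizes here are hypothetical, not the issue author's:

```python
import torch
from torch.ao.quantization import QConfigMapping, default_dynamic_qconfig, quantize_fx

# Hypothetical stand-in for the issue's model (the real one is larger).
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = torch.nn.GRU(16, 32, batch_first=True)
        self.fc = torch.nn.Linear(32, 4)

    def forward(self, x):
        out, _ = self.gru(x)
        return self.fc(out[:, -1])

model = TinyModel().eval()
example = torch.randn(1, 8, 16)

# Apply a weights-only dynamic int8 qconfig globally in FX graph mode.
qconfig_mapping = QConfigMapping().set_global(default_dynamic_qconfig)
prepared = quantize_fx.prepare_fx(model, qconfig_mapping, example_inputs=(example,))
quantized = quantize_fx.convert_fx(prepared)

# Save the quantized weights as a .pth checkpoint.
torch.save(quantized.state_dict(), "model_quantized.pth")
```

Exporting a model quantized along these lines is what produces the quantized_gru / linear_dynamic nodes named in the error.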
Conversion to ONNX
Upon converting the quantized model to ONNX, I run into the below error:
Note
There is an existing aten_quantized_gru_cell implementation. Is it possible to make use of this, and if yes, then how?
report_dynamo_export.sarif
Is there any guideline on how to solve this problem and implement support for the aforementioned operations?
Thank you, and sorry for the long post.