Fix FoldTransposeIntoQuantInit Transformation #78
Conversation
Previously, the transform was seemingly applied to all Quant-Transpose patterns, irrespective of whether *all* the inputs are actually initializers. This should now be fixed by testing more strictly for the node being a QuantInit.
Hm, all tests fail? What is going on? It seems not to be my fault.
I have just added a new unit test validating both situations, i.e., keeping and removing the …
Hm, again some unrelated tests fail. This time it is some …
The previous CI failures were due to an … The last CI failures seem to be due to some intermittent server failure; re-running was enough to make all tests pass. Otherwise, the PR looks all good to me, thanks @iksnagreb! I'll only add one comment here about this bit before I hit merge:
The only cases I've previously seen that would give a … In theory, one could take advantage of shape broadcasting to create quantization parameters that are neither scalar nor matching the number of dimensions of the target tensor, as this is already mis-specified in the Quant node spec. I'll update the Quant node spec to permit only scalar OR ndim == tensor ndim cases.
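The "scalar OR ndim == tensor ndim" rule mentioned above can be sketched as a small validity check. This is an illustrative, hypothetical helper (`valid_quant_param_shape` is not part of the QONNX API); it only checks dimensionality, assuming broadcasting handles the per-axis sizes:

```python
import numpy as np

def valid_quant_param_shape(param: np.ndarray, tensor: np.ndarray) -> bool:
    # Hypothetical check following the tightened spec discussed above:
    # a quantization parameter (e.g., the scale) must be a scalar, or it
    # must have the same number of dimensions as the target tensor so
    # that broadcasting semantics stay unambiguous.
    return param.ndim == 0 or param.ndim == tensor.ndim

x = np.zeros((1, 8, 4, 4), dtype=np.float32)
assert valid_quant_param_shape(np.float32(0.5), x)        # scalar scale: OK
assert valid_quant_param_shape(np.ones((1, 8, 1, 1)), x)  # per-channel scale: OK
assert not valid_quant_param_shape(np.ones((8,)), x)      # ndim mismatch: rejected
```

A shape like `(8,)` would still broadcast against `(1, 8, 4, 4)` under NumPy rules (aligned to the trailing axes), which is exactly the ambiguity the stricter spec rules out.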
Thank you!
Intends to fix problems related to the `FoldTransposeIntoQuantInit` transformation. The transformation should only be applied if the `Transpose` node actually follows a so-called `QuantInit`; these are `Quant` (or `BipolarQuant`) nodes where all inputs are initializers. Currently, the transform is always applied, even if just some inputs have initializers. This causes problems with the shape inference, as the remaining `Quant` node does not transpose its runtime inputs. This is fixed by making the test for a node being a `QuantInit` more strict.

I have tested this by running the unit tests for QONNX as well as those under `tests/transformation` over at FINN and did not observe any issues so far. For more context please see the following issues: #77, Xilinx/finn#878, Xilinx/finn#892.
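The stricter `QuantInit` test described above can be sketched roughly as follows. This is a simplified, hypothetical helper operating on plain op-type and name collections rather than the actual QONNX `ModelWrapper` API; it only illustrates the "ALL inputs are initializers" condition:

```python
def is_quant_init(op_type, input_names, initializer_names):
    """Illustrative sketch: a Quant/BipolarQuant node counts as a
    'QuantInit' only if EVERY one of its inputs is a graph initializer
    (i.e., a constant known ahead of time). If even one input is a
    runtime tensor, folding the Transpose would leave that input
    un-transposed and break shape inference."""
    return op_type in ("Quant", "BipolarQuant") and all(
        name in initializer_names for name in input_names
    )

# All four inputs are initializers -> folding is safe.
assert is_quant_init("Quant", ["x", "scale", "zeropt", "bits"],
                     {"x", "scale", "zeropt", "bits"})
# The data input is a runtime tensor -> must NOT fold the Transpose.
assert not is_quant_init("Quant", ["act", "scale", "zeropt", "bits"],
                         {"scale", "zeropt", "bits"})
# Other op types are never QuantInits.
assert not is_quant_init("Relu", ["x"], {"x"})
```

The buggy behavior corresponds to testing `any(...)` (or no membership test at all) instead of `all(...)`: the transform then fires even when only some inputs are constants.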