[QNN][TFLite] Added support for fused-bias and quantized input in TRANSPOSE_CONV for TFLite. #6523
Closed
jainris wants to merge 3 commits into apache:master.
Conversation
…NSPOSE_CONV for TFLite.

* Added dilation_value attribute to dilate operator of Relay/TOPI. (Enables custom value for dilation, instead of always 0)
* Added tests for dilation_value of dilate operator in Relay and TOPI.
* Added support for quantized input in TRANSPOSE_CONV operator of TFLite.
* Added tests for quantized input in TRANSPOSE_CONV operator of TFLite.
Contributor
Author
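For reference, a minimal sketch of what the dilation_value attribute enables (assuming the post-merge relay.nn.dilate signature; the shapes and the -1.0 fill value below are invented for illustration):

```python
# Minimal sketch (not taken from this PR) of the dilation_value attribute on
# relay.nn.dilate; shapes and the -1.0 fill value are arbitrary examples.
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 2, 2), dtype="float32")
# Insert gaps between elements along each axis and fill them with -1.0
# instead of the previously hard-coded 0.
y = relay.nn.dilate(x, strides=(1, 2, 2), dilation_value=-1.0)

mod = tvm.IRModule.from_expr(relay.Function([x], y))
# Inferred output type: Tensor[(1, 3, 3), float32]
print(relay.transform.InferType()(mod))
```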
mbaret requested changes on Sep 22, 2020
Contributor
also ping @siju-samuel
Contributor
Dilation part is good. I am not sure about the conv2d transpose portion. My concern is that we now have to replicate this logic for different framework parsers. My suggestion would be to add a qnn.conv2d_transpose op and perform this conversion through a Relay legalization. For now, we can make the transformation for all targets, not just specifically for ARM. This will keep the option open to improve the schedule of conv2d_transpose as a whole if needed.
Contributor
Author
Quantized Transpose Convolution code needs some changes, so bringing …
giuseros pushed a commit to giuseros/incubator-tvm that referenced this pull request on Nov 11, 2020:

This work is based on @jainris initial PR: apache#6523. I added a relay.qnn.conv2d_transpose node. The strategy I followed is to convert to int16 and invoke nn.conv2d_transpose (which already exists in relay). Main changes:
- The node declaration lives in relay/qnn/op/convolution_transpose.cc
- Cast int8->int16 and subsequent offset removal is in tvm/relay/qnn/op/legalizations.py
- I added and tested the operator in the tflite front-end
- I added a unit-test in Relay for qnn.conv2d_transpose

Co-authored-by: Rishabh Jain <jainris@users.noreply.github.com>
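The strategy described in that commit message can be sketched roughly as follows. This is a simplified illustration, not the actual code in tvm/relay/qnn/op/legalizations.py; the function name is made up, and in TVM it would be hooked up through the QNN legalization registration rather than called directly:

```python
# Simplified sketch of the described legalization: widen int8 to int16,
# subtract the zero points, then fall back to the existing nn.conv2d_transpose.
# Illustrative only; the real implementation in legalizations.py handles
# attributes, layouts and scales more carefully.
from tvm import relay


def legalize_qnn_conv2d_transpose(attrs, inputs, types):
    data, kernel, input_zp, kernel_zp, _input_scale, _kernel_scale = inputs

    # Casting to int16 before removing the offset avoids int8 overflow.
    shift_data = relay.subtract(
        relay.cast(data, dtype="int16"), relay.cast(input_zp, dtype="int16")
    )
    shift_kernel = relay.subtract(
        relay.cast(kernel, dtype="int16"), relay.cast(kernel_zp, dtype="int16")
    )

    # With the zero points removed, the quantized op reduces to the plain
    # transpose convolution accumulating into int32 (out_dtype in attrs).
    new_attrs = {k: attrs[k] for k in attrs.keys()}
    return relay.nn.conv2d_transpose(shift_data, shift_kernel, **new_attrs)
```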
mbaret pushed a commit that referenced this pull request on Nov 26, 2020:

* Add initial support for quantized transpose convolution in Relay

This work is based on @jainris initial PR: #6523. I added a relay.qnn.conv2d_transpose node. The strategy I followed is to convert to int16 and invoke nn.conv2d_transpose (which already exists in relay). Main changes:
- The node declaration lives in relay/qnn/op/convolution_transpose.cc
- Cast int8->int16 and subsequent offset removal is in tvm/relay/qnn/op/legalizations.py
- I added and tested the operator in the tflite front-end
- I added a unit-test in Relay for qnn.conv2d_transpose

Co-authored-by: Rishabh Jain <jainris@users.noreply.github.com>

* Fix linting
* Addressing review comments

Co-authored-by: Rishabh Jain <jainris@users.noreply.github.com>
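At the Relay level, using the new op looks roughly like the sketch below. It is modelled on the kind of unit test mentioned in the commit message; the shapes, scales and zero points are invented, and the keyword names follow the qnn.conv2d convention, so treat it as a hedged example rather than a definitive API reference:

```python
# Rough sketch of constructing qnn.conv2d_transpose from Relay. Values are
# arbitrary; keyword names mirror qnn.conv2d and may differ between versions.
from tvm import relay

data = relay.var("data", shape=(1, 8, 10, 10), dtype="int8")
weight = relay.var("weight", shape=(8, 8, 3, 3), dtype="int8")

out = relay.qnn.op.conv2d_transpose(
    data,
    weight,
    input_zero_point=relay.const(2, "int32"),
    kernel_zero_point=relay.const(1, "int32"),
    input_scale=relay.const(0.5, "float32"),
    kernel_scale=relay.const(0.25, "float32"),
    kernel_size=(3, 3),
    channels=8,
    strides=(2, 2),
    padding=(1, 1),
    out_dtype="int32",
)
func = relay.Function(relay.analysis.free_vars(out), out)
```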
trevor-m pushed commits to trevor-m/tvm (Dec 2 and Dec 4, 2020) and to neo-ai/tvm (Dec 4, 2020) that referenced this pull request:

…che#6899)

* Add initial support for quantized transpose convolution in Relay

This work is based on @jainris initial PR: apache#6523. I added a relay.qnn.conv2d_transpose node. The strategy I followed is to convert to int16 and invoke nn.conv2d_transpose (which already exists in relay). Main changes:
- The node declaration lives in relay/qnn/op/convolution_transpose.cc
- Cast int8->int16 and subsequent offset removal is in tvm/relay/qnn/op/legalizations.py
- I added and tested the operator in the tflite front-end
- I added a unit-test in Relay for qnn.conv2d_transpose

Co-authored-by: Rishabh Jain <jainris@users.noreply.github.com>

* Fix linting
* Addressing review comments

Co-authored-by: Rishabh Jain <jainris@users.noreply.github.com>