[Tensorflow] ops not quantized #25
Comments
conv2d doesn't support being paired with leakyrelu. conv2d + relu, and a standalone conv2d, are only supported when the conv2d has positive input.
Does that mean that if my input calibration data has any negative values, I won't be able to quantize the Conv2D layer?
Is there any pattern that supports Conv2D with negative inputs?
But I did experiment with another model that also has 5 Conv2D layers but no activation, and it takes the same inputs as the model mentioned here (which contain negative values); these two models' Conv2D layers have different weights.
It has nothing to do with the calibration data. The so-called negative input means the input of the conv2d doesn't come from a relu/relu6 op. E.g. op a -> op b -> op c, where op c is a conv2d. If op b is relu or relu6, then the standalone conv2d op c is quantizable. If op b is another conv2d, then its output may have negative values, and thus op c can't be quantized.
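To make the rule concrete, here is a minimal sketch of the two cases (illustrative Keras code, not taken from this thread; layer sizes are arbitrary):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 3))

# Quantizable: the second Conv2D is fed by a ReLU, so its input is
# guaranteed to be non-negative.
a = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
b = tf.keras.layers.Conv2D(16, 3, padding="same")(a)

# Not quantizable: the second Conv2D is fed directly by another Conv2D,
# whose output may contain negative values.
c = tf.keras.layers.Conv2D(16, 3, padding="same")(inputs)
d = tf.keras.layers.Conv2D(16, 3, padding="same")(c)

model = tf.keras.Model(inputs, [b, d])
```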
I see! Thank you for explaining it in great detail; I will give it another try later.
Hi @guomingz, just wanted to ask another question, thank you so much!
I also printed out the matched fusion_mapping name as below:
It seems that they all matched the conv2d + biasadd + relu pattern.
Are you sure those 8 conv2d ops all have their own biasadd + relu as successors?
In a sense, yes and no. Here's an example of the printed node info for one of the BiasAdd nodes, which of course should take a Conv2D as its input. The input instead becomes a BatchToSpaceND op. I have attached the node names of the model here for your reference; I was wondering if LPOT can find a way to identify the patterns?
You're mentioning a new pattern, conv2d + BatchToSpaceND + biasadd + relu, rather than conv2d + biasadd + relu, so it's also the root cause of your previous concern about why not all conv2d ops got quantized.
May I ask if there is a chance that this new pattern could be supported?
If the pattern is SpaceToBatchND + Conv2D + BatchToSpaceND, it may be possible to enable the fusion by converting this pattern to a conv with dilation.
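For reference, the decomposed and fused forms compute the same result. A minimal sketch of the equivalence, with the rate and shapes chosen for illustration, using the TF2 function forms of those graph ops:

```python
import tensorflow as tf

x = tf.random.normal([1, 28, 28, 3])
w = tf.random.normal([3, 3, 3, 8])

# Fused form: a single Conv2D with the dilations attribute set.
y_fused = tf.nn.conv2d(x, w, strides=1, padding="VALID", dilations=2)

# Decomposed form as it appears in the frozen graph:
# SpaceToBatchND -> Conv2D -> BatchToSpaceND.
x_s2b = tf.space_to_batch(x, block_shape=[2, 2], paddings=[[0, 0], [0, 0]])
y_mid = tf.nn.conv2d(x_s2b, w, strides=1, padding="VALID")
y_dec = tf.batch_to_space(y_mid, block_shape=[2, 2], crops=[[0, 0], [0, 0]])

print(float(tf.reduce_max(tf.abs(y_fused - y_dec))))  # ~0.0: same computation
```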
I am not sure I know how to convert the pattern to a conv with dilation, as I already set the dilation rate when building the model.
Originally I built my model with the following code:
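(The snippet itself didn't survive in this copy of the thread. A minimal sketch of such a model, assuming Keras Conv2D layers with dilation_rate set and LeakyReLU activations as described above; the layer count and sizes are made up:)

```python
import tensorflow as tf

# Hypothetical reconstruction for illustration only. A dilation_rate > 1
# is what makes TensorFlow emit the SpaceToBatchND/BatchToSpaceND pair
# around each Conv2D in the frozen graph.
inputs = tf.keras.Input(shape=(128, 128, 3))
x = inputs
for filters in (32, 32, 64, 64, 64):
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", dilation_rate=2)(x)
    x = tf.keras.layers.LeakyReLU()(x)
model = tf.keras.Model(inputs, x)
```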
I thought the reason why TensorFlow adds the SpaceToBatchND/BatchToSpaceND ops is the dilation setting.
So the pattern you mentioned earlier comes from the dilation setting. If so, LPOT couldn't support it because there's a TF dependency on that. You may wait for the TF 2.7 release.
Local patch ready; waiting for the formal TF 2.7 release with the kernel merged.
…rt pass for tensorflow backend. (intel#25) Signed-off-by: Zhang, Guoming <guoming.zhang@intel.com>
Framework: Tensorflow 2.6.0
LPOT: 1.6.0
When I printed out the tune_cfg() in strategy.py, the first Conv2D and MatMul seemed to be set to quantize to int8, but in the mixed-precision statistics they are still in fp32 format. My main focus is to speed up the Conv2D computation, but I cannot find the reason why it stays unquantized.
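For context, here is a minimal sketch of the quantization run that prints those statistics, assuming LPOT 1.6's experimental API; the YAML path, model path, and dataset are placeholders:

```python
import numpy as np
from lpot.experimental import Quantization, common

# Placeholder calibration dataset yielding (input, label) pairs.
class CalibDataset:
    def __len__(self):
        return 10

    def __getitem__(self, idx):
        return np.random.rand(128, 128, 3).astype(np.float32), 0

quantizer = Quantization("conf.yaml")              # YAML sets framework: tensorflow
quantizer.model = common.Model("frozen_model.pb")  # placeholder model path
quantizer.calib_dataloader = common.DataLoader(CalibDataset())
q_model = quantizer()  # logs tune_cfg decisions and mixed-precision statistics
q_model.save("quantized_model")
```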
Is this because the pattern is unmatched?
Originally my convolutional layers were paired with a leaky ReLU; I also tried using ReLU, or no activation at all, but Conv2D just won't quantize.
Please find my model link here