
Added tflite frontend support for quantized mean #4339

Merged: 1 commit into apache:master on Nov 22, 2019

Conversation

tristan-arm (Contributor, Author)

No description provided.

tqchen (Member) commented Nov 15, 2019:

cc @FrozenGene @srkreddy1238 @anijain2305 can you take a look?

@@ -659,7 +659,23 @@ def _convert_reduce(self, relay_op, op):
         reduce_options.Init(op_options.Bytes, op_options.Pos)
         keep_dims = reduce_options.KeepDims()

         if input_tensor.qnn_params:
             in_expr = _qnn.op.dequantize(data=in_expr,
FrozenGene (Member):

Hmm... do we have to dequantize -> compute fp32 -> requantize? Could we do only integer computation?

anijain2305 (Contributor):

Similar concern as that of @FrozenGene. From the maths side it looks safe to me, but let me know if I am wrong.

Initial quantized tensor: QA, shape (N,), dtype='int8'. The mean would be

    scale_out * (Q_out - zp_out) = scale_in * [(QA[0] + QA[1] + ... + QA[N-1]) - N * zp_in] / N
    scale_out * (Q_out - zp_out) = scale_in * (Mean(QA) - zp_in)

So, basically: upcast the quantized tensor to int32, call the integer Mean, and then requantize if the output scale/zero-point differ from the input's.
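To make the identity concrete, here is a minimal NumPy check of the argument above (the quantization parameters and data are made up for illustration; this is not code from the PR):

```python
import numpy as np

scale_in, zp_in = 0.05, 3                        # hypothetical qnn params
qa = np.array([-12, 7, 33, -5, 90, -128, 64, 1], dtype=np.int8)

# Reference path: dequantize to fp32, then take the mean.
fp32_mean = ((qa.astype(np.float32) - zp_in) * scale_in).mean()

# Integer path: upcast to int32 (so the accumulated sum cannot overflow),
# take the integer mean, and apply the qnn params once at the end.
int_mean = qa.astype(np.int32).mean()            # Mean(QA)
assert np.isclose(scale_in * (int_mean - zp_in), fp32_mean)
```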

tristan-arm (Contributor, Author):

Updated as suggested above.
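For readers following along, the resulting lowering pattern looks roughly like the sketch below. This is a simplified illustration, not the actual diff: the helper name `quantized_mean` is hypothetical, and wrapping the scalar qnn params in `relay.const` is an assumption about how `requantize` is fed.

```python
from tvm import relay

# Hypothetical helper sketching the integer-only mean lowering; the real
# frontend change lives inside _convert_reduce and differs in detail.
def quantized_mean(in_expr, axis, keep_dims, in_qnn, out_qnn):
    # Upcast int8 -> int32 so the accumulation cannot overflow.
    cast = relay.cast(in_expr, "int32")
    # Integer mean: averaging leaves scale and zero-point unchanged.
    mean = relay.mean(cast, axis=axis, keepdims=keep_dims)
    # Requantize only to move from the input to the output qnn params.
    return relay.qnn.op.requantize(
        mean,
        input_scale=relay.const(in_qnn["scale"], "float32"),
        input_zero_point=relay.const(in_qnn["zero_point"], "int32"),
        output_scale=relay.const(out_qnn["scale"], "float32"),
        output_zero_point=relay.const(out_qnn["zero_point"], "int32"),
        out_dtype="int8")
```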

@@ -710,16 +710,23 @@ def test_forward_zeros_like():
 # Reduce
 # ------

-def _test_reduce(math_op, data, keep_dims=None):
+def _test_reduce(math_op, data, keep_dims=None, quantized=False):
Reviewer (Member):

Suggest breaking into two functions; another could be _test_reduce_quantize.

tristan-arm (Contributor, Author):

> Suggest breaking into two functions; another could be _test_reduce_quantize.

Done
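For context, the split looks roughly like the skeleton below (bodies elided; the actual PR tests build real TFLite models and compare against TVM, and the exact signatures are assumptions):

```python
def _test_reduce(math_op, data, keep_dims=None):
    # float32 path: build the TFLite reduce op and compare TVM vs TFLite
    ...

def _test_reduce_quantize(data, keep_dims=None):
    # quantized path: fake-quantize the input, run the int8 mean, and
    # compare dequantized outputs within a small tolerance
    ...
```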

tristan-arm (Contributor, Author):

The CI failure is unrelated to this patch - I'll kick it off again once the issue is resolved.

zhiics (Member) commented Nov 19, 2019:

@tristan-arm CI is consistently failing. It shouldn't be related to your change.

anijain2305 (Contributor) left a comment:

LGTM. Thanks for the changes!

zhiics (Member) left a comment:

LGTM

zhiics merged commit 0f4b32f into apache:master on Nov 22, 2019.

zhiics (Member) commented Nov 22, 2019:

Thanks @tristan-arm @anijain2305 @FrozenGene
