Added quantized division for uint8 #26570
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed (or fixed any issues), please reply here.

What to do if you already signed the CLA:
- Individual signers
- Corporate signers

ℹ️ Googlers: Go here for more info.
@mwtarnowski please sign the CLA
@rthadur Michał and @w-adamski are members of https://github.com/TCLResearchEurope |
I signed it!
CLAs look good, thanks! ℹ️ Googlers: Go here for more info.
@bjacob, thanks for your feedback and the comprehensive explanation of the problem. In fact, I had already implemented an integer-only version of this operation with the gemmlowp library. However, in several benchmarks I found that, even on mobile devices with limited floating-point support, an unoptimized (no LUT, no NEON) integer-only implementation performs poorly compared to the approach I proposed here. For my purposes, this implementation provided the required functionality with fair performance while remaining quite simple, which made it a good candidate for a reference. But indeed, when it comes to ensuring maximum portability, it makes perfect sense to avoid floats. So I am updating this PR to eliminate all floating-point operations from the code. Any further suggestions are welcome.
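The arithmetic being discussed can be sketched as follows. This is a hedged illustration of the float-multiplier approach mentioned above, not the PR's actual C++ code; the function and parameter names are invented for the example. With the usual affine mapping real = scale * (q - zero_point), dividing two quantized values folds the three scales into a single multiplier:

```python
def quantized_div_uint8(q1, q2, params1, params2, params_out):
    """Illustrative uint8 quantized division using a float multiplier.

    Each params tuple is (scale, zero_point). An integer-only variant,
    as discussed above, would replace `multiplier` with a gemmlowp-style
    fixed-point multiply-and-shift instead of a float.
    """
    s1, z1 = params1
    s2, z2 = params2
    s_out, z_out = params_out
    # real1 / real2 expressed in output quantized units:
    # (s1 / (s2 * s_out)) * (q1 - z1) / (q2 - z2)
    multiplier = s1 / (s2 * s_out)
    raw = multiplier * (q1 - z1) / (q2 - z2)
    q = int(round(raw)) + z_out
    return max(0, min(255, q))  # clamp to the uint8 range
```

For example, with both input scales 0.05, output scale 0.1, and zero points of 0, dividing real values 5.0 / 2.5 = 2.0 corresponds to `quantized_div_uint8(100, 50, (0.05, 0), (0.05, 0), (0.1, 0))`, which yields 20 (i.e. 2.0 in output units).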
Sorry, I had forgotten. This looks good to me.
This looks good.
@bjacob what is the procedure from here? The PR has been accepted; what further steps are needed to merge it?
@herbakamil The normal procedure is that this PR has been converted to a Google-internal "change" that has been entered into integration tests. Unfortunately, there are some real test failures, so it has been blocked at that stage. The test failures are in this test target: `//tensorflow/lite/kernels:div_test`. You should be able to reproduce the failure with this command: `bazel test //tensorflow/lite/kernels:div_test`. Please update this PR so as to make this test pass; then we can send it through integration again. The other failures reported above ("Windows Bazel") look like spurious/flaky failures not actually caused by your PR, at least from a cursory look.
@bjacob, can you provide more details? I am not able to reproduce this test failure on Ubuntu 18.04 with Bazel 0.24.1 and GCC 7.4.
(Edit: this is on a Linux setup fairly similar to yours, Debian-based; the compiler is a recent clang.) It seems to be failing in the new tests that this PR adds. In the first of these failures, notice how it's the first two of the four values that are off, while the last two are OK (within the stated tolerance).
@bjacob, is everything OK now?
@mwtarnowski I just inquired again. It's OK as far as I can see; someone needs to submit it into integration again.
PiperOrigin-RevId: 254446906
Added a simple, non-optimized implementation of quantized division for unsigned 8-bit integers to the reference ops. Implemented a few tests, together with templates that can be used to test quantized Div for other data types as well.
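The testing idea described above — quantize the inputs, divide in the quantized domain, dequantize, and compare against plain float division within a tolerance — can be sketched roughly as follows. This is a Python stand-in for the C++ test templates; the helper name and the shared-parameter simplification are invented for the example:

```python
def check_quantized_div(real1, real2, scale, zero_point, tol):
    """Round-trip check: quantize, divide quantized, dequantize, compare.

    For brevity this sketch uses one (scale, zero_point) pair for both
    inputs and the output; the real test templates would parameterize
    these per tensor and per data type.
    """
    q1 = int(round(real1 / scale)) + zero_point
    q2 = int(round(real2 / scale)) + zero_point
    # Combined multiplier s1 / (s2 * s_out) with all scales equal:
    multiplier = 1.0 / scale
    raw = multiplier * (q1 - zero_point) / (q2 - zero_point)
    q_out = max(0, min(255, int(round(raw)) + zero_point))
    dequantized = scale * (q_out - zero_point)
    return abs(dequantized - real1 / real2) <= tol
```

With scale 0.05 and a tolerance of one quantized step (0.05), exact ratios such as 4.0 / 2.0 round-trip with zero error, and inexact ones like 1.0 / 3.0 land within the tolerance.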