Support RESIZE_BILINEAR in TFLu #43426
Conversation
Thanks for contributing to TensorFlow Lite Micro. To keep this process moving along, we'd like to make sure that you have completed the items on this list:
We would like to have a discussion on the GitHub issue first to determine the best path forward, and then proceed to the PR review.
Apologies for the long delay here.
The review is half-baked at this time and I'm realizing that there are many things that we should put down into a porting guide. I'm going to take some time to make that happen, which unfortunately means some more delay on this particular review.
Thanks for your patience as we improve our documentation.
No action needed at this time from you. I'll put down a proper guide and then respond to this PR.
float scaled_value_floor = std::floor(*scaled_value);
*lower_bound = std::max(static_cast<int32_t>(scaled_value_floor),
#include <algorithm>
#include <cmath>
#include <cstdint>
I am not sure I understood this comment correctly. I added these includes at the top of the file. Is that what you had in mind?
@@ -0,0 +1,199 @@
#include "tensorflow/lite/kernels/internal/cppmath.h"
One request would be to branch this from reference_ops.h so that it is clear exactly what changes (if any) are made to the reference implementation.
git cp reference_ops.h resize_bilinear.h
and then remove all the extra code. The idea is to make it clear that the only operation performed was copying the code, and nothing else.
I can't seem to find the git cp command. I don't know how to accomplish what you are describing. How would you do it without using git cp? @advaitjain
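As far as I know, git has no cp subcommand, so the usual way to get the same effect is a plain file copy committed on its own, with the trimming done in later commits so reviewers can see the first commit is an unmodified copy. A sketch of that workflow (the file names mirror this thread; the throwaway repo and its contents are just for demonstration):

```shell
set -e
# Throwaway repo purely for demonstration purposes.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"

# Stand-in for the existing reference file.
printf '// reference kernels\n' > reference_ops.h
git add reference_ops.h
git commit -qm "Add reference_ops.h"

# The "git cp" equivalent: copy the file, then commit the copy with no
# edits, so the review history shows the copy and the trimming separately.
cp reference_ops.h resize_bilinear.h
git add resize_bilinear.h
git commit -qm "Copy reference_ops.h to resize_bilinear.h, no edits"
```

A later commit would then remove the unneeded code from resize_bilinear.h, and that diff alone shows every change relative to the reference implementation.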
@patriklaurell Can you please check @advaitjain's comments and keep us posted? Thanks!
@patriklaurell Can you please resolve the conflicts? Thanks!
@patriklaurell No need for any changes. It's on my list of things to address in this PR, but let's keep this on ice for now.
Quick approve for small changes (since @petewarden has already approved them).
@petewarden @advaitjain sorry for the failing test. I am looking into it now. Will push the fix as soon as possible.
Internal checks seemed to be running into merge conflicts. I have pushed a commit that updates this PR to tip of tree. My assumption is that we should now be able to get this merged.
Adds support for RESIZE_BILINEAR in TFLu. This PR is related to issue b/168339972. It depends on the flatbuffer conversion changes in this PR, which were submitted as a separate PR as suggested by @advaitjain, and is not ready for review until that PR has been merged.