
[TOPI][IMAGE][RESIZE] Bilinear interpolation for resize and upsampling. #1181

Merged
merged 12 commits into apache:master on Jun 14, 2018

Conversation

@srkreddy1238 (Contributor) commented May 29, 2018

    * upsampling migrated to C++
    * bilinear resize implementation and test cases
    * upsampling test cases enhanced for NHWC

Discussion:
The implementation above merges both nn (upsampling) and bilinear at the topi.cpp level as scale.
How about merging upsampling and bilinear into scale at the Python tvm.topi and NNVM frontend levels too?

We could then have just "scale" everywhere.

@FrozenGene (Member)

I just skimmed this on mobile, so I may have missed something.
Where is the align_corners attribute? For TF, align_corners affects the h_scale / w_scale.

@srkreddy1238 (Contributor, Author) commented May 29, 2018

align_corners doesn't affect the output shape; it just changes which input pixels are carried to the output without recomputation.
I am still working through the align_corners logic for both nn and bilinear.

I will add it soon while the review is in progress.

@srkreddy1238 (Contributor, Author)

How about merging upsampling and bilinear into scale from the frontend?

@FrozenGene (Member)

For align_corners, you can refer to TF's implementation:

For nearest neighbours:

// For each output row, pick the nearest input row: align_corners rounds,
// otherwise the coordinate is floored; the index is clamped to the input.
const int64 in_y = std::min(
    (align_corners) ? static_cast<int64>(roundf(y * height_scale))
                    : static_cast<int64>(floorf(y * height_scale)),
    in_height - 1);

// The same mapping is applied per output column.
for (int x = 0; x < out_width; ++x) {
  const int64 in_x = std::min(
      (align_corners) ? static_cast<int64>(roundf(x * width_scale))
                      : static_cast<int64>(floorf(x * width_scale)),
      in_width - 1);

For ResizeBilinear:

// CalculateResizeScale determines the float scaling factor.
inline float CalculateResizeScale(int64 in_size, int64 out_size,
                                  bool align_corners) {
  return (align_corners && out_size > 1)
             ? (in_size - 1) / static_cast<float>(out_size - 1)
             : in_size / static_cast<float>(out_size);
}
height_scale = CalculateResizeScale(in_height, out_height, align_corners_);
width_scale = CalculateResizeScale(in_width, out_width, align_corners_);
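
To make the difference concrete, here is a small standalone check (mine, not from the PR) that evaluates the scale for a 4 -> 8 resize:

#include <cstdint>
#include <cstdio>

// Same logic as TF's CalculateResizeScale above, with standard int64_t.
inline float ResizeScale(int64_t in_size, int64_t out_size, bool align_corners) {
  return (align_corners && out_size > 1)
             ? (in_size - 1) / static_cast<float>(out_size - 1)
             : in_size / static_cast<float>(out_size);
}

int main() {
  // align_corners=true maps corner pixels exactly: (4 - 1) / (8 - 1) ~= 0.4286
  std::printf("%f\n", ResizeScale(4, 8, true));
  // align_corners=false uses the plain ratio: 4 / 8 = 0.5
  std::printf("%f\n", ResizeScale(4, 8, false));
}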

I personally prefer keeping separate interfaces, which lets the user know what each op does. How these ops are implemented is the low-level framework's concern. You can unify them into scale, or choose not to.

@srkreddy1238 (Contributor, Author)

Thanks, I will check the above logic for align_corners.

Separation at the user end happens via the "mode" attribute. The template and purpose are the same, hence I'd prefer to unify up to the frontend.

@tqchen what do you think?

@srkreddy1238 (Contributor, Author)

@FrozenGene

I got align_corners working for bilinear; doing the same for nn is a challenge due to the availability of compatible round/floor IR instructions across backends.
I tried a 'cast' plus an addition of 0.5f, which worked for LLVM but failed on CUDA.

Any ideas? Otherwise I will hold align_corners for nn until there is demand for it.

@srkreddy1238 srkreddy1238 changed the title [WIP] Scaling with bilinear interpolation [REVIEW] Scaling with bilinear interpolation May 30, 2018
@FrozenGene (Member)

You can implement floor like this: https://discuss.tvm.ai/t/floordiv-operation-in-tensors-how-to-achieve/146/5. Then we would not have the instruction-compatibility issue.

@srkreddy1238 (Contributor, Author)

OK, let me try adding the floor operator first. I hope the mod operation works fine on all platforms by default.
round would be easy once we have floor [ round(f) = floor(f + 0.5) ].
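
A minimal sketch of that idea (my illustration, not the PR's code), assuming only a truncating float-to-int cast is available on every backend:

#include <cstdint>

// A truncating cast rounds toward zero, which already equals floor() for
// non-negative inputs; negative non-integer inputs need a -1 correction.
inline int64_t floor_via_cast(float f) {
  int64_t t = static_cast<int64_t>(f);
  return (f < 0.0f && static_cast<float>(t) != f) ? t - 1 : t;
}

// round(f) = floor(f + 0.5), as noted above.
inline int64_t round_via_floor(float f) {
  return floor_via_cast(f + 0.5f);
}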

@masahi (Member) commented May 31, 2018

Thanks, having bilinear upsampling is nice. But what is the use case of asymmetric upsampling (different scales for width and height)?

@srkreddy1238 (Contributor, Author)

Yes, different scales for width and height.

I don't know of real-world models that use asymmetric scaling.
I am trying to unify the API for all algorithms instead of having multiple operators from the frontend down to topi.
The unified template would be
scale(data, out_shape, layout='NCHW', align_corners=False, mode='NN')

I feel it would be easier to maintain if new algorithms are added in the future.

Thanks for the review :)
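
For reference, a minimal scalar sketch of the bilinear sampling such a scale op would perform (illustrative only, not the PR's TOPI code; the helper name is made up):

#include <algorithm>
#include <cmath>
#include <cstdint>

// Sample a single-channel h x w image at fractional source coordinates
// (in_y, in_x), assumed to lie in [0, h-1] x [0, w-1], by blending the
// four surrounding pixels; neighbor indices are clamped at the border.
inline float BilinearSample(const float* img, int64_t h, int64_t w,
                            float in_y, float in_x) {
  int64_t y0 = std::max<int64_t>(static_cast<int64_t>(std::floor(in_y)), 0);
  int64_t x0 = std::max<int64_t>(static_cast<int64_t>(std::floor(in_x)), 0);
  int64_t y1 = std::min(y0 + 1, h - 1);
  int64_t x1 = std::min(x0 + 1, w - 1);
  float dy = in_y - static_cast<float>(y0);
  float dx = in_x - static_cast<float>(x0);
  float top    = img[y0 * w + x0] * (1.0f - dx) + img[y0 * w + x1] * dx;
  float bottom = img[y1 * w + x0] * (1.0f - dx) + img[y1 * w + x1] * dx;
  return top * (1.0f - dy) + bottom * dy;
}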

@FrozenGene (Member)

Please test carefully on the OpenCL platform. Someone told me that the mod operator on floating-point types has some issues.

@srkreddy1238 (Contributor, Author)

Sure, I will check that too.

@tqchen (Member) commented Jun 5, 2018

Any updates on this PR? @masahi @FrozenGene, do you have follow-up comments?

@masahi (Member) commented Jun 6, 2018

Shouldn't upsampling and bilinear_scale be merged into a single API? I think if we have bilinear scaling available, then the upsampling op should support bilinear upsampling as well.

@srkreddy1238 (Contributor, Author)

Can I merge them both as "scale" and make it generic at all levels?

@srkreddy1238 (Contributor, Author)

@masahi
I mean an API "scale" supporting both up- and down-sampling.

@masahi (Member) commented Jun 6, 2018

I think "upsampling" is a better name than "scale". I don't know why you want down sampling, when we already have pooling.

        * Doc changes from review (3D tensor and others)
        * set_num_inputs, input_names: dynamic assignment for resize and upsampling.
        * align_corners removed across for upsampling.
        * in_shape->size checks updated.
        * Bilinear test case added at compiler level.
        * UseBilinear as common function.
        * Doc correction.
        * mode -> method
        * Documentation changes.
@srkreddy1238 (Contributor, Author)

@tqchen & @masahi

  • Weights compute made inline inside topi.
  • Extra parameter handling removed across.

@tqchen
BTW, is CI still on python2 ?

@tqchen (Member) commented Jun 14, 2018

CI is upgraded to use Python 3 in most cases, but there are still Python 2 checks, so the code should stay compatible until 2019.


)" NNVM_ADD_FILELINE)
.add_argument("data", "4D Tensor", "Input data.")
.add_argument("weight", "3D Tensor", "Weight matrix.")
Member (review comment on the diff above): not needed anymore

@srkreddy1238 (Contributor, Author): Thanks, removed now.

@masahi (Member) commented Jun 14, 2018

LGTM.

@tqchen tqchen merged commit 57f9588 into apache:master Jun 14, 2018
@tqchen (Member) commented Jun 14, 2018

Thanks to the reviewers and the contributor, this is merged.

tqchen pushed a commit to tqchen/tvm that referenced this pull request Jul 6, 2018
mnuyens pushed a commit to mnuyens/tvm that referenced this pull request Jul 10, 2018
sergei-mironov pushed a commit to sergei-mironov/tvm that referenced this pull request Aug 8, 2018
@srkreddy1238 srkreddy1238 deleted the scale branch October 17, 2018 05:27
@ppwwyyxx

From the discussion it seems this implementation follows resize in TensorFlow.

However, resize(align_corners=False) in TensorFlow has had bugs since day one; it was fixed in February this year by introducing a new resize operator in TF. See tensorflow/tensorflow#6720.
The fixed version of align_corners=False agrees with PyTorch, OpenCV, etc. It would be good to fix it in topi.
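
For context, a sketch of the two source-coordinate mappings for align_corners=False (my illustration; the half-pixel form is the one PyTorch and OpenCV use):

// Legacy TF mapping: src = dst * scale. It ignores pixel centers, which is
// the bias the new TF resize operator fixed.
inline float legacy_src(float dst, float scale) {
  return dst * scale;
}

// Half-pixel mapping: treat coordinates as pixel centers,
// src = (dst + 0.5) * scale - 0.5, matching PyTorch/OpenCV behavior.
inline float half_pixel_src(float dst, float scale) {
  return (dst + 0.5f) * scale - 0.5f;
}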
