[Relay][TF] Make StridedSlice support dynamic input and constant attrs #6024
Conversation
👍 as a fix for the current design. In terms of improving the design, I think the issue, and the reason we have two strided slice ops here, might be that the topi op for strided slice calculates the output shape. If we extract that from the topi op, we might not need to worry about this complexity so much?
cc @yongwww
Can't topi.strided_slice support a dynamic shape with const attrs? What's the error? We'd better fix topi.strided_slice if there is any issue, since it should be able to support a dynamic input shape.
}
}

if (param->begin && param->end && param->strides && !dyn) {
I printed output_shape in topi::strided_slice and it is (0, 0) for that modified test_any case, which causes a runtime error. Maybe we can fix this bug directly in topi::strided_slice.
topi::strided_slice requires a static shape because it reads the concrete value of each dimension. Should we change this requirement or just use DynamicStrideSlice?
diff --git a/topi/include/topi/transform.h b/topi/include/topi/transform.h
index b5fc02ae7..329a62ce7 100644
--- a/topi/include/topi/transform.h
+++ b/topi/include/topi/transform.h
@@ -610,6 +610,7 @@ inline Tensor strided_slice(const Tensor& x, const Array<Integer>& begin, const
for (size_t i = 0; i < src_tensor_dim; ++i) {
int64_t begin_range = stride_vec[i] < 0 ? -1 : 0;
int64_t dim_i = GetConstInt(x->shape[i]);
+ LOG(INFO) << dim_i; // it is -1 for modified case
int64_t end_range = stride_vec[i] < 0 ? dim_i - 1 : dim_i;
// transform negative indices to positive value, clips on the correct range
auto index_canonicalization = [dim_i, begin_range, end_range](int64_t index) {
@@ -635,6 +636,8 @@ inline Tensor strided_slice(const Tensor& x, const Array<Integer>& begin, const
out_shape.push_back(slice_size);
}
+// LOG(INFO) << out_shape;
+
return compute(
out_shape,
[&](const Array<Var>& indices) {
Yeah. I think it would be more complicated to fix topi. Probably we can use the dynamic strided_slice compute.
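For reference, the out_shape logic that the diff above instruments can be modeled in Python. This is a simplified sketch of the topi shape computation, not the actual C++ code; it shows how `GetConstInt` returning -1 for a dynamic dimension makes the canonicalized indices collapse, so the computed slice size comes out as 0:

```python
def slice_size(dim_i, begin, end, stride):
    """Sketch of the per-axis output size in topi::strided_slice.

    dim_i is the extent from GetConstInt; it is -1 when the axis
    is dynamic (relay.Any), which is the failure mode discussed above.
    """
    begin_range = -1 if stride < 0 else 0
    end_range = dim_i - 1 if stride < 0 else dim_i

    def canonicalize(index):
        # Wrap negative indices, then clamp to [begin_range, end_range].
        if index < 0:
            index += dim_i
        return max(min(index, end_range), begin_range)

    b, e = canonicalize(begin), canonicalize(end)
    if stride > 0:
        return max(0, (e - b + stride - 1) // stride)
    return max(0, (b - e - stride - 1) // -stride)

# Static axis of extent 4: slicing [0:4:1] gives 4 elements.
# Dynamic axis (dim_i == -1): end_range becomes -1, both indices
# clamp to 0, and the size degenerates to 0 -- the (0, 0) shape
# observed in the modified test_any case.
```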
return True
return False

if _dyn():
Why do we need to special-case this in the tf frontend and skip the mask transformation?
Because the mask transformation needs a concrete shape, which isn't available for dynamic-shape input. I think there is more to do to handle masks in the dynamic case, but I'm afraid we can't support every mask transformation without a concrete shape.
We can use shape_of to get the data shape. In that case every shape dim will be a Relay expression, and the mask transformation should be able to handle them. In any case, I don't think we should silently skip it, since the output can be wrong and cause a later type inference error.
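To illustrate why the mask transformation needs a concrete shape, here is a hypothetical sketch (the function name and shape representation are illustrative, not the actual frontend code) of expanding `end_mask` into explicit `end` values, with `None` standing in for a dynamic axis:

```python
def expand_end_mask(begin, end, end_mask, data_shape):
    """Rewrite end_mask bits into explicit end indices.

    When bit i of end_mask is set, the slice runs to the end of axis i,
    so the rewrite needs the concrete extent of that axis. A dynamic
    axis (None here, relay.Any in practice) cannot be resolved this way,
    which is the limitation discussed in this thread.
    """
    new_end = list(end)
    for i, dim in enumerate(data_shape):
        if end_mask & (1 << i):
            if dim is None:
                raise ValueError(
                    "end_mask needs a concrete extent on axis %d" % i)
            new_end[i] = dim
    return new_end
```

With shape_of, the `dim` values would instead be Relay expressions, so the rewrite could stay symbolic rather than failing.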
@lixiaoquan Can you simply raise an error here and add a TODO? We can merge this PR so that backend changes can take effect.
@lixiaoquan @kevinthesun What's the status on this PR? I need to split this op into the dynamic namespace and add it to the dynamic to static pass, but I don't want to conflict with what you have here, and I'd definitely appreciate the extra tests as I do that 😄
@lixiaoquan Can you update the tf frontend part?
@mbrookhart Please go ahead with your change. I'm afraid there is still some work to do to handle mask conversion in the dynamic shape case.
overall lgtm, please take a look at the frontend comments.
@lixiaoquan Any update?
Sorry for the late reply. I think I need some time to figure out how to handle masks in the dynamic case, so I'll close it for now.
@kevinthesun @mbrookhart @icemelon9 Could you please review this? Thanks.
The input of strided_slice is an Expr and the attrs are constants, but it is still possible for this strided_slice to be dynamic.
This PR adds/changes some tests to demonstrate this issue. If we make strided_slice static and add _dyn.strided_slice, the static version can't support this case, so we have to always use _dyn.strided_slice unless we are sure the input shape is static. I think other _dyn.* ops may have a similar issue.
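The dispatch problem the description raises can be sketched as a small predicate (a hypothetical helper, with `None` standing in for a `relay.Any` dimension and a `None` attr meaning "not a compile-time constant"):

```python
def needs_dyn_strided_slice(input_shape, begin, end, strides):
    """Decide whether the dynamic strided_slice op is required.

    Even when begin/end/strides are all constant, any dynamic input
    dimension forces the dynamic op, because the static op's output
    shape cannot be computed at compile time -- this is exactly the
    case this PR's tests demonstrate.
    """
    attrs_const = all(v is not None for v in (begin, end, strides))
    shape_static = all(d is not None for d in input_shape)
    return not (attrs_const and shape_static)
```

The point of the PR description is the second condition: checking the attrs alone is not enough to pick the static op safely.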