
Strided_slice added in NNVM #1318

Merged · 6 commits · Jun 28, 2018

Conversation

PariksheetPinjari909 (Contributor)

Strided_slice added in NNVM

}

return Array<Tensor>{ topi::strided_slice(inputs[0], begin, end, stride) };
})
Contributor

add set_attr("FCorrectLayout", ElemwiseArbitraryLayout<1, 1>)

.describe("Indices for begin of slice");
DMLC_DECLARE_FIELD(end)
.describe("Indices for end of the slice");
DMLC_DECLARE_FIELD(stride)
FrozenGene (Member) commented Jun 25, 2018

For the stride parameter, a default should be added, e.g. .set_default(Tuple<int64_t>()), as MXNet does, because stride is an optional value. Otherwise users must always provide stride from the frontend (unlike your unit test, which calls just _sym.strided_slice(insym, begin=begin, end=end)), which is not the intended behaviour.

tqchen added the "status: need update" label on Jun 26, 2018
PariksheetPinjari909 (Contributor, Author)

@FrozenGene, @kevinthesun: your comments have been addressed, please check. Thanks!

FrozenGene (Member)

@PariksheetPinjari909 I have no other comments now. I have used this PR with my real model and it currently works without problems. Thanks.

srkreddy1238 (Contributor)

@FrozenGene @kevinthesun @tqchen @PariksheetPinjari909

Ref. https://github.com/onnx/onnx/blob/master/docs/Operators.md#Slice
We may need support for choosing axes as well.

The current implementation fills begin, end, and strides only at the end, but doesn't support choosing an arbitrary axis. Please correct me if I am wrong.

FrozenGene (Member) commented Jun 26, 2018

@srkreddy1238 I can't think of a situation that ONNX's axes can handle but our slice cannot. If you want to select one specific axis, you can use _sym.strided_slice(insym, begin=(specify_axis, another_offset_for_start_indice), end=(specify_axis + 1, another_offset_for_end_indice)); if you want axes=(0, 1), you can use _sym.strided_slice(insym, begin=(0, another_offset_for_start_indice), end=(2, another_offset_for_end_indice)). ONNX's axes can be expressed with our begin / end.
TensorFlow's and MXNet's slice are the same as ours:
tf: https://www.tensorflow.org/versions/r1.9/api_docs/python/tf/slice
MXNet: https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.slice

srkreddy1238 (Contributor)

@FrozenGene

Our implementation is referenced from MXNet, and TensorFlow's slice is a subset of it. ONNX differs slightly here.

How do we specify begin and end only for axis 1 of a 4D tensor (axes 0, 2, and 3 should take the full range)?

@tqchen any advice?

tqchen (Member) commented Jun 26, 2018

One thing we need to support is the case where some of the end values are None (meaning slice to the end, as in X[0:]). As long as that is supported, any axis can be handled.

srkreddy1238 (Contributor)

How do we pass None here? ONNX uses a very high value (like INT_MAX) to indicate the end.
Accepting INT_MAX should be enough for the frontend to handle axes.

FrozenGene (Member) commented Jun 27, 2018

> How do we specify begin, end only for axis 1 here for a 4D tensor (axis 0, 2, 3 should take full range)?

Pass _sym.strided_slice(insym, begin=(1, whatever_offset_values_for_axis_1_start), end=(2, whatever_offset_values_for_axis_1_end))
@srkreddy1238

tqchen (Member) commented Jun 27, 2018

Explicitly checking for INT_MAX and not applying the slice in that case makes sense.

srkreddy1238 (Contributor)

begin=(1, ...) indicates a begin offset of 1 for axis 0.

begin_vec.push_back(0);
}

std::vector<int64_t> end_vec;
Contributor

Handle INT_MAX for end

Contributor (Author)

I had considered this scenario during implementation and concluded that the functionality can be achieved with the current implementation. I did this for the TensorFlow case (which has many other parameters such as ellipsis_mask, new_axis_mask and shrink_axis_mask; PR coming soon) and it works fine; the same can be done for ONNX.

NOTE: in TensorFlow, ellipsis_mask is essentially similar to the axes input in ONNX.

I have gone through the ONNX Slice operator; my understanding is that INT_MAX is used when the size of a particular dimension is unknown, which is not the case in NNVM.

Apart from that, our current implementation can handle an INT_MAX input perfectly: it clips the end value down to the maximum size of the dimension. In my opinion, we should not modify the base TOPI operators for each frontend when the functionality can be achieved by adjusting the inputs to the operator.

Please let me know your opinions, @tqchen, @srkreddy1238, @FrozenGene. I will wait for your feedback.

Member

I don't think special-casing INT_MAX in the slice op makes sense. INT_MAX is for unknown dimensions, which is not something NNVM should have to consider. I have translated many models with unknown dimensions (such as the preprocessing inputs of TensorFlow's SSD-MobileNet / DeepLab V3 models); in those situations the users should provide concrete shapes rather than leaving this to NNVM.

Contributor

It's not the dimension that is unknown here; it's the range within the dimension.

INT_MAX here indicates using the maximum available range of the input.
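
To make this concrete, here is a minimal sketch (not from the PR) of how a frontend could expand an ONNX-style (starts, ends, axes) specification into full-rank begin/end lists, using INT_MAX for untouched axes and relying on the operator clipping oversized end values to the dimension size. The helper name and constant below are illustrative assumptions.

    # Hypothetical frontend helper, not part of this PR.
    INT_MAX = 2147483647  # oversized end values are clipped to the dimension size

    def expand_slice_args(rank, starts, ends, axes):
        begin = [0] * rank
        end = [INT_MAX] * rank  # untouched axes keep their full range
        for axis, s, e in zip(axes, starts, ends):
            begin[axis] = s
            end[axis] = e
        return begin, end

    # Slice only axis 1 of a 4D tensor: axes 0, 2 and 3 keep the full range.
    begin, end = expand_slice_args(4, starts=[1], ends=[3], axes=[1])
    # begin == [0, 1, 0, 0], end == [INT_MAX, 3, INT_MAX, INT_MAX]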

}

std::vector<int64_t> stride_vec;
if (param.stride.ndim() != 0) {
Contributor

stride has a default initialization. This check may not be needed.

}

NNVM_REGISTER_OP(strided_slice)
.describe(R"code(Strided slice of an array.
Contributor

Please elaborate the description with examples.
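
For reference, a minimal numpy-style illustration of the intended semantics, inferred from the test cases below rather than taken from the PR text:

    import numpy as np

    x = np.arange(36).reshape(3, 4, 3)
    # strided_slice(x, begin=[1, 0, 0], end=[2, 2, 3], stride=[1, 1, 2])
    # is intended to behave like numpy basic slicing, one slice per axis:
    y = x[1:2, 0:2, 0:3:2]   # shape (1, 2, 2)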

verify_strided_slice((3, 4, 3), [1, -1, 0], [4, -5, 3], [2, -1, 1])
verify_strided_slice((3, 4, 3), [1, 0, 0], [2, 2, 3], [1, 1, 2])
verify_strided_slice((3, 4, 3), [1, -1, 0], [2, -3, 3], [1, -1, 1])
verify_strided_slice((3, 4, 3), [1, 1, 0], [4, 4, 3])
Contributor

Could add test cases for INT_MAX and another where len(begin) != len(end).
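
A hypothetical sketch of the suggested cases, mirroring the existing verify_strided_slice calls above (the exact values are illustrative, not from the PR):

    # end beyond the dimension size (e.g. INT_MAX) should be clipped to the dim size
    verify_strided_slice((3, 4, 3), [1, 0, 0], [2147483647, 2, 3], [1, 1, 1])
    # begin shorter than end: missing begin entries default to 0 (see begin_vec above)
    verify_strided_slice((3, 4, 3), [1, 1], [4, 4, 3])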

PariksheetPinjari909 (Contributor, Author)

@srkreddy1238, all comments are handled now.

srkreddy1238 (Contributor) left a comment

LGTM.

tqchen merged commit 8419790 into apache:master on Jun 28, 2018
tqchen pushed a commit to tqchen/tvm that referenced this pull request Jul 6, 2018
mnuyens pushed a commit to mnuyens/tvm that referenced this pull request Jul 10, 2018
sergei-mironov pushed a commit to sergei-mironov/tvm that referenced this pull request Aug 8, 2018