
Support GatherND operator in ONNX #2106

Merged (32 commits) on Aug 29, 2019

Conversation

@hariharans29 (Contributor) commented on Jun 18, 2019:

This change adds support for the GatherND operator to ONNX.

@hariharans29 requested review from a team as code owners on June 18, 2019 02:13
@hariharans29 changed the title from "[New operator] GatherND" to "WIP [New operator] GatherND" on Jun 18, 2019
@hariharans29 changed the title from "WIP [New operator] GatherND" to "Support GatherND operator in ONNX" on Jun 21, 2019
@gramalingam (Contributor) commented:
TF's gather-nd seems to have an extra batch_dims attribute for batched gather-nd … is that required or useful?

@houseroad (Member) left a comment:

Also, could you provide a brief summary of the PR in the PR's description?

@hariharans29 (Contributor, Author) replied:

> Also, could you provide a brief summary of the PR in the PR's description?

Well, sure, but the description is the same as the title: to support the GatherND op in ONNX. Do you mean you want a justification for including it, by any chance?

@hariharans29 (Contributor, Author) replied:

> TF's gather-nd seems to have an extra batch_dims attribute for batched gather-nd … is that required or useful?

Not sure about this; let me take a look. Thanks!

@hariharans29 (Contributor, Author) commented:


@gramalingam - I took a look at the batch_dims concept in the TF gather_nd op and, as you said, it supports batched inputs ('params' and 'indices' in the TF op) and does seem like a good extension of the op's abstraction. However, I see no immediate use-case that justifies including it. If need be, we can version this op later to support batch_dims; alternatively, I can make the spec adjustment now and modify the contrib implementation of this op in ORT. Any thoughts?

CC: @wschin @linkerzhang @spandantiwari
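
For readers unfamiliar with the attribute under discussion, here is a minimal numpy sketch of TF-style gather-nd with batch_dims=1 (the function name, shapes, and values are illustrative assumptions, not part of this PR or of TF's API):

```python
import numpy as np

def gather_nd_batched(params, indices):
    # Sketch of gather-nd with batch_dims=1: each batch element of
    # `indices` indexes into the matching batch element of `params`,
    # instead of into `params` as a whole.
    return np.stack([
        np.array([p[tuple(idx)] for idx in ind])
        for p, ind in zip(params, indices)
    ])

params = np.arange(8).reshape(2, 2, 2)     # a batch of two 2x2 matrices
indices = np.array([[[0, 1]], [[1, 0]]])   # one index-tuple per batch element
print(gather_nd_batched(params, indices))  # -> [[1] [6]]
```

With batch_dims=0 (the behavior this PR specifies), the leading dimension would instead be treated as an ordinary data dimension.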

@gramalingam (Contributor) commented:

I personally think it is okay to leave it for future extension if we don't see a use-case for it.

@hariharans29 (Contributor, Author) replied:

> I personally think it is okay to leave it for future extension if we don't see a use-case for it.

I think this is a good idea too. We can always version the op later.

@CLAassistant commented on Jul 24, 2019:

CLA assistant check: all committers have signed the CLA.

@spandantiwari (Contributor) commented:

@hariharans29 - Overall, this looks OK. I don't have any specific blocking issues. Could you please take a look at the recently approved guideline document for adding new ops and update the PR (if needed) based on it:
https://github.com/onnx/onnx/blob/master/docs/AddNewOp.md
Some suggestions in line with the guidelines:

  1. Add more details in the op definition; could we include some equations?
  2. Add more test examples.
  3. Add a reference implementation in the test script (a sketch is shown below).
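
As a companion to item 3, here is a minimal numpy sketch of what such a reference implementation might look like (hypothetical code, not the implementation this PR actually adds):

```python
import numpy as np

def gather_nd_reference(data, indices):
    # Each index-tuple along the last axis of `indices` selects a
    # slice of `data`; a full-rank tuple selects a single element.
    k = indices.shape[-1]
    assert k <= data.ndim, "last dim of indices must not exceed rank of data"
    out_shape = indices.shape[:-1] + data.shape[k:]
    flat = indices.reshape(-1, k)
    return np.asarray([data[tuple(idx)] for idx in flat]).reshape(out_shape)

# Example: gather whole rows of a (2, 2) matrix.
data = np.array([[0, 1], [2, 3]])
indices = np.array([[1], [0]])
print(gather_nd_reference(data, indices))  # -> [[2 3] [0 1]]
```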

(Resolved review thread on onnx/defs/tensor/defs.cc)
@spandantiwari (Contributor) left a review:

LGTM.

@wschin (Contributor) left a review:

LGTM. Just one suspicious typo.


Inline review comment on the spec text:

> The output is computed as follows:
>
> The output tensor can be thought of as filling in the `indices` tensor with the appropriate slices of the input `data`.

A reviewer (Contributor) commented:

Change to "The output tensor is obtained by mapping each index-tuple in the indices tensor to the corresponding slice of the input data"
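
As a concrete illustration of that reworded sentence (values made up for this example):

```python
import numpy as np

data = np.array([[0, 1], [2, 3]])
# Each row of `indices` is one index-tuple. A full-length tuple maps to a
# scalar element; a shorter tuple (e.g. [1]) would map to a whole slice
# of `data` (here, the row [2, 3]).
indices = np.array([[0, 1], [1, 0]])
output = np.array([data[tuple(idx)] for idx in indices])
print(output)  # -> [1 2]
```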

@hariharans29 (Contributor, Author) replied:

Let me also add "This operator is the inverse of ScatterND", since ScatterND has a similar line in its spec.

@gramalingam (Contributor) commented:

By the way: there was a discussion about supporting negative indices in Gather (since PyTorch supports that). Do we want to extend this op to allow negative indices?

@hariharans29 (Contributor, Author) commented on Aug 22, 2019:

> By the way: there was a discussion about supporting negative indices in Gather (since PyTorch supports that). Do we want to extend this op to allow negative indices?

This is a good question. In this spec version, I explicitly did not allow it, pretty much following the spec clarification in Gather. Any comments, @spandantiwari @BowenBao? By the way, do GatherElements, ScatterElements, and ScatterND support negative indices?

@postrational added the label 'operator' (Issues related to ONNX operators) on Aug 22, 2019
@hariharans29 (Contributor, Author) commented:

I spoke offline with @spandantiwari and @BowenBao: we will add negative-indexing support to this op once consensus is reached, so that it can be introduced in a streamlined manner across similar ops.

For now, it makes sense to stay consistent with Gather, which doesn't support negative indexing.
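
For context, negative indexing in ops like Gather is typically defined by normalizing each index against the size of the dimension it indexes; a hypothetical sketch of that rule (not part of this spec version):

```python
def normalize_index(i, dim_size):
    # A negative index i counts back from the end of the dimension,
    # so -1 refers to the last element: i is mapped to i + dim_size.
    return i + dim_size if i < 0 else i

assert normalize_index(-1, 4) == 3
assert normalize_index(2, 4) == 2
```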

@ebarsoum merged commit 1a62afd into onnx:master on Aug 29, 2019
jcwchen pushed a commit to jcwchen/onnx that referenced this pull request on Sep 23, 2020