Support GatherND operator in ONNX (#2106)
Add GatherND
hariharans29 authored and ebarsoum committed Aug 29, 2019
1 parent 0e330e9 commit 1a62afd
Showing 15 changed files with 626 additions and 58 deletions.
95 changes: 95 additions & 0 deletions docs/Changelog.md
@@ -10753,6 +10753,101 @@ This version of the operator has been available since version 11 of the default
<dd>Constrain indices to integer types</dd>
</dl>

### <a name="GatherND-11">**GatherND-11**</a>

Given `data` tensor of rank `r` >= 1, and `indices` tensor of rank `q` >= 1, this operator gathers
slices of `data` into an output tensor of rank `q + r - indices_shape[-1] - 1`.

`indices` is a `q`-dimensional integer tensor, best thought of as a `(q-1)`-dimensional tensor of index-tuples into `data`,
where each index-tuple defines a slice of `data`.

Some salient points about the inputs' rank and shape:

1) `r` >= 1 and `q` >= 1 must hold. There is no required relationship between the ranks `r` and `q`

2) `indices_shape[-1]` must have a value between 1 and rank `r`, both inclusive

3) All values in `indices` are expected to be within bounds `[-s, s-1]` along an axis of size `s`, i.e. `-data_shape[i] <= indices[...,i] <= data_shape[i] - 1`.
It is an error if any index value is out of bounds.
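As an informal illustration of the bounds rule (a sketch of ours, not part of the specification), a negative index addresses an axis from its end, exactly as in NumPy indexing:

```python
import numpy as np

data = np.array([[0, 1], [2, 3]])  # data_shape = [2, 2]

# Along an axis of size s = 2, index -1 addresses the same position
# as s - 1 = 1, so these two index-tuples select the same element.
assert data[(-1, 0)] == data[(1, 0)]  # both select 2
```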

The output is computed as follows:

The output tensor is obtained by mapping each index-tuple in the `indices` tensor to the corresponding slice of the input `data`.

1) If `indices_shape[-1] > r`, it is an error

2) If `indices_shape[-1] == r`, since the rank of `indices` is `q`, `indices` can be thought of as a `(q-1)`-dimensional tensor
containing 1-D tensors of length `r`. Call each such 1-D tensor `indices_slice`.
Each *scalar value* `data[indices_slice]` is filled into the corresponding location of the `(q-1)`-dimensional tensor
to form the `output` tensor (Example 1 below)

3) If `indices_shape[-1] < r`, since the rank of `indices` is `q`, `indices` can be thought of as a `(q-1)`-dimensional tensor
containing 1-D tensors of length less than `r`. Call each such tensor `indices_slice`.
Each *tensor slice* `data[indices_slice, :]` is filled into the corresponding location of the `(q-1)`-dimensional tensor
to form the `output` tensor (Examples 2, 3, and 4 below)

This operator is the inverse of `ScatterND`.
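The shape rule implied by the cases above can be sketched in a few lines (an informal illustration; the function name is ours, not part of the specification):

```python
def gathernd_output_shape(data_shape, indices_shape):
    # Cases 2 and 3 above: keep the (q-1) outer dims of 'indices',
    # then append the trailing dims of 'data' that are not indexed.
    r, q, last = len(data_shape), len(indices_shape), indices_shape[-1]
    assert r >= 1 and q >= 1 and 1 <= last <= r  # rank and shape conditions
    return list(indices_shape[:-1]) + list(data_shape[last:])

print(gathernd_output_shape([2, 2], [2, 2]))        # Example 1 -> [2]
print(gathernd_output_shape([2, 2, 2], [2, 1, 2]))  # Example 4 -> [2, 1, 2]
```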

`Example 1`

data = [[0,1],[2,3]] # data_shape = [2, 2]

indices = [[0,0],[1,1]] # indices_shape = [2, 2]

output = [0,3] # output_shape = [2]

`Example 2`

data = [[0,1],[2,3]] # data_shape = [2, 2]

indices = [[1],[0]] # indices_shape = [2, 1]

output = [[2,3],[0,1]] # output_shape = [2, 2]

`Example 3`

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[0,1],[1,0]] # indices_shape = [2, 2]

output = [[2,3],[4,5]] # output_shape = [2, 2]

`Example 4`

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]

output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
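
The examples above can be reproduced with plain NumPy advanced indexing (a sketch for intuition, not the normative reference implementation):

```python
import numpy as np

data = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])  # Example 3
indices = np.array([[0, 1], [1, 0]])

# Each row of 'indices' is an index-tuple into 'data'; gathering the
# slice it addresses and stacking the results gives the output.
output = np.stack([data[tuple(i)] for i in indices])
print(output.tolist())  # [[2, 3], [4, 5]]
```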


#### Version

This version of the operator has been available since version 11 of the default ONNX operator set.

#### Inputs

<dl>
<dt><tt>data</tt> : T</dt>
<dd>Tensor of rank r >= 1.</dd>
<dt><tt>indices</tt> : tensor(int64)</dt>
<dd>Tensor of rank q >= 1.</dd>
</dl>

#### Outputs

<dl>
<dt><tt>output</tt> : T</dt>
<dd>Tensor of rank q + r - indices_shape[-1] - 1.</dd>
</dl>

#### Type Constraints

<dl>
<dt><tt>T</tt> : tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)</dt>
<dd>Constrain input and output types to any tensor type.</dd>
</dl>

### <a name="Loop-11">**Loop-11**</a>

Generic Looping construct. This loop has multiple termination conditions:
143 changes: 143 additions & 0 deletions docs/Operators.md
@@ -47,6 +47,7 @@
* <a href="#GRU">GRU</a>
* <a href="#Gather">Gather</a>
* <a href="#GatherElements">GatherElements</a>
* <a href="#GatherND">GatherND</a>
* <a href="#Gemm">Gemm</a>
* <a href="#GlobalAveragePool">GlobalAveragePool</a>
* <a href="#GlobalLpPool">GlobalLpPool</a>
@@ -5158,6 +5159,148 @@ expect(node, inputs=[data, indices.astype(np.int64)], outputs=[y],
</details>


### <a name="GatherND"></a><a name="gathernd">**GatherND**</a>

Given `data` tensor of rank `r` >= 1, and `indices` tensor of rank `q` >= 1, this operator gathers
slices of `data` into an output tensor of rank `q + r - indices_shape[-1] - 1`.

`indices` is a `q`-dimensional integer tensor, best thought of as a `(q-1)`-dimensional tensor of index-tuples into `data`,
where each index-tuple defines a slice of `data`.

Some salient points about the inputs' rank and shape:

1) `r` >= 1 and `q` >= 1 must hold. There is no required relationship between the ranks `r` and `q`

2) `indices_shape[-1]` must have a value between 1 and rank `r`, both inclusive

3) All values in `indices` are expected to be within bounds `[-s, s-1]` along an axis of size `s`, i.e. `-data_shape[i] <= indices[...,i] <= data_shape[i] - 1`.
It is an error if any index value is out of bounds.

The output is computed as follows:

The output tensor is obtained by mapping each index-tuple in the `indices` tensor to the corresponding slice of the input `data`.

1) If `indices_shape[-1] > r`, it is an error

2) If `indices_shape[-1] == r`, since the rank of `indices` is `q`, `indices` can be thought of as a `(q-1)`-dimensional tensor
containing 1-D tensors of length `r`. Call each such 1-D tensor `indices_slice`.
Each *scalar value* `data[indices_slice]` is filled into the corresponding location of the `(q-1)`-dimensional tensor
to form the `output` tensor (Example 1 below)

3) If `indices_shape[-1] < r`, since the rank of `indices` is `q`, `indices` can be thought of as a `(q-1)`-dimensional tensor
containing 1-D tensors of length less than `r`. Call each such tensor `indices_slice`.
Each *tensor slice* `data[indices_slice, :]` is filled into the corresponding location of the `(q-1)`-dimensional tensor
to form the `output` tensor (Examples 2, 3, and 4 below)

This operator is the inverse of `ScatterND`.

`Example 1`

data = [[0,1],[2,3]] # data_shape = [2, 2]

indices = [[0,0],[1,1]] # indices_shape = [2, 2]

output = [0,3] # output_shape = [2]

`Example 2`

data = [[0,1],[2,3]] # data_shape = [2, 2]

indices = [[1],[0]] # indices_shape = [2, 1]

output = [[2,3],[0,1]] # output_shape = [2, 2]

`Example 3`

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[0,1],[1,0]] # indices_shape = [2, 2]

output = [[2,3],[4,5]] # output_shape = [2, 2]

`Example 4`

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]

output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]


#### Version

This version of the operator has been available since version 11 of the default ONNX operator set.

#### Inputs

<dl>
<dt><tt>data</tt> : T</dt>
<dd>Tensor of rank r >= 1.</dd>
<dt><tt>indices</tt> : tensor(int64)</dt>
<dd>Tensor of rank q >= 1.</dd>
</dl>

#### Outputs

<dl>
<dt><tt>output</tt> : T</dt>
<dd>Tensor of rank q + r - indices_shape[-1] - 1.</dd>
</dl>

#### Type Constraints

<dl>
<dt><tt>T</tt> : tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)</dt>
<dd>Constrain input and output types to any tensor type.</dd>
</dl>


#### Examples

<details>
<summary>float32</summary>

```python
node = onnx.helper.make_node(
    'GatherND',
    inputs=['data', 'indices'],
    outputs=['output'],
)

data = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]], dtype=np.float32)
indices = np.array([[[0, 1]], [[1, 0]]], dtype=np.int64)
output = gather_nd_impl(data, indices)
expected_output = np.array([[[2, 3]], [[4, 5]]], dtype=np.float32)
assert np.array_equal(output, expected_output)
expect(node, inputs=[data, indices], outputs=[output],
       name='test_gathernd_example_float32')
```

</details>


<details>
<summary>int32</summary>

```python
node = onnx.helper.make_node(
    'GatherND',
    inputs=['data', 'indices'],
    outputs=['output'],
)

data = np.array([[0, 1], [2, 3]], dtype=np.int32)
indices = np.array([[0, 0], [1, 1]], dtype=np.int64)
output = gather_nd_impl(data, indices)
expected_output = np.array([0, 3], dtype=np.int32)
assert np.array_equal(output, expected_output)
expect(node, inputs=[data, indices], outputs=[output],
       name='test_gathernd_example_int32')
```

</details>


### <a name="Gemm"></a><a name="gemm">**Gemm**</a>

General Matrix multiplication:
Expand Down
46 changes: 45 additions & 1 deletion docs/TestCoverage.md
Original file line number Diff line number Diff line change
@@ -5,7 +5,7 @@
* [Overall Test Coverage](#overall-test-coverage)
# Node Test Coverage
## Summary
Node tests have covered 135/142 (95.07%, 5 generators excluded) common operators.
Node tests have covered 136/143 (95.10%, 5 generators excluded) common operators.

Node tests have covered 0/0 (N/A) experimental operators.

@@ -2828,6 +2828,50 @@ expect(node, inputs=[data, indices.astype(np.int64)], outputs=[y],
</details>


### GatherND
There are 2 test cases, listed as follows:
<details>
<summary>float32</summary>

```python
node = onnx.helper.make_node(
    'GatherND',
    inputs=['data', 'indices'],
    outputs=['output'],
)

data = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]], dtype=np.float32)
indices = np.array([[[0, 1]], [[1, 0]]], dtype=np.int64)
output = gather_nd_impl(data, indices)
expected_output = np.array([[[2, 3]], [[4, 5]]], dtype=np.float32)
assert np.array_equal(output, expected_output)
expect(node, inputs=[data, indices], outputs=[output],
       name='test_gathernd_example_float32')
```

</details>
<details>
<summary>int32</summary>

```python
node = onnx.helper.make_node(
    'GatherND',
    inputs=['data', 'indices'],
    outputs=['output'],
)

data = np.array([[0, 1], [2, 3]], dtype=np.int32)
indices = np.array([[0, 0], [1, 1]], dtype=np.int64)
output = gather_nd_impl(data, indices)
expected_output = np.array([0, 3], dtype=np.int32)
assert np.array_equal(output, expected_output)
expect(node, inputs=[data, indices], outputs=[output],
       name='test_gathernd_example_int32')
```

</details>


### Gemm
There are 2 test cases, listed as follows:
<details>
Expand Down
71 changes: 71 additions & 0 deletions onnx/backend/test/case/node/gathernd.py
@@ -0,0 +1,71 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import numpy as np # type: ignore

import onnx
from ..base import Base
from . import expect


def gather_nd_impl(data, indices):
    # type: (np.ndarray, np.ndarray) -> np.ndarray

    # Note the data rank - will be reused multiple times later
    data_rank = len(data.shape)

    # Check input tensors' shape/rank condition
    assert indices.shape[-1] <= data_rank

    # Compute the shape of the output array: the (q-1) outer dims of
    # 'indices' followed by any un-indexed trailing dims of 'data'
    output_shape = (list(indices.shape)[:-1]
                    if indices.shape[-1] == data_rank
                    else list(indices.shape)[:-1] + list(data.shape)[indices.shape[-1]:])

    # Placeholder for output data
    output_data_buffer = []

    # Flatten 'indices' to a 2-D array of index-tuples
    reshaped_indices = indices.reshape(-1, indices.shape[-1])

    # Gather the scalar or slice addressed by each index-tuple
    for outer_dim in range(reshaped_indices.shape[0]):
        gather_index = tuple(reshaped_indices[outer_dim])
        output_data_buffer.append(data[gather_index])
    return np.asarray(output_data_buffer, dtype=data.dtype).reshape(output_shape)


class GatherND(Base):

    @staticmethod
    def export_int32():  # type: () -> None
        node = onnx.helper.make_node(
            'GatherND',
            inputs=['data', 'indices'],
            outputs=['output'],
        )

        data = np.array([[0, 1], [2, 3]], dtype=np.int32)
        indices = np.array([[0, 0], [1, 1]], dtype=np.int64)
        output = gather_nd_impl(data, indices)
        expected_output = np.array([0, 3], dtype=np.int32)
        assert np.array_equal(output, expected_output)
        expect(node, inputs=[data, indices], outputs=[output],
               name='test_gathernd_example_int32')

    @staticmethod
    def export_float32():  # type: () -> None
        node = onnx.helper.make_node(
            'GatherND',
            inputs=['data', 'indices'],
            outputs=['output'],
        )

        data = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]], dtype=np.float32)
        indices = np.array([[[0, 1]], [[1, 0]]], dtype=np.int64)
        output = gather_nd_impl(data, indices)
        expected_output = np.array([[[2, 3]], [[4, 5]]], dtype=np.float32)
        assert np.array_equal(output, expected_output)
        expect(node, inputs=[data, indices], outputs=[output],
               name='test_gathernd_example_float32')
