Merge branch 'master' of https://github.com/onnx/onnx into neraoof/optional

Signed-off-by: neginraoof <neginmr@utexas.edu>

# Conflicts:
#	docs/Changelog.md
#	docs/TestCoverage.md
#	onnx/defs/operator_sets.h
neginraoof committed Jul 14, 2021
2 parents 61c5107 + 2e5caf7 commit ea2ee70
Showing 90 changed files with 706 additions and 6 deletions.
35 changes: 35 additions & 0 deletions docs/Changelog.md
@@ -19607,6 +19607,41 @@ This version of the operator has been available since version 15 of the default ONNX operator set.
<dd>Constrain output types to all numeric tensors and bool tensors.</dd>
</dl>

### <a name="CastLike-15"></a>**CastLike-15**</a>

The operator casts the elements of a given input tensor (the first input) to
the same data type as the elements of the second input tensor.
See documentation of the Cast operator for further details.

#### Version

This version of the operator has been available since version 15 of the default ONNX operator set.

#### Inputs

<dl>
<dt><tt>input</tt> (differentiable) : T1</dt>
<dd>Input tensor to be cast.</dd>
<dt><tt>target_type</tt> (non-differentiable) : T2</dt>
<dd>The (first) input tensor will be cast to produce a tensor of the same type as this (second input) tensor.</dd>
</dl>

#### Outputs

<dl>
<dt><tt>output</tt> (differentiable) : T2</dt>
<dd>Output tensor produced by casting the first input tensor to have the same type as the second input tensor.</dd>
</dl>

#### Type Constraints

<dl>
<dt><tt>T1</tt> : tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16)</dt>
<dd>Constrain input types. Casting from complex is not supported.</dd>
<dt><tt>T2</tt> : tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16)</dt>
<dd>Constrain output types. Casting to complex is not supported.</dd>
</dl>
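
A minimal construction sketch (not from the spec; the value names, shapes, and opset import here are illustrative) showing that the second input contributes only its element type:

```python
import onnx
from onnx import TensorProto, helper

# Cast a float32 tensor to the element type of 'like' (here float16).
node = helper.make_node('CastLike', inputs=['input', 'like'], outputs=['output'])
graph = helper.make_graph(
    [node], 'castlike_sketch',
    inputs=[helper.make_tensor_value_info('input', TensorProto.FLOAT, [3, 4]),
            helper.make_tensor_value_info('like', TensorProto.FLOAT16, [1])],
    outputs=[helper.make_tensor_value_info('output', TensorProto.FLOAT16, [3, 4])],
)
# CastLike is available from opset 15 onward.
model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])
onnx.checker.check_model(model)
```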

### <a name="Optional-15"></a>**Optional-15**</a>

Construct an optional type value containing either an empty optional of a certain type specified by the attribute,
124 changes: 124 additions & 0 deletions docs/Operators.md
@@ -169,6 +169,7 @@ For an operator input/output's differentiability, it can be differentiable,
|<a href="#Xor">Xor</a>|<a href="Changelog.md#Xor-7">7</a>, <a href="Changelog.md#Xor-1">1</a>|
|**Function**|**Since version**|
|<a href="#Bernoulli">Bernoulli</a>|<a href="Changelog.md#Bernoulli-15">15</a>|
|<a href="#CastLike">CastLike</a>|<a href="Changelog.md#CastLike-15">15</a>|
|<a href="#Celu">Celu</a>|<a href="Changelog.md#Celu-12">12</a>|
|<a href="#DynamicQuantizeLinear">DynamicQuantizeLinear</a>|<a href="Changelog.md#DynamicQuantizeLinear-11">11</a>|
|<a href="#GreaterOrEqual">GreaterOrEqual</a>|<a href="Changelog.md#GreaterOrEqual-12">12</a>|
@@ -2530,6 +2531,129 @@ for from_type, to_type in test_cases:
</details>


### <a name="CastLike"></a><a name="castlike">**CastLike**</a>

The operator casts the elements of a given input tensor (the first input) to
the same data type as the elements of the second input tensor.
See documentation of the Cast operator for further details.

#### Version

This version of the operator has been available since version 15 of the default ONNX operator set.

#### Inputs

<dl>
<dt><tt>input</tt> (differentiable) : T1</dt>
<dd>Input tensor to be cast.</dd>
<dt><tt>target_type</tt> (non-differentiable) : T2</dt>
<dd>The (first) input tensor will be cast to produce a tensor of the same type as this (second input) tensor.</dd>
</dl>

#### Outputs

<dl>
<dt><tt>output</tt> (differentiable) : T2</dt>
<dd>Output tensor produced by casting the first input tensor to have the same type as the second input tensor.</dd>
</dl>

#### Type Constraints

<dl>
<dt><tt>T1</tt> : tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16)</dt>
<dd>Constrain input types. Casting from complex is not supported.</dd>
<dt><tt>T2</tt> : tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16)</dt>
<dd>Constrain output types. Casting to complex is not supported.</dd>
</dl>


#### Examples

<details>
<summary>castlike</summary>

```python
shape = (3, 4)
test_cases = [
    ('FLOAT', 'FLOAT16'),
    ('FLOAT', 'DOUBLE'),
    ('FLOAT16', 'FLOAT'),
    ('FLOAT16', 'DOUBLE'),
    ('DOUBLE', 'FLOAT'),
    ('DOUBLE', 'FLOAT16'),
    ('FLOAT', 'STRING'),
    ('STRING', 'FLOAT'),
    ('FLOAT', 'BFLOAT16'),
    ('BFLOAT16', 'FLOAT'),
]

for from_type, to_type in test_cases:
    input_type_proto = None
    output_type_proto = None
    if from_type == 'BFLOAT16' or to_type == 'BFLOAT16':
        np_fp32 = np.array([u'0.47892547', u'0.48033667', u'0.49968487', u'0.81910545',
                            u'0.47031248', u'0.816468', u'0.21087195', u'0.7229038',
                            u'NaN', u'INF', u'+INF', u'-INF'], dtype=np.float32)
        little_endian = sys.byteorder == 'little'
        np_uint16_view = np_fp32.view(dtype=np.uint16)
        # bfloat16 is the high half-word of each float32
        np_bfp16 = np_uint16_view[1::2] if little_endian else np_uint16_view[0::2]
        if to_type == 'BFLOAT16':
            assert from_type == 'FLOAT'
            input = np_fp32.reshape([3, 4])
            output = np_bfp16.reshape([3, 4])
            input_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.FLOAT), None)
            output_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.BFLOAT16), None)
        else:
            assert to_type == 'FLOAT'
            input = np_bfp16.reshape([3, 4])
            # convert bfloat16 back to float32 by zero-filling the low half-words
            np_fp32_zeros = np.zeros((len(np_bfp16) * 2,), dtype=np.uint16)
            if little_endian:
                np_fp32_zeros[1::2] = np_bfp16
            else:
                np_fp32_zeros[0::2] = np_bfp16
            np_fp32_from_bfloat = np_fp32_zeros.view(dtype=np.float32)
            output = np_fp32_from_bfloat.reshape([3, 4])
            input_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.BFLOAT16), None)
            output_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.FLOAT), None)
    elif from_type != 'STRING':
        input = np.random.random_sample(shape).astype(
            TENSOR_TYPE_TO_NP_TYPE[getattr(TensorProto, from_type)])
        if to_type == 'STRING':
            # Convert each element to str and store with object dtype for the generated script
            ss = []
            for i in input.flatten():
                s = str(i).encode('utf-8')
                su = s.decode('utf-8')
                ss.append(su)

            output = np.array(ss).astype(object).reshape([3, 4])
        else:
            output = input.astype(TENSOR_TYPE_TO_NP_TYPE[getattr(TensorProto, to_type)])
    else:
        input = np.array([u'0.47892547', u'0.48033667', u'0.49968487', u'0.81910545',
                          u'0.47031248', u'0.816468', u'0.21087195', u'0.7229038',
                          u'NaN', u'INF', u'+INF', u'-INF'], dtype=object).reshape([3, 4])
        output = input.astype(TENSOR_TYPE_TO_NP_TYPE[getattr(TensorProto, to_type)])
    like = output.flatten()[0:1]
    node = onnx.helper.make_node(
        'CastLike',
        inputs=['input', 'like'],
        outputs=['output'],
    )
    if input_type_proto and output_type_proto:
        expect(node, inputs=[input, like], outputs=[output],
               name='test_castlike_' + from_type + '_to_' + to_type,
               input_type_protos=[input_type_proto, output_type_proto],
               output_type_protos=[output_type_proto])
    else:
        expect(node, inputs=[input, like], outputs=[output],
               name='test_castlike_' + from_type + '_to_' + to_type)
```

</details>
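
The BFLOAT16 cases above rely on bfloat16 being the upper 16 bits of an IEEE-754 float32; a standalone sketch of that round trip (truncation going down, zero-filling coming back, so values exactly representable in bfloat16 survive unchanged):

```python
import sys
import numpy as np

def float32_to_bfloat16_bits(x):
    # Keep the high half-word of each float32 (this truncates the mantissa).
    hi = 1 if sys.byteorder == 'little' else 0
    return x.view(np.uint16)[hi::2].copy()

def bfloat16_bits_to_float32(b):
    # Zero-fill the low half-words; the truncated mantissa bits stay zero.
    out = np.zeros(b.size * 2, dtype=np.uint16)
    hi = 1 if sys.byteorder == 'little' else 0
    out[hi::2] = b
    return out.view(np.float32)

x = np.array([0.5, 1.5, np.inf], dtype=np.float32)  # all exactly representable in bfloat16
assert np.array_equal(bfloat16_bits_to_float32(float32_to_bfloat16_bits(x)), x)
```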


### <a name="Ceil"></a><a name="ceil">**Ceil**</a>

Ceil takes one input data (Tensor<T>) and produces one output data
89 changes: 88 additions & 1 deletion docs/TestCoverage.md
@@ -6,7 +6,7 @@
* [Overall Test Coverage](#overall-test-coverage)
# Node Test Coverage
## Summary
-Node tests have covered 151/167 (90.42%, 5 generators excluded) common operators.
+Node tests have covered 152/168 (90.48%, 5 generators excluded) common operators.

Node tests have covered 0/0 (N/A) experimental operators.

@@ -1717,6 +1717,93 @@ for from_type, to_type in test_cases:
</details>


### CastLike
There is 1 test case, listed as follows:
<details>
<summary>castlike</summary>

```python
shape = (3, 4)
test_cases = [
    ('FLOAT', 'FLOAT16'),
    ('FLOAT', 'DOUBLE'),
    ('FLOAT16', 'FLOAT'),
    ('FLOAT16', 'DOUBLE'),
    ('DOUBLE', 'FLOAT'),
    ('DOUBLE', 'FLOAT16'),
    ('FLOAT', 'STRING'),
    ('STRING', 'FLOAT'),
    ('FLOAT', 'BFLOAT16'),
    ('BFLOAT16', 'FLOAT'),
]

for from_type, to_type in test_cases:
    input_type_proto = None
    output_type_proto = None
    if from_type == 'BFLOAT16' or to_type == 'BFLOAT16':
        np_fp32 = np.array([u'0.47892547', u'0.48033667', u'0.49968487', u'0.81910545',
                            u'0.47031248', u'0.816468', u'0.21087195', u'0.7229038',
                            u'NaN', u'INF', u'+INF', u'-INF'], dtype=np.float32)
        little_endian = sys.byteorder == 'little'
        np_uint16_view = np_fp32.view(dtype=np.uint16)
        # bfloat16 is the high half-word of each float32
        np_bfp16 = np_uint16_view[1::2] if little_endian else np_uint16_view[0::2]
        if to_type == 'BFLOAT16':
            assert from_type == 'FLOAT'
            input = np_fp32.reshape([3, 4])
            output = np_bfp16.reshape([3, 4])
            input_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.FLOAT), None)
            output_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.BFLOAT16), None)
        else:
            assert to_type == 'FLOAT'
            input = np_bfp16.reshape([3, 4])
            # convert bfloat16 back to float32 by zero-filling the low half-words
            np_fp32_zeros = np.zeros((len(np_bfp16) * 2,), dtype=np.uint16)
            if little_endian:
                np_fp32_zeros[1::2] = np_bfp16
            else:
                np_fp32_zeros[0::2] = np_bfp16
            np_fp32_from_bfloat = np_fp32_zeros.view(dtype=np.float32)
            output = np_fp32_from_bfloat.reshape([3, 4])
            input_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.BFLOAT16), None)
            output_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.FLOAT), None)
    elif from_type != 'STRING':
        input = np.random.random_sample(shape).astype(
            TENSOR_TYPE_TO_NP_TYPE[getattr(TensorProto, from_type)])
        if to_type == 'STRING':
            # Convert each element to str and store with object dtype for the generated script
            ss = []
            for i in input.flatten():
                s = str(i).encode('utf-8')
                su = s.decode('utf-8')
                ss.append(su)

            output = np.array(ss).astype(object).reshape([3, 4])
        else:
            output = input.astype(TENSOR_TYPE_TO_NP_TYPE[getattr(TensorProto, to_type)])
    else:
        input = np.array([u'0.47892547', u'0.48033667', u'0.49968487', u'0.81910545',
                          u'0.47031248', u'0.816468', u'0.21087195', u'0.7229038',
                          u'NaN', u'INF', u'+INF', u'-INF'], dtype=object).reshape([3, 4])
        output = input.astype(TENSOR_TYPE_TO_NP_TYPE[getattr(TensorProto, to_type)])
    like = output.flatten()[0:1]
    node = onnx.helper.make_node(
        'CastLike',
        inputs=['input', 'like'],
        outputs=['output'],
    )
    if input_type_proto and output_type_proto:
        expect(node, inputs=[input, like], outputs=[output],
               name='test_castlike_' + from_type + '_to_' + to_type,
               input_type_protos=[input_type_proto, output_type_proto],
               output_type_protos=[output_type_proto])
    else:
        expect(node, inputs=[input, like], outputs=[output],
               name='test_castlike_' + from_type + '_to_' + to_type)
```

</details>


### Ceil
There is 1 test case, listed as follows:
<details>
2 changes: 1 addition & 1 deletion onnx/backend/test/case/node/__init__.py
@@ -109,7 +109,7 @@ def _extract_value_info(input, name, type_proto=None):  # type: (Union[List[Any]
            raise NotImplementedError("_extract_value_info: both input and type_proto arguments cannot be None.")
        elif isinstance(input, list):
            elem_type = onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[input[0].dtype]
-           shape = input[0].shape
+           shape = None
            tensor_type_proto = onnx.helper.make_tensor_type_proto(elem_type, shape)
            type_proto = onnx.helper.make_sequence_type_proto(tensor_type_proto)
        else:
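The `shape = None` change above stops the inferred sequence value_info from pinning every element to the first element's shape, presumably because sequence elements need not all share one shape. A minimal sketch of the two resulting type protos, using the same onnx helpers as the diff:

```python
import onnx
from onnx import TensorProto

# Element shape pinned to (3, 4): only sequences of exactly (3, 4) tensors match.
pinned = onnx.helper.make_sequence_type_proto(
    onnx.helper.make_tensor_type_proto(TensorProto.FLOAT, (3, 4)))

# shape=None leaves the element shape unspecified: any float tensor shape is admissible.
unpinned = onnx.helper.make_sequence_type_proto(
    onnx.helper.make_tensor_type_proto(TensorProto.FLOAT, None))

print(pinned)
print(unpinned)
```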
100 changes: 100 additions & 0 deletions onnx/backend/test/case/node/castlike.py
@@ -0,0 +1,100 @@
# SPDX-License-Identifier: Apache-2.0

# coding: utf-8

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import numpy as np # type: ignore

import onnx
from onnx import TensorProto
from onnx.mapping import TENSOR_TYPE_TO_NP_TYPE

from ..base import Base
from . import expect
import sys


class CastLike(Base):

    @staticmethod
    def export():  # type: () -> None
        shape = (3, 4)
        test_cases = [
            ('FLOAT', 'FLOAT16'),
            ('FLOAT', 'DOUBLE'),
            ('FLOAT16', 'FLOAT'),
            ('FLOAT16', 'DOUBLE'),
            ('DOUBLE', 'FLOAT'),
            ('DOUBLE', 'FLOAT16'),
            ('FLOAT', 'STRING'),
            ('STRING', 'FLOAT'),
            ('FLOAT', 'BFLOAT16'),
            ('BFLOAT16', 'FLOAT'),
        ]

        for from_type, to_type in test_cases:
            input_type_proto = None
            output_type_proto = None
            if from_type == 'BFLOAT16' or to_type == 'BFLOAT16':
                np_fp32 = np.array([u'0.47892547', u'0.48033667', u'0.49968487', u'0.81910545',
                                    u'0.47031248', u'0.816468', u'0.21087195', u'0.7229038',
                                    u'NaN', u'INF', u'+INF', u'-INF'], dtype=np.float32)
                little_endian = sys.byteorder == 'little'
                np_uint16_view = np_fp32.view(dtype=np.uint16)
                # bfloat16 is the high half-word of each float32
                np_bfp16 = np_uint16_view[1::2] if little_endian else np_uint16_view[0::2]
                if to_type == 'BFLOAT16':
                    assert from_type == 'FLOAT'
                    input = np_fp32.reshape([3, 4])
                    output = np_bfp16.reshape([3, 4])
                    input_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.FLOAT), None)
                    output_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.BFLOAT16), None)
                else:
                    assert to_type == 'FLOAT'
                    input = np_bfp16.reshape([3, 4])
                    # convert bfloat16 back to float32 by zero-filling the low half-words
                    np_fp32_zeros = np.zeros((len(np_bfp16) * 2,), dtype=np.uint16)
                    if little_endian:
                        np_fp32_zeros[1::2] = np_bfp16
                    else:
                        np_fp32_zeros[0::2] = np_bfp16
                    np_fp32_from_bfloat = np_fp32_zeros.view(dtype=np.float32)
                    output = np_fp32_from_bfloat.reshape([3, 4])
                    input_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.BFLOAT16), None)
                    output_type_proto = onnx.helper.make_tensor_type_proto(int(TensorProto.FLOAT), None)
            elif from_type != 'STRING':
                input = np.random.random_sample(shape).astype(
                    TENSOR_TYPE_TO_NP_TYPE[getattr(TensorProto, from_type)])
                if to_type == 'STRING':
                    # Convert each element to str and store with object dtype for the generated script
                    ss = []
                    for i in input.flatten():
                        s = str(i).encode('utf-8')
                        su = s.decode('utf-8')
                        ss.append(su)

                    output = np.array(ss).astype(object).reshape([3, 4])
                else:
                    output = input.astype(TENSOR_TYPE_TO_NP_TYPE[getattr(TensorProto, to_type)])
            else:
                input = np.array([u'0.47892547', u'0.48033667', u'0.49968487', u'0.81910545',
                                  u'0.47031248', u'0.816468', u'0.21087195', u'0.7229038',
                                  u'NaN', u'INF', u'+INF', u'-INF'], dtype=object).reshape([3, 4])
                output = input.astype(TENSOR_TYPE_TO_NP_TYPE[getattr(TensorProto, to_type)])
            like = output.flatten()[0:1]
            node = onnx.helper.make_node(
                'CastLike',
                inputs=['input', 'like'],
                outputs=['output'],
            )
            if input_type_proto and output_type_proto:
                expect(node, inputs=[input, like], outputs=[output],
                       name='test_castlike_' + from_type + '_to_' + to_type,
                       input_type_protos=[input_type_proto, output_type_proto],
                       output_type_protos=[output_type_proto])
            else:
                expect(node, inputs=[input, like], outputs=[output],
                       name='test_castlike_' + from_type + '_to_' + to_type)
