Operator Schemas

This file is automatically generated from the def files via a script. Do not modify it directly; instead, edit the operator definitions.

An operator input or output can be differentiable, non-differentiable, or of undefined differentiability. If a variable's differentiability is not specified, that variable has undefined differentiability.

ai.onnx (default)

Operator Since version
Abs 13, 6, 1
Acos 7
Acosh 9
Add 14, 13, 7, 6, 1
And 7, 1
ArgMax 13, 12, 11, 1
ArgMin 13, 12, 11, 1
Asin 7
Asinh 9
Atan 7
Atanh 9
AveragePool 11, 10, 7, 1
BatchNormalization 14, 9, 7, 6, 1
BitShift 11
Cast 13, 9, 6, 1
Ceil 13, 6, 1
Clip 13, 12, 11, 6, 1
Compress 11, 9
Concat 13, 11, 4, 1
ConcatFromSequence 11
Constant 13, 12, 11, 9, 1
ConstantOfShape 9
Conv 11, 1
ConvInteger 10
ConvTranspose 11, 1
Cos 7
Cosh 9
CumSum 14, 11
DepthToSpace 13, 11, 1
DequantizeLinear 13, 10
Det 11
Div 14, 13, 7, 6, 1
Dropout 13, 12, 10, 7, 6, 1
Einsum 12
Elu 6, 1
Equal 13, 11, 7, 1
Erf 13, 9
Exp 13, 6, 1
Expand 13, 8
EyeLike 9
Flatten 13, 11, 9, 1
Floor 13, 6, 1
GRU 14, 7, 3, 1
Gather 13, 11, 1
GatherElements 13, 11
GatherND 13, 12, 11
Gemm 13, 11, 9, 7, 6, 1
GlobalAveragePool 1
GlobalLpPool 2, 1
GlobalMaxPool 1
Greater 13, 9, 7, 1
HardSigmoid 6, 1
Hardmax 13, 11, 1
Identity 14, 13, 1
If 13, 11, 1
InstanceNormalization 6, 1
IsInf 10
IsNaN 13, 9
LRN 13, 1
LSTM 14, 7, 1
LeakyRelu 6, 1
Less 13, 9, 7, 1
Log 13, 6, 1
Loop 13, 11, 1
LpNormalization 1
LpPool 11, 2, 1
MatMul 13, 9, 1
MatMulInteger 10
Max 13, 12, 8, 6, 1
MaxPool 12, 11, 10, 8, 1
MaxRoiPool 1
MaxUnpool 11, 9
Mean 13, 8, 6, 1
Min 13, 12, 8, 6, 1
Mod 13, 10
Mul 14, 13, 7, 6, 1
Multinomial 7
Neg 13, 6, 1
NonMaxSuppression 11, 10
NonZero 13, 9
Not 1
OneHot 11, 9
Or 7, 1
PRelu 9, 7, 6, 1
Pad 13, 11, 2, 1
Pow 15, 13, 12, 7, 1
QLinearConv 10
QLinearMatMul 10
QuantizeLinear 13, 10
RNN 14, 7, 1
RandomNormal 1
RandomNormalLike 1
RandomUniform 1
RandomUniformLike 1
Reciprocal 13, 6, 1
ReduceL1 13, 11, 1
ReduceL2 13, 11, 1
ReduceLogSum 13, 11, 1
ReduceLogSumExp 13, 11, 1
ReduceMax 13, 12, 11, 1
ReduceMean 13, 11, 1
ReduceMin 13, 12, 11, 1
ReduceProd 13, 11, 1
ReduceSum 13, 11, 1
ReduceSumSquare 13, 11, 1
Relu 14, 13, 6, 1
Reshape 14, 13, 5, 1
Resize 13, 11, 10
ReverseSequence 10
RoiAlign 10
Round 11
Scan 11, 9, 8
Scatter (deprecated) 11, 9
ScatterElements 13, 11
ScatterND 13, 11
Selu 6, 1
SequenceAt 11
SequenceConstruct 11
SequenceEmpty 11
SequenceErase 11
SequenceInsert 11
SequenceLength 11
Shape 13, 1
Shrink 9
Sigmoid 13, 6, 1
Sign 13, 9
Sin 7
Sinh 9
Size 13, 1
Slice 13, 11, 10, 1
Softplus 1
Softsign 1
SpaceToDepth 13, 1
Split 13, 11, 2, 1
SplitToSequence 11
Sqrt 13, 6, 1
Squeeze 13, 11, 1
StringNormalizer 10
Sub 14, 13, 7, 6, 1
Sum 13, 8, 6, 1
Tan 7
Tanh 13, 6, 1
TfIdfVectorizer 9
ThresholdedRelu 10
Tile 13, 6, 1
TopK 11, 10, 1
Transpose 13, 1
Trilu 14
Unique 11
Unsqueeze 13, 11, 1
Upsample (deprecated) 10, 9, 7
Where 9
Xor 7, 1
Function Since version
Bernoulli 15
Celu 12
DynamicQuantizeLinear 11
GreaterOrEqual 12
HardSwish 14
LessOrEqual 12
LogSoftmax 13, 11, 1
MeanVarianceNormalization 13, 9
NegativeLogLikelihoodLoss 13, 12
Range 11
Softmax 13, 11, 1
SoftmaxCrossEntropyLoss 13, 12

ai.onnx.preview.training

Operator Since version
ai.onnx.preview.training.Adagrad 1
ai.onnx.preview.training.Adam 1
ai.onnx.preview.training.Gradient 1
ai.onnx.preview.training.Momentum 1

ai.onnx (default)

Abs

Absolute takes one input data (Tensor) and produces one output data (Tensor) where the absolute value, y = abs(x), is applied to the tensor elementwise.

Version

This version of the operator has been available since version 13 of the default ONNX operator set.

Other versions of this operator: 1, 6

Inputs

X (differentiable) : T
Input tensor

Outputs

Y (differentiable) : T
Output tensor

Type Constraints

T : tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)
Constrain input and output types to all numeric tensors.

Examples

abs
node = onnx.helper.make_node(
    'Abs',
    inputs=['x'],
    outputs=['y'],
)
x = np.random.randn(3, 4, 5).astype(np.float32)
y = abs(x)

expect(node, inputs=[x], outputs=[y],
       name='test_abs')
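
The examples throughout this document call an expect helper from the ONNX backend test infrastructure, which wraps the node in a model and registers the given inputs and outputs as a test case. A minimal sketch of what it does (the value-info construction here is an assumption about the test harness, not the exact test-suite code):

import numpy as np
import onnx
from onnx import helper

def expect(node, inputs, outputs, name):
    # Derive value infos (name, element type, shape) from the example arrays.
    def value_info(arr, arr_name):
        elem_type = helper.np_dtype_to_tensor_dtype(arr.dtype)
        return helper.make_tensor_value_info(arr_name, elem_type, arr.shape)

    graph = helper.make_graph(
        [node], name,
        inputs=[value_info(a, n) for a, n in zip(inputs, node.input)],
        outputs=[value_info(a, n) for a, n in zip(outputs, node.output)],
    )
    model = helper.make_model(graph)
    # The real helper also serializes the case and checks runtime outputs;
    # here we only validate the model structurally.
    onnx.checker.check_model(model)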

Sample Implementation

Abs
# SPDX-License-Identifier: Apache-2.0

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import numpy as np  # type: ignore


def abs(input):  # type: (np.ndarray) -> np.ndarray
    return np.abs(input)

Acos

Calculates the arccosine (inverse of cosine) of the given input tensor, element-wise.

Version

This version of the operator has been available since version 7 of the default ONNX operator set.

Inputs

input (differentiable) : T
Input tensor

Outputs

output (differentiable) : T
The arccosine of the input tensor computed element-wise

Type Constraints

T : tensor(float16), tensor(float), tensor(double)
Constrain input and output types to float tensors.

Examples

acos
node = onnx.helper.make_node(
    'Acos',
    inputs=['x'],
    outputs=['y'],
)

x = np.array([-0.5, 0, 0.5]).astype(np.float32)
y = np.arccos(x)
expect(node, inputs=[x], outputs=[y],
       name='test_acos_example')

x = np.random.rand(3, 4, 5).astype(np.float32)
y = np.arccos(x)
expect(node, inputs=[x], outputs=[y],
       name='test_acos')

Acosh

Calculates the hyperbolic arccosine of the given input tensor element-wise.

Version

This version of the operator has been available since version 9 of the default ONNX operator set.

Inputs

input (differentiable) : T
Input tensor

Outputs

output (differentiable) : T
The hyperbolic arccosine values of the input tensor computed element-wise

Type Constraints

T : tensor(float16), tensor(float), tensor(double)
Constrain input and output types to float tensors.

Examples

acosh
node = onnx.helper.make_node(
    'Acosh',
    inputs=['x'],
    outputs=['y'],
)

x = np.array([10, np.e, 1]).astype(np.float32)
y = np.arccosh(x)  # expected output [2.99322295,  1.65745449,  0.]
expect(node, inputs=[x], outputs=[y],
       name='test_acosh_example')

x = np.random.uniform(1.0, 10.0, (3, 4, 5)).astype(np.float32)
y = np.arccosh(x)
expect(node, inputs=[x], outputs=[y],
       name='test_acosh')

Add

Performs element-wise binary addition (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check the doc.

(Opset 14 change): Extend supported types to include uint8, int8, uint16, and int16.

Version

This version of the operator has been available since version 14 of the default ONNX operator set.

Other versions of this operator: 1, 6, 7, 13

Inputs

A (differentiable) : T
First operand.
B (differentiable) : T
Second operand.

Outputs

C (differentiable) : T
Result, has same element type as two inputs

Type Constraints

T : tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)
Constrain input and output types to all numeric tensors.

Examples

add
node = onnx.helper.make_node(
    'Add',
    inputs=['x', 'y'],
    outputs=['sum'],
)

x = np.random.randn(3, 4, 5).astype(np.float32)
y = np.random.randn(3, 4, 5).astype(np.float32)
expect(node, inputs=[x, y], outputs=[x + y],
       name='test_add')
add_broadcast
node = onnx.helper.make_node(
    'Add',
    inputs=['x', 'y'],
    outputs=['sum'],
)

x = np.random.randn(3, 4, 5).astype(np.float32)
y = np.random.randn(5).astype(np.float32)
expect(node, inputs=[x, y], outputs=[x + y],
       name='test_add_bcast')
add_uint8
node = onnx.helper.make_node(
    'Add',
    inputs=['x', 'y'],
    outputs=['sum'],
)

x = np.random.randint(24, size=(3, 4, 5), dtype=np.uint8)
y = np.random.randint(24, size=(3, 4, 5), dtype=np.uint8)
expect(node, inputs=[x, y], outputs=[x + y],
       name='test_add_uint8')

And

Returns the tensor resulting from performing the logical and operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check the doc.

Version

This version of the operator has been available since version 7 of the default ONNX operator set.

Other versions of this operator: 1

Inputs

A (non-differentiable) : T
First input operand for the logical operator.
B (non-differentiable) : T
Second input operand for the logical operator.

Outputs

C (non-differentiable) : T1
Result tensor.

Type Constraints

T : tensor(bool)
Constrains input to boolean tensor.
T1 : tensor(bool)
Constrains output to boolean tensor.

Examples

and
node = onnx.helper.make_node(
    'And',
    inputs=['x', 'y'],
    outputs=['and'],
)

# 2d
x = (np.random.randn(3, 4) > 0).astype(bool)
y = (np.random.randn(3, 4) > 0).astype(bool)
z = np.logical_and(x, y)
expect(node, inputs=[x, y], outputs=[z],
       name='test_and2d')

# 3d
x = (np.random.randn(3, 4, 5) > 0).astype(bool)
y = (np.random.randn(3, 4, 5) > 0).astype(bool)
z = np.logical_and(x, y)
expect(node, inputs=[x, y], outputs=[z],
       name='test_and3d')

# 4d
x = (np.random.randn(3, 4, 5, 6) > 0).astype(bool)
y = (np.random.randn(3, 4, 5, 6) > 0).astype(bool)
z = np.logical_and(x, y)
expect(node, inputs=[x, y], outputs=[z],
       name='test_and4d')
and_broadcast
node = onnx.helper.make_node(
    'And',
    inputs=['x', 'y'],
    outputs=['and'],
)

# 3d vs 1d
x = (np.random.randn(3, 4, 5) > 0).astype(bool)
y = (np.random.randn(5) > 0).astype(bool)
z = np.logical_and(x, y)
expect(node, inputs=[x, y], outputs=[z],
       name='test_and_bcast3v1d')

# 3d vs 2d
x = (np.random.randn(3, 4, 5) > 0).astype(bool)
y = (np.random.randn(4, 5) > 0).astype(bool)
z = np.logical_and(x, y)
expect(node, inputs=[x, y], outputs=[z],
       name='test_and_bcast3v2d')

# 4d vs 2d
x = (np.random.randn(3, 4, 5, 6) > 0).astype(bool)
y = (np.random.randn(5, 6) > 0).astype(bool)
z = np.logical_and(x, y)
expect(node, inputs=[x, y], outputs=[z],
       name='test_and_bcast4v2d')

# 4d vs 3d
x = (np.random.randn(3, 4, 5, 6) > 0).astype(bool)
y = (np.random.randn(4, 5, 6) > 0).astype(bool)
z = np.logical_and(x, y)
expect(node, inputs=[x, y], outputs=[z],
       name='test_and_bcast4v3d')

# 4d vs 4d
x = (np.random.randn(1, 4, 1, 6) > 0).astype(bool)
y = (np.random.randn(3, 1, 5, 6) > 0).astype(bool)
z = np.logical_and(x, y)
expect(node, inputs=[x, y], outputs=[z],
       name='test_and_bcast4v4d')

ArgMax

Computes the indices of the max elements of the input tensor's elements along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the max is selected if the max appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.

Version

This version of the operator has been available since version 13 of the default ONNX operator set.

Other versions of this operator: 1, 11, 12

Attributes

axis : int (default is 0)
The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data).
keepdims : int (default is 1)
Keep the reduced dimension or not; default 1 means keep the reduced dimension.
select_last_index : int (default is 0)
Whether to select the last index or the first index if the max appears at multiple indices; default is False (first index).

Inputs

data (non-differentiable) : T
An input tensor.

Outputs

reduced (non-differentiable) : tensor(int64)
Reduced output tensor with integer data type.

Type Constraints

T : tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)
Constrain input and output types to all numeric tensors.
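
The examples below rely on argmax_use_numpy and argmax_use_numpy_select_last_index helpers that are not defined in this file. A minimal NumPy sketch consistent with the semantics above (the exact test-suite code may differ slightly):

import numpy as np

def argmax_use_numpy(data, axis=0, keepdims=1):
    result = np.argmax(data, axis=axis)
    if keepdims == 1:
        result = np.expand_dims(result, axis)
    return result.astype(np.int64)

def argmax_use_numpy_select_last_index(data, axis=0, keepdims=True):
    # Flip along the axis so ties resolve to the last occurrence,
    # then map the flipped index back to the original coordinates.
    data = np.flip(data, axis)
    result = np.argmax(data, axis=axis)
    result = data.shape[axis] - result - 1
    if keepdims:
        result = np.expand_dims(result, axis)
    return result.astype(np.int64)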

Examples

default_axes_keepdims
data = np.array([[2, 1], [3, 10]], dtype=np.float32)
keepdims = 1
node = onnx.helper.make_node(
    'ArgMax',
    inputs=['data'],
    outputs=['result'],
    keepdims=keepdims)

# result: [[1], [1]]
result = argmax_use_numpy(data, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_default_axis_example')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [1, 3, 4]
result = argmax_use_numpy(data, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_default_axis_random')
default_axes_keepdims_select_last_index
data = np.array([[2, 2], [3, 10]], dtype=np.float32)
keepdims = 1
node = onnx.helper.make_node(
    'ArgMax',
    inputs=['data'],
    outputs=['result'],
    keepdims=keepdims,
    select_last_index=True)

# result: [[1, 1]]
result = argmax_use_numpy_select_last_index(data, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_default_axis_example_select_last_index')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [1, 3, 4]
result = argmax_use_numpy_select_last_index(data, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_default_axis_random_select_last_index')
keepdims
data = np.array([[2, 1], [3, 10]], dtype=np.float32)
axis = 1
keepdims = 1
node = onnx.helper.make_node(
    'ArgMax',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims)
# result: [[0], [1]]
result = argmax_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_keepdims_example')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 1, 4]
result = argmax_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_keepdims_random')
keepdims_select_last_index
data = np.array([[2, 2], [3, 10]], dtype=np.float32)
axis = 1
keepdims = 1
node = onnx.helper.make_node(
    'ArgMax',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims,
    select_last_index=True)
# result: [[1], [1]]
result = argmax_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_keepdims_example_select_last_index')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 1, 4]
result = argmax_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_keepdims_random_select_last_index')
negative_axis_keepdims
data = np.array([[2, 1], [3, 10]], dtype=np.float32)
axis = -1
keepdims = 1
node = onnx.helper.make_node(
    'ArgMax',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims)
# result: [[0], [1]]
result = argmax_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_negative_axis_keepdims_example')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 3, 1]
result = argmax_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_negative_axis_keepdims_random')
negative_axis_keepdims_select_last_index
data = np.array([[2, 2], [3, 10]], dtype=np.float32)
axis = -1
keepdims = 1
node = onnx.helper.make_node(
    'ArgMax',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims,
    select_last_index=True)
# result: [[1], [1]]
result = argmax_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_negative_axis_keepdims_example_select_last_index')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 3, 1]
result = argmax_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_negative_axis_keepdims_random_select_last_index')
no_keepdims
data = np.array([[2, 1], [3, 10]], dtype=np.float32)
axis = 1
keepdims = 0
node = onnx.helper.make_node(
    'ArgMax',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims)
# result: [[0, 1]]
result = argmax_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_no_keepdims_example')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 4]
result = argmax_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_no_keepdims_random')
no_keepdims_select_last_index
data = np.array([[2, 2], [3, 10]], dtype=np.float32)
axis = 1
keepdims = 0
node = onnx.helper.make_node(
    'ArgMax',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims,
    select_last_index=True)
# result: [[1, 1]]
result = argmax_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_no_keepdims_example_select_last_index')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 4]
result = argmax_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmax_no_keepdims_random_select_last_index')

ArgMin

Computes the indices of the min elements of the input tensor's elements along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the min is selected if the min appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.

Version

This version of the operator has been available since version 13 of the default ONNX operator set.

Other versions of this operator: 1, 11, 12

Attributes

axis : int (default is 0)
The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data).
keepdims : int (default is 1)
Keep the reduced dimension or not; default 1 means keep the reduced dimension.
select_last_index : int (default is 0)
Whether to select the last index or the first index if the min appears at multiple indices; default is False (first index).

Inputs

data (non-differentiable) : T
An input tensor.

Outputs

reduced (non-differentiable) : tensor(int64)
Reduced output tensor with integer data type.

Type Constraints

T : tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)
Constrain input and output types to all numeric tensors.
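
The examples below use argmin_use_numpy and argmin_use_numpy_select_last_index helpers, which mirror the ArgMax sketch above with np.argmin in place of np.argmax. A minimal sketch:

import numpy as np

def argmin_use_numpy(data, axis=0, keepdims=1):
    result = np.argmin(data, axis=axis)
    if keepdims == 1:
        result = np.expand_dims(result, axis)
    return result.astype(np.int64)

def argmin_use_numpy_select_last_index(data, axis=0, keepdims=True):
    # Flip so ties resolve to the last occurrence, then map indices back.
    data = np.flip(data, axis)
    result = np.argmin(data, axis=axis)
    result = data.shape[axis] - result - 1
    if keepdims:
        result = np.expand_dims(result, axis)
    return result.astype(np.int64)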

Examples

default_axes_keepdims
data = np.array([[2, 1], [3, 10]], dtype=np.float32)
keepdims = 1
node = onnx.helper.make_node(
    'ArgMin',
    inputs=['data'],
    outputs=['result'],
    keepdims=keepdims)

# The content of result is : [[0], [0]]
result = argmin_use_numpy(data, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_default_axis_example')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [1, 3, 4]
result = argmin_use_numpy(data, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_default_axis_random')
default_axes_keepdims_select_last_index
data = np.array([[2, 2], [3, 10]], dtype=np.float32)
keepdims = 1
node = onnx.helper.make_node(
    'ArgMin',
    inputs=['data'],
    outputs=['result'],
    keepdims=keepdims,
    select_last_index=True)

# result: [[0, 0]]
result = argmin_use_numpy_select_last_index(data, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_default_axis_example_select_last_index')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [1, 3, 4]
result = argmin_use_numpy_select_last_index(data, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_default_axis_random_select_last_index')
keepdims
data = np.array([[2, 1], [3, 10]], dtype=np.float32)
axis = 1
keepdims = 1
node = onnx.helper.make_node(
    'ArgMin',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims)
# The content of result is : [[1], [0]]
result = argmin_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_keepdims_example')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 1, 4]
result = argmin_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_keepdims_random')
keepdims_select_last_index
data = np.array([[2, 2], [3, 10]], dtype=np.float32)
axis = 1
keepdims = 1
node = onnx.helper.make_node(
    'ArgMin',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims,
    select_last_index=True)
# result: [[1], [0]]
result = argmin_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_keepdims_example_select_last_index')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 1, 4]
result = argmin_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_keepdims_random_select_last_index')
negative_axis_keepdims
data = np.array([[2, 1], [3, 10]], dtype=np.float32)
axis = -1
keepdims = 1
node = onnx.helper.make_node(
    'ArgMin',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims)
# The content of result is : [[1], [0]]
result = argmin_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_negative_axis_keepdims_example')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 3, 1]
result = argmin_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_negative_axis_keepdims_random')
negative_axis_keepdims_select_last_index
data = np.array([[2, 2], [3, 10]], dtype=np.float32)
axis = -1
keepdims = 1
node = onnx.helper.make_node(
    'ArgMin',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims,
    select_last_index=True)
# result: [[1], [0]]
result = argmin_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_negative_axis_keepdims_example_select_last_index')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 3, 1]
result = argmin_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_negative_axis_keepdims_random_select_last_index')
no_keepdims
data = np.array([[2, 1], [3, 10]], dtype=np.float32)
axis = 1
keepdims = 0
node = onnx.helper.make_node(
    'ArgMin',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims)
# The content of result is : [[1, 0]]
result = argmin_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_no_keepdims_example')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 4]
result = argmin_use_numpy(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_no_keepdims_random')
no_keepdims_select_last_index
data = np.array([[2, 2], [3, 10]], dtype=np.float32)
axis = 1
keepdims = 0
node = onnx.helper.make_node(
    'ArgMin',
    inputs=['data'],
    outputs=['result'],
    axis=axis,
    keepdims=keepdims,
    select_last_index=True)
# result: [[1, 0]]
result = argmin_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_no_keepdims_example_select_last_index')

data = np.random.uniform(-10, 10, [2, 3, 4]).astype(np.float32)
# result's shape: [2, 4]
result = argmin_use_numpy_select_last_index(data, axis=axis, keepdims=keepdims)
expect(node, inputs=[data], outputs=[result], name='test_argmin_no_keepdims_random_select_last_index')

Asin

Calculates the arcsine (inverse of sine) of the given input tensor, element-wise.

Version

This version of the operator has been available since version 7 of the default ONNX operator set.

Inputs

input (differentiable) : T
Input tensor

Outputs

output (differentiable) : T
The arcsine of the input tensor computed element-wise

Type Constraints

T : tensor(float16), tensor(float), tensor(double)
Constrain input and output types to float tensors.

Examples

asin
node = onnx.helper.make_node(
    'Asin',
    inputs=['x'],
    outputs=['y'],
)

x = np.array([-0.5, 0, 0.5]).astype(np.float32)
y = np.arcsin(x)
expect(node, inputs=[x], outputs=[y],
       name='test_asin_example')

x = np.random.rand(3, 4, 5).astype(np.float32)
y = np.arcsin(x)
expect(node, inputs=[x], outputs=[y],
       name='test_asin')

Asinh

Calculates the hyperbolic arcsine of the given input tensor element-wise.

Version

This version of the operator has been available since version 9 of the default ONNX operator set.

Inputs

input (differentiable) : T
Input tensor

Outputs

output (differentiable) : T
The hyperbolic arcsine values of the input tensor computed element-wise

Type Constraints

T : tensor(float16), tensor(float), tensor(double)
Constrain input and output types to float tensors.

Examples

asinh
node = onnx.helper.make_node(
    'Asinh',
    inputs=['x'],
    outputs=['y'],
)

x = np.array([-1, 0, 1]).astype(np.float32)
y = np.arcsinh(x)  # expected output [-0.88137358,  0.,  0.88137358]
expect(node, inputs=[x], outputs=[y],
       name='test_asinh_example')

x = np.random.randn(3, 4, 5).astype(np.float32)
y = np.arcsinh(x)
expect(node, inputs=[x], outputs=[y],
       name='test_asinh')

Atan

Calculates the arctangent (inverse of tangent) of the given input tensor, element-wise.

Version

This version of the operator has been available since version 7 of the default ONNX operator set.

Inputs

input (differentiable) : T
Input tensor

Outputs

output (differentiable) : T
The arctangent of the input tensor computed element-wise

Type Constraints

T : tensor(float16), tensor(float), tensor(double)
Constrain input and output types to float tensors.

Examples

atan
node = onnx.helper.make_node(
    'Atan',
    inputs=['x'],
    outputs=['y'],
)

x = np.array([-1, 0, 1]).astype(np.float32)
y = np.arctan(x)
expect(node, inputs=[x], outputs=[y],
       name='test_atan_example')

x = np.random.randn(3, 4, 5).astype(np.float32)
y = np.arctan(x)
expect(node, inputs=[x], outputs=[y],
       name='test_atan')

Atanh

Calculates the hyperbolic arctangent of the given input tensor element-wise.

Version

This version of the operator has been available since version 9 of the default ONNX operator set.

Inputs

input (differentiable) : T
Input tensor

Outputs

output (differentiable) : T
The hyperbolic arctangent values of the input tensor computed element-wise

Type Constraints

T : tensor(float16), tensor(float), tensor(double)
Constrain input and output types to float tensors.

Examples

atanh
node = onnx.helper.make_node(
    'Atanh',
    inputs=['x'],
    outputs=['y'],
)

x = np.array([-0.5, 0, 0.5]).astype(np.float32)
y = np.arctanh(x)  # expected output [-0.54930615,  0.,  0.54930615]
expect(node, inputs=[x], outputs=[y],
       name='test_atanh_example')

x = np.random.uniform(0.0, 1.0, (3, 4, 5)).astype(np.float32)
y = np.arctanh(x)
expect(node, inputs=[x], outputs=[y],
       name='test_atanh')

AveragePool

AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average of all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be the following:

output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)

or

output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)

if ceil_mode is enabled

* pad_shape[i] is the sum of pads along axis i

auto_pad is a DEPRECATED attribute. If you are currently using it, the output spatial shape will be the following:

VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])

And the pad shape will be the following if SAME_UPPER or SAME_LOWER is used:

pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]

The output of each pooling window is divided by the number of elements (excluding padding when the attribute count_include_pad is zero).

Version

This version of the operator has been available since version 11 of the default ONNX operator set.

Other versions of this operator: 1, 7, 10

Attributes

auto_pad : string (default is NOTSET)
auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. Where default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that `output_shape[i] = ceil(input_shape[i] / strides[i])` for each axis `i`. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER.
ceil_mode : int (default is 0)
Whether to use ceil or floor (default) to compute the output shape.
count_include_pad : int (default is 0)
Whether to include pad pixels when calculating values for the edges. Default is 0 (do not include padding).
kernel_shape : list of ints (required)
The size of the kernel along each axis.
pads : list of ints
Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. `pads` format should be as follows: [x1_begin, x2_begin...x1_end, x2_end,...], where xi_begin is the number of pixels added at the beginning of axis `i` and xi_end is the number of pixels added at the end of axis `i`. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides : list of ints
Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.

Inputs

X (differentiable) : T
Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE ...].

Outputs

Y (differentiable) : T
Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. The floor value of the dimension is used.

Type Constraints

T : tensor(float16), tensor(float), tensor(double)
Constrain input and output types to float tensors.
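
The pooling examples below use get_output_shape, get_pad_shape, and pool helpers that are not defined in this file. The first two implement exactly the shape formulas above; a minimal sketch (pool itself just averages or maxes each window and is omitted here):

import numpy as np

def get_output_shape(auto_pad, input_spatial_shape, kernel_spatial_shape, strides_spatial):
    out_shape = [0] * len(input_spatial_shape)
    if auto_pad in ('SAME_UPPER', 'SAME_LOWER'):
        # output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
        for i in range(len(input_spatial_shape)):
            out_shape[i] = int(np.ceil(float(input_spatial_shape[i]) / float(strides_spatial[i])))
    elif auto_pad == 'VALID':
        # output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
        for i in range(len(input_spatial_shape)):
            out_shape[i] = int(np.ceil(float(input_spatial_shape[i] - (kernel_spatial_shape[i] - 1)) / float(strides_spatial[i])))
    return out_shape

def get_pad_shape(auto_pad, input_spatial_shape, kernel_spatial_shape, strides_spatial, output_spatial_shape):
    pad_shape = [0] * len(input_spatial_shape)
    if auto_pad in ('SAME_UPPER', 'SAME_LOWER'):
        # total padding needed so that output = ceil(input / stride)
        for i in range(len(input_spatial_shape)):
            pad_shape[i] = ((output_spatial_shape[i] - 1) * strides_spatial[i]
                            + kernel_spatial_shape[i] - input_spatial_shape[i])
    return pad_shape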

Examples

averagepool_1d_default
"""
input_shape: [1, 3, 32]
output_shape: [1, 3, 31]
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[2],
)
x = np.random.randn(1, 3, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = [2]
strides = [1]
out_shape = get_output_shape('VALID', x_shape[2:], kernel_shape, strides)
padded = x
y = pool(padded, x_shape, kernel_shape, strides, out_shape, [0], 'AVG')

expect(node, inputs=[x], outputs=[y], name='test_averagepool_1d_default')
averagepool_2d_ceil
"""
input_shape: [1, 1, 4, 4]
output_shape: [1, 1, 2, 2]
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[3, 3],
    strides=[2, 2],
    ceil_mode=True
)
x = np.array([[[
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]]]).astype(np.float32)
y = np.array([[[
    [6, 7.5],
    [12, 13.5]]]]).astype(np.float32)

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_ceil')
averagepool_2d_default
"""
input_shape: [1, 3, 32, 32]
output_shape: [1, 3, 31, 31]
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[2, 2],
)
x = np.random.randn(1, 3, 32, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (2, 2)
strides = (1, 1)
out_shape = get_output_shape('VALID', x_shape[2:], kernel_shape, strides)
padded = x
y = pool(padded, x_shape, kernel_shape, strides, out_shape, (0, 0), 'AVG')

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_default')
averagepool_2d_pads
"""
input_shape: [1, 3, 28, 28]
output_shape: [1, 3, 30, 30]
pad_shape: [4, 4] -> [2, 2, 2, 2] by axis
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[3, 3],
    pads=[2, 2, 2, 2]
)
x = np.random.randn(1, 3, 28, 28).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (3, 3)
strides = (1, 1)
pad_bottom = 2
pad_top = 2
pad_right = 2
pad_left = 2
pad_shape = [pad_top + pad_bottom, pad_left + pad_right]
out_shape = get_output_shape('VALID', np.add(x_shape[2:], pad_shape), kernel_shape, strides)
padded = np.pad(x, ((0, 0), (0, 0), (pad_top, pad_bottom), (pad_left, pad_right)), mode='constant',
                constant_values=np.nan)
y = pool(padded, x_shape, kernel_shape, strides, out_shape, pad_shape, 'AVG')

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_pads')
averagepool_2d_pads_count_include_pad
"""
input_shape: [1, 3, 28, 28]
output_shape: [1, 3, 30, 30]
pad_shape: [4, 4] -> [2, 2, 2, 2] by axis
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[3, 3],
    pads=[2, 2, 2, 2],
    count_include_pad=1,
)
x = np.random.randn(1, 3, 28, 28).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (3, 3)
strides = (1, 1)
pad_bottom = 2
pad_top = 2
pad_right = 2
pad_left = 2
pad_shape = [pad_top + pad_bottom, pad_left + pad_right]
out_shape = get_output_shape('VALID', np.add(x_shape[2:], pad_shape), kernel_shape, strides)
padded = np.pad(x, ((0, 0), (0, 0), (pad_top, pad_bottom), (pad_left, pad_right)), mode='constant',
                constant_values=0)
y = pool(padded, x_shape, kernel_shape, strides, out_shape, pad_shape, 'AVG', count_include_pad=1)

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_pads_count_include_pad')
averagepool_2d_precomputed_pads
"""
input_shape: [1, 1, 5, 5]
output_shape: [1, 1, 5, 5]
pad_shape: [4, 4] -> [2, 2, 2, 2] by axis
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[5, 5],
    pads=[2, 2, 2, 2]

)
x = np.array([[[
    [1, 2, 3, 4, 5],
    [6, 7, 8, 9, 10],
    [11, 12, 13, 14, 15],
    [16, 17, 18, 19, 20],
    [21, 22, 23, 24, 25],
]]]).astype(np.float32)
y = np.array([[[[7, 7.5, 8, 8.5, 9],
                [9.5, 10, 10.5, 11, 11.5],
                [12, 12.5, 13, 13.5, 14],
                [14.5, 15, 15.5, 16, 16.5],
                [17, 17.5, 18, 18.5, 19]]]]).astype(np.float32)

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_precomputed_pads')
averagepool_2d_precomputed_pads_count_include_pad
"""
input_shape: [1, 1, 5, 5]
output_shape: [1, 1, 5, 5]
pad_shape: [4, 4] -> [2, 2, 2, 2] by axis
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[5, 5],
    pads=[2, 2, 2, 2],
    count_include_pad=1
)
x = np.array([[[
    [1, 2, 3, 4, 5],
    [6, 7, 8, 9, 10],
    [11, 12, 13, 14, 15],
    [16, 17, 18, 19, 20],
    [21, 22, 23, 24, 25],
]]]).astype(np.float32)
y = np.array([[[[2.5200, 3.6000, 4.8000, 4.0800, 3.2400],
                [4.5600, 6.4000, 8.4000, 7.0400, 5.5200],
                [7.2000, 10.0000, 13.0000, 10.8000, 8.4000],
                [6.9600, 9.6000, 12.4000, 10.2400, 7.9200],
                [6.1200, 8.4000, 10.8000, 8.8800, 6.8400]]]]).astype(np.float32)

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_precomputed_pads_count_include_pad')
averagepool_2d_precomputed_same_upper
"""
input_shape: [1, 1, 5, 5]
output_shape: [1, 1, 3, 3]
pad_shape: [2, 2] -> [1, 1, 1, 1] by axis
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[3, 3],
    strides=[2, 2],
    auto_pad='SAME_UPPER'
)
x = np.array([[[
    [1, 2, 3, 4, 5],
    [6, 7, 8, 9, 10],
    [11, 12, 13, 14, 15],
    [16, 17, 18, 19, 20],
    [21, 22, 23, 24, 25],
]]]).astype(np.float32)
y = np.array([[[[4, 5.5, 7],
                [11.5, 13, 14.5],
                [19, 20.5, 22]]]]).astype(np.float32)

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_precomputed_same_upper')
averagepool_2d_precomputed_strides
"""
input_shape: [1, 1, 5, 5]
output_shape: [1, 1, 2, 2]
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[2, 2],
    strides=[2, 2]
)
x = np.array([[[
    [1, 2, 3, 4, 5],
    [6, 7, 8, 9, 10],
    [11, 12, 13, 14, 15],
    [16, 17, 18, 19, 20],
    [21, 22, 23, 24, 25],
]]]).astype(np.float32)
y = np.array([[[[4, 6],
                [14, 16]]]]).astype(np.float32)

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_precomputed_strides')
averagepool_2d_same_lower
"""
input_shape: [1, 3, 32, 32]
output_shape: [1, 3, 32, 32]
pad_shape: [1, 1] -> [1, 0, 1, 0] by axis
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[2, 2],
    auto_pad='SAME_LOWER'
)
x = np.random.randn(1, 3, 32, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (2, 2)
strides = (1, 1)
out_shape = get_output_shape('SAME_LOWER', x_shape[2:], kernel_shape, strides)
pad_shape = get_pad_shape('SAME_LOWER', x_shape[2:], kernel_shape, strides, out_shape)
pad_bottom = pad_shape[0] // 2
pad_top = pad_shape[0] - pad_bottom
pad_right = pad_shape[1] // 2
pad_left = pad_shape[1] - pad_right
padded = np.pad(x, ((0, 0), (0, 0), (pad_top, pad_bottom), (pad_left, pad_right)), mode='constant',
                constant_values=np.nan)
y = pool(padded, x_shape, kernel_shape, strides, out_shape, pad_shape, 'AVG')

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_same_lower')
averagepool_2d_same_upper
"""
input_shape: [1, 3, 32, 32]
output_shape: [1, 3, 32, 32]
pad_shape: [1, 1] -> [0, 1, 0, 1] by axis
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[2, 2],
    auto_pad='SAME_UPPER'
)
x = np.random.randn(1, 3, 32, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (2, 2)
strides = (1, 1)
out_shape = get_output_shape('SAME_UPPER', x_shape[2:], kernel_shape, strides)
pad_shape = get_pad_shape('SAME_UPPER', x_shape[2:], kernel_shape, strides, out_shape)
pad_top = pad_shape[0] // 2
pad_bottom = pad_shape[0] - pad_top
pad_left = pad_shape[1] // 2
pad_right = pad_shape[1] - pad_left
padded = np.pad(x, ((0, 0), (0, 0), (pad_top, pad_bottom), (pad_left, pad_right)), mode='constant',
                constant_values=np.nan)
y = pool(padded, x_shape, kernel_shape, strides, out_shape, pad_shape, 'AVG')

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_same_upper')
averagepool_2d_strides
"""
input_shape: [1, 3, 32, 32]
output_shape: [1, 3, 10, 10]
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[5, 5],
    strides=[3, 3]
)
x = np.random.randn(1, 3, 32, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = (5, 5)
strides = (3, 3)
out_shape = get_output_shape('VALID', x_shape[2:], kernel_shape, strides)
padded = x
y = pool(padded, x_shape, kernel_shape, strides, out_shape, (0, 0), 'AVG')

expect(node, inputs=[x], outputs=[y], name='test_averagepool_2d_strides')
averagepool_3d_default
"""
input_shape: [1, 3, 32, 32, 32]
output_shape: [1, 3, 31, 31, 31]
"""
node = onnx.helper.make_node(
    'AveragePool',
    inputs=['x'],
    outputs=['y'],
    kernel_shape=[2, 2, 2],
)
x = np.random.randn(1, 3, 32, 32, 32).astype(np.float32)
x_shape = np.shape(x)
kernel_shape = [2, 2, 2]
strides = [1, 1, 1]
out_shape = get_output_shape('VALID', x_shape[2:], kernel_shape, strides)
padded = x
y = pool(padded, x_shape, kernel_shape, strides, out_shape, [0, 0, 0], 'AVG')

expect(node, inputs=[x], outputs=[y], name='test_averagepool_3d_default')

BatchNormalization

Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. There are five required inputs: 'X', 'scale', 'B', 'input_mean' and 'input_var'. Note that 'input_mean' and 'input_var' are expected to be the estimated statistics in inference mode (training_mode=False, default), and the running statistics in training mode (training_mode=True). Depending on the mode the operator is run in, there are multiple cases for the number of outputs, which we list below:

Output case #1: Y, running_mean, running_var (training_mode=True)
Output case #2: Y (training_mode=False)

When training_mode=False, extra outputs are invalid. The outputs are updated as follows when training_mode=True:

running_mean = input_mean * momentum + current_mean * (1 - momentum)
running_var = input_var * momentum + current_var * (1 - momentum)

Y = (X - current_mean) / sqrt(current_var + epsilon) * scale + B

where:

current_mean = ReduceMean(X, axis=all_except_channel_index)
current_var =  ReduceVar(X, axis=all_except_channel_index)

Notice that ReduceVar refers to the population variance, i.e. it equals
sum(sqrd(x_i - x_avg)) / N
where N is the population size (this formula does not use the sample size N - 1).

When training_mode=False:

Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B

For previous (deprecated) non-spatial cases, implementors should flatten the input shape to (N x C * D1 * D2 * ... * Dn) before a BatchNormalization Op. This operator has optional inputs/outputs. See the doc for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.

Version

This version of the operator has been available since version 14 of the default ONNX operator set.

Other versions of this operator: 1, 6, 7, 9

Attributes

epsilon : float (default is 1e-05)
The epsilon value to use to avoid division by zero.
momentum : float (default is 0.9)
Factor used in computing the running mean and variance.e.g., running_mean = running_mean * momentum + mean * (1 - momentum).
training_mode : int (default is 0)
If set to true, it indicates BatchNormalization is being used for training, and the optional outputs running_mean and running_var would be populated.

Inputs

X (differentiable) : T
Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size, C is the number of channels. Statistics are computed for every channel of C over N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts single dimension input of size N in which case C is assumed to be 1
scale (differentiable) : T
Scale tensor of shape (C).
B (differentiable) : T
Bias tensor of shape (C).
input_mean (differentiable) : U
running (training) or estimated (testing) mean tensor of shape (C).
input_var (differentiable) : U
running (training) or estimated (testing) variance tensor of shape (C).

Outputs (1 - 3)

Y (differentiable) : T
The output tensor of the same shape as X
running_mean (optional, non-differentiable) : U
The running mean after the BatchNormalization operator.
running_var (optional, non-differentiable) : U
The running variance after the BatchNormalization operator. This op uses the population size (N) for calculating variance, and not the sample size N-1.

Type Constraints

T : tensor(float16), tensor(float), tensor(double), tensor(bfloat16)
Constrain input and output types to float tensors.
U : tensor(float16), tensor(float), tensor(double), tensor(bfloat16)
Constrain mean and variance types to float tensors. All float types are allowed for U.
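
The examples below use _batchnorm_test_mode and _batchnorm_training_mode reference helpers. A minimal sketch following the formulas above (the broadcasting details are assumptions about the test-suite code):

import numpy as np

def _batchnorm_test_mode(x, s, bias, mean, var, epsilon=1e-5):
    # Reshape per-channel parameters so they broadcast over (N, C, D1, ..., Dn).
    dim_ones = (1,) * (len(x.shape) - 2)
    s = s.reshape(-1, *dim_ones)
    bias = bias.reshape(-1, *dim_ones)
    mean = mean.reshape(-1, *dim_ones)
    var = var.reshape(-1, *dim_ones)
    # Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B
    return s * (x - mean) / np.sqrt(var + epsilon) + bias

def _batchnorm_training_mode(x, s, bias, mean, var, momentum=0.9, epsilon=1e-5):
    # Population statistics over every axis except the channel axis.
    axis = tuple(np.delete(np.arange(len(x.shape)), 1))
    saved_mean = x.mean(axis=axis)
    saved_var = x.var(axis=axis)
    # running_* = input_* * momentum + current_* * (1 - momentum)
    output_mean = mean * momentum + saved_mean * (1 - momentum)
    output_var = var * momentum + saved_var * (1 - momentum)
    y = _batchnorm_test_mode(x, s, bias, saved_mean, saved_var, epsilon)
    return y.astype(np.float32), output_mean, output_var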

Examples

batchnormalization
# input size: (2, 3, 4, 5)
x = np.random.randn(2, 3, 4, 5).astype(np.float32)
s = np.random.randn(3).astype(np.float32)
bias = np.random.randn(3).astype(np.float32)
mean = np.random.randn(3).astype(np.float32)
var = np.random.rand(3).astype(np.float32)
y = _batchnorm_test_mode(x, s, bias, mean, var).astype(np.float32)

node = onnx.helper.make_node(
    'BatchNormalization',
    inputs=['x', 's', 'bias', 'mean', 'var'],
    outputs=['y'],
)

# output size: (2, 3, 4, 5)
expect(node, inputs=[x, s, bias, mean, var], outputs=[y],
       name='test_batchnorm_example')

# input size: (2, 3, 4, 5)
x = np.random.randn(2, 3, 4, 5).astype(np.float32)
s = np.random.randn(3).astype(np.float32)
bias = np.random.randn(3).astype(np.float32)
mean = np.random.randn(3).astype(np.float32)
var = np.random.rand(3).astype(np.float32)
epsilon = 1e-2
y = _batchnorm_test_mode(x, s, bias, mean, var, epsilon).astype(np.float32)

node = onnx.helper.make_node(
    'BatchNormalization',
    inputs=['x', 's', 'bias', 'mean', 'var'],
    outputs=['y'],
    epsilon=epsilon,
)

# output size: (2, 3, 4, 5)
expect(node, inputs=[x, s, bias, mean, var], outputs=[y],
       name='test_batchnorm_epsilon')
train
# input size: (2, 3, 4, 5)
x = np.random.randn(2, 3, 4, 5).astype(np.float32)
s = np.random.randn(3).astype(np.float32)
bias = np.random.randn(3).astype(np.float32)
mean = np.random.randn(3).astype(np.float32)
var = np.random.rand(3).astype(np.float32)
# using np.bool(1) while generating test data fails with
# "'bool' object has no attribute 'dtype'"; work around it by passing a plain int
training_mode = 1
y, output_mean, output_var = _batchnorm_training_mode(x, s, bias, mean, var)

node = onnx.helper.make_node(
    'BatchNormalization',
    inputs=['x', 's', 'bias', 'mean', 'var'],
    outputs=['y', 'output_mean', 'output_var'],
    training_mode=training_mode
)

# output size: (2, 3, 4, 5)
expect(node, inputs=[x, s, bias, mean, var],
       outputs=[y, output_mean, output_var],
       name='test_batchnorm_example_training_mode')

# input size: (2, 3, 4, 5)
x = np.random.randn(2, 3, 4, 5).astype(np.float32)
s = np.random.randn(3).astype(np.float32)
bias = np.random.randn(3).astype(np.float32)
mean = np.random.randn(3).astype(np.float32)
var = np.random.rand(3).astype(np.float32)
training_mode = 1
momentum = 0.9
epsilon = 1e-2
y, output_mean, output_var = _batchnorm_training_mode(x, s, bias, mean, var, momentum,
                                                      epsilon)

node = onnx.helper.make_node(
    'BatchNormalization',
    inputs=['x', 's', 'bias', 'mean', 'var'],
    outputs=['y', 'output_mean', 'output_var'],
    epsilon=epsilon,
    training_mode=training_mode
)

# output size: (2, 3, 4, 5)
expect(node, inputs=[x, s, bias, mean, var],
       outputs=[y, output_mean, output_var],
       name='test_batchnorm_epsilon_training_mode')

Bernoulli

Draws binary random numbers (0 or 1) from a Bernoulli distribution. The input tensor should be a tensor containing probabilities p (a value in the range [0,1]) to be used for drawing the binary random number, where an output of 1 is produced with probability p and an output of 0 is produced with probability (1-p).

This operator is non-deterministic and may not produce the same values in different implementations (even if a seed is specified).

Version

This version of the operator has been available since version 15 of the default ONNX operator set.

Attributes

dtype : int
The data type for the elements of the output tensor. If not specified, the data type of the input tensor is used.
seed : float
(Optional) Seed to the random generator; if not specified, one is auto-generated.

Inputs

input : T1
All values in input have to be in the range [0, 1].

Outputs

output : T2
The returned output tensor only has values 0 or 1, with the same shape as the input tensor.

Type Constraints

T1 : tensor(float16), tensor(float), tensor(double)
Constrain input types to float tensors.
T2 : tensor(float16), tensor(float), tensor(double), tensor(bfloat16), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bool)
Constrain output types to all numeric tensors and bool tensors.
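
The examples below call a bernoulli_reference_implementation helper. A minimal sketch of a matching sampler (the exact reference code may differ):

import numpy as np

def bernoulli_reference_implementation(x, dtype):
    # Draw u ~ Uniform[0, 1) per element; emit 1 with probability p = x[i].
    return (np.random.uniform(0.0, 1.0, x.shape) < x).astype(dtype)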

Examples

bernoulli_with_dtype
node = onnx.helper.make_node(
    'Bernoulli',
    inputs=['x'],
    outputs=['y'],
    dtype=onnx.TensorProto.DOUBLE,
)

x = np.random.uniform(0.0, 1.0, 10).astype(np.float32)
y = bernoulli_reference_implementation(x, np.float64)
expect(node, inputs=[x], outputs=[y], name='test_bernoulli_double')
bernoulli_with_seed
seed = float(0)
node = onnx.helper.make_node(
    'Bernoulli',
    inputs=['x'],
    outputs=['y'],
    seed=seed,
)

x = np.random.uniform(0.0, 1.0, 10).astype(np.float32)
y = bernoulli_reference_implementation(x, np.float32)
expect(node, inputs=[x], outputs=[y], name='test_bernoulli_seed')