Fix some typos. (onnx#4228)
Signed-off-by: Yulv-git <yulvchi@qq.com>

Co-authored-by: Chun-Wei Chen <jacky82226@gmail.com>
2 people authored and Bjarke Roune committed May 6, 2023
1 parent a2db068 commit 0a287f9
Showing 19 changed files with 47 additions and 47 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/manylinux/entrypoint.sh
@@ -59,5 +59,5 @@ fi
 # Remove useless *-linux*.whl; only keep manylinux*.whl
 rm -f dist/*-linux*.whl

-echo "Succesfully build wheels:"
+echo "Successfully build wheels:"
 find . -type f -iname "*manylinux*.whl"
8 changes: 4 additions & 4 deletions docs/Changelog-ml.md
@@ -408,7 +408,7 @@ This version of the operator has been available since version 1 of the 'ai.onnx.

 <dl>
 <dt><tt>T1</tt> : tensor(float), tensor(double), tensor(int64), tensor(int32)</dt>
-<dd>The input must be a tensor of a numeric type, and of of shape [N,C] or [C]. In the latter case, it will be treated as [1,C]</dd>
+<dd>The input must be a tensor of a numeric type, and of shape [N,C] or [C]. In the latter case, it will be treated as [1,C]</dd>
 <dt><tt>T2</tt> : tensor(string), tensor(int64)</dt>
 <dd>The output will be a tensor of strings or integers.</dd>
 </dl>
@@ -609,7 +609,7 @@ This version of the operator has been available since version 1 of the 'ai.onnx.
 <dt><tt>T1</tt> : tensor(float), tensor(double), tensor(int64), tensor(int32)</dt>
 <dd>The input must be a tensor of a numeric type, either [C] or [N,C].</dd>
 <dt><tt>T2</tt> : tensor(string), tensor(int64)</dt>
-<dd>The output type will be a tensor of strings or integers, depending on which of the the classlabels_* attributes is used. Its size will match the bactch size of the input.</dd>
+<dd>The output type will be a tensor of strings or integers, depending on which of the classlabels_* attributes is used. Its size will match the bactch size of the input.</dd>
 </dl>

 ### <a name="ai.onnx.ml.SVMRegressor-1"></a>**ai.onnx.ml.SVMRegressor-1**</a>
@@ -777,7 +777,7 @@ This version of the operator has been available since version 1 of the 'ai.onnx.
 <dt><tt>T1</tt> : tensor(float), tensor(double), tensor(int64), tensor(int32)</dt>
 <dd>The input type must be a tensor of a numeric type.</dd>
 <dt><tt>T2</tt> : tensor(string), tensor(int64)</dt>
-<dd>The output type will be a tensor of strings or integers, depending on which of the the classlabels_* attributes is used.</dd>
+<dd>The output type will be a tensor of strings or integers, depending on which of the classlabels_* attributes is used.</dd>
 </dl>

 ### <a name="ai.onnx.ml.TreeEnsembleRegressor-1"></a>**ai.onnx.ml.TreeEnsembleRegressor-1**</a>
@@ -1057,7 +1057,7 @@ This version of the operator has been available since version 3 of the 'ai.onnx.
 <dt><tt>T1</tt> : tensor(float), tensor(double), tensor(int64), tensor(int32)</dt>
 <dd>The input type must be a tensor of a numeric type.</dd>
 <dt><tt>T2</tt> : tensor(string), tensor(int64)</dt>
-<dd>The output type will be a tensor of strings or integers, depending on which of the the classlabels_* attributes is used.</dd>
+<dd>The output type will be a tensor of strings or integers, depending on which of the classlabels_* attributes is used.</dd>
 </dl>

 ### <a name="ai.onnx.ml.TreeEnsembleRegressor-3"></a>**ai.onnx.ml.TreeEnsembleRegressor-3**</a>
12 changes: 6 additions & 6 deletions docs/Changelog.md
@@ -2112,7 +2112,7 @@ This version of the operator has been available since version 1 of the default O
 user_defined_vals[i] = b + b;
 /* End user-defined code */
 }
-// my_local = 123; // Can't do this. my_local was defined in the the body
+// my_local = 123; // Can't do this. my_local was defined in the body

 // These below values are live-out from the loop and therefore accessible
 b_out; user_defined_vals; keepgoing_out;
@@ -8546,7 +8546,7 @@ This version of the operator has been available since version 9 of the default O
 ### <a name="MaxUnpool-9"></a>**MaxUnpool-9**</a>

 MaxUnpool essentially computes the partial inverse of the MaxPool op.
-The input information to this op is typically the the output information from a MaxPool op. The first
+The input information to this op is typically the output information from a MaxPool op. The first
 input tensor X is the tensor that needs to be unpooled, which is typically the pooled tensor (first output)
 from MaxPool. The second input tensor, I, contains the indices to the (locally maximal) elements corrsponding
 to the elements in the first input tensor X. Input tensor I is typically the second output of the MaxPool op.
@@ -11513,7 +11513,7 @@ This version of the operator has been available since version 11 of the default

 <dl>
 <dt><tt>outputs</tt> (variadic, heterogeneous) : V</dt>
-<dd>Values that are live-out to the enclosing scope. The return values in the `then_branch` and `else_branch` must be of the same data type. The `then_branch` and `else_branch` may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes.For example, if in a model file, the the first output of `then_branch` is typed float tensor with shape [2] and the first output of `else_branch` is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither `dim_value` nor `dim_param` set, or (c) a shape of rank 1 with a unique `dim_param`. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.</dd>
+<dd>Values that are live-out to the enclosing scope. The return values in the `then_branch` and `else_branch` must be of the same data type. The `then_branch` and `else_branch` may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes.For example, if in a model file, the first output of `then_branch` is typed float tensor with shape [2] and the first output of `else_branch` is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither `dim_value` nor `dim_param` set, or (c) a shape of rank 1 with a unique `dim_param`. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.</dd>
 </dl>

 #### Type Constraints
@@ -11881,7 +11881,7 @@ This version of the operator has been available since version 11 of the default
 ### <a name="MaxUnpool-11"></a>**MaxUnpool-11**</a>

 MaxUnpool essentially computes the partial inverse of the MaxPool op.
-The input information to this op is typically the the output information from a MaxPool op. The first
+The input information to this op is typically the output information from a MaxPool op. The first
 input tensor X is the tensor that needs to be unpooled, which is typically the pooled tensor (first output)
 from MaxPool. The second input tensor, I, contains the indices to the (locally maximal) elements corrsponding
 to the elements in the first input tensor X. Input tensor I is typically the second output of the MaxPool op.
@@ -16114,7 +16114,7 @@ This version of the operator has been available since version 13 of the default

 <dl>
 <dt><tt>outputs</tt> (variadic, heterogeneous) : V</dt>
-<dd>Values that are live-out to the enclosing scope. The return values in the `then_branch` and `else_branch` must be of the same data type. The `then_branch` and `else_branch` may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes.For example, if in a model file, the the first output of `then_branch` is typed float tensor with shape [2] and the first output of `else_branch` is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither `dim_value` nor `dim_param` set, or (c) a shape of rank 1 with a unique `dim_param`. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.</dd>
+<dd>Values that are live-out to the enclosing scope. The return values in the `then_branch` and `else_branch` must be of the same data type. The `then_branch` and `else_branch` may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes.For example, if in a model file, the first output of `then_branch` is typed float tensor with shape [2] and the first output of `else_branch` is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither `dim_value` nor `dim_param` set, or (c) a shape of rank 1 with a unique `dim_param`. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.</dd>
 </dl>

 #### Type Constraints
@@ -20021,7 +20021,7 @@ This version of the operator has been available since version 16 of the default

 <dl>
 <dt><tt>outputs</tt> (variadic, heterogeneous) : V</dt>
-<dd>Values that are live-out to the enclosing scope. The return values in the `then_branch` and `else_branch` must be of the same data type. The `then_branch` and `else_branch` may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes.For example, if in a model file, the the first output of `then_branch` is typed float tensor with shape [2] and the first output of `else_branch` is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither `dim_value` nor `dim_param` set, or (c) a shape of rank 1 with a unique `dim_param`. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.</dd>
+<dd>Values that are live-out to the enclosing scope. The return values in the `then_branch` and `else_branch` must be of the same data type. The `then_branch` and `else_branch` may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes.For example, if in a model file, the first output of `then_branch` is typed float tensor with shape [2] and the first output of `else_branch` is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither `dim_value` nor `dim_param` set, or (c) a shape of rank 1 with a unique `dim_param`. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.</dd>
 </dl>

 #### Type Constraints
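The If-output shape-compatibility rule quoted in the hunks above can be illustrated with a small sketch. Everything here is hypothetical (the helper `merge_branch_shapes` is not part of ONNX); `None` stands in for a dimension with neither `dim_value` nor `dim_param` set:

```python
def merge_branch_shapes(s1, s2):
    """Union of two static branch output shapes, per the If-node rule:
    a dimension is kept only where both branches agree; otherwise it is
    left unset (None). If the ranks differ, no static shape can be set."""
    if len(s1) != len(s2):
        return None
    return [d1 if d1 == d2 else None for d1, d2 in zip(s1, s2)]

# then_branch yields shape [2], else_branch yields [3]: the If output
# may only declare rank 1 with the dimension unset.
print(merge_branch_shapes([2], [3]))      # [None]
print(merge_branch_shapes([2, 3], [2, 5]))  # [2, None]
```

This is only a shape-inference sketch; the real compatibility check in ONNX also has to handle `dim_param` names, which are elided here.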
2 changes: 1 addition & 1 deletion docs/MetadataProps.md
@@ -14,7 +14,7 @@ The motivation of such a mechanism is to allow model authors to convey to model
 In the case of images there are many option for providing valid image data. However a model which consumes images was trained with a particular set of these options which must
 be used during inferencing.

-The goal is this proposal is to provide enough metadata that the model consumer can perform their own featurization prior to running the model and provide a compatible input or retrive an output and know what its format is.
+The goal is this proposal is to provide enough metadata that the model consumer can perform their own featurization prior to running the model and provide a compatible input or retrieve an output and know what its format is.

 ## Image Category Definition

6 changes: 3 additions & 3 deletions docs/Operators-ml.md
@@ -459,7 +459,7 @@ This version of the operator has been available since version 1 of the 'ai.onnx.

 <dl>
 <dt><tt>T1</tt> : tensor(float), tensor(double), tensor(int64), tensor(int32)</dt>
-<dd>The input must be a tensor of a numeric type, and of of shape [N,C] or [C]. In the latter case, it will be treated as [1,C]</dd>
+<dd>The input must be a tensor of a numeric type, and of shape [N,C] or [C]. In the latter case, it will be treated as [1,C]</dd>
 <dt><tt>T2</tt> : tensor(string), tensor(int64)</dt>
 <dd>The output will be a tensor of strings or integers.</dd>
 </dl>
@@ -664,7 +664,7 @@ This version of the operator has been available since version 1 of the 'ai.onnx.
 <dt><tt>T1</tt> : tensor(float), tensor(double), tensor(int64), tensor(int32)</dt>
 <dd>The input must be a tensor of a numeric type, either [C] or [N,C].</dd>
 <dt><tt>T2</tt> : tensor(string), tensor(int64)</dt>
-<dd>The output type will be a tensor of strings or integers, depending on which of the the classlabels_* attributes is used. Its size will match the bactch size of the input.</dd>
+<dd>The output type will be a tensor of strings or integers, depending on which of the classlabels_* attributes is used. Its size will match the bactch size of the input.</dd>
 </dl>

@@ -847,7 +847,7 @@ Other versions of this operator: <a href="Changelog-ml.md#ai.onnx.ml.TreeEnsembl
 <dt><tt>T1</tt> : tensor(float), tensor(double), tensor(int64), tensor(int32)</dt>
 <dd>The input type must be a tensor of a numeric type.</dd>
 <dt><tt>T2</tt> : tensor(string), tensor(int64)</dt>
-<dd>The output type will be a tensor of strings or integers, depending on which of the the classlabels_* attributes is used.</dd>
+<dd>The output type will be a tensor of strings or integers, depending on which of the classlabels_* attributes is used.</dd>
 </dl>

4 changes: 2 additions & 2 deletions docs/Operators.md
@@ -8971,7 +8971,7 @@ Other versions of this operator: <a href="Changelog.md#If-1">1</a>, <a href="Cha

 <dl>
 <dt><tt>outputs</tt> (variadic, heterogeneous) : V</dt>
-<dd>Values that are live-out to the enclosing scope. The return values in the `then_branch` and `else_branch` must be of the same data type. The `then_branch` and `else_branch` may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes.For example, if in a model file, the the first output of `then_branch` is typed float tensor with shape [2] and the first output of `else_branch` is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither `dim_value` nor `dim_param` set, or (c) a shape of rank 1 with a unique `dim_param`. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.</dd>
+<dd>Values that are live-out to the enclosing scope. The return values in the `then_branch` and `else_branch` must be of the same data type. The `then_branch` and `else_branch` may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes.For example, if in a model file, the first output of `then_branch` is typed float tensor with shape [2] and the first output of `else_branch` is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither `dim_value` nor `dim_param` set, or (c) a shape of rank 1 with a unique `dim_param`. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.</dd>
 </dl>

 #### Type Constraints
@@ -12095,7 +12095,7 @@ This version of the operator has been available since version 1 of the default O
 ### <a name="MaxUnpool"></a><a name="maxunpool">**MaxUnpool**</a>

 MaxUnpool essentially computes the partial inverse of the MaxPool op.
-The input information to this op is typically the the output information from a MaxPool op. The first
+The input information to this op is typically the output information from a MaxPool op. The first
 input tensor X is the tensor that needs to be unpooled, which is typically the pooled tensor (first output)
 from MaxPool. The second input tensor, I, contains the indices to the (locally maximal) elements corrsponding
 to the elements in the first input tensor X. Input tensor I is typically the second output of the MaxPool op.
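The MaxUnpool relationship described in the hunks above can be sketched in NumPy (a hypothetical 1-D helper with flattened indices, not the ONNX implementation): MaxPool records the flat indices I of its maxima, and MaxUnpool scatters the pooled values X back to those positions, leaving zeros elsewhere.

```python
import numpy as np

def max_unpool_1d(x, indices, output_size):
    """Scatter pooled values back to their recorded positions;
    every other position is zero (partial inverse of MaxPool)."""
    out = np.zeros(output_size, dtype=x.dtype)
    out[indices] = x
    return out

# MaxPool with kernel 2, stride 2 over [1, 3, 2, 5] would yield
# pooled values [3, 5] at flat indices [1, 3].
pooled = np.array([3.0, 5.0])
idx = np.array([1, 3])
restored = max_unpool_1d(pooled, idx, 4)  # array([0., 3., 0., 5.])
```

The inverse is only partial: the non-maximal inputs (1 and 2 here) are unrecoverable and come back as zeros, exactly as the operator description implies.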
2 changes: 1 addition & 1 deletion docs/TypeDenotation.md
@@ -6,7 +6,7 @@ Type Denotation is used to describe semantic information around what the inputs

 ## Motivation

-The motivation of such a mechanism can be illustrated via a simple example. In the the neural network SqueezeNet, it takes in an NCHW image input float[1,3,244,244] and produces a output float[1,1000,1,1]:
+The motivation of such a mechanism can be illustrated via a simple example. In the neural network SqueezeNet, it takes in an NCHW image input float[1,3,244,244] and produces a output float[1,1000,1,1]:

 ```
 input_in_NCHW -> data_0 -> SqueezeNet() -> output_softmaxout_1
2 changes: 1 addition & 1 deletion onnx/compose.py
@@ -287,7 +287,7 @@ def merge_models(
 if m1.ir_version != m2.ir_version:
 raise ValueError(
 f"IR version mismatch {m1.ir_version} != {m2.ir_version}."
-" Both models should have have the same IR version")
+" Both models should have the same IR version")
 ir_version = m1.ir_version

 opset_import_map: MutableMapping[str, int] = {}
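The guard whose message this hunk corrects can be exercised in isolation. This is a sketch: `check_ir_versions` and the `SimpleNamespace` stand-ins for ModelProto objects are hypothetical; the real check lives inside `onnx.compose.merge_models`.

```python
from types import SimpleNamespace

def check_ir_versions(m1, m2):
    """Mirror of the guard in merge_models: both models must
    share the same IR version before they can be merged."""
    if m1.ir_version != m2.ir_version:
        raise ValueError(
            f"IR version mismatch {m1.ir_version} != {m2.ir_version}."
            " Both models should have the same IR version")
    return m1.ir_version

# Hypothetical stand-ins for two ModelProto objects:
m1 = SimpleNamespace(ir_version=8)
m2 = SimpleNamespace(ir_version=7)
try:
    check_ir_versions(m1, m2)
    raised = False
except ValueError as err:
    raised = True
    message = str(err)  # "IR version mismatch 8 != 7. ..."
```

In practice one would bump the lower model with `onnx.version_converter` or re-export it before merging, rather than catching this error.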
2 changes: 1 addition & 1 deletion onnx/defs/controlflow/defs.cc
@@ -381,7 +381,7 @@ ONNX_OPERATOR_SET_SCHEMA(
 "static shapes S1 and S2, then the shape of the corresponding output "
 "variable of the if-node (if present) must be compatible with both S1 "
 "and S2 as it represents the union of both possible shapes."
-"For example, if in a model file, the the first "
+"For example, if in a model file, the first "
 "output of `then_branch` is typed float tensor with shape [2] and the "
 "first output of `else_branch` is another float tensor with shape [3], "
 "If's first output should have (a) no shape set, or (b) "
6 changes: 3 additions & 3 deletions onnx/defs/controlflow/old.cc
@@ -679,7 +679,7 @@ C-style code:
 user_defined_vals[i] = b + b;
 /* End user-defined code */
 }
-// my_local = 123; // Can't do this. my_local was defined in the the body
+// my_local = 123; // Can't do this. my_local was defined in the body
 // These below values are live-out from the loop and therefore accessible
 b_out; user_defined_vals; keepgoing_out;
@@ -1409,7 +1409,7 @@ ONNX_OPERATOR_SET_SCHEMA(
 "static shapes S1 and S2, then the shape of the corresponding output "
 "variable of the if-node (if present) must be compatible with both S1 "
 "and S2 as it represents the union of both possible shapes."
-"For example, if in a model file, the the first "
+"For example, if in a model file, the first "
 "output of `then_branch` is typed float tensor with shape [2] and the "
 "first output of `else_branch` is another float tensor with shape [3], "
 "If's first output should have (a) no shape set, or (b) "
@@ -1501,7 +1501,7 @@ ONNX_OPERATOR_SET_SCHEMA(
 "static shapes S1 and S2, then the shape of the corresponding output "
 "variable of the if-node (if present) must be compatible with both S1 "
 "and S2 as it represents the union of both possible shapes."
-"For example, if in a model file, the the first "
+"For example, if in a model file, the first "
 "output of `then_branch` is typed float tensor with shape [2] and the "
 "first output of `else_branch` is another float tensor with shape [3], "
 "If's first output should have (a) no shape set, or (b) "
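The scoping rule behind the corrected `my_local` comment (body-local variables vanish after the loop; only the loop-carried values and accumulated scan outputs are live-out) can be sketched with a hypothetical Python driver; `run_loop` and `body` are illustrative stand-ins, not the ONNX Loop runtime.

```python
def run_loop(trip_count, cond, b_in, body):
    """Minimal Loop-style driver: only the loop-carried values
    (cond, b) and the accumulated scan outputs survive the loop."""
    b = b_in
    scan = []
    i = 0
    while i < trip_count and cond:
        cond, b, scan_out = body(i, cond, b)
        scan.append(scan_out)
        i += 1
    # Anything defined inside `body` (a "my_local") is gone here;
    # only cond, b, and scan are live-out.
    return b, scan

def body(i, cond, b):
    my_local = b + b  # body-local: visible only inside this call
    return cond, b + 1, my_local

b_out, scan = run_loop(3, True, 1, body)  # b_out == 4, scan == [2, 4, 6]
```

As in the C-style pseudo-code, `my_local` cannot be referenced after `run_loop` returns; its per-iteration values escape only because the body emits them as scan outputs.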
