This repository has been archived by the owner on Nov 16, 2023. It is now read-only.

Add cpu op Pow, Expand, Cast, Shape, UpsampleV9 #214

Merged
merged 5 commits into from
Sep 1, 2020

Conversation

Contributor

@28Smiles 28Smiles commented Aug 23, 2020

To get my model working, I needed some more ops:

  • Pow
  • Expand
  • Cast
  • Shape
  • UpsampleV9

I also "fixed" Slice to accept all number types, as the spec suggests.

I enabled the op-tests and also tested the runtime with my model.
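For context, here is a minimal sketch of what an elementwise CPU Pow kernel looks like. This is a hypothetical illustration, not the PR's actual code: it assumes both inputs were already broadcast to a common length, while the real op also handles broadcasting and multiple number types.

```typescript
// Hypothetical sketch of an elementwise Pow over flat typed arrays.
function elementwisePow(base: Float32Array, exponent: Float32Array): Float32Array {
  if (base.length !== exponent.length) {
    throw new Error('inputs must share a (broadcast) length');
  }
  const out = new Float32Array(base.length);
  for (let i = 0; i < base.length; i++) {
    out[i] = Math.pow(base[i], exponent[i]);
  }
  return out;
}
```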

@28Smiles 28Smiles reopened this Aug 23, 2020
@28Smiles 28Smiles changed the title Add cpu op Pow Add cpu op Pow, Expand, Cast, Shape, UpsampleV9 Aug 27, 2020
Contributor

@fs-eire fs-eire left a comment


The change looks good to me; two nits to take a look at. Thanks for contributing to ONNX.js!

@@ -19,6 +19,21 @@ export class CpuUpsample extends Upsample {
}
}

export class CpuUpsampleV9 extends UpsampleV9 {
run(inferenceHandler: CpuInferenceHandler, inputs: Tensor[]): Tensor[] {
const scales = inputs[1].floatData;
Contributor


Since the scales are dynamic now, should we check the length?

Contributor Author


Should be resolved
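The kind of check discussed here could look like the following. This is a hypothetical helper, assuming (per the ONNX Upsample spec) that the scales tensor must provide one scale factor per input dimension:

```typescript
// Hypothetical validation: a runtime scales tensor must have exactly
// one entry per dimension of the input being upsampled.
function checkScalesLength(scales: ArrayLike<number>, inputRank: number): void {
  if (scales.length !== inputRank) {
    throw new Error(
        `Upsample: expected ${inputRank} scale values, got ${scales.length}`);
  }
}
```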


const output = new Tensor(dimensions, x.type);

const result = BroadcastUtil.calc(x, output, (a, b) => a, false);
Contributor


Is it equivalent to do BroadcastUtil.calc(output, x, (a, b) => b, true)? If so, we can do an in-place broadcast.

Contributor Author


No, since the argument order is important and only the right-hand side can be used in place. Only a custom implementation or an extension of BroadcastUtil would work. Correct me if I am wrong.
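As a toy illustration of what the BroadcastUtil.calc(x, output, (a, b) => a, false) call is used for here (this is not ONNX.js internals, just an assumed sketch of the idea): x's values are broadcast into a larger output shape. Below, a 1-D row is tiled across the rows of a 2-D result.

```typescript
// Toy broadcast: copy a 1-D row into every row of a (rows x row.length)
// output buffer, mimicking broadcasting a smaller tensor to a larger shape.
function broadcastRow(row: Float32Array, rows: number): Float32Array {
  const out = new Float32Array(rows * row.length);
  for (let r = 0; r < rows; r++) {
    out.set(row, r * row.length); // each output row is a copy of `row`
  }
  return out;
}
```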

@fs-eire
Contributor

fs-eire commented Aug 28, 2020

Explanation of why "test_cast_DOUBLE_to_FLOAT" failed:

ONNX.js does not strictly use ONNX's data types: float and double are not treated differently when creating a tensor (all numbers are doubles in JavaScript anyway), even though their declared data types differ. Currently we bypass this problem by avoiding float64, but there should be a better way to support those types in the future.
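This behavior can be seen directly with JavaScript typed arrays: every JavaScript number is an IEEE-754 double, and a Float32Array rounds values to single precision on store, so 0.1 does not round-trip through it.

```typescript
// Typed arrays only change how values are stored, not the number type:
// a Float64Array preserves the double 0.1, a Float32Array rounds it.
const asDouble = new Float64Array([0.1]);
const asFloat = new Float32Array([0.1]);
const doubleRoundTrips = asDouble[0] === 0.1; // true: stored as a double
const floatRoundTrips = asFloat[0] === 0.1;   // false: rounded on store
```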

@28Smiles
Contributor Author

> Explanation of why "test_cast_DOUBLE_to_FLOAT" failed:
>
> ONNX.js does not strictly using data types in ONNX - float/double is not treated differently (all numbers are double in javascript anyway) when creating a tensor, but their data types are different. Currently we bypass this problem by avoid using float64; but there should be a better way to support those types in future.

But shouldn't this be the case the other way round?
test_cast_FLOAT_to_DOUBLE: float and double are the same, so the output is float?
test_cast_DOUBLE_to_FLOAT: the output should always be float?

Seems odd to me. Also, the test output for test_cast_DOUBLE_to_FLOAT is:

input tensor[0] check failed: expected type 'float64' but got float32

The test should expect a float, but expects a double. Most likely the two test names are switched?

@fs-eire
Contributor

fs-eire commented Aug 28, 2020

> input tensor[0] check failed: expected type 'float64' but got float32
>
> The test should expect a float, but expects a double. Most likely the two test names are switched?

The test case (ONNX model) expects a float64 input tensor, but since "there is only one float type in ONNX.js" (described above) the input tensor is created as float32, which is a mismatch to the model's input type requirement.
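Given that, the float64-to-float32 cast itself is trivial with typed arrays. A sketch, assuming the Cast kernel only needs to narrow the element type (hypothetical function, not the PR's code):

```typescript
// Constructing a Float32Array from a Float64Array rounds every element
// to single precision, which is the semantics of Cast(float64 -> float32).
function castDoubleToFloat(data: Float64Array): Float32Array {
  return new Float32Array(data);
}
```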

@28Smiles
Contributor Author

> The test case (ONNX model) expects a float64 input tensor, but since "there is only one float type in ONNX.js" (described above) the input tensor is created as float32, which is a mismatch to the model's input type requirement.

Guess I missed the "input" in the error message.
