
[Web] Shall we accept Uint16Array for 'float16' if Float16Array is available #23817

Closed
Honry opened this issue Feb 26, 2025 · 2 comments · Fixed by #23827
Labels
platform:web issues related to ONNX Runtime web; typically submitted using template

Comments

@Honry
Contributor

Honry commented Feb 26, 2025

Describe the issue

Float16Array is now enabled by default in the latest Chrome Canary. In ort-web, the float16 data type is now mapped to Float16Array here, while we are still passing Uint16Array as workaround input data, so the code hits the following exception.

tensor-impl.ts:265 Uncaught (in promise) TypeError: A float16 tensor's data must be type of function Float16Array() { [native code] }
    at new Tensor (tensor-impl.ts:265:19)

This would break all apps that still use Uint16Array as a workaround for float16. Shall we accept both Uint16Array and Float16Array for a period of time?

@fs-eire, @guschmue

To reproduce

Create ORT float16 CPU tensor with Uint16Array data:

new ort.Tensor('float16', new Uint16Array([0]), [1]);
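
A minimal app-side sketch (not part of the original report) of feature-detecting Float16Array and falling back to Uint16Array; the `Fp16Ctor` name and the `globalThis` lookup are illustrative, and `ort` is assumed to be the onnxruntime-web module:

```ts
// Sketch only: construct a float16 tensor with whichever typed array the runtime supports.
const fp16Bits = new Uint16Array([0x3c00]); // bit pattern for 1.0 in IEEE 754 half precision

// Float16Array is only defined in runtimes that ship the proposal (e.g. recent Chrome Canary).
const Fp16Ctor = (globalThis as any).Float16Array;
const data = Fp16Ctor ? new Fp16Ctor(fp16Bits.buffer) : fp16Bits;

const tensor = new ort.Tensor('float16', data, [1]);
```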

Urgency

No response

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.20.1

Execution Provider

'wasm'/'cpu' (WebAssembly CPU)

@Honry Honry added the platform:web issues related to ONNX Runtime web; typically submitted using template label Feb 26, 2025
@fs-eire
Contributor

fs-eire commented Feb 26, 2025

This is indeed a problem.

I think the behavior needs to be split into 2 discussions:

  • for model output, the problem is that ORT-web needs to decide which type to use. I think we can keep the current behavior for output: if Float16Array is available, we use it; otherwise we use Uint16Array.
  • for model input, we can loosen the restriction as suggested. Even when Float16Array is available, we should keep accepting Uint16Array as input data for a while (a sketch of the relaxed check follows below).
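
A hypothetical TypeScript sketch of the two behaviors described above (the names `isValidFloat16TensorData` and `Float16OutputCtor` are illustrative, not the actual tensor-impl.ts code):

```ts
// Hypothetical helpers illustrating the two behaviors discussed above;
// not the actual tensor-impl.ts implementation.
const Fp16Ctor = (globalThis as any).Float16Array; // undefined where unsupported

// Input side: accept Float16Array when it exists, but keep accepting
// Uint16Array for backward compatibility with existing apps.
function isValidFloat16TensorData(data: unknown): boolean {
  if (Fp16Ctor && data instanceof Fp16Ctor) {
    return true;
  }
  return data instanceof Uint16Array;
}

// Output side: prefer Float16Array when the runtime supports it, otherwise
// fall back to Uint16Array (the underlying 16-bit layout is identical).
const Float16OutputCtor = Fp16Ctor ?? Uint16Array;
```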

@xenova
Contributor

xenova commented Feb 26, 2025

Ran into this issue myself too, which #23827 aims to fix (i.e., use Float16Array if available)

@fs-eire fs-eire closed this as completed in 1872527 Mar 3, 2025
amarin16 pushed a commit that referenced this issue Mar 5, 2025
### Description

Resolve #23817



guschmue pushed a commit that referenced this issue Mar 6, 2025
### Description

Resolve #23817


