1.17.3 cherry-picks for ORT Web changes (#19926)
### Description
This PR is a preview of the cherry-picks for ort-web into `rel-1.17.3`, based
on `rel-1.17.2`.

<details>

<summary>ort-web changes to cherry-pick</summary>

The following commits are from the `main` branch.

`o` stands for pick, and `x` stands for skip.
```
o   2e0a388 [js/webgpu] Add HardSigmoid support (#19215)
o   d226e40 [js/webgpu] set query type in onRunStart (#19202)
o   61610ff [js/webgpu] Add FusedConv clip test case (#18900)
o   a33b5bd [JS/WebGPU] Added Uniforms to SkipLayerNorm. (#18788)
o   591f90c [js/webgpu] Fix issue of timestamp query (#19258)
o   7252c6e [WebNN EP] Support WebNN async API with Asyncify (#19145)
o   5b06505 [js/webgpu] Fix Tanh explosion (#19201)
o   656ca66 [js/webgpu] Support uniforms for conv, conv transpose, conv grouped (#18753)
o   a3f0e24 [js/webgpu] Support f16 uniform (#19098)
o   9e69606 fix f16 for attention, enable slice and flatten for more types (#19262)
o   624b4e2 [js/webgpu] Remove enableShapesUniforms (#19279)
o   90883a3 [js/webgpu] Add hardSigmoid activation for fusedConv (#19233)
o   85cef0a [js/webgpu] Support capture and replay for jsep (#18989)
o   d73131c [js/webgpu] Use DataType as uniform cpu type (#19281)
o   dd1f6cc [js/webgpu] resolve codescan alert (#19343)
o   3a2ab19 [js/webgpu] Refactor createTensorShapeVariables (#18883)
o   efc17e7 [js/webgpu] Fix the undefined push error (#19366)
 x  50806a7 [js/web] support external data in npm test (#19377)
o   ccbe264 [js/webgpu] Add LeakyRelu activation for fusedConv (#19369)
o   5ff27ef [js/webgpu] support customop FastGelu (#19392)
 x  03be65e [js/web] fix types exports in package.json (#19458)
o   06269a3 [js/webgpu] allow uint8 tensors for webgpu (#19545)
o   dfeda90 [JS/WebGPU] Add MatMulNBits (#19446)
o   1b48054 [js/webgpu] Create Split indices helpers by rank, not by shape (#19554)
o   3fe2c13 [js] small fix to workaround formatter (#19400)
 x  70567a4 [js/web] use ApiTensor insteadof onnxjs Tensor in TensorResultValidator (#19358)
o   6e04e36 [js/common] upgrade tsc in common from 4.9.5 to 5.2.2 (#19317)
o   58f4921 [js] changes to allow Float16Array if any polyfill is available (#19305)
o   57d6819 [js/web] Fix fused-conv is not included in npm test (#19581)
o   ebd220b Misspelling in README.md (#19433)
o   38c3432 Bump ip from 1.1.8 to 1.1.9 in /js/react_native (#19582)
o   fe82fcc [js/webgpu] Fix Conv2DTransposeMatMul f16 compilation failure (#19596)
o   76a2a48 Bump ip from 1.1.8 to 1.1.9 in /js/react_native/e2e (#19583)
o   29b1106 [node] Switch to setImmediate to avoid starving the Node.js event loop (#19610)
o   ae3d73c [JS/WebGPU] Fix Split and Where to handle corner cases. (#19613)
o   aec2389 [js/webgpu] allows a ProgramInfo's RunData to use zero sized output (#19614)
o   bb43a0f [js/webgpu] minor fixes to make tinyllama work (#19564)
o   0edb035 [js/web] fix suite test list for zero sized tensor (#19638)
o   3cb81cd [js/common] move 'env.wasm.trace' to 'env.trace' (#19617)
o   e30618d [js/webgpu] use Headless for webgpu test by default (#19702)
o   f06164e [js/web] transfer input buffer back to caller thread (#19677)
 x  a788514 [js/web] dump debug logs for karma for diagnose purpose (#19785)
o   24b72d2 [JS/WebGPU] Preserve zero size input tensor dims. (#19737)
o   4538d31 [js/webgpu] expose a few properties in WebGPU API (#19857)
o   53de2d8 [js/webgpu] Enable GroupedConvVectorize path (#19791)
o   ed250b8 [JS/WebGPU] Optimize MatMulNBits (#19852)
 x  e771a76 [js/test] align web test runner flags with ort.env (#19790)
o   79e50ae [js/web] rewrite backend resolve to allow multiple EPs (#19735)
o   acb0df2 Fix #19931 broken Get Started link of "ONNX Runtime JavaScript API" page (#19932)
o   b29849a [js/common] fix typedoc warnings (#19933)
o   afdab62 Bump follow-redirects from 1.15.4 to 1.15.6 in /js/web (#19949)
o   28ad6c3 Bump follow-redirects from 1.15.4 to 1.15.6 in /js/node (#19951)
o   7e0d424 accumulate in fp32 for Reduce* (#19868)
o   4c6a6a3 [js/webgpu] Fix NAN caused by un-initialized buffer in instance-norm (#19387)
o   01c7aaf [js/webgpu] allow setting env.webgpu.adapter (#19940)
o   c45cff6 [js/webgpu] fix maxpool / fp16 (#19981)
```

</details>

<details>
<summary>Cherry-pick command lines</summary>

```sh
git cherry-pick 2e0a388
git cherry-pick d226e40
git cherry-pick 61610ff
git cherry-pick a33b5bd
git cherry-pick 591f90c
git cherry-pick 7252c6e
git cherry-pick 5b06505
git cherry-pick 656ca66
git cherry-pick a3f0e24
git cherry-pick 9e69606
git cherry-pick 624b4e2
git cherry-pick 90883a3
git cherry-pick 85cef0a  #<<<<< Note: conflicts
git cherry-pick d73131c
git cherry-pick dd1f6cc
git cherry-pick 3a2ab19
git cherry-pick efc17e7
git cherry-pick ccbe264
git cherry-pick 5ff27ef
git cherry-pick 06269a3
git cherry-pick dfeda90
git cherry-pick 1b48054
git cherry-pick 3fe2c13
git cherry-pick 6e04e36
git cherry-pick 58f4921
git cherry-pick 57d6819
git cherry-pick ebd220b
git cherry-pick 38c3432
git cherry-pick fe82fcc
git cherry-pick 76a2a48
git cherry-pick 29b1106
git cherry-pick ae3d73c
git cherry-pick aec2389
git cherry-pick bb43a0f
git cherry-pick 0edb035
git cherry-pick 3cb81cd
git cherry-pick e30618d
git cherry-pick f06164e
git cherry-pick 24b72d2
git cherry-pick 4538d31
git cherry-pick 53de2d8
git cherry-pick ed250b8
git cherry-pick 79e50ae
git cherry-pick acb0df2
git cherry-pick b29849a
git cherry-pick afdab62
git cherry-pick 28ad6c3
git cherry-pick 7e0d424
git cherry-pick 4c6a6a3
git cherry-pick 01c7aaf
git cherry-pick c45cff6
```
</details>

<details>
<summary>Cherry-pick conflicts</summary>

- 85cef0a #18989
  This change enables the graph capture feature for JSEP, and it landed after
  the ROCm EP enabled its own graph capture feature. That ROCm EP change was
  not cherry-picked into rel-1.17.2, so this pick conflicts and must be
  resolved manually (resolve the files, `git add` them, then run
  `git cherry-pick --continue`).
</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Jiajia Qin <jiajia.qin@intel.com>
Co-authored-by: Xu Xing <xing.xu@intel.com>
Co-authored-by: satyajandhyala <satya.k.jandhyala@gmail.com>
Co-authored-by: Yang Gu <yang.gu@intel.com>
Co-authored-by: Wanming Lin <wanming.lin@intel.com>
Co-authored-by: Jiajie Hu <jiajie.hu@intel.com>
Co-authored-by: Guenther Schmuelling <guschmue@microsoft.com>
Co-authored-by: Matttttt <18152455+martholomew@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Segev Finer <segev208@gmail.com>
Co-authored-by: Belem Zhang <belem.zhang@intel.com>
12 people committed Mar 29, 2024
1 parent 046d06f · commit 45ff957

Showing 114 changed files with 5,493 additions and 1,350 deletions.
121 changes: 90 additions & 31 deletions js/common/lib/backend-impl.ts
```diff
@@ -2,6 +2,7 @@
 // Licensed under the MIT License.
 
 import {Backend} from './backend.js';
+import {InferenceSession} from './inference-session.js';
 
 interface BackendInfo {
   backend: Backend;
@@ -10,6 +11,7 @@ interface BackendInfo {
   initPromise?: Promise<void>;
   initialized?: boolean;
   aborted?: boolean;
+  error?: string;
 }
 
 const backends: Map<string, BackendInfo> = new Map();
@@ -60,43 +62,100 @@ export const registerBackend = (name: string, backend: Backend, priority: number
 };
 
 /**
- * Resolve backend by specified hints.
+ * Try to resolve and initialize a backend.
  *
- * @param backendHints - a list of execution provider names to lookup. If omitted use registered backends as list.
- * @returns a promise that resolves to the backend.
+ * @param backendName - the name of the backend.
+ * @returns the backend instance if resolved and initialized successfully, or an error message if failed.
+ */
+const tryResolveAndInitializeBackend = async(backendName: string): Promise<Backend|string> => {
+  const backendInfo = backends.get(backendName);
+  if (!backendInfo) {
+    return 'backend not found.';
+  }
+
+  if (backendInfo.initialized) {
+    return backendInfo.backend;
+  } else if (backendInfo.aborted) {
+    return backendInfo.error!;
+  } else {
+    const isInitializing = !!backendInfo.initPromise;
+    try {
+      if (!isInitializing) {
+        backendInfo.initPromise = backendInfo.backend.init(backendName);
+      }
+      await backendInfo.initPromise;
+      backendInfo.initialized = true;
+      return backendInfo.backend;
+    } catch (e) {
+      if (!isInitializing) {
+        backendInfo.error = `${e}`;
+        backendInfo.aborted = true;
+      }
+      return backendInfo.error!;
+    } finally {
+      delete backendInfo.initPromise;
+    }
+  }
+};
+
+/**
+ * Resolve execution providers from the specific session options.
+ *
+ * @param options - the session options object.
+ * @returns a promise that resolves to a tuple of an initialized backend instance and a session options object with
+ * filtered EP list.
  *
  * @ignore
  */
-export const resolveBackend = async(backendHints: readonly string[]): Promise<Backend> => {
-  const backendNames = backendHints.length === 0 ? backendsSortedByPriority : backendHints;
-  const errors = [];
-  for (const backendName of backendNames) {
-    const backendInfo = backends.get(backendName);
-    if (backendInfo) {
-      if (backendInfo.initialized) {
-        return backendInfo.backend;
-      } else if (backendInfo.aborted) {
-        continue;  // current backend is unavailable; try next
-      }
-
-      const isInitializing = !!backendInfo.initPromise;
-      try {
-        if (!isInitializing) {
-          backendInfo.initPromise = backendInfo.backend.init(backendName);
-        }
-        await backendInfo.initPromise;
-        backendInfo.initialized = true;
-        return backendInfo.backend;
-      } catch (e) {
-        if (!isInitializing) {
-          errors.push({name: backendName, err: e});
-        }
-        backendInfo.aborted = true;
-      } finally {
-        delete backendInfo.initPromise;
-      }
-    }
-  }
-
-  throw new Error(`no available backend found. ERR: ${errors.map(e => `[${e.name}] ${e.err}`).join(', ')}`);
-};
+export const resolveBackendAndExecutionProviders = async(options: InferenceSession.SessionOptions):
+    Promise<[backend: Backend, options: InferenceSession.SessionOptions]> => {
+      // extract backend hints from session options
+      const eps = options.executionProviders || [];
+      const backendHints = eps.map(i => typeof i === 'string' ? i : i.name);
+      const backendNames = backendHints.length === 0 ? backendsSortedByPriority : backendHints;
+
+      // try to resolve and initialize all requested backends
+      let backend: Backend|undefined;
+      const errors = [];
+      const availableBackendNames = new Set<string>();
+      for (const backendName of backendNames) {
+        const resolveResult = await tryResolveAndInitializeBackend(backendName);
+        if (typeof resolveResult === 'string') {
+          errors.push({name: backendName, err: resolveResult});
+        } else {
+          if (!backend) {
+            backend = resolveResult;
+          }
+          if (backend === resolveResult) {
+            availableBackendNames.add(backendName);
+          }
+        }
+      }
+
+      // if no backend is available, throw error.
+      if (!backend) {
+        throw new Error(`no available backend found. ERR: ${errors.map(e => `[${e.name}] ${e.err}`).join(', ')}`);
+      }
+
+      // for each explicitly requested backend, if it's not available, output warning message.
+      for (const {name, err} of errors) {
+        if (backendHints.includes(name)) {
+          // eslint-disable-next-line no-console
+          console.warn(`removing requested execution provider "${
+              name}" from session options because it is not available: ${err}`);
+        }
+      }
+
+      const filteredEps = eps.filter(i => availableBackendNames.has(typeof i === 'string' ? i : i.name));
+
+      return [
+        backend, new Proxy(options, {
+          get: (target, prop) => {
+            if (prop === 'executionProviders') {
+              return filteredEps;
+            }
+            return Reflect.get(target, prop);
+          }
+        })
+      ];
+    };
```
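For context (not part of the commit), here is a minimal sketch of what the rewritten resolver means for an API consumer. The model path is hypothetical; the `onnxruntime-web` package and `InferenceSession.create` signature are the real public API:

```ts
import * as ort from 'onnxruntime-web';

// Request both WebGPU and WASM. With the rewritten resolver, an unavailable
// EP no longer makes session creation fail outright: if WebGPU cannot be
// initialized, it is removed from the session options with a console warning
// and the session falls back to the next backend in the list ('wasm').
async function createSession(): Promise<ort.InferenceSession> {
  return ort.InferenceSession.create('./model.onnx', {
    executionProviders: ['webgpu', 'wasm'],
  });
}
```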
6 changes: 3 additions & 3 deletions js/common/lib/backend.ts
```diff
@@ -58,7 +58,7 @@ export interface TrainingSessionHandler extends SessionHandler {
       options: InferenceSession.RunOptions): Promise<SessionHandler.ReturnType>;
 
   getParametersSize(trainableOnly: boolean): Promise<number>;
-  loadParametersBuffer(array: Uint8Array, trainableOnly: boolean): Promise<void>;
+  loadParametersBuffer(buffer: Uint8Array, trainableOnly: boolean): Promise<void>;
   getContiguousParameters(trainableOnly: boolean): Promise<OnnxValue>;
 }
 
@@ -77,8 +77,8 @@ export interface Backend {
       Promise<InferenceSessionHandler>;
 
   createTrainingSessionHandler?
-      (checkpointStateUriOrBuffer: TrainingSession.URIorBuffer, trainModelUriOrBuffer: TrainingSession.URIorBuffer,
-       evalModelUriOrBuffer: TrainingSession.URIorBuffer, optimizerModelUriOrBuffer: TrainingSession.URIorBuffer,
+      (checkpointStateUriOrBuffer: TrainingSession.UriOrBuffer, trainModelUriOrBuffer: TrainingSession.UriOrBuffer,
+       evalModelUriOrBuffer: TrainingSession.UriOrBuffer, optimizerModelUriOrBuffer: TrainingSession.UriOrBuffer,
        options: InferenceSession.SessionOptions): Promise<TrainingSessionHandler>;
 }
```
50 changes: 49 additions & 1 deletion js/common/lib/env.ts
```diff
@@ -36,6 +36,7 @@ export declare namespace Env {
     /**
      * set or get a boolean value indicating whether to enable trace.
      *
+     * @deprecated Use `env.trace` instead. If `env.trace` is set, this property will be ignored.
      * @defaultValue `false`
      */
    trace?: boolean;
@@ -142,13 +143,52 @@ export declare namespace Env {
       */
      ondata?: (data: WebGpuProfilingData) => void;
    };
+    /**
+     * Set or get the power preference.
+     *
+     * Setting this property only has effect before the first WebGPU inference session is created. The value will be
+     * used as options for `navigator.gpu.requestAdapter()`.
+     *
+     * See {@link https://gpuweb.github.io/gpuweb/#dictdef-gpurequestadapteroptions} for more details.
+     *
+     * @defaultValue `undefined`
+     */
+    powerPreference?: 'low-power'|'high-performance';
+    /**
+     * Set or get the force fallback adapter flag.
+     *
+     * Setting this property only has effect before the first WebGPU inference session is created. The value will be
+     * used as options for `navigator.gpu.requestAdapter()`.
+     *
+     * See {@link https://gpuweb.github.io/gpuweb/#dictdef-gpurequestadapteroptions} for more details.
+     *
+     * @defaultValue `undefined`
+     */
+    forceFallbackAdapter?: boolean;
+    /**
+     * Set or get the adapter for WebGPU.
+     *
+     * Setting this property only has effect before the first WebGPU inference session is created. The value will be
+     * used as the GPU adapter for the underlying WebGPU backend to create GPU device.
+     *
+     * If this property is not set, it will be available to get after the first WebGPU inference session is created. The
+     * value will be the GPU adapter that created by the underlying WebGPU backend.
+     *
+     * When use with TypeScript, the type of this property is `GPUAdapter` defined in "@webgpu/types".
+     * Use `const adapter = env.webgpu.adapter as GPUAdapter;` in TypeScript to access this property with correct type.
+     *
+     * see comments on {@link Tensor.GpuBufferType}
+     */
+    adapter: unknown;
     /**
      * Get the device for WebGPU.
      *
+     * This property is only available after the first WebGPU inference session is created.
+     *
      * When use with TypeScript, the type of this property is `GPUDevice` defined in "@webgpu/types".
      * Use `const device = env.webgpu.device as GPUDevice;` in TypeScript to access this property with correct type.
      *
-     * see comments on {@link GpuBufferType} for more details about why not use types defined in "@webgpu/types".
+     * see comments on {@link Tensor.GpuBufferType} for more details about why not use types defined in "@webgpu/types".
      */
     readonly device: unknown;
     /**
@@ -167,13 +207,21 @@ export interface Env {
   * @defaultValue `'warning'`
   */
  logLevel?: 'verbose'|'info'|'warning'|'error'|'fatal';
+
  /**
   * Indicate whether run in debug mode.
   *
   * @defaultValue `false`
   */
  debug?: boolean;
 
+ /**
+  * set or get a boolean value indicating whether to enable trace.
+  *
+  * @defaultValue `false`
+  */
+ trace?: boolean;
+
  /**
   * Get version of the current package.
   */
```
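An illustrative sketch of the new flags (not from the commit; the `GPUAdapter`/`GPUDevice` casts assume "@webgpu/types" is installed):

```ts
import * as ort from 'onnxruntime-web';

// These only take effect before the first WebGPU inference session is
// created; the values are passed to navigator.gpu.requestAdapter().
ort.env.webgpu.powerPreference = 'high-performance';
ort.env.webgpu.forceFallbackAdapter = false;

// `env.trace` replaces the now-deprecated `env.wasm.trace`.
ort.env.trace = true;

// After the first WebGPU session is created, the adapter and device the
// backend ended up with can be read back (typed via "@webgpu/types").
const adapter = ort.env.webgpu.adapter as GPUAdapter;
const device = ort.env.webgpu.device as GPUDevice;
console.log(adapter, device);
```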
5 changes: 4 additions & 1 deletion js/common/lib/index.ts
```diff
@@ -11,7 +11,7 @@
  * - [onnxruntime-react-native](https://www.npmjs.com/package/onnxruntime-react-native)
  *
  * See also:
- * - [Get Started](https://onnxruntime.ai/docs/get-started/with-javascript.html)
+ * - [Get Started](https://onnxruntime.ai/docs/get-started/with-javascript/)
  * - [Inference examples](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js)
  *
  * @packageDocumentation
@@ -21,6 +21,9 @@ export * from './backend.js';
 export * from './env.js';
 export * from './inference-session.js';
 export * from './tensor.js';
+export * from './tensor-conversion.js';
+export * from './tensor-factory.js';
+export * from './trace.js';
 export * from './onnx-model.js';
 export * from './onnx-value.js';
 export * from './training-session.js';
```
10 changes: 4 additions & 6 deletions js/common/lib/inference-session-impl.ts
```diff
@@ -1,7 +1,7 @@
 // Copyright (c) Microsoft Corporation. All rights reserved.
 // Licensed under the MIT License.
 
-import {resolveBackend} from './backend-impl.js';
+import {resolveBackendAndExecutionProviders} from './backend-impl.js';
 import {InferenceSessionHandler} from './backend.js';
 import {InferenceSession as InferenceSessionInterface} from './inference-session.js';
 import {OnnxValue} from './onnx-value.js';
@@ -195,11 +195,9 @@ export class InferenceSession implements InferenceSessionInterface {
       throw new TypeError('Unexpected argument[0]: must be \'path\' or \'buffer\'.');
     }
 
-    // get backend hints
-    const eps = options.executionProviders || [];
-    const backendHints = eps.map(i => typeof i === 'string' ? i : i.name);
-    const backend = await resolveBackend(backendHints);
-    const handler = await backend.createInferenceSessionHandler(filePathOrUint8Array, options);
+    // resolve backend, update session options with validated EPs, and create session handler
+    const [backend, optionsWithValidatedEPs] = await resolveBackendAndExecutionProviders(options);
+    const handler = await backend.createInferenceSessionHandler(filePathOrUint8Array, optionsWithValidatedEPs);
     TRACE_FUNC_END();
     return new InferenceSession(handler);
   }
```
51 changes: 42 additions & 9 deletions js/common/lib/inference-session.ts
```diff
@@ -111,7 +111,7 @@ export declare namespace InferenceSession {
     optimizedModelFilePath?: string;
 
     /**
-     * Wether enable profiling.
+     * Whether enable profiling.
      *
      * This setting is a placeholder for a future use.
      */
@@ -154,6 +154,12 @@ export declare namespace InferenceSession {
      */
     preferredOutputLocation?: OnnxValueDataLocation|{readonly [outputName: string]: OnnxValueDataLocation};
 
+    /**
+     * Whether enable graph capture.
+     * This setting is available only in ONNXRuntime Web for WebGPU EP.
+     */
+    enableGraphCapture?: boolean;
+
     /**
      * Store configurations for a session. See
      * https://github.com/microsoft/onnxruntime/blob/main/include/onnxruntime/core/session/
@@ -180,22 +186,22 @@ export declare namespace InferenceSession {
   // #region execution providers
 
   // Currently, we have the following backends to support execution providers:
-  // Backend Node.js binding: supports 'cpu' and 'cuda'.
+  // Backend Node.js binding: supports 'cpu', 'dml' (win32), 'coreml' (macOS) and 'cuda' (linux).
   // Backend WebAssembly: supports 'cpu', 'wasm', 'webgpu' and 'webnn'.
   // Backend ONNX.js: supports 'webgl'.
+  // Backend React Native: supports 'cpu', 'xnnpack', 'coreml' (iOS), 'nnapi' (Android).
   interface ExecutionProviderOptionMap {
+    coreml: CoreMLExecutionProviderOption;
     cpu: CpuExecutionProviderOption;
-    coreml: CoreMlExecutionProviderOption;
     cuda: CudaExecutionProviderOption;
     dml: DmlExecutionProviderOption;
+    nnapi: NnapiExecutionProviderOption;
     tensorrt: TensorRtExecutionProviderOption;
     wasm: WebAssemblyExecutionProviderOption;
     webgl: WebGLExecutionProviderOption;
-    xnnpack: XnnpackExecutionProviderOption;
     webgpu: WebGpuExecutionProviderOption;
     webnn: WebNNExecutionProviderOption;
-    nnapi: NnapiExecutionProviderOption;
+    xnnpack: XnnpackExecutionProviderOption;
   }
 
   type ExecutionProviderName = keyof ExecutionProviderOptionMap;
@@ -213,10 +219,6 @@ export declare namespace InferenceSession {
     readonly name: 'cuda';
     deviceId?: number;
   }
-  export interface CoreMlExecutionProviderOption extends ExecutionProviderOption {
-    readonly name: 'coreml';
-    coreMlFlags?: number;
-  }
   export interface DmlExecutionProviderOption extends ExecutionProviderOption {
     readonly name: 'dml';
     deviceId?: number;
@@ -247,8 +249,39 @@ export declare namespace InferenceSession {
   }
   export interface CoreMLExecutionProviderOption extends ExecutionProviderOption {
     readonly name: 'coreml';
+    /**
+     * The bit flags for CoreML execution provider.
+     *
+     * ```
+     * COREML_FLAG_USE_CPU_ONLY = 0x001
+     * COREML_FLAG_ENABLE_ON_SUBGRAPH = 0x002
+     * COREML_FLAG_ONLY_ENABLE_DEVICE_WITH_ANE = 0x004
+     * COREML_FLAG_ONLY_ALLOW_STATIC_INPUT_SHAPES = 0x008
+     * COREML_FLAG_CREATE_MLPROGRAM = 0x010
+     * ```
+     *
+     * See include/onnxruntime/core/providers/coreml/coreml_provider_factory.h for more details.
+     *
+     * This flag is available only in ONNXRuntime (Node.js binding).
+     */
+    coreMlFlags?: number;
+    /**
+     * Specify whether to use CPU only in CoreML EP.
+     *
+     * This setting is available only in ONNXRuntime (react-native).
+     */
     useCPUOnly?: boolean;
+    /**
+     * Specify whether to enable CoreML EP on subgraph.
+     *
+     * This setting is available only in ONNXRuntime (react-native).
+     */
     enableOnSubgraph?: boolean;
+    /**
+     * Specify whether to only enable CoreML EP for Apple devices with ANE (Apple Neural Engine).
+     *
+     * This setting is available only in ONNXRuntime (react-native).
+     */
     onlyEnableDeviceWithANE?: boolean;
   }
   export interface NnapiExecutionProviderOption extends ExecutionProviderOption {
```
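A usage sketch for the new `enableGraphCapture` session option (illustrative only; the model path is hypothetical, and graph capture requires a model that the WebGPU EP can fully capture):

```ts
import * as ort from 'onnxruntime-web';

// ONNX Runtime Web, WebGPU EP: opt in to the new graph-capture path, which
// records the GPU commands once and replays them on subsequent runs.
async function createCapturedSession(): Promise<ort.InferenceSession> {
  return ort.InferenceSession.create('./model.onnx', {
    executionProviders: ['webgpu'],
    enableGraphCapture: true,
  });
}
```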
