The idea is to implement a WebExtensions experiment that:

- discovers the native ML-inference libraries available on the system: platform-provided frameworks such as WinML, TF Lite, and Core ML, as well as libraries installed from packages, such as onnxruntime (which contains Microsoft telemetry!), libonnx, and maybe even ONNX-mlir + LLVM 15
- binds to the native libraries available on the system
- provides WebExtensions with an API to run model inference

More info: open-source-ideas/ideas#69

The API should be designed so that almost exactly the same API could also be exposed to web pages, perhaps under a different namespace (e.g. `browser.onnx` in WebExtensions and `navigator.onnx` in web pages), letting web pages use almost exactly the same code as WebExtensions.
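A rough sketch of what such an API might look like from an extension's point of view (the `browser.onnx` namespace, the method names, and the tensor layout below are all hypothetical placeholders, not an existing Firefox API):

```js
// Hypothetical extension-side usage of the proposed API.
// browser.onnx, getAvailableBackends(), createSession() and run() are
// illustrative names only; no such API exists in Firefox today.
async function classify(imageData) {
  // Ask the browser which native backends it discovered
  // (e.g. Core ML, WinML, TF Lite, onnxruntime).
  const backends = await browser.onnx.getAvailableBackends();
  console.log("native inference backends:", backends);

  // Load an ONNX model shipped with the extension; the browser binds it
  // to whichever native runtime it found on the system.
  const session = await browser.onnx.createSession(
    browser.runtime.getURL("models/mobilenetv2.onnx")
  );

  // Run inference; inputs and outputs are plain typed arrays plus shape metadata.
  const outputs = await session.run({
    input: { data: imageData, dims: [1, 3, 224, 224] },
  });
  return outputs.scores.data;
}
```

In a web page the same code would only need `navigator.onnx` instead of `browser.onnx` (and an ordinary URL instead of `browser.runtime.getURL`), which is the point of keeping the two surfaces aligned.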
You're absolutely right that native is faster. In fact, the current WASM implementation is 10x slower than a proper native implementation, which could also run in a sandbox. That extra speed could also be used to deliver better translation quality.
We are not going to make any improvements or fixes to the addon since we are now focusing on the built-in version.
Once some API along these lines is available in Firefox, we'll try to make use of it to speed up model inference.
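If such an API lands, the addon could feature-detect it and fall back to the current WASM path otherwise. A minimal sketch, assuming the hypothetical `browser.onnx` namespace from above and placeholder helper functions:

```js
// Hypothetical feature detection: prefer native inference when the
// experimental API is present, otherwise keep using the existing WASM engine.
async function translate(text) {
  if (typeof browser !== "undefined" && browser.onnx) {
    const session = await browser.onnx.createSession(
      browser.runtime.getURL("models/translation.onnx") // placeholder model path
    );
    return runNativeTranslation(session, text); // placeholder helper
  }
  return runWasmTranslation(text); // current WASM path (placeholder helper)
}
```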