Look into pre-canned models #1
Comments
I am open to it being V2, but I would need to understand the timeframe on that. The performance issues around fetching, downloading, loading, and utilizing a model are a very important aspect we should address for ML in the browser to gain broad adoption. I think there are some common models that could be baked into the engine; we could work on those during V1 of the API and, if they aren't ready by then, move them to V2. I don't want to punt things prematurely.
Not having this, nor a standardized primitive for 1) resumable large-file downloads (Background Fetch is one possibility) and 2) reliable detection of whether the user is on 3G or Wi-Fi, is a bit of a concern.
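For concreteness, here is a hedged sketch of the two primitives mentioned above. Both helper names and the 5 MB threshold are assumptions for illustration, not part of any proposal: resumption is expressed as a plain HTTP `Range` header, and connection detection keys on the draft Network Information API's `effectiveType` field.

```typescript
// Illustrative sketch only: neither helper is part of any proposed web API.

// 1) Resuming a large download: after a partial download of `receivedBytes`,
//    an HTTP Range header asks the server for the remainder of the file.
function rangeHeaderFor(receivedBytes: number): string {
  // "bytes=N-" requests everything from byte offset N to the end.
  return `bytes=${receivedBytes}-`;
}

// 2) Deferring a large model download on a constrained connection, keyed on
//    the Network Information API's effectiveType values. The 5 MB threshold
//    is an assumed policy, not a standardized value.
type EffectiveType = "slow-2g" | "2g" | "3g" | "4g";

function shouldDeferModelDownload(
  effectiveType: EffectiveType,
  modelBytes: number
): boolean {
  const LARGE_MODEL_BYTES = 5 * 1024 * 1024;
  return modelBytes > LARGE_MODEL_BYTES && effectiveType !== "4g";
}

// In a page, the pieces would be wired up roughly like this (not run here):
// const conn = (navigator as any).connection;
// if (!shouldDeferModelDownload(conn?.effectiveType ?? "4g", expectedBytes)) {
//   await fetch(modelUrl, { headers: { Range: rangeHeaderFor(receivedBytes) } });
// }
```

Even with these pieces, a page has to hand-roll persistence and retry logic, which is why a built-in primitive such as Background Fetch is attractive.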
Updated DML and added the ONNX column.
At F2F we agreed to look into pre-canned (built-in platform-provided) models. See https://www.w3.org/2018/10/26-webmachinelearning-minutes.html#x03 for related discussion.
The group seemed to agree that support for built-in models is a v2 feature, and in v1 the API would support custom pre-trained models fetched from the server.
Tagging @gregwhitworth @cynthia @mmccool @huningxin who took part in this discussion.
I suggest we use this issue to solicit further input while making sure the v1 API provides extension points to allow support for pre-canned models in v2.