Create plan for how to move forward with Inference API #51
Comments
Thanks for raising this @jbingham - I think I'd simply move forward with beginning spec text, with a note that it is not ready for implementation but that we welcome prototypes to continue to iterate on the spec. This CG is public and we're ready to begin authoring the API shape for both inputs and outputs, etc.
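For context, here is a minimal sketch of the kind of load-then-run API shape being discussed. Every name in it (the loader class, `load`, `compute`, the named input/output tensors) is an illustrative assumption for this thread, not the agreed spec; it is mocked so it runs anywhere rather than hanging off a real browser entry point:

```javascript
// Hypothetical shape for a web "model loader" API, mocked so it can run
// in plain Node. A real spec would expose this from the browser (e.g. via
// some navigator-level entry point), not as a user-defined class.
class FakeModelLoader {
  // load() would fetch and compile a model from a URL; this mock just
  // records the URL and returns an object exposing compute().
  async load(url) {
    return {
      url,
      // compute() maps named input tensors to named output tensors.
      // This mock echoes the input's length back as a zero-filled
      // output tensor, standing in for actual inference.
      async compute(inputs) {
        return { y: new Float32Array(inputs.x.length) };
      },
    };
  }
}

async function main() {
  const loader = new FakeModelLoader();
  const model = await loader.load('https://example.com/model.tflite');
  const outputs = await model.compute({ x: new Float32Array([1, 2, 3]) });
  console.log(outputs.y.length); // 3
}
main();
```

The point of the sketch is only that "inputs and outputs" would likely be named tensors passed to a single run call, which is the part of the API shape the spec text would need to pin down first.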
I took an action to transfer https://github.com/jbingham/web-ml-inference under the https://github.com/webmachinelearning GitHub org.

Given cool URLs do not change, I'd like us to spend a minute to come up with a repo name that can stand the test of time. I felt web-ml-inference may not be the greatest name? The name should be descriptive yet concise. The convention is to use a dash in place of whitespace, unless it's a word + abbreviation, in which case nix the spaces. Think about names that allow the repo to live peacefully together with webnn, possibly get together with it in the future.

To start bikeshedding, here are some suggestions:
https://github.com/webmachinelearning/model-loader
https://github.com/webmachinelearning/load-model
https://github.com/webmachinelearning/load-run-model

Using GH Pages hosting, the spec will appear at:
https://webmachinelearning.github.io/<repo-name>

Comments welcome! I know everyone has an opinion when we talk about naming things :-)
Agree that ml-inference-api is not a great name. The most descriptive is load-run-model, and model-loader sounds best to my ears. Either is fine for me.
…On Thu, Apr 9, 2020 at 1:13 AM Anssi Kostiainen ***@***.***> wrote:
I took an action to transfer https://github.com/jbingham/web-ml-inference
under https://github.com/webmachinelearning GitHub org.
Given cool URLs do not change, I'd like us to spend a minute to come up
with a repo name that can stand the test of time. I felt web-ml-inference
may not be the greatest name? The name should be descriptive yet concise.
The convention is to use dash in place of whitespace, unless it's a word +
abbreviation, in which case nix the spaces. Think about names that allow
the repo live peacefully together with webnn, possibly get together with
it in the future.
To start bikeshedding, here are some suggestions:
https://github.com/webmachinelearning/model-loader
https://github.com/webmachinelearning/load-model
https://github.com/webmachinelearning/load-run-model
Using GH Pages hosting, the spec will appear at:
https://webmachinelearning.github.io/<repo-name>
Comments welcome! I know everyone has an opinion when we talk about naming
things :-)
Let's go with model-loader. @jbingham, I'll get in touch with you offline to handle the transfer.
The repo has been transferred and is accepting contributions at: https://github.com/webmachinelearning/model-loader
I think we have a plan now, given the https://github.com/webmachinelearning/model-loader repo hosts an explainer and an early spec draft, and soon we will also have some implementation experience. As a gardening action, I'll close this issue. Thanks @jbingham for leading this effort!
This is a followup to Issue 41.
Now that the group has decided that an inference API (also known as the Load/Run Model API) is within the charter, let's define the steps to move forward with this.
I've talked with some web standards experts, and will update this thread with concrete steps.