Support loading offline models #16
Since tf.js pre-trained model APIs can take model URLs as options, I think there are a few possibilities to support offline models:

1. Point the options at a locally running model server:

```js
const options = {
  detectorModelUrl: "http://localhost:5000/hand/detector-full",
  landmarkModelUrl: "http://localhost:5000/hand/landmark-full"
};
handpose = ml5.handpose(options);
```

2. Publish the model files as an npm package and load them from node_modules:

```js
const options = {
  detectorModelUrl: "node_modules/@ml5-models/hand/detector-full",
  landmarkModelUrl: "node_modules/@ml5-models/hand/landmark-full"
};
handpose = ml5.handpose(options);
```

3. Have the user download the model files and host them as part of the project:

```js
const options = {
  detectorModelUrl: "./hand-detector-full",
  landmarkModelUrl: "./hand-landmark-full"
};
handpose = ml5.handpose(options);
```

I personally think the third option is the simplest and most intuitive. However, I would love to get some ideas or suggestions from anyone! @shiffman @MOQN @yining1023 @gohai
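For context, ml5's handpose appears to wrap the @tensorflow-models/hand-pose-detection package, whose tfjs-runtime config already accepts these same two URLs, so the options above could be passed straight through. A minimal sketch of the underlying call (detectorModelUrl and landmarkModelUrl are real fields of that config; the local paths are placeholder assumptions):

```js
import * as handPoseDetection from "@tensorflow-models/hand-pose-detection";

// Sketch: forward the user-supplied model URLs to the underlying detector.
// The local paths below are placeholders, not shipped file names.
const detector = await handPoseDetection.createDetector(
  handPoseDetection.SupportedModels.MediaPipeHands,
  {
    runtime: "tfjs",
    detectorModelUrl: "./hand-detector-full/model.json",
    landmarkModelUrl: "./hand-landmark-full/model.json"
  }
);
```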
Could you help me understand the difference between options 1 and 3, @ziyuan-linn? In either case, those two make the most sense to me. (It'd be great to run ml5 without having to know about node.js.)
I agree with @gohai! I think supporting and documenting how to use node modules is above and beyond the scope of what we can handle. This will also be a fairly rare use case, so I think we can leave it to the user to host their own server or download the models locally. We can provide a short guide on the website or a GitHub markdown page but keep this fairly hidden from beginners just getting started with ml5.js.
@gohai Option 1 uses the ml5js/ml5-data-and-models-server repo, which contains an express server that automatically hosts the model files locally. If I understand correctly, this would likely require two different local servers running for a project: one for the models and one for the actual project. Option 3 is similar to what @shiffman mentioned: a short guide on the website or GitHub page on how to download and host the models locally. It is up to the user to decide where and how the downloaded models are hosted, and in most cases, the user can simply include the model files as part of the project files.

I agree with everything said here! I think option 3 would work best for us. We can stop supporting ml5-data-and-models-server, which would probably be hard to maintain and confusing for users. A simple guide would be perfectly fine for anyone who wants to host models locally.
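As a sketch of what such a guide might show, here is option 3 end to end. It assumes each model is distributed in the usual TF.js layout (a model.json plus binary weight shards) and that the project is served by any static file server; the folder and file names are placeholders:

```js
// Assumed project layout after downloading the model files:
//
//   my-sketch/
//   ├── index.html
//   ├── sketch.js
//   └── models/
//       ├── hand-detector-full/   (model.json, *.bin weight shards)
//       └── hand-landmark-full/   (model.json, *.bin weight shards)

const options = {
  detectorModelUrl: "models/hand-detector-full/model.json",
  landmarkModelUrl: "models/hand-landmark-full/model.json"
};
handpose = ml5.handpose(options);
```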
Add the ability to load offline models, so that users can still run their projects with spotty Internet or from places in the world where access to online models is restricted, e.g. China.
See prior discussion in ml5js/ml5-library#1254.
Notably, now that we're using TF version ^4.2.0, we should be able to specify where the model is loaded from, as Joey noted in his reply in the issue mentioned above.
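As an illustration of what TF.js allows here: tf.loadGraphModel accepts any URL (or an io.IOHandler), so a locally hosted model.json loads the same way as a remote one. A minimal sketch, with a placeholder path:

```js
import * as tf from "@tensorflow/tfjs";

// Load a graph model from a locally hosted model.json instead of a
// remote URL; the path is a placeholder.
const model = await tf.loadGraphModel("./models/hand-detector-full/model.json");
```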