
Support loading offline models #16

Open
sproutleaf opened this issue Jul 12, 2023 · 4 comments
@sproutleaf

Add the ability to load offline models, so that users can still run their projects with a spotty Internet connection or from parts of the world where access to online models is restricted (e.g., China).

See prior discussion in ml5js/ml5-library#1254.

Notably, now that we're using TensorFlow.js ^4.2.0, we should be able to specify where the model is loaded from. To quote Joey's reply in the issue mentioned above:

If we do end up updating our tf versions to some of the more recent versions, then it looks like in the latest face-landmarks-detection lib we can specify where our model files should be loaded from -- https://github.com/tensorflow/tfjs-models/tree/master/face-landmarks-detection/src/mediapipe#create-a-detector -- which, in this case, would be somewhere on a local server.
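
For reference, a minimal sketch of that API, assuming the tfjs runtime of @tensorflow-models/face-landmarks-detection (the localhost URLs are placeholders, not anything ml5 provides today):

import * as faceLandmarksDetection from "@tensorflow-models/face-landmarks-detection";

// Load the detector and landmark models from a local server instead of
// the default tfhub.dev location (placeholder URLs).
const detector = await faceLandmarksDetection.createDetector(
  faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh,
  {
    runtime: "tfjs",
    detectorModelUrl: "http://localhost:5000/face-detector/model.json",
    landmarkModelUrl: "http://localhost:5000/face-landmarks/model.json"
  }
);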

@ziyuan-linn
Member

Since tf.js pre-trained model APIs can take model URLs as options, I think there are a few possibilities to support offline models:

  1. The current approach: have the user host all the models on a local server and pass in the model URLs from that server.
const options = {
  detectorModelUrl: "http://localhost:5000/hand/detector-full",
  landmarkModelUrl: "http://localhost:5000/hand/landmark-full"
};
handpose = ml5.handpose(options);
  2. Similar to MediaPipe's approach: publish the models to npm. The user installs the models and links them from /node_modules.
$ npm install @ml5-models/hand
const options = {
  detectorModelUrl: "node_modules/@ml5-models/hand/detector-full",
  landmarkModelUrl: "node_modules/@ml5-models/hand/landmark-full"
};
handpose = ml5.handpose(options);
  3. Provide a guide and some links to download the models from TF Hub; the user puts the downloaded models in the project folder and links them (see the download sketch after this list).
const options = {
  detectorModelUrl: "./hand-detector-full",
  landmarkModelUrl: "./hand-landmark-full"
};
handpose = ml5.handpose(options);
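
To sketch what that guide could look like in practice (two assumptions here: TF Hub's ?tfjs-format=compressed download convention, and the model URL below, which I believe is the default full hand detector loaded by tfjs-models' hand-pose-detection):

# Download and unpack the detector model into the project folder.
$ mkdir hand-detector-full
$ curl -L "https://tfhub.dev/mediapipe/tfjs-model/handpose_3d/detector/full/1?tfjs-format=compressed" -o detector.tar.gz
$ tar -xzf detector.tar.gz -C hand-detector-full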

I personally think the third option is the simplest and most intuitive. However, I would love to get some ideas or suggestions from anyone! @shiffman @MOQN @yining1023 @gohai

@gohai
Member

gohai commented Aug 3, 2023

Could you help me understand the difference between options 1 and 3, @ziyuan-linn? In either case those two make the most sense to me. (It'd be great to run ml5 without having to know about node.js.)

@shiffman
Member

shiffman commented Aug 3, 2023

I agree with @gohai! I think supporting and documenting how to use node modules is beyond the scope of what we can handle. This will also be a fairly rare use case, so I think we can leave it to the user to host their own server or download the models locally. We can provide a short guide on the website or a GitHub markdown page but keep it fairly hidden from beginners just getting started with ml5.js.

@ziyuan-linn
Member

@gohai Option 1 uses the ml5js/ml5-data-and-models-server repo, which contains an Express server that automatically hosts the model files locally. If I understand correctly, this would likely require two local servers running for a project: one for the models and one for the actual project.
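
To illustrate, the two-server setup might look something like this (hypothetical commands using the generic http-server npm package, not the actual scripts from that repo):

$ npx http-server ./models -p 5000 --cors   # terminal 1: serves the model files
$ npx http-server . -p 8080                 # terminal 2: serves the actual project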

Option 3 is similar to what @shiffman mentioned: a short guide on the website or GitHub page on how to download and host the models locally. It is up to the user to decide where and how the downloaded models are hosted, and in most cases they can simply include the model files as part of the project files.

I agree with everything said here! I think option 3 would work best for us. We can stop supporting ml5-data-and-models-server, which would probably be hard to maintain and confusing for users. A simple guide would be perfectly fine for anyone who wants to host models locally.
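
For instance, with option 3 the downloaded models could simply sit inside the project folder and be served by the same static server as the sketch itself (a hypothetical layout; each model folder contains model.json plus its .bin weight shards):

my-sketch/
├── index.html
├── sketch.js
├── hand-detector-full/
└── hand-landmark-full/

With this layout, detectorModelUrl: "./hand-detector-full" resolves against the project's own server, so no second server is needed.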
