How to use quantized as Float32Arrays #76
You cannot pass quantized weights as Float32Arrays to net.load, because dequantization requires additional metadata.
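As a sketch of why that metadata matters: tfjs-style uint8 quantization stores per-tensor parameters such as a scale and a minimum, and dequantization maps each byte back to a float. The scheme and names below are illustrative assumptions, not face-api.js internals:

```javascript
// Sketch: dequantizing uint8-quantized weights back to a Float32Array.
// The { scale, min } metadata shape mirrors tfjs-style quantization;
// names here are illustrative assumptions, not the face-api.js API.
function dequantize(quantized, meta) {
  const out = new Float32Array(quantized.length);
  for (let i = 0; i < quantized.length; i++) {
    // Each stored byte q maps back to q * scale + min.
    out[i] = quantized[i] * meta.scale + meta.min;
  }
  return out;
}

// Example: bytes map back onto the range [min, min + 255 * scale].
const meta = { scale: 0.01, min: -1.0 };
const weights = dequantize(new Uint8Array([0, 100, 255]), meta);
// weights ≈ [-1.0, 0.0, 1.55]
```

Without the `scale`/`min` pair, the raw bytes alone are not enough to reconstruct the original floats, which is why a plain Float32Array of quantized values cannot be passed in.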
Since the uncompressed weights have been removed from this repo, I thought that if I want to use http/axios, this would be the way to go. How would I use it like that?
If you want to load the quantized weights, simply use net.load('uri'). This downloads the weights using fetch and then dequantizes them (this step is actually just a call to tfjs-core). May I ask why you want to use axios for that? By the way, if for some reason you still want to use the weights without quantization, you can download them from face-api.js-models.
Closing this now, since the question should be answered.
I noticed that the weight files are missing since the model weights have been quantized. Does this mean that we should now use the shards if we want to load the weights as Float32Arrays?
How would the following code then work?
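For the unquantized case, loading shards as a Float32Array amounts to concatenating the downloaded buffers and viewing the result as float32. Here is a minimal, self-contained sketch under the assumption that the shards contain raw little-endian float32 data; the helper and variable names are illustrative:

```javascript
// Sketch: concatenating downloaded weight shards into one Float32Array.
// Assumes the shards hold raw float32 data; with quantized shards this
// would NOT work, because dequantization metadata is also needed.
function shardsToFloat32(shards) {
  const total = shards.reduce((n, s) => n + s.byteLength, 0);
  const joined = new Uint8Array(total);
  let offset = 0;
  for (const s of shards) {
    joined.set(new Uint8Array(s), offset);
    offset += s.byteLength;
  }
  return new Float32Array(joined.buffer);
}

// Example with two in-memory "shards" standing in for fetched files.
const a = new Float32Array([1, 2]).buffer;
const b = new Float32Array([3]).buffer;
const merged = shardsToFloat32([a, b]);
// merged → Float32Array [1, 2, 3]
```

In practice the maintainer's suggestion above (net.load('uri')) handles the download, concatenation, and dequantization in one step, so manual shard handling is only needed for custom loading paths.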