
How to use quantized as Float32Arrays #76

Closed

dmastag opened this issue Aug 26, 2018 · 4 comments


dmastag commented Aug 26, 2018

I noticed that the weight files are missing now that the model weights have been quantized.
Does this mean we should now use the shards if we want to load the weights as a Float32Array?

How would the following code then work?

// using fetch
const res = await fetch('/models/face_detection_model.weights')
const weights = new Float32Array(await res.arrayBuffer())
net.load(weights)

// using axios
const res = await axios.get('/models/face_detection_model.weights', { responseType: 'arraybuffer' })
const weights = new Float32Array(res.data)
net.load(weights)
dmastag changed the title from "How to use quantized as Fload32Arrays" to "How to use quantized as Float32Arrays" on Aug 26, 2018
justadudewhohacks (Owner) commented Aug 26, 2018

You cannot pass quantized weights as Float32Arrays to net.load, because dequantization requires metadata such as the scale and min for each tensor. Why do you want to load quantized weights as a Float32Array in the first place?
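
For illustration only, here is a minimal sketch of what such a dequantization step looks like, assuming an affine uint8 scheme with a per-tensor scale and min; the function and variable names are hypothetical and not the actual tfjs-core API:

// hypothetical affine dequantization: uint8 bytes plus per-tensor scale and min
function dequantizeTensor(quantizedBytes, scale, min) {
  const out = new Float32Array(quantizedBytes.length)
  for (let i = 0; i < quantizedBytes.length; i++) {
    // map each uint8 value back into the original float range
    out[i] = quantizedBytes[i] * scale + min
  }
  return out
}

Without the scale and min from the weights manifest, the raw shard bytes cannot be turned back into meaningful Float32Arrays.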


dmastag commented Aug 27, 2018

Because the uncompressed weights have been removed from this repo, I thought that if I want to load the weights over HTTP with fetch/axios, that would be the way to go.

How would I load the weights in that case?

justadudewhohacks (Owner) commented

If you want to load the quantized weights, simply use net.load('uri'). This downloads the weights using fetch and then dequantizes them (that step is just a call to tfjs-core, actually).
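
A minimal sketch of that, assuming the quantized weights manifest and shards are served from /models; the net class name and path are assumptions here, so check the repo's README for the exact API:

// assumes face-api.js is loaded and exposes the faceapi namespace,
// and that the quantized manifest + shard files are hosted under /models
const net = new faceapi.FaceDetectionNet()
await net.load('/models') // fetches the shards and dequantizes them internally
// the net is now ready to use, no manual Float32Array handling needed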

May I ask why you want to use axios for that?

By the way, if for some reason you want to use the unquantized weights, you can still download them from face-api.js-models.

justadudewhohacks (Owner) commented

Closing this now, since the question should be answered.
