How to interpret output values #561
Thanks for your interest and sorry for the confusion.
That's right. I'm sorry, but softmax is not officially supported in the WebGL backend yet, and during transpiling the softmax layer is automatically removed if it is placed at the end of the model. Therefore you need to apply the softmax operation yourself as a postprocessing step.
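Such a softmax postprocess could look like the following (a minimal sketch over a plain array of raw output values; the `softmax` helper is not part of the WebDNN API):

```javascript
// Sketch: apply softmax to raw (pre-softmax) output values read from the model.
function softmax(logits) {
    // Subtract the max logit before exponentiating for numerical stability.
    const max = Math.max(...logits);
    const exps = logits.map(v => Math.exp(v - max));
    const sum = exps.reduce((a, b) => a + b, 0);
    return exps.map(v => v / sum);
}

// Example: raw two-class outputs become probabilities that sum to 1.
const probs = softmax([2.0, 1.0]);
```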
That's right.
The raw pixel values (in the range 0–255) are automatically converted into the range 0–1.
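Conceptually, that conversion divides each channel value by its scale entry, mapping 0–255 pixel data into the 0–1 range. The following is a simplified sketch of how such a per-channel scale could behave, not WebDNN's actual implementation:

```javascript
// Simplified sketch (assumption): each channel value is divided by the
// corresponding scale entry, so scale [255] maps 0-255 data into 0-1.
function applyScale(pixels, scale) {
    return pixels.map((v, i) => v / scale[i % scale.length]);
}

const scaled = applyScale([0, 128, 255], [255]); // roughly [0, 0.502, 1]
```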
Hmmm, something is wrong... If you can, could you tell me your Keras model definition? Also, I'll upload a binary-classifier example in Keras with WebDNN in a few hours.
I created a simple example of binary classification.
Sorry, I found our mistake. If you want to pass a scale value to `getImageArray`, you need to specify one value per color channel:

```javascript
// NG
WebDNN.Image.getImageArray($canvas, {
    scale: [255]
});

// OK
WebDNN.Image.getImageArray($canvas, {
    scale: [255, 255, 255]
});
```
Hi, in my case I am using grayscale images, so a single [255] was correct, I think. More information about the network I am using is here. The network is:

```python
nb_filters = 32
nb_pool = 2
nb_conv = 3

model = Sequential()
model.add(Conv2D(nb_filters, (nb_conv, nb_conv), activation='relu', input_shape=X.shape[1:]))
model.add(Conv2D(nb_filters, (nb_conv, nb_conv), activation='relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```

If the numbers are pre-softmax, how can I convert them?

Edit: Also, thank you for the example. I looked at your code and it makes sense. So there must be some mistake with my network, maybe in the way the scaling is working? I will look into it.
I've fixed the bug. Please use:

```javascript
let x = runner.getInputViews()[0];
let y = runner.getOutputViews()[0];

x.set(await WebDNN.Image.getImageArray($canvas, {
    color: WebDNN.Image.Color.GREY,
    scale: [255]
}));

await runner.run();

y = y.toActual();
console.log(Math.exp(y[0]) / (Math.exp(y[0]) + Math.exp(y[1])),
            Math.exp(y[1]) / (Math.exp(y[0]) + Math.exp(y[1])));
```
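The two-class expression above can also be factored into a small helper. Subtracting the larger logit before exponentiating keeps `Math.exp` from overflowing when the raw outputs are very large; this is a sketch, and `twoClassSoftmax` is a hypothetical helper rather than part of WebDNN:

```javascript
// Sketch: numerically stable two-class softmax over raw output logits.
function twoClassSoftmax(a, b) {
    // Shift by the larger logit so Math.exp never sees a large positive input.
    const m = Math.max(a, b);
    const ea = Math.exp(a - m);
    const eb = Math.exp(b - m);
    return [ea / (ea + eb), eb / (ea + eb)];
}

// With widely separated logits, the probabilities saturate toward 0 and 1,
// and argmax matches the raw outputs.
const [p0, p1] = twoClassSoftmax(-309.32, 169.72);
```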
Great! I can confirm this change fixes it. Thanks so much 🙏
First, thanks for the amazing toolkit! :)
I have a two-class classifier on small grayscale images, trained with Keras and converted to run on WebGL with WebDNN. When I evaluate the model in Keras, it gives me the output [0.16141078, 0.83858919]. But when I evaluate the model on the same data with WebDNN, it gives me [-309.3228759765625, 169.71974182128906]. The values are ordered correctly (argmax still works), but the values themselves are wrong. I thought maybe they were pre-softmax values, but that doesn't seem correct.

Next, I thought the problem might be that I trained my network on images with pixels in the range 0–1, while WebDNN expects pixels in the range 0–255. So I tried adding the scale: [255] option when setting the input for the network. That gives me [-294116294656, 666073825280], and now the predictions are no longer correct.

How can I interpret the output of the network, and convert these values to the same predictions I get from Keras?

Thanks!