
How to interpret output values #561

Closed
kylemcdonald opened this issue Sep 19, 2017 · 6 comments

@kylemcdonald
Contributor

First, thanks for the amazing toolkit! :)

I have a two-class classifier on small grayscale images, trained with Keras and converted to run on WebGL with WebDNN. When I evaluate the model in Keras, it gives me the output [ 0.16141078 0.83858919]. But when I evaluate the model on the same data with WebDNN it gives me [-309.3228759765625, 169.71974182128906]. The values are ordered correctly (argmax still works), but the values are incorrect. I thought maybe they were pre-softmax values, but that doesn't seem correct.

Next, I thought that maybe the problem was that I trained my network on images with pixels in the range 0-1, but maybe WebDNN expects pixels in the range 0-255. So I tried adding the scale: [255] option when setting the input for the network. Changing this gives me [-294116294656, 666073825280], and now the predictions are no longer correct.

How can I interpret the output of the network, and convert these values to the same predictions I get from Keras?

Thanks!

Kiikurage self-assigned this Sep 19, 2017
@Kiikurage
Member

Kiikurage commented Sep 19, 2017

Thanks for your interest and sorry for the confusion.

I thought maybe they were pre-softmax values.

That's right. I'm sorry, but softmax is not yet officially supported in the WebGL backend, and during transpiling a softmax layer placed at the end of the model is automatically removed. Therefore you need to apply the softmax operation as a postprocessing step.
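Such a softmax postprocess can be sketched in plain JavaScript (the function name `softmax` is my own, not part of the WebDNN API; this is the numerically stable variant that subtracts the maximum logit first):

```javascript
// Numerically stable softmax: subtracting the max logit before
// exponentiating prevents Math.exp from overflowing to Infinity
// when logits are large.
function softmax(logits) {
  const max = Math.max(...logits);
  const exps = logits.map(v => Math.exp(v - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(v => v / sum);
}

// Example: two-class logits
console.log(softmax([2.0, 1.0]));
```

Applied to the model's raw two-element output, this yields the class probabilities that Keras would report.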

So I tried adding the scale: [255] option when setting the input for the network

That's right.

WebDNN.Image.getImageArray($canvas, {
	scale: [255, 255, 255]
});

automatically converts the raw pixel values (in the range 0-255) into the range 0-1.

Changing this gives me [-294116294656, 666073825280], and now the predictions are no longer correct.

Hmm, something is wrong... If you can, could you share your Keras model definition?

Also, I'll upload a binary-classifier example in Keras with WebDNN in a few hours.

@Kiikurage
Member

I created a simple example of binary classification:
https://github.com/Kiikurage/keras-webdnn-binary_classification-example
Please check it.

@Kiikurage
Member

Kiikurage commented Sep 19, 2017

Sorry, I found the mistake. If you want to pass a scale value to getImageArray, it must be specified for each color channel.

// NG
WebDNN.Image.getImageArray($canvas, {
	scale: [255]
});

// OK
WebDNN.Image.getImageArray($canvas, {
	scale: [255, 255, 255]
});

@kylemcdonald
Contributor Author

kylemcdonald commented Sep 24, 2017

Hi, in my case I am using grayscale images. So a single [255] was correct, I think.

More information about the network I am using is here. The network is:

nb_filters = 32
nb_pool = 2
nb_conv = 3

model = Sequential()
model.add(Conv2D(nb_filters, (nb_conv, nb_conv), activation='relu', input_shape=X.shape[1:]))
model.add(Conv2D(nb_filters, (nb_conv, nb_conv), activation='relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

If the numbers are pre-softmax, how can I convert from [ -309.3228759765625, 169.71974182128906 ] to [ 0.16141078, 0.83858919 ]? I thought I should be able to do: exp(-309) / (exp(-309) + exp(169)), etc. but this gives a very, very small number. Sorry for my poor understanding.
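The arithmetic here can be checked directly (a sketch; the helper `softmax2` is my own name). Logits this far apart would saturate softmax to essentially [0, 1], whereas the Keras probabilities [0.16141078, 0.83858919] imply a logit gap of only log(0.83858919 / 0.16141078) ≈ 1.65 — so those raw outputs cannot be the true pre-softmax values:

```javascript
// Two-class softmax, stabilized by subtracting the larger logit.
function softmax2(a, b) {
  const m = Math.max(a, b);
  const ea = Math.exp(a - m), eb = Math.exp(b - m);
  return [ea / (ea + eb), eb / (ea + eb)];
}

// The smaller class gets probability ~exp(-479), which is effectively 0.
console.log(softmax2(-309.3228759765625, 169.71974182128906));

// Logit gap implied by the Keras probabilities: about 1.65.
console.log(Math.log(0.83858919 / 0.16141078));
```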

Edit: Also, thank you for the example. I looked at your code and it makes sense. So there must be some mistake with my network, maybe the way that scaling is working? I will look into it.

@Kiikurage
Member

I've fixed the bug. Please use webdnn.js from the latest revision.

let x = runner.getInputViews()[0];
let y = runner.getOutputViews()[0];

// Grayscale input, with pixel values scaled from 0-255 down to 0-1
x.set(await WebDNN.Image.getImageArray($canvas, {
    color: WebDNN.Image.Color.GREY,
    scale: [255]
}));

await runner.run();

// The model's outputs are pre-softmax, so apply softmax manually
y = y.toActual();
console.log(Math.exp(y[0]) / (Math.exp(y[0]) + Math.exp(y[1])),
            Math.exp(y[1]) / (Math.exp(y[0]) + Math.exp(y[1])));

@kylemcdonald
Contributor Author

Great! I can confirm this change fixes it. Thanks so much 🙏
