model testing #5

Closed
rakashi opened this issue Nov 18, 2017 · 13 comments

@rakashi

rakashi commented Nov 18, 2017

Hi, I have trained my model on the ILSVRC2012 dataset. After 10 epochs, I tested the model with the provided image and got the following output with probabilities:

python classify.py ./lussari.jpg

alp - score: 0.32444486022
valley, vale - score: 0.176716804504
monastery - score: 0.0272926297039
castle - score: 0.0269635356963
radio telescope, radio reflector - score: 0.0263287965208
stone wall - score: 0.0236576907337
dam, dike, dyke - score: 0.0211586020887
cliff, drop, drop-off - score: 0.0195624642074
church, church building - score: 0.0162345394492
ox - score: 0.0147680761293
mountain tent - score: 0.014573732391
oxcart - score: 0.013046768494
solar dish, solar collector, solar furnace - score: 0.0118312025443
bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis - score: 0.0116224717349
megalith, megalithic structure - score: 0.0104170087725
volcano - score: 0.0089854830876
Siberian husky - score: 0.0086589967832
king penguin, Aptenodytes patagonica - score: 0.00850845314562
fountain - score: 0.00850336998701
dalmatian, coach dog, carriage dog - score: 0.00831394083798

When I compare with Clarifai, I am not getting the same results. How can I get the same tags as Clarifai?

@matteo-dunnhofer
Owner

matteo-dunnhofer commented Nov 18, 2017

It is obvious that the results are different. Clarifai uses a different and more complex model than this version of AlexNet, and the only way to get the same predictions is to implement their model (and maybe train it on datasets different from ImageNet).

However, from the scores of your predictions I can see that training is proceeding well.

@rakashi
Author

rakashi commented Nov 20, 2017

Thank you for your reply.

Do you know what model (architecture) Clarifai uses? I observed that Clarifai gives every tag a probability above 80%. Here, with categorical cross-entropy and a softmax classifier, the overall probability is 1 (the sum of all probabilities), but when I sum all the probabilities of Clarifai's tags I get more than 1. How can I get the probability of every tag independently?

Please suggest how I can achieve this.

@matteo-dunnhofer
Owner

I searched the internet but found nothing about their models.

If you want to compute the probabilities for each class independently, you have to replace the final softmax activation with a sigmoid (and also change the softmax cross-entropy loss to tf.nn.sigmoid_cross_entropy_with_logits).
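For illustration, a minimal sketch of the difference (assuming TensorFlow 1.x, which this repository targets): softmax normalises the logits into a distribution that sums to 1, while sigmoid squashes each logit independently, so the scores can sum to more than 1, as in the Clarifai output.

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])

softmax_probs = tf.nn.softmax(logits)   # normalised distribution over classes
sigmoid_probs = tf.nn.sigmoid(logits)   # independent per-class scores in (0, 1)

with tf.Session() as sess:
    print(sess.run(softmax_probs))  # ~[[0.659 0.242 0.099]] -> sums to 1
    print(sess.run(sigmoid_probs))  # ~[[0.881 0.731 0.525]] -> sums to > 1
```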

@rakashi
Author

rakashi commented Nov 22, 2017

Thanks a lot for your reply.

If I change the loss to tf.nn.sigmoid_cross_entropy_with_logits, I need to train from scratch on the ImageNet dataset once again.
Am I going in the right way?

@matteo-dunnhofer
Owner

> Am I going in the right way?

Exactly. Change the final activation layer tf.nn.softmax to tf.nn.sigmoid in the AlexNet model definition (inside the models/alexnet.py file) and the loss tf.nn.softmax_cross_entropy_with_logits to tf.nn.sigmoid_cross_entropy_with_logits (in the train.py file). Then retrain everything from scratch. Let me know the results.
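Concretely, the two edits would look roughly like this (a sketch, assuming fc3 holds the final layer's logits and pred/y_b are the prediction and label tensors quoted elsewhere in this thread):

```python
# models/alexnet.py -- final activation: replace softmax with sigmoid
# before: softmax = tf.nn.softmax(fc3)
sigmoid = tf.nn.sigmoid(fc3)  # independent per-class probabilities in (0, 1)

# train.py -- loss: replace softmax cross-entropy with sigmoid cross-entropy
# before: cross_entropy = tf.reduce_mean(
#     tf.nn.softmax_cross_entropy_with_logits(
#         logits=pred, labels=y_b, name='cross-entropy'))
cross_entropy = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=pred, labels=y_b, name='cross-entropy'))
```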

@rakashi
Author

rakashi commented Nov 28, 2017

Hi, I have changed the alexnet.py script to use sigmoid (in place of softmax = tf.nn.softmax(fc3)), and I have also changed train.py at line number 63, replacing
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y_b, name='cross-entropy'))
with
cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=pred, labels=y_b, name='cross-entropy'))

But while training, the accuracy is still 0.0000 after 350 batches in the first epoch.
Here I kept the name as 'cross-entropy'. Is what I did with the sigmoid right?

@rakashi
Author

rakashi commented Nov 30, 2017

Hi, I have made the sigmoid change in the alexnet.py script and also changed train.py to use cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=pred, labels=y_b, name='cross-entropy')).
After 1 epoch, I tested the model with a sample image:
python classify.py ./imagga1.jpg
AlexNet saw:
safety pin - score: 0.000153716362547
leopard, Panthera pardus - score: 0.000141602737131
Airedale, Airedale terrier - score: 0.000141219818033
cassette - score: 0.000140135452966
night snake, Hypsiglena torquata - score: 0.00013754577958
gyromitra - score: 0.000137283146614
horizontal bar, high bar - score: 0.00013649876928
analog clock - score: 0.000136304399348
spotted salamander, Ambystoma maculatum - score: 0.000135600683279
torch - score: 0.00013555350597

But I am getting very low values, and when I test other images I also get the same values and the same tags.

Is this the right way to get independent class probabilities?

@matteo-dunnhofer
Owner

Yes, it is the right method; it is used to solve multi-class multi-label problems, where the labels can have the form [1,1,0,0,1,0]. In that setting, you don't want to predict a normalised distribution as with softmax (where all probabilities sum to 1) but independent probabilities instead, and the way to do that is with the sigmoid activation.

For ImageNet it does not make sense to use the sigmoid, since every label has at most one 1. With a lot of classes (like our 1000) and just one positive class among them, the model can keep the loss low simply by pushing all probabilities towards 0, because averaging over so many 0 classes masks the single 1 class. This behaviour would change if the labels had more 1s (i.e. a true multi-label problem).
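A small numeric sketch of that effect (plain NumPy, with hypothetical logit values): with one positive class out of 1000, predicting "absent" everywhere already makes the mean sigmoid cross-entropy tiny, and additionally raising the true class gives only a small further gain, so the gradient signal for that class is weak.

```python
import numpy as np

def mean_sigmoid_xent(logits, labels):
    # elementwise sigmoid cross-entropy, averaged over all classes
    probs = 1.0 / (1.0 + np.exp(-logits))
    return -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs)).mean()

labels = np.zeros(1000)
labels[0] = 1.0                    # one-hot, ImageNet-style target

all_absent = np.full(1000, -7.0)   # push every probability towards 0
print(mean_sigmoid_xent(all_absent, labels))  # ~0.008: already very low

raised = all_absent.copy()
raised[0] = 7.0                    # additionally raise the one true class
print(mean_sigmoid_xent(raised, labels))      # ~0.0009: only a small further gain
```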

@rakashi
Author

rakashi commented Nov 30, 2017

Thanks a lot for your response.

I am not only classifying the image; I need to get the confidence of each and every object in the image.
If I give an image, I need to get a probability for each tag independently, like this:

[screenshot from 2017-11-30 22 41 38]

In the above image they get probabilities independently, and the probabilities are also very high. I am trying to do the same thing. How can I achieve it? Can you please suggest?

@matteo-dunnhofer
Owner

I believe Clarifai's system is trained in a way that is very different from the "standard" one proposed in this repository. Their system probably uses different kinds of models to make the predictions, e.g. hierarchical classifiers or object detectors, as well as different datasets. So, there is no way to replicate their model's behaviour using the code published here. Our goal is to predict only the correct class out of 1000, not to "see" everything that is inside an image.

@rakashi
Author

rakashi commented Dec 12, 2017

Hi, I have trained for 5 epochs on the ImageNet dataset using alexnet.py, with the classifier changed to sigmoid, and in the train.py script I changed the loss to
cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=pred, labels=y_b, name='cross-entropy'))
But I am not getting predictions at an acceptable level. Here are the predictions I got:

AlexNet saw:
mousetrap - score: 0.000748148886487
crossword puzzle, crossword - score: 0.00073797814548
ant, emmet, pismire - score: 0.000733552791644
vulture - score: 0.000711697910447
balance beam, beam - score: 0.000705952639692
white wolf, Arctic wolf, Canis lupus tundrarum - score: 0.00070254644379
tile roof - score: 0.000700722332112
bubble - score: 0.000700120232068
slot, one-armed bandit - score: 0.000698872725479
hammerhead, hammerhead shark - score: 0.000698209973052

How can I achieve independent predictions?

@matteo-dunnhofer
Owner

@rakashi please see my second-to-last reply; I already explained that.

@rakashi
Author

rakashi commented Dec 29, 2017

Hi, I tried with sigmoid and trained for up to 10 epochs, but the results are still not independent probabilities.
Can you please help me?
