model testing #5
Comments
It is obvious that the results are different. Clarifai uses a different and more complex model than this version of AlexNet, and the only way to get the same predictions is to implement their model (and maybe train it on datasets different from ImageNet). However, from the scores of your predictions I can see that training is proceeding well.
Thank you for your reply. Do you know what model (architecture) Clarifai uses? I observed that Clarifai gives a probability above 80% for every tag. Here, when using categorical cross-entropy with a softmax classifier, the probabilities sum to 1 overall, but when I sum all the probabilities of the Clarifai tags I get a total greater than 1. How can I obtain each tag's probability independently? Please suggest how I can achieve this.
I searched the internet but found nothing about their models. If you want to compute the probabilities for each class independently, you have to replace the final softmax activation with a sigmoid one (and also change the softmax cross-entropy loss to the sigmoid cross-entropy one).
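A minimal NumPy sketch (not from this repo; the logits are made up) of why this matters: softmax couples the classes so the probabilities always sum to 1, while a sigmoid scores each class independently, so the sum can exceed 1, as observed with the Clarifai tags:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5])  # hypothetical logits for 3 classes

softmax = np.exp(logits) / np.exp(logits).sum()  # coupled: always sums to 1
sigmoid = 1.0 / (1.0 + np.exp(-logits))          # independent per class

print(softmax.sum())  # 1.0
print(sigmoid.sum())  # > 1.0 is possible: each class is scored on its own
```

This is why a softmax-trained network can never report 80%+ for several tags at once.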
Thanks a lot for your reply. If I change the loss to 'tf.nn.sigmoid_cross_entropy_with_logits', do I once again need to train from scratch on the ImageNet dataset?
Exactly. Change the final activation layer as well.
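For reference, the TensorFlow docs state that `tf.nn.sigmoid_cross_entropy_with_logits` computes the numerically stable form `max(x, 0) - x*z + log(1 + exp(-|x|))` for logits `x` and labels `z`. A NumPy stand-in (function and variable names are mine) shows it matches the naive per-class binary cross-entropy:

```python
import numpy as np

def sigmoid_xent(logits, labels):
    # Numerically stable form used by TF: max(x, 0) - x*z + log(1 + exp(-|x|))
    return (np.maximum(logits, 0) - logits * labels
            + np.log1p(np.exp(-np.abs(logits))))

x = np.array([3.0, -1.0, 0.5])   # hypothetical logits
z = np.array([1.0, 0.0, 1.0])    # per-class binary labels

p = 1.0 / (1.0 + np.exp(-x))
naive = -(z * np.log(p) + (1 - z) * np.log(1 - p))

print(np.allclose(sigmoid_xent(x, z), naive))  # True
```

Unlike the softmax loss, each class contributes its own independent binary term.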
Hi, I have changed the alexnet.py script to use sigmoid, like this: softmax = tf.nn.softmax(fc3), and I have also made changes in train.py at line 63, replacing cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y_b, name='cross-entropy')). But while training, accuracy is still 0.0000 after 350 batches of the first epoch.
Hi, I have made changes in the alexnet.py script to use sigmoid and also changed train.py to cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=pred, labels=y_b, name='cross-entropy')). But I am getting very low values, and when I test other images I get the same values and the same image tags. Is this the right way to get independent class probabilities?
Yes, it is the right method, but it is meant for multi-class multi-label problems, where a label can contain more than one 1. For ImageNet it does not make sense to use the sigmoid, because every label has at most one 1. With a lot of classes (like our 1000) and just one positive class among them, the model can keep the loss low by setting all probabilities close to 0, because the average over the many 0-labelled classes masks the single 1 class. This behaviour would change if the labels had more 1s (i.e., a true multi-label problem).
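A quick NumPy sketch (hypothetical numbers, 1000 classes as in ImageNet) of the collapse described above: a model that predicts near-zero probability for every class already achieves a tiny mean sigmoid cross-entropy, even though it completely misses the one positive class:

```python
import numpy as np

n_classes = 1000
labels = np.zeros(n_classes)
labels[0] = 1.0                   # one-hot: a single positive class

p = np.full(n_classes, 0.001)     # model predicts ~0 probability everywhere
bce = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))

print(bce[0])      # large loss on the one positive class (~6.9)
print(bce.mean())  # yet the mean is tiny (~0.008): 999 easy negatives mask it
```

This is consistent with the symptoms reported above: uniformly low scores and the same tags for every image.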
I believe Clarifai's system is trained in a way that is very different from the "standard" one proposed in this repository. Their system probably uses different kinds of models to make the predictions, e.g. hierarchical classifiers or object detectors, as well as different datasets. So, there is no way to replicate their model's behaviour using the code published here. Our goal is to predict only the correct class out of 1000, not to "see" everything there is inside an image.
Hi, I have trained 5 epochs on the ImageNet dataset using alexnet.py, with the classifier changed to sigmoid, and I made the corresponding change in the train.py script. How can I achieve independent predictions?
@rakashi please see my second-to-last reply; I already explained that.
Hi, I tried with sigmoid and trained for 10 epochs, but the results are still not independent probabilities.
Hi, I have trained my model on the ILSVRC2012 dataset. After 10 epochs, I tested the model with the provided image and got the following output with probabilities:
python classify.py ./lussari.jpg
alp - score: 0.32444486022
valley, vale - score: 0.176716804504
monastery - score: 0.0272926297039
castle - score: 0.0269635356963
radio telescope, radio reflector - score: 0.0263287965208
stone wall - score: 0.0236576907337
dam, dike, dyke - score: 0.0211586020887
cliff, drop, drop-off - score: 0.0195624642074
church, church building - score: 0.0162345394492
ox - score: 0.0147680761293
mountain tent - score: 0.014573732391
oxcart - score: 0.013046768494
solar dish, solar collector, solar furnace - score: 0.0118312025443
bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis - score: 0.0116224717349
megalith, megalithic structure - score: 0.0104170087725
volcano - score: 0.0089854830876
Siberian husky - score: 0.0086589967832
king penguin, Aptenodytes patagonica - score: 0.00850845314562
fountain - score: 0.00850336998701
dalmatian, coach dog, carriage dog - score: 0.00831394083798
When I compare with Clarifai, I am not getting the same results. How can I get the same tags as Clarifai?