
Accuracy not the same as shown in the Post #53

Closed
gkrish19 opened this issue Jul 9, 2018 · 8 comments

@gkrish19 commented Jul 9, 2018

Hello,

I used the same code as posted for the AlexNet network and then ran the validation. For some reason, I am getting very bad accuracy:

('Class name:', 'zebra', ' and Probability ', 0.6496317)
('Class name:', 'sea lion', ' and Probability ', 0.34226722)
('Class name:', 'llama', ' and Probability ', 0.36589968)

I used a Python script rather than a Jupyter notebook for the validation, so I printed the accuracy on the command line. I'm not sure why I get different values. Can anyone give me some pointers on this?

@kratzert (Owner)

I'm not sure I understand you correctly. You should get the same results whether you run the code in a Jupyter notebook or in a Python script. If not, you must have changed something or not copied everything. But it is hard to tell from your description what you have done (wrong).

@yangerkun

@gkrish19 I got the same bad results.

@gkrish19 (Author)

@kratzert I ran the code exactly as given in the post, without any change, but using Python 2.7 instead of Jupyter. I don't know why the results are not good. Could it be because it's not Python 3?

@kratzert (Owner)

Maybe. I don't know and won't test it. It is explicitly stated that this code is for Python 3, so you should try that. If you get the same wrong results with Python 3, let me know.

@beebrain commented Aug 3, 2018

I got the same result. With Python 3.5 and TensorFlow 1.5, running with the bvlc_alexnet.npy weights in Jupyter, I got the same result as @gkrish19.

@TotalVariation

I got the same result. Then I found that the issue is the setting of the normalization parameters; they should be
lrn(2, 2e-05, 0.75, name='norm1') and lrn(2, 2e-05, 0.75, name='norm2').
Refer to http://www.cs.toronto.edu/~guerzhoy/tf_alexnet/myalexnet_forward_newtf.py
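
For anyone hitting this later, here is a minimal sketch of what the corrected calls look like, assuming the post's lrn helper wraps tf.nn.local_response_normalization the same way as Guerzhoy's reference script; the placeholder shape is illustrative, standing in for the conv1 activations:

```python
import tensorflow as tf

def lrn(x, radius, alpha, beta, name, bias=1.0):
    # Local response normalization as used in the original AlexNet.
    return tf.nn.local_response_normalization(x, depth_radius=radius,
                                              alpha=alpha, beta=beta,
                                              bias=bias, name=name)

# Stand-in for the conv1 output (batch, height, width, channels).
conv1 = tf.placeholder(tf.float32, [None, 55, 55, 96])

# Corrected parameters: radius=2, alpha=2e-05, beta=0.75
norm1 = lrn(conv1, 2, 2e-05, 0.75, name='norm1')
```

Note that TensorFlow applies alpha per squared element rather than averaged over the window, so alpha=2e-05 here corresponds to the AlexNet paper's α=1e-4 over a window of n=5 (depth_radius=2 gives 2·2+1=5), which is why the value looks different from the paper.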

@beebrain

@TotalVariation Thanks, man. I changed the parameters and ran the code again, and I got a good result.
This problem is solved by @TotalVariation.

@kratzert (Owner)

Sorry for not answering for such a long time. I became a father recently and haven't found much time to look into this. It seems there were some mistakes and outdated images due to some adaptations of the normalization factors. I have now updated the factors according to the link posted by @TotalVariation, and I updated the image in the post as well with the correct probabilities after running the notebook again.
