
Caffe ResNet model, net.predict() function predict same probability #4012

Open
JudeLee19 opened this issue Apr 19, 2016 · 3 comments
@JudeLee19

JudeLee19 commented Apr 19, 2016

I trained ResNet-101 following the Caffe model (https://github.com/KaimingHe/deep-residual-networks) with 800,000 images for training and 200,000 for validation. After training for 30 epochs, I got 59% top-1 accuracy and 82% top-5 accuracy, as shown in the picture below.
[screenshot: training log showing top-1 and top-5 accuracy]

But when I try to predict on some images with this model (net.forward()), the results always give the same probabilities, as shown below, even when I try other images.
[screenshot: identical output probabilities for different input images]

The first thing I suspected was an image-preprocessing problem in the predict step, such as subtracting the mean values or matching the batch size to the training step. But all of these were set up correctly. I checked all other questions reporting the same problem but couldn't find a solution. As picture (1) above shows, I assume nothing was wrong with the training process.

I followed the "Image Classification and Filter Visualization" notebook (http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb) provided with Caffe, replacing model_def and model_weights with my own model_def and model_weights from the 30-epoch snapshot.
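For reference, the preprocessing that notebook applies (and that the predict step must reproduce exactly to match training) can be sketched with plain NumPy. The mean values below are placeholders, not the ones from my training run:

```python
import numpy as np

# Per-channel BGR mean -- a placeholder; in practice, use the mean
# actually subtracted during training (e.g. from your mean.binaryproto).
MEAN_BGR = np.array([104.0, 117.0, 123.0])

def preprocess(img_rgb01):
    """Mimic caffe.io.Transformer: HxWx3 RGB float in [0, 1]
    -> 3xHxW BGR float in [0, 255], mean-subtracted."""
    x = img_rgb01.transpose(2, 0, 1)      # HWC -> CHW
    x = x[::-1, :, :]                     # RGB -> BGR channel swap
    x = x * 255.0                         # raw_scale = 255
    x = x - MEAN_BGR.reshape(3, 1, 1)     # subtract per-channel mean
    return x

# With pycaffe, the forward pass would then be (sketch):
#   net.blobs['data'].data[0] = preprocess(caffe.io.load_image(path))
#   prob = net.forward()['prob'][0]
```

If any of these steps (channel order, scale, mean) differs between training and prediction, the network can collapse to near-constant outputs.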

@seanbell

From https://github.com/BVLC/caffe/blob/master/CONTRIBUTING.md:

When reporting a bug, it's most helpful to provide the following information, where applicable:

  • What steps reproduce the bug?
  • Can you reproduce the bug using the latest master, compiled with the DEBUG make option?
  • What hardware and operating system/distribution are you running?
  • If the bug is a crash, provide the backtrace (usually printed by Caffe; always obtainable with gdb).

@bearpaw

bearpaw commented Apr 23, 2016

@JudeLee19 Hi. A small digression from your question: how do you visualize the training? Thank you!

@BobLiu20

BobLiu20 commented Jul 12, 2017

You can find the answer to this question here.

answer:
The problem was the use_global_stats setting in deploy.prototxt. During training, use_global_stats has to be set to false because the batch-norm mean/variance need to be updated. But when predicting with deploy.prototxt, use_global_stats has to be set to true.
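In Caffe's prototxt, that corresponds to a BatchNorm layer like the following (the layer and blob names here are illustrative, not copied from the ResNet-101 definition):

```protobuf
layer {
  name: "bn_conv1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param {
    # false during training (accumulate running mean/variance),
    # true in deploy.prototxt (use the stored global statistics)
    use_global_stats: true
  }
}
```

With use_global_stats left at false in deploy.prototxt, the layer normalizes each batch with its own statistics, which at test time (often batch size 1) produces degenerate, near-identical outputs.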

So please close this issue.
