can't reproduce results with pre-trained models #19
Comments
The synset mapping looks correct (it's the same as caffe's):
versus caffe:
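One way to verify the two synset orderings agree is to diff them index by index; this is a minimal sketch (the `compare_synsets` helper and the sample WordNet IDs are illustrative, not from the repository):

```python
# Sketch: compare two synset orderings element by element.
# Substitute the actual mapping lists loaded from each framework's files.
def compare_synsets(ours, caffes):
    """Return the indices at which the two synset lists disagree."""
    return [i for i, (a, b) in enumerate(zip(ours, caffes)) if a != b]

ours = ["n01440764", "n01443537", "n01484850"]
caffes = ["n01440764", "n01443537", "n01484850"]
print(compare_synsets(ours, caffes))  # an empty list means the orderings match
```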
Hi @mesnilgr, I just ran a test on the validation set with batch size 256 and couldn't reproduce your problem. Here is my output:
Then I tried your method, adding some extra outputs showing the loss, top-1 error, top-5 error, y_pred, and the ground truth of the first hkl file "val_hkl_b256_b_256/0000.hkl", and still couldn't reproduce your problem. Here is my output:
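For reference, top-1 and top-5 error can be computed from the model's softmax output with plain numpy; this is a minimal sketch, independent of the repository's own validation code:

```python
import numpy as np

def top_k_error(probs, labels, k):
    """Fraction of samples whose true label is not among the k highest scores.

    probs: (batch, n_classes) softmax outputs; labels: (batch,) integer labels.
    """
    # Indices of the k largest scores in each row.
    top_k = np.argsort(probs, axis=1)[:, -k:]
    hits = np.any(top_k == labels[:, None], axis=1)
    return 1.0 - hits.mean()

probs = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.2, 0.3]])
labels = np.array([1, 2])
print(top_k_error(probs, labels, 1))  # 0.5: only the first sample is a top-1 hit
print(top_k_error(probs, labels, 2))  # 0.0: both labels fall within the top 2
```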
So I suspect the problem is either your 0000.hkl file or your img_mean.npy file. I've printed part of the mean-subtracted numpy array here for you to check:
Please let me know what you find so we can narrow down where the bug might be.
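The check described above amounts to subtracting the stored image mean from a batch and printing a small slice to compare across machines. A minimal sketch, using synthetic stand-ins for the actual 0000.hkl batch and img_mean.npy (the real files would be loaded with hickle and numpy; the (channels, height, width, batch) layout is an assumption about the batch format):

```python
import numpy as np

def mean_subtract(batch, img_mean):
    """Subtract the per-pixel image mean from a batch of images.

    batch: (channels, height, width, batch_size) array;
    img_mean: (channels, height, width) mean image.
    """
    return batch - img_mean[..., np.newaxis]

# Synthetic stand-ins for 0000.hkl and img_mean.npy (shapes illustrative only).
rng = np.random.RandomState(0)
batch = rng.rand(3, 4, 4, 2).astype("float32")
img_mean = batch.mean(axis=3)

sub = mean_subtract(batch, img_mean)
# Print a small slice to compare against another machine's run, as done above.
print(sub[0, :2, :2, 0])
```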
OK, I get the same input now for:
I found my problem was that I ran the
No bugs on your side. Thanks for your help! Now, running it on the whole validation set, I obtain the following:
which is within a few percent of what you obtain.
@mesnilgr no problem. Thanks for trying out our code. It can be used to visualize batch images and their corresponding word descriptions.
@gwtaylor Thanks! Very useful.
Hi - thanks a lot for releasing your code.
I downloaded the img_mean + parameters of your model.
As a sanity check, I ran validate_performance with your model, but I can't reproduce the reported accuracy.
Here is the output of the script:
Any idea what went wrong here? The results stay off for the rest of the dataset too.