dimension mismatch in test evaluation #7
After further investigation it seems that each row in `r['fc8']` is a separate crop. Replacing

```python
# scr.append(1 * r['fc8'])
scr.append(1 * np.expand_dims(r['fc8'][0], axis=0))  # (10, 20) -> (1, 20)
```

so that only one crop is used, the code runs without errors but produces poor mAP results (on the order of 3%), which is much lower than the 53% you get by random guessing...
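For reference, a minimal NumPy sketch of the two options discussed above (keeping only the first crop versus averaging all 10 crops), assuming `r['fc8']` has shape (10, 20), i.e. 10 crops by 20 classes; the array here is random stand-in data:

```python
import numpy as np

# Hypothetical stand-in for r['fc8']: scores for 10 crops x 20 classes.
fc8 = np.random.randn(10, 20)

# Option 1: keep only the first crop, as in the workaround: (10, 20) -> (1, 20)
single_crop = np.expand_dims(fc8[0], axis=0)

# Option 2: average the scores over all 10 crops: (10, 20) -> (1, 20)
avg_crop = fc8.mean(axis=0, keepdims=True)

print(single_crop.shape, avg_crop.shape)
```

Averaging over crops keeps the (1, 20) per-image shape while still using the information from all 10 crops, which is the usual way multi-crop scores are collapsed at test time.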
Never mind, this is what happens if you give a (faulty?) deploy.prototxt as input instead of a train.prototxt. If you're looking for a working example, have a look here: https://github.com/jeffdonahue/bigan
@polo5 Did you solve the mismatch problem? I spent a lot of days debugging the code but failed. The jeffdonahue/bigan code has the same problem. Besides, my test_resize_layer.cpp also failed in the 'make test' compile.
The code that evaluates the test mAP fails due to a factor-of-10 mismatch between the shapes of the ground truth and the model scores:
Evaluating on test set:
Evaluating on val set:
Here the ground truths have shape (20, X) but the scores have shape (20, X*10). I'm running on Python 2.
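The factor-of-10 mismatch is consistent with 10 crops being scored per image. One way to reconcile the shapes is to group the crops and average them; a minimal sketch with hypothetical sizes (`X` images, 20 classes, random stand-in data), assuming the 10 crops of each image are contiguous along the score axis:

```python
import numpy as np

X = 5                                  # hypothetical number of test images
gts = np.zeros((20, X))                # ground truth: 20 classes x X images
scores = np.random.randn(20, X * 10)   # model scores: 10 crops per image

# Assuming crops of each image are contiguous (image 0 crops 0-9,
# image 1 crops 0-9, ...), group them and average over the crop axis:
scores_avg = scores.reshape(20, X, 10).mean(axis=2)  # (20, X*10) -> (20, X)

print(scores_avg.shape == gts.shape)
```

If the crop layout is interleaved instead (all first crops, then all second crops, ...), the reshape would be `(20, 10, X)` with a mean over axis 1.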
My test_resize_layer.cpp failed in the 'make test' compiling so I removed it. Perhaps this is causing those shape mismatches now? Thanks for any tips.