
For 12-channel data, the scores returned by the wrapper and by the LMDB are different #21

Closed
twongjirad opened this issue May 5, 2016 · 9 comments

twongjirad commented May 5, 2016

But at least the forward pass is working.

The first thing to check is whether the image in the Datum is stored in the same order as the data in the Image2D object.

Check the mean blob, too.
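
A minimal sketch of that check, assuming pycaffe and the lmdb module are available; the LMDB path, the mean file name, and `image2d_pixels.npy` (a dump of the wrapper-side pixels) are hypothetical names:

```python
# Pull one entry out of the LMDB, decode the Caffe Datum, load the mean blob,
# and compare channel by channel against the pixels that went into Image2D.
# LMDB_PATH, MEAN_FILE, and image2d_pixels.npy are assumed/hypothetical names.
import lmdb
import numpy as np
import caffe
from caffe.proto import caffe_pb2

LMDB_PATH = "bnb_12ch_lmdb"
MEAN_FILE = "mean.binaryproto"

# first Datum in the LMDB
env = lmdb.open(LMDB_PATH, readonly=True)
with env.begin() as txn:
    key, raw = next(iter(txn.cursor()))
datum = caffe_pb2.Datum()
datum.ParseFromString(raw)
lmdb_arr = caffe.io.datum_to_array(datum)  # shape: (channels, height, width)
print("lmdb entry %s, shape %s" % (key, str(lmdb_arr.shape)))

# mean blob
blob = caffe_pb2.BlobProto()
with open(MEAN_FILE, "rb") as f:
    blob.ParseFromString(f.read())
mean_arr = caffe.io.blobproto_to_array(blob)[0]
print("mean blob shape %s" % str(mean_arr.shape))

# wrapper-side pixels, dumped beforehand as a (channels, height, width) array
image2d_pixels = np.load("image2d_pixels.npy")

for c in range(lmdb_arr.shape[0]):
    same = np.allclose(lmdb_arr[c], image2d_pixels[c])
    print("channel %d ordering matches: %s" % (c, same))
```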

twongjirad self-assigned this May 5, 2016
vgenty added the bug label May 5, 2016
twongjirad commented:

Confirmed that the values in the data blobs are completely different (plot below). It looks like there is an inversion of the wire and time axes somewhere, possibly introduced when I made the Image2D file.

[screenshot: plot comparing the wrapper and LMDB data blob values]
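
A quick follow-up test for the suspected wire/time swap, as a hypothetical helper taking `lmdb_arr` and `image2d_pixels` as in the sketch above: if a channel only agrees after transposing, the axes were inverted when the Image2D file was written.

```python
import numpy as np

def check_axis_swap(lmdb_arr, image2d_pixels):
    """Compare each channel directly and with wire/time axes transposed."""
    for c in range(lmdb_arr.shape[0]):
        direct = np.allclose(lmdb_arr[c], image2d_pixels[c])
        swapped = np.allclose(lmdb_arr[c], image2d_pixels[c].T)
        print("channel %2d: direct=%s  transposed=%s" % (c, direct, swapped))
```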

vgenty commented May 5, 2016

Despite the Image2D/LMDB difference, I was able to "remove the neutrino" with the zero tool, and the network probability changed.

Before: [screenshot: network output before zeroing out the neutrino]

After: [screenshot: network output after zeroing out the neutrino]

vgenty commented May 5, 2016

The commit is on the rgb_caffemod branch, by the way: e4d5342

twongjirad commented May 6, 2016

MATCHES!!!

[screenshot: the scores now match]

vgenty commented May 6, 2016

Super!

twongjirad commented May 6, 2016

Committed changes at c9a268b.

NOTE: I turned off the thresholding functionality for the 12-channel data.

Also disabled the additional mean subtraction.

The order of thresholding and mean subtraction has to be discussed.
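
A toy example of why the order matters (made-up numbers and a hypothetical threshold value): thresholding before mean subtraction and thresholding after it feed different values into the network.

```python
import numpy as np

pixels = np.array([3.0, 12.0, 45.0, 0.5])  # made-up ADC values
mean = np.full(4, 10.0)                    # made-up per-pixel mean
threshold = 5.0                            # made-up threshold

def apply_threshold(x, t):
    # zero out values below the threshold
    return np.where(x < t, 0.0, x)

a = apply_threshold(pixels, threshold) - mean  # threshold first, then subtract mean
b = apply_threshold(pixels - mean, threshold)  # subtract mean first, then threshold

print("threshold -> subtract: %s" % a)  # [-10.   2.  35. -10.]
print("subtract -> threshold: %s" % b)  # [  0.   0.  35.   0.]
```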

twongjirad reopened this May 6, 2016

twongjirad commented:

The scores from the GUI interface were matched to a Python script used to run the BNB sample. They turn out to be different from the scores I get when running the BNB data through the network using 'caffe test'.

New targets:

I0506 01:00:48.000690 18747 caffe.cpp:275] Batch 0, probt = 0.230329
I0506 01:00:48.000699 18747 caffe.cpp:275] Batch 0, probt = 0.769671
I0506 01:00:48.046295 18747 caffe.cpp:275] Batch 1, acc = 1
I0506 01:00:48.046342 18747 caffe.cpp:275] Batch 1, probt = 0.966876
I0506 01:00:48.046350 18747 caffe.cpp:275] Batch 1, probt = 0.033124
I0506 01:00:48.087106 18747 caffe.cpp:275] Batch 2, acc = 1
I0506 01:00:48.087162 18747 caffe.cpp:275] Batch 2, probt = 0.600837
I0506 01:00:48.087169 18747 caffe.cpp:275] Batch 2, probt = 0.399163
I0506 01:00:48.131290 18747 caffe.cpp:275] Batch 3, acc = 1
I0506 01:00:48.131337 18747 caffe.cpp:275] Batch 3, probt = 0.61427
I0506 01:00:48.131343 18747 caffe.cpp:275] Batch 3, probt = 0.38573
I0506 01:00:48.172060 18747 caffe.cpp:275] Batch 4, acc = 1
I0506 01:00:48.172116 18747 caffe.cpp:275] Batch 4, probt = 0.618634
I0506 01:00:48.172122 18747 caffe.cpp:275] Batch 4, probt = 0.381366
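
A sketch of cross-checking the 'caffe test' numbers from pycaffe; the prototxt and weights file names, and the output blob name 'probt', are assumptions taken from the log above.

```python
import caffe

caffe.set_mode_gpu()
net = caffe.Net("test.prototxt", "bnb_model.caffemodel", caffe.TEST)  # assumed file names

for batch in range(5):
    out = net.forward()   # the data layer in test.prototxt reads the same LMDB
    probs = out["probt"]  # 'probt' assumed from the 'caffe test' log above
    print("Batch %d, probt = %s" % (batch, probs.flatten()))
```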

twongjirad commented:

So I verified that the network input data for the GUI and 'caffe test' modes are the same: the sum of diffs over all pixels is zero. Still, the network output is different by a few percent. WTF.
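
For reference, a sketch of that comparison, assuming the 'data' blob and the outputs have been dumped to numpy files from both setups (hypothetical file names):

```python
import numpy as np

gui_data = np.load("gui_data_blob.npy")         # dump of the 'data' blob from the GUI run
test_data = np.load("caffetest_data_blob.npy")  # dump of the 'data' blob from 'caffe test'
print("sum of |diff| over all pixels: %f" % np.abs(gui_data - test_data).sum())

gui_prob = np.load("gui_probt.npy")             # dumps of the network outputs
test_prob = np.load("caffetest_probt.npy")
print("output difference: %s" % (gui_prob - test_prob))
```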

drinkingkazu commented:

I believe this is resolved; please reopen if it persists!
