Different prediction results with ImageNet inference than DIGITS on custom GoogLeNet model #48

Closed
S4WRXTTCS opened this issue Mar 2, 2017 · 7 comments


@S4WRXTTCS commented Mar 2, 2017

I trained a GoogLeNet image classification network on a custom dataset using the default GoogLeNet network within DIGITS. My dataset consists of many 256x256 squashed images of playing cards (crops of the cards that DIGITS squashed down to 256x256), with 11 different playing cards in total.

Training accuracy was close to 100%, and the DIGITS "Classify One" tool reports accurate results for the test images I tried.

On the TX1 I simply copied over the GoogLeNet caffemodel, modified the prototxt file for 11 classes, created the label file, and deleted the cached TensorRT file. I then ran imagenet-console (with googlenet set) on the test images (after running them through DIGITS to squash them to 256x256).
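For reference, the prototxt change is just the class count on the final classifier layer, roughly like this (layer name taken from the stock GoogLeNet deploy file; a DIGITS-generated prototxt may name it differently):

```
# Final classifier layer in deploy.prototxt; num_output has to match
# the number of classes the model was trained on (11 here, not 1000).
layer {
  name: "loss3/classifier"
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "loss3/classifier"
  inner_product_param {
    num_output: 11
  }
}
```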

On most of the test images the results were good and consistent with what DIGITS reported, but on a few of them they were way off.

Did I make some obvious mistake somewhere?

I also got the same kind of result with AlexNet: most of the test results are consistent with DIGITS, except for a few cases that are way off.
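One way to take the TX1 out of the equation is to run the same snapshot through pycaffe directly and compare against DIGITS. A minimal sketch, assuming the snapshot files are named deploy.prototxt, snapshot.caffemodel, and mean.binaryproto (placeholder names), and that the output blob is 'prob':

```python
# Minimal pycaffe cross-check: classify one image with mean-image
# subtraction, i.e. the same preprocessing DIGITS trained with.
# File names are placeholders; substitute your snapshot's files.
import numpy as np
from PIL import Image
import caffe
from caffe.proto import caffe_pb2

net = caffe.Net('deploy.prototxt', 'snapshot.caffemodel', caffe.TEST)

# Load the mean image DIGITS saved with the dataset (BGR, 0-255 range)
blob = caffe_pb2.BlobProto()
with open('mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())
mean = caffe.io.blobproto_to_array(blob)[0]           # (3, 256, 256)

img = Image.open('Squashed4.jpg').resize((256, 256))  # squash, no crop
data = np.array(img, dtype=np.float32)                # HWC, RGB
data = data[:, :, ::-1].transpose(2, 0, 1)            # to CHW, BGR
data -= mean                                          # whole-image mean

# Center-crop to the network input size (224 for GoogLeNet); whether
# DIGITS center-crops or squashes at inference time is an assumption.
_, h, w = net.blobs['data'].data.shape[1:]
off = (256 - h) // 2
net.blobs['data'].data[0] = data[:, off:off + h, off:off + w]

probs = net.forward()['prob'][0]    # 'prob' is an assumption; check the
print(probs.argmax(), probs.max())  # output blob in your deploy.prototxt
```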

@S4WRXTTCS (Author) commented Mar 4, 2017

I went through everything I could think of to try to fix this, but nothing has worked so far.

Here are the things I've tried:

1.) I trained the data using AlexNet with pretrained weights from the original bvlc_alexnet.caffemodel. This made training extremely fast, and DIGITS reported accuracy a hair under 100%. All the test images classified correctly in DIGITS with >89% confidence, but one of the test images still failed in imagenet-console on the TX1. On further testing I realized it was having issues with the playing cards that were slanted about 45 degrees; none of the 0-degree or 90-degree cards failed.

2.) From the jetson-inference source code it looked like the images were being resized to 227x227, so I resized them to 227x227 to see what DIGITS reported at that size (see the resize sketch after this list). DIGITS still classified them correctly.

3.) I tried changing the mean value subtraction to all 0.0f, but that didn't seem to help at all. I didn't expect it to, since the model was trained with mean subtraction.

4.) I tried disabling FP16 and deleting the cached TensorRT file before retrying, but that didn't help either. I didn't expect it to, since everything I've read suggests FP16 causes no noticeable degradation in accuracy.
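On the resize question in item 2: DIGITS' "squash" setting is just a straight resize that ignores aspect ratio, with no crop or padding. A minimal PIL sketch of that transform; feeding 227x227 to AlexNet is an assumption taken from the jetson-inference source:

```python
# DIGITS-style "squash" resize: straight resize to the target size,
# ignoring aspect ratio (no crop, no padding).
from PIL import Image

def squash(path, size):
    # 256x256 matches the training data; 227x227 mimics what
    # jetson-inference appears to feed AlexNet (per its source).
    return Image.open(path).convert('RGB').resize(size, Image.BILINEAR)

squash('Squashed4.jpg', (227, 227)).save('Squashed4_227.jpg')
```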

The only thing I can think of to try next is changing the mean value subtraction, but I don't know how to get those values yet. I'm skeptical of that, though; it still seems like a resizing issue.
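For what it's worth, the per-channel mean pixel values can be read out of the mean.binaryproto that DIGITS stores with the dataset. A short pycaffe sketch, with the file name as a placeholder:

```python
# Read the per-channel "mean pixel" out of a Caffe mean.binaryproto.
# These three numbers are what a mean-pixel subtraction would use.
import caffe
from caffe.proto import caffe_pb2

blob = caffe_pb2.BlobProto()
with open('mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())

mean_image = caffe.io.blobproto_to_array(blob)[0]  # (3, 256, 256), BGR
print(mean_image.mean(axis=(1, 2)))                # one value per channel
```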

Any suggestions on what to try?

@dusty-nv (Owner) commented Mar 4, 2017 via email

@S4WRXTTCS (Author) commented

The images I tested were 256x256. I purposely squashed them to this size to make sure they matched the size DIGITS trained on.

I went ahead and uploaded the model being used, along with the test images:

https://github.com/S4WRXTTCS/PlayingCards

The test image it has problems with is Squashed4.jpg.

@S4WRXTTCS (Author) commented

After investigating further, it seems it was related to the mean values.

I realized that use_archive.py was also giving me the same incorrect answer for that one example, completely off from the results in DIGITS.

One limitation of use_archive.py is that it subtracts a mean pixel rather than the whole mean image, which pointed to the mean subtraction as the likely culprit.
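For anyone hitting the same thing, the distinction matters: mean-image subtraction removes a different value at every pixel location, while mean-pixel subtraction removes one constant per channel. A model trained one way and deployed the other sees shifted inputs. A small numpy illustration with synthetic arrays:

```python
import numpy as np

# Synthetic stand-ins: a (3, H, W) input and the dataset's mean image.
rng = np.random.default_rng(0)
data = rng.uniform(0, 255, (3, 256, 256)).astype(np.float32)
mean_image = rng.uniform(100, 150, (3, 256, 256)).astype(np.float32)

# Mean-IMAGE subtraction (what the original DIGITS training used):
by_image = data - mean_image

# Mean-PIXEL subtraction (what use_archive.py does):
mean_pixel = mean_image.mean(axis=(1, 2), keepdims=True)  # shape (3, 1, 1)
by_pixel = data - mean_pixel

# The two inputs differ wherever the mean image deviates from its
# per-channel average; large for structured data like card crops.
print(np.abs(by_image - by_pixel).max())
```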

So I retrained using mean pixel subtraction instead of mean image subtraction, in an attempt to at least get use_archive.py to report the same result as DIGITS.

The new model worked in both DIGITS and use_archive.py, and also gave me similar results with the jetson-inference code.

@dusty-nv (Owner) commented Mar 7, 2017 via email

@S4WRXTTCS (Author) commented

Thanks for your help.

@xuqintao commented

@S4WRXTTCS Hi, I also trained a GoogLeNet image classification network on a custom dataset using the default GoogLeNet network within DIGITS, but I don't know how to move it to jetson-inference and use the model I got from training. Could you tell me the details? Thank you very much. You can email me at xqt@brainybeeuav.com.
