Different prediction results with ImageNet Inference than Digits on Custom GoogleNet Model #48
I went through everything I could think of to fix this, but nothing has worked so far. Here are the things I've tried:

1. I trained my data using AlexNet with pretrained weights from the original bvlc_alexnet.caffemodel. This made training extremely fast, with accuracy a hair under 100% as reported by DIGITS. All the test images classified correctly in DIGITS with >89% confidence, but one of the test images still failed with imagenet-console on the TX1. On further testing I realized it was having issues with the playing cards that were slanted about 45 degrees; none of the 0-degree or 90-degree cards failed.
2. From the jetson-inference source code it looked like the images were being resized to 227x227, so I resized them to 227x227 to see what DIGITS reported at that size. DIGITS still classified them correctly.
3. I tried changing the mean value subtraction to all 0.0f's, but that didn't seem to help at all. I didn't expect it to, because the model was trained with mean subtraction.
4. I tried disabling FP16 and deleting the cached TensorRT network file before retrying. That didn't help either, which I expected, since everything I've read suggests FP16 causes no noticeable degradation in accuracy.

The only thing I can think of doing next is changing the mean value subtraction, but I don't know how to get those values yet. I'm a bit skeptical of that, though; it seems like a resizing issue. Any suggestions on what to try?
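The two mean-subtraction modes discussed below behave differently whenever the mean image is not uniform. A minimal sketch, assuming numpy (the function names here are illustrative, not from use_archive.py or jetson-inference):

```python
import numpy as np

def subtract_mean_pixel(img, mean_pixel):
    """Subtract a single per-channel value from every pixel."""
    return img.astype(np.float32) - np.asarray(mean_pixel, dtype=np.float32)

def subtract_mean_image(img, mean_img):
    """Subtract a full HxWxC mean image, position by position."""
    return img.astype(np.float32) - mean_img.astype(np.float32)

# Toy example: a non-uniform mean image gives different inputs than a mean pixel.
img = np.full((4, 4, 3), 100.0, dtype=np.float32)
mean_img = np.zeros((4, 4, 3), dtype=np.float32)
mean_img[:2] = 120.0                     # top half brighter than bottom half
mean_pixel = mean_img.mean(axis=(0, 1))  # per-channel average -> [60, 60, 60]

a = subtract_mean_pixel(img, mean_pixel)
b = subtract_mean_image(img, mean_img)
print(np.abs(a - b).max())  # 60.0 -- the two preprocessing modes disagree
```

If training and inference use different modes, the network sees shifted inputs at inference time, which can flip predictions on borderline examples.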
Sorry, I was going to recommend the FP16/FP32 thing but it seems you already tried that. In that case it would seem the preprocessing may be next, i.e. the rescaling as you mentioned. What is the original resolution of your images?
The images I tested were 256x256; I purposely squashed them to that size to match the size DIGITS trained on. I went ahead and uploaded the model being used along with the test images: https://github.com/S4WRXTTCS/PlayingCards. The test image it has problems with is Squashed4.jpg.
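The "squash" step referred to here resizes without preserving aspect ratio, which is what distorts the slanted cards. A rough nearest-neighbor sketch of the idea in numpy (DIGITS itself uses higher-quality interpolation; this only illustrates the geometry):

```python
import numpy as np

def squash(img, size=(256, 256)):
    """Nearest-neighbor resize that ignores aspect ratio ('squash' behavior)."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]  # source row for each output row
    cols = np.arange(size[1]) * w // size[1]  # source col for each output col
    return img[rows][:, cols]

# A 640x480 card photo becomes exactly 256x256, stretching the card shape.
src = np.zeros((480, 640, 3), dtype=np.uint8)
out = squash(src)
print(out.shape)  # (256, 256, 3)
```

Squashing the test images the same way before inference, as done above, keeps the inference-time geometry consistent with what the network saw during training.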
After investigating further, it seems it was related to the mean values. I realized that use_archive.py was giving me the same incorrect answer for that one example, completely off from the results in DIGITS. One limitation of use_archive.py is that it subtracts a mean pixel rather than the whole mean file, so the mean subtraction was likely the culprit. I retrained using mean pixel subtraction instead of mean image, in an attempt to at least get use_archive.py to report the same result as DIGITS. The new model worked for DIGITS + use_archive.py, and also gave me similar results with the jetson-inference code.
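An alternative to retraining is to collapse the DIGITS mean file into a single per-channel mean pixel for the inference side. A sketch, assuming the mean image has already been loaded into a numpy array (the function name is illustrative):

```python
import numpy as np

def mean_pixel_from_mean_image(mean_img):
    """Collapse a mean image to one value per channel, which is the only
    form of mean subtraction simple inference pipelines typically apply."""
    if mean_img.ndim == 3 and mean_img.shape[0] in (1, 3):
        return mean_img.mean(axis=(1, 2))   # CxHxW layout (Caffe convention)
    return mean_img.mean(axis=(0, 1))       # HxWxC layout

# Example with a synthetic 3x256x256 mean image:
mean_img = np.stack([np.full((256, 256), v) for v in (104.0, 117.0, 123.0)])
print(mean_pixel_from_mean_image(mean_img))  # [104. 117. 123.]
```

This is only an approximation: averaging discards the spatial structure of the mean image, which is exactly why a model trained with mean-image subtraction can disagree with mean-pixel inference, as seen above.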
OK, thank you for confirming. If you need to change the mean pixel value, you can currently set that in detectNet.cpp line 167.
Thanks for your help.
@S4WRXTTCS Hi, I also trained a GoogleNet image classification network on a custom dataset using the default GoogleNet network within DIGITS, but I don't know how to move the trained model to jetson-inference and use it. Could you tell me the details? Thank you very much. You can send email to me: xqt@brainybeeuav.com
I trained a GoogleNet image classification network on a custom dataset using the default GoogleNet network within DIGITS. My dataset consists of lots of 256x256 squashed images of playing cards, basically crops of the playing cards that DIGITS squashed to 256x256. I had 11 different playing cards in total.
The training results were close to 100%, and DIGITS' Classify One reports accurate results for the test images I tried.
On the TX1 I simply copied over the GoogleNet Caffe model, modified the prototxt file for 11 classes, created the label file, and deleted the cached file. I then ran imagenet-console (with googlenet set) on the test images, after running them through DIGITS to squash them to 256x256.
On most of the test images the results were good and consistent with what DIGITS reported, but on a few of them they were way off.
Did I make some obvious mistake somewhere?
I also got the same kind of result with AlexNet: most of the test results are consistent with DIGITS, except for a few cases that are way off.
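For reference, the prototxt edit mentioned above usually amounts to changing the final classifier's output count to match the label file. A hedged sketch based on the stock BVLC GoogleNet deploy file (layer and blob names assumed unchanged from that file):

```
layer {
  name: "loss3/classifier"
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "loss3/classifier"
  inner_product_param {
    num_output: 11   # was 1000; must equal the number of lines in the label file
  }
}
```

If num_output and the label file disagree, the reported class names will be shifted or truncated even when the raw network output is correct.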