NVDLA result of Alexnet is different with caffe. #45
Comments
mine neither
Hi
@chagyun0213 Thank you for your answer. Could you share your example code?
Hi @chagyun0213, can you please share an example network and code (AlexNet or ResNet) that you were able to get working? For example, how do you feed the input image into the network, and where do you apply raw scaling? I would really appreciate any answer on this. Thanks!
@ned-varnica Previously we normalized the input image by 255.0 by default, which produced incorrect results due to the incorrect pre-processing that @chagyun0213 pointed out. We have fixed this by adding a runtime argument, --normalize, to specify the value. Please try 1.0 with this option; we were able to get correct results with it.
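For context, here is a minimal sketch (not the actual NVDLA runtime code, just an illustration) of what a normalization divisor does to the input, assuming the --normalize value is used to divide raw 8-bit pixels during pre-processing:

```python
import numpy as np

def preprocess(pixels, normalize=1.0):
    # Hypothetical helper illustrating the runtime option, not NVDLA code:
    # divide raw 8-bit pixel values by the --normalize factor.
    return pixels.astype(np.float32) / normalize

raw = np.array([0, 128, 255], dtype=np.uint8)

# With the old default of 255.0, pixels are squashed into [0, 1],
# which does not match a network trained on 0-255 inputs.
squashed = preprocess(raw, normalize=255.0)

# With --normalize 1.0 the raw 0-255 range is preserved.
preserved = preprocess(raw, normalize=1.0)

print(squashed.max())   # 1.0
print(preserved.max())  # 255.0
```

This is why a model trained on 0-255 inputs produces garbage when the runtime silently rescales to [0, 1]: every weight in the first layer sees inputs 255x smaller than it was trained on.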
Here is the key information: name: "AlexNet"
In summary, that is the correct pre-processing for the AlexNet model included in BVLC. I hope this information is sufficient for your problem. CJ
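For readers landing here later, a typical BVLC/Caffe-style AlexNet input pipeline looks roughly like the sketch below. The raw_scale and per-channel mean values are the commonly cited BVLC reference numbers; treat them as assumptions and verify them against your own model's training configuration:

```python
import numpy as np

# Commonly cited BVLC AlexNet constants (verify against your own model):
RAW_SCALE = 255.0                            # Caffe raw_scale: inputs in 0-255
MEAN_BGR = np.array([104.0, 117.0, 123.0])   # per-channel mean, BGR order

def caffe_style_preprocess(img_rgb_01):
    # img_rgb_01: HxWx3 float array in [0, 1], RGB channel order
    # (e.g. as loaded by caffe.io.load_image). Returns a 3xHxW blob.
    img = img_rgb_01 * RAW_SCALE    # scale back to the 0-255 range
    img = img[:, :, ::-1]           # RGB -> BGR, as Caffe expects
    img = img - MEAN_BGR            # subtract the per-channel mean
    return img.transpose(2, 0, 1)   # HWC -> CHW

# Tiny synthetic image instead of a real JPEG, just to show the shapes.
fake = np.random.rand(227, 227, 3)
blob = caffe_style_preprocess(fake)
print(blob.shape)  # (3, 227, 227)
```

If the NVDLA side skips any one of these steps (the scaling, the channel swap, or the mean subtraction), its results will diverge from Caffe's even though both run the same weights.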
@chagyun0213 |
I used cifar10-quick as the inference model and ran several images, including both test images and training images.
@MINZHIJI Can you generate a similar loss report for top-5 or top-1 only? The results look sensible; we are reviewing the cases where we see a mismatch in top-1/top-5.
@MINZHIJI The results look good. Please reopen the issue if you see any problems.
How are you getting the mean value? |
[Solved] The problem is that the NVDLA runtime's image pre-processing must match the pre-processing used during training: raw scaling, mean subtraction, the order of the RGB channels, etc.
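Of these, the channel order alone can silently break inference: if the model was trained on BGR blobs (Caffe's convention) but the runtime feeds RGB, every channel sees the wrong mean and the wrong filters. A tiny sketch of the mismatch:

```python
import numpy as np

# A 2x2 image with a strong red channel (RGB order, channel 0 = red).
rgb = np.zeros((2, 2, 3), dtype=np.float32)
rgb[..., 0] = 200.0

# What a Caffe-trained model expects: the same pixels in BGR order.
bgr = rgb[:, :, ::-1]

# Same pixel data, but the "red" energy now sits in a different channel,
# so per-channel mean subtraction and conv filters see different inputs.
print(np.array_equal(rgb, bgr))              # False
print(np.array_equal(rgb, bgr[:, :, ::-1]))  # True: swapping twice restores it
```

The same reasoning applies to raw scaling and mean subtraction: each step must be present, in the same order and with the same constants, on both the training and the inference side.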
Environment:
Question:
I ran AlexNet on both NVDLA and Caffe, but the results were different.
Result: (Link)