problem running ExecuteNetwork test #7
Comments
Hi @liviolima80, the Caffe parser doesn't support the 'Data' layer: the supported layers are listed in CaffeSupport.md in the armnnCaffeParser source directory. However, I am aware that we have a working test using Lenet. I'll find out which version of the model we're using and post back here.
Hi @MatthewARM, if you take the model from https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet and run ExecuteNetwork on it with the first layer named "data" and the last layer named "prob", you get the following error:

Armnn Error: Couldn't find requested output layer 'prob' in graph

even though "prob" is the last layer defined in the model, as you can check in deploy.prototxt. If I pass the prototxt file instead, I get the following error:

Armnn Error: Expected data blob at index 0 in layer conv1 not found

Could you please verify these problems?
Hi @MatthewARM, any news?
Hi @liviolima80, I've been looking into the issues you reported and I will have some news for you today.
Hi @liviolima80, apologies for the delay, I had to clarify a few things before I could provide a clear answer. I was able to reproduce the errors you reported.

1 - Passing a caffe-text input: the error occurs because the .prototxt file doesn't include the weights. In Tensorflow you can merge the network weights with the model definition, but Caffe doesn't really support this; it was an oversight on our side and we will clarify the expected behaviour in our next release. The caffe-binary input for Lenet is working, see below.

1.1 - Armnn Error: Unsupported layer type 'Data': it's case sensitive. Change the input layer to '-i data'.

2 - Alexnet caffe-binary input not working: I was able to reproduce this issue as well. We have an Alexnet test that runs fine outside the ExecuteNetwork app, so we must be doing something wrong in the app. I'll open a bug to investigate this further and we'll let you know the outcome.
Hi again @liviolima80, it appears that my answer 1.1 above is incorrect. That error is caused when you use a Caffe .prototxt with the old layer syntax (which Armnn doesn't support). That can be easily fixed with something like: Let us know if that works.
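For reference, old-style (V1 syntax) Caffe prototxt files can usually be converted to the current layer syntax with the upgrade tool that is built as part of a standard Caffe build. A sketch, with placeholder paths and filenames:

```shell
# Upgrade a V1-syntax prototxt to the current layer syntax.
# $CAFFE_ROOT and the file names below are placeholders for your own setup.
$CAFFE_ROOT/build/tools/upgrade_net_proto_text old_deploy.prototxt deploy.prototxt
```

There is an analogous `upgrade_net_proto_binary` tool for .caffemodel files.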
Hi @TelmoARM, in the meantime, can you provide the python code you use to build the model for training in Caffe, so that I can create a model that works with Armnn? I usually use the following:

```python
n = caffe.NetSpec()
n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,
                         transform_param=dict(scale=1./255), ntop=2)
n.conv1 = L.Convolution(n.data, kernel_size=3, num_output=32,
                        weight_filler=dict(type='xavier'))
...
```

Thanks
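For deployment with the Caffe parser, the LMDB-backed Data layer used during training is typically replaced in the deploy prototxt by an Input layer, which Armnn supports. A minimal sketch; the LeNet-style shape below is an assumption, adjust the dims to your own network:

```protobuf
layer {
  name: "data"
  type: "Input"
  top: "data"
  # dim order: batch, channels, height, width (illustrative values)
  input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } }
}
```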
Hi @TelmoARM, thank you.
Hi @liviolima80, good to hear the fix worked. I've also found the problem with Alexnet. The Alexnet deploy.prototxt has the batch size set to 10, but Armnn uses the batch size in the model and there's no way to override it, so we need to generate a new model with the batch size set to 1. To do this:

1. Open the Alexnet deploy.prototxt, change 'input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }' to 'input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }' and save it as Alexnet1.prototxt.
2. Run the following python script:

```python
import caffe

# Load the existing weights against the batch-size-1 model definition...
net = caffe.Net('Alexnet1.prototxt', '/home/bvlc_alexnet.caffemodel', caffe.TEST)
# ...and write them back out as a new caffemodel
net.save('bvlc_batchsize_1.caffemodel')
```

The new bvlc_batchsize_1.caffemodel runs through Armnn.
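The manual prototxt edit above can also be scripted. A minimal sketch that rewrites only the first dim (the batch size) of the input shape; the helper name and regex are illustrative, and it assumes the shape declaration is written as in the deploy file above:

```python
import re

def set_batch_size(prototxt_text: str, batch_size: int) -> str:
    """Rewrite the first 'dim:' value (the batch size) inside a
    'shape: { dim: ... }' declaration, leaving the rest untouched."""
    return re.sub(r"(shape:\s*\{\s*dim:\s*)\d+",
                  rf"\g<1>{batch_size}", prototxt_text, count=1)

line = "input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }"
print(set_batch_size(line, 1))
# -> input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }
```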
Closing this ticket as I believe both issues have been solved. |
How do I solve this problem? Armnn Error: Couldn't find requested output layer 'prob' in graph. I used the trained caffemodel directly. Should I convert it, or use both deploy.prototxt and the trained caffemodel? @liviolima80
Hi Garry, this ticket has been resolved. To help us diagnose the problem, could you tell us whether you are following any of the instructions in this thread? Are you using the ExecuteNetwork app? If so, how are you using it? The easiest way to address your issue would be to open a new ticket and provide your configuration, set-up and other relevant information. Thanks,
@TelmoARM I have solved my problem. I have written a blog post (in Chinese, of course), and I put my code on GitHub.
Hi,
I'm trying to test the ExecuteNetwork example on the LeNet example from the Caffe tutorial:
http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/01-learning-lenet.ipynb
After the training process I have both the binary weights file and the prototxt deploy model, attached:
new_shape_deploy.zip
If I run ExecuteNetwork passing the binary file, I get the following error:
Armnn Error: Unsupported layer type 'Data'
since only the "Input" layer type is supported.
If I run ExecuteNetwork passing the text file, I get the following error:
Armnn Error: Expected data blob at index 0 in layer conv1 not found
Can someone help me solve the issue?
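For context, an ExecuteNetwork invocation for this kind of test looks roughly like the sketch below. The exact option spellings vary between Armnn releases, so treat the flags and file names as assumptions and check `ExecuteNetwork --help` for your build:

```shell
# Illustrative only: flags and file names are placeholders,
# and option names differ across Armnn versions.
./ExecuteNetwork -f caffe-binary \
                 -m lenet.caffemodel \
                 -i data \
                 -o prob \
                 -c CpuRef
```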