-
I am using an ARTPEC-7 camera and trying an object detection program. I made a custom MobileNet SSD v2 model for detection, but the output from the model running on the camera differs from inference run on an image taken with the camera. I ran inference with the TensorFlow TFLite interpreter on a 1280x780 pixel image from the camera, resized to 320x320 pixels. Meanwhile, the output log of the object detection app on the camera gives different results: the bounding boxes are shifted, and sometimes the detected classes are different. A few sample outputs are below. [Left-hand images are the results from the TFLite interpreter; right-hand images show the bounding boxes from the on-camera object detection plotted on the image.] I am focusing on better localization of the bounding boxes. Is this difference in the outputs normal? Can I get bounding boxes from larod that fit as well as those from the TFLite interpreter?
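For concreteness, here is a simplified sketch of the kind of desktop-side preprocessing being compared. The nearest-neighbour resize is illustrative only; a mismatch in resize filter (nearest vs. bilinear) or aspect-ratio handling (stretch vs. letterbox) between the desktop and camera pipelines is itself a common cause of shifted boxes.

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 320) -> np.ndarray:
    """Nearest-neighbour resize of an HxWx3 uint8 frame to size x size.

    Illustrative only: the actual camera pipeline may use a different
    resize filter or letterboxing, which alone can shift bounding boxes.
    """
    h, w, _ = frame.shape
    ys = np.arange(size) * h // size  # source row for each output row
    xs = np.arange(size) * w // size  # source column for each output column
    return frame[ys][:, xs]

# Same shape as the source image described above (780 rows x 1280 columns)
frame = np.zeros((780, 1280, 3), dtype=np.uint8)
inp = preprocess(frame)
print(inp.shape)  # (320, 320, 3)
```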
-
Hi @amararjun, thanks for your question. We'll get back to you when we've had time to look into this.
-
Hello @amararjun
Which chip of the axis device are you using? Is the model running on CPU or on TPU?
Are you comparing the "pure" TFLite model with a version converted for the edgeTPU?
If the model is exactly the same, the output should be equal as well; but if you are using the TPU, the model may lose some precision during the conversion to edgeTPU.
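One concrete precision effect: a fully integer-quantized model returns int8/uint8 tensors that must be mapped back to real values with each output tensor's scale and zero point (the standard TFLite quantization scheme). A minimal sketch, with illustrative scale and zero-point values:

```python
import numpy as np

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """TFLite quantization scheme: real = scale * (quantized - zero_point).

    In a real app, scale and zero_point come from the output tensor's
    quantization parameters; the values below are just for illustration.
    """
    return scale * (q.astype(np.int32) - zero_point)

# e.g. scores quantized to uint8 with scale = 1/255, zero_point = 0
q = np.array([0, 128, 255], dtype=np.uint8)
print(dequantize(q, 1.0 / 255.0, 0))  # approximately [0.0, 0.502, 1.0]
```

Using the wrong scale/zero point (or skipping dequantization) on one side of the comparison would shift both scores and box coordinates.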
Alternatively, there may be something wrong in the way the output of the model is interpreted to generate the boxes; you should try to isolate these scenarios and test them one by one.
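To rule out a box-interpretation bug in isolation, it can help to test the coordinate mapping on its own. This sketch assumes the common TFLite SSD postprocessed layout, where boxes are normalized `[ymin, xmin, ymax, xmax]`; if your model emits a different order, swapping axes here would reproduce exactly the kind of drift described above:

```python
import numpy as np

def scale_boxes(boxes: np.ndarray, orig_w: int, orig_h: int) -> np.ndarray:
    """Map normalized [ymin, xmin, ymax, xmax] boxes to pixel coordinates.

    Assumes the usual TFLite detection-postprocess box layout; a wrong
    axis order or scaling by the 320x320 input size instead of the
    original frame size produces systematically shifted boxes.
    """
    scaled = boxes.astype(np.float64).copy()
    scaled[:, [0, 2]] *= orig_h  # ymin, ymax scale with image height
    scaled[:, [1, 3]] *= orig_w  # xmin, xmax scale with image width
    return scaled

boxes = np.array([[0.25, 0.5, 0.75, 1.0]])  # one normalized box
print(scale_boxes(boxes, 1280, 780))  # [[195. 640. 585. 1280.]]
```

Feeding a few hand-made normalized boxes through both your desktop and on-camera decoding paths should show quickly whether the drift comes from the model or from the box decoding.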
Here I am going to assume that you are trying to run the model for edgeTPU.
First take a camera frame, and run an …