Colorized depth map + fps limit and multi tensor output #38
Conversation
…depthai-python-extras into multiple_output_tensors
Thanks @GergelySzabolcs!
Clarifying: does this address crashes you've seen on our current
Yes, thanks @GergelySzabolcs!
So for our previewout-only example script (the default test.py), will this now be limited to 10FPS by default as well? I ask because it's a compelling initial demo when it shows the full 30FPS. Although 10FPS isn't bad, so that might be fine as well. Either way, just wanting to make sure I understand. Thanks again!
Crashes seen on master.
Limiting the FPS is just intended as an example of usage.
Fantastic. Thanks. This is a great feature to have - the capability to limit the framerate so as to not saturate USB2 bandwidth. I'm going to move this to complete on the roadmap. Thanks for knocking out this feature in the process of solving the calibration/colorized-depth display problem. For the default display: Thanks again!
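The framerate limit discussed above can be sketched host-side in plain Python. This is a hypothetical illustration, not the actual DepthAI implementation (which may enforce the limit device-side): time-stamp each loop iteration and sleep off the remainder of the frame interval.

```python
import time

def throttle(fps_limit, last_frame_time):
    """Sleep just long enough to hold the loop at or below fps_limit.

    Returns the timestamp to carry into the next iteration.
    """
    min_interval = 1.0 / fps_limit
    elapsed = time.monotonic() - last_frame_time
    if elapsed < min_interval:
        time.sleep(min_interval - elapsed)
    return time.monotonic()

# A capture loop capped at 10 FPS (frame grabbing stubbed out):
last = time.monotonic()
start = last
for _ in range(5):
    # ... grab and display a previewout frame here ...
    last = throttle(10, last)
total = time.monotonic() - start
```

Throttling on the host keeps the USB2 link from being saturated by frames that would be dropped anyway, at the cost of slightly uneven frame pacing.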
I've added comments. Biggest issues:
test.py doesn't run for me
On the 1097, I'll see an initial frame for some of the streams. However, almost immediately the log is full of "Data queue is full previewout/metaout/depth_sipp/left/right".
test.py is outdated
This is using an outdated version of test.py, so a number of important things were dropped (argument parsing, 1097-based defaults, etc.).
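As an aside, the "Data queue is full" symptom above is what any bounded queue does when the producer outpaces the consumer. A minimal host-side analogue, using only the Python standard library (not the DepthAI API):

```python
import queue

# A small bounded queue, standing in for a per-stream buffer
# such as previewout/metaout/depth_sipp.
q = queue.Queue(maxsize=4)

dropped = 0
for frame in range(10):
    try:
        # The "producer" (device) pushes frames faster than anyone drains them.
        q.put_nowait(frame)
    except queue.Full:
        # This is the analogue of the "Data queue is full" log spam:
        # the consumer never called q.get(), so frames have nowhere to go.
        dropped += 1
```

If the consumer thread stalls (or was never started after a refactor), every stream's queue fills within a few frames, which matches the "first frame, then log spam" behavior described.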
@@ -0,0 +1,115 @@
import sys
This example runs for me. 👍
@@ -29,7 +29,8 @@
  # 35 landmarks
  'blob_file_config': consts.resource_paths.prefix + 'nn/object_recognition_4shave/landmarks/landmarks-config-35.json',
- 'blob_file': consts.resource_paths.prefix + 'nn/object_recognition_4shave/landmarks/facial-landmarks-35-adas-0002.blob'
+ 'blob_file': consts.resource_paths.prefix + 'nn/object_recognition_4shave/landmarks/facial-landmarks-35-adas-0002.blob',
This works for me. 👍
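Incidentally, the trailing comma added in this hunk guards against a classic Python pitfall: without a separating comma, adjacent string literals are silently concatenated. A stand-alone illustration with hypothetical paths (not the actual config):

```python
# The comma after the first literal is missing, so Python
# concatenates the two strings into ONE list element.
paths = [
    'nn/landmarks-config-35.json'   # <- missing comma here...
    'nn/facial-landmarks-35.blob',  # ...silently merges these two strings
]
# len(paths) is 1, not 2 - no error is raised, the bug is silent.
```

In a dict literal the same omission is usually a SyntaxError, but in lists and tuples it produces a quietly wrong value, so always adding the trailing comma is a cheap defense.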
@@ -0,0 +1,142 @@
import sys
This runs for a bit, but classifies me as a red car when I'm wearing a blue sweatshirt.
After a minute or so, it crashes with:
ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
Traceback (most recent call last):
File "example_run_vehicle_attributes_recognition_barrier.py", line 121, in <module>
data0 = data[0,:,:]
IndexError: too many indices for array
Stopping threads: ...
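The IndexError above suggests the tensor arrived with fewer dimensions than `data[0,:,:]` expects, likely because the packet was truncated or mis-shaped. A hedged host-side guard (generic NumPy, not the example script itself) would fail loudly with the offending shape instead of a bare IndexError:

```python
import numpy as np

def first_plane(data):
    """Return data[0, :, :] only if the array really is 3-D.

    Raises ValueError with the actual shape otherwise, so a truncated
    or mis-shaped packet produces a diagnosable error message.
    """
    arr = np.asarray(data)
    if arr.ndim != 3:
        raise ValueError(f"expected a 3-D tensor, got shape {arr.shape}")
    return arr[0, :, :]

ok = first_plane(np.zeros((3, 4, 5)))  # valid 3-D input: shape (4, 5)

try:
    first_plane(np.zeros(10))          # 1-D input: clear error, not IndexError
    failed = False
except ValueError:
    failed = True
```

The earlier "array is too big" ValueError hints that a size field may have been read as garbage, which is exactly the case a shape check like this would surface early.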
This one was done by yurii. I just merged his latest work from the multiple_output_tensors branch.
I’d be curious whether, if we fed that same image into the same network running on a host computer for inference, it would have the same output.
"output_properties_type": "f16"
}
],
"mappings":
Why are these labels included?
],
"mappings":
{
"labels":
Why are these labels included?
I don't know. It was done by yurii.
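For context, here is a minimal sketch of how a blob_file_config with a mappings/labels section can be read on the host. The file contents below are hypothetical, modeled loosely on the fragments quoted above; a network that only regresses attributes may not need labels at all, which may be why their presence looks odd here:

```python
import json

# Hypothetical config text, standing in for a blob_file_config JSON file.
config_text = """
{
  "tensors": [
    { "output_properties_type": "f16" }
  ],
  "mappings": {
    "labels": ["white", "gray", "yellow", "red", "green", "blue", "black"]
  }
}
"""

config = json.loads(config_text)

# Pull the label list out defensively: a config without "mappings"
# (e.g. for a pure regression head) simply yields an empty list.
labels = config.get("mappings", {}).get("labels", [])
```

If the network's output is a classification over these labels, the list is required to decode indices into names; if it isn't, the list is dead weight and could be dropped from the config.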
@itsderek23 I just updated the branch to match master. Also merged yurii's multiple output tensors branch, which might contain fixes for models that are not working?
@GergelySzabolcs - test.py still freezes for me on the 1097 after showing the first frame, with the data queue full for metaout/previewout. I haven't yet dug through the host-side Python code to see if we've introduced any performance changes there.
Yes - I wouldn't be shocked if this was an issue w/the NN itself.
This does appear to be stable now though. Thanks!
Yes. Could be. I think we should completely punt on it (example_run_vehicle_attributes_recognition_barrier.py) for now and remove it from any and all current efforts, planned future efforts, and/or tests. And also remove example scripts which run anything other than an object detector from the repository. With the focus on the following instead:
So I'm thinking we focus all PRs/efforts going forward to cover those until we get that dialed. Once we have that dialed, we can move on to support of other network types, multi-output-tensors, etc. But I'm thinking we should really make it a conscious decision, with a check-off list that those are dialed. Thoughts?
Superseded by #41.
ImageManip: rotated/arbitrary cropping, add camera controls
This PR contains:
- multi tensor output support
- a fix for occasional crashes