
Colorized depth map + fps limit and multi tensor output #38

Status: Closed · wants to merge 15 commits

Conversation

@iamsg95 (Contributor) commented Feb 28, 2020

This PR contains:
- multi-tensor output support
- a fix for occasional crashes

@itsderek23 (Contributor):

Thanks @GergelySzabolcs!

> fix for occasional crashes

Clarifying: does this address crashes you've seen on our current master branch, or crashes introduced in the development of this branch?

@Luxonis-Brandon (Contributor):

Yes, thanks @GergelySzabolcs !

> rebuild libs; limit previewout, depth sipp to 10 fps

So for our previewout-only example script (the default test.py), will this now be limited to 10FPS by default as well?

I ask because it makes for a compelling initial demo to show the full 30FPS. Although 10FPS isn't bad, so that might be fine as well.

Either way, just wanting to make sure I understand.

Thanks again!

@iamsg95 (Contributor, Author) commented Feb 29, 2020

> Thanks @GergelySzabolcs!
>
> > fix for occasional crashes
>
> Clarifying: does this address crashes you've seen on our current master branch, or crashes introduced in the development of this branch?

Crashes seen on master

@iamsg95 (Contributor, Author) commented Feb 29, 2020

> Yes, thanks @GergelySzabolcs!
>
> > rebuild libs; limit previewout, depth sipp to 10 fps
>
> So for our previewout-only example script (the default test.py), will this now be limited to 10FPS by default as well?
>
> I ask because it makes for a compelling initial demo to show the full 30FPS. Although 10FPS isn't bad, so that might be fine as well.
>
> Either way, just wanting to make sure I understand.
>
> Thanks again!

Limiting the FPS there is just an example of usage.
We should decide which streams we enable by default.
In test.py, depth_sipp, previewout, and metaout are all enabled; that's why we limit to 10 FPS, so we don't saturate USB2 bandwidth. If depth_sipp is not enabled we can do 30 FPS.
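For illustration, capping the heavy streams in the gen1 depthai config might look like the sketch below. The per-stream `max_fps` key and the blob paths are assumptions based on this PR's description, not confirmed API:

```python
import depthai  # legacy gen1 depthai Python API

# Hedged sketch: cap previewout and depth_sipp at 10 FPS so that all three
# streams fit within USB2 bandwidth. The per-stream 'max_fps' key is an
# assumption based on this PR's description; blob paths are placeholders.
pipeline = depthai.create_pipeline(config={
    'streams': [
        {'name': 'previewout', 'max_fps': 10.0},  # color preview, capped
        {'name': 'depth_sipp', 'max_fps': 10.0},  # colorized depth, capped
        'metaout',                                # NN metadata, lightweight
    ],
    'ai': {
        'blob_file': 'nn/mobilenet-ssd/mobilenet-ssd.blob',         # placeholder
        'blob_file_config': 'nn/mobilenet-ssd/mobilenet-ssd.json',  # placeholder
    },
})
if pipeline is None:
    raise RuntimeError('Pipeline creation failed!')
```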

@Luxonis-Brandon (Contributor):

Fantastic. Thanks.

This is a great feature to have: the capability to limit the framerate so as to not saturate USB2 bandwidth.

I'm going to move this to complete on the roadmap. Thanks for knocking out this feature in the process of solving the calibration/colorized-depth display problem.

For the default display:
I'm thinking we stick w/ previewout w/ the metadata visualized on it, as it's a nice 30FPS even on a Pi w/ USB2. CC: @itsderek23 for any thoughts on this.

Thanks again!
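For context on that default, a gen1 host loop that draws metaout detections onto previewout frames might look roughly like the sketch below. The packet calls follow the legacy API; the entry keys ('confidence', 'left', etc.) are assumptions that vary per network config, and `pipeline` is assumed to come from `depthai.create_pipeline` as above:

```python
import cv2

while True:
    # Legacy gen1 call: pull NN results (metaout) and frames together.
    nnet_packets, data_packets = pipeline.get_available_nnet_and_data_packets()

    detections = []
    for nnet_packet in nnet_packets:
        for e in nnet_packet.entries():
            if e[0]['confidence'] > 0.5:   # keep confident detections only
                detections.append(e[0])

    for packet in data_packets:
        if packet.stream_name == 'previewout':
            data = packet.getData()                  # planar CHW, uint8
            frame = data.transpose(1, 2, 0).copy()   # to HWC for OpenCV
            h, w = frame.shape[:2]
            for d in detections:
                pt1 = (int(d['left'] * w), int(d['top'] * h))
                pt2 = (int(d['right'] * w), int(d['bottom'] * h))
                cv2.rectangle(frame, pt1, pt2, (0, 255, 0), 2)
            cv2.imshow('previewout', frame)

    if cv2.waitKey(1) == ord('q'):
        break
```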

@itsderek23 (Contributor) left a review:

I've added comments. Biggest issues:

test.py doesn't run for me

On the 1097, I'll see an initial frame for some of the streams. However, almost immediately the log fills with "Data queue is full" messages for previewout/metaout/depth_sipp/left/right.

test.py is outdated

This branch is using an outdated version of test.py, so a number of important things were dropped (argument parsing, 1097-based defaults, etc.).
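One usual suspect for queue-full spam is the host loop not draining every enabled stream; below is a minimal sketch of a loop that consumes all packets each iteration, assuming the legacy gen1 calls. Whether host-side draining alone resolves the 1097 behavior seen here is an assumption worth testing:

```python
# Drain every enabled stream each iteration so device-side queues
# (previewout/metaout/depth_sipp/left/right) don't back up.
while True:
    nnet_packets, data_packets = pipeline.get_available_nnet_and_data_packets()
    for packet in data_packets:
        if packet.stream_name in ('left', 'right'):
            continue  # dropped if unused, but still drained from the queue
        # ... handle previewout / depth_sipp here; metaout arrives via nnet_packets
```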

@@ -0,0 +1,115 @@
import sys
Contributor:

This example runs for me. 👍

@@ -29,7 +29,8 @@
 # 35 landmarks
 'blob_file_config': consts.resource_paths.prefix + 'nn/object_recognition_4shave/landmarks/landmarks-config-35.json',
-'blob_file': consts.resource_paths.prefix + 'nn/object_recognition_4shave/landmarks/facial-landmarks-35-adas-0002.blob'
+'blob_file': consts.resource_paths.prefix + 'nn/object_recognition_4shave/landmarks/facial-landmarks-35-adas-0002.blob',
Contributor:
This works for me. 👍

@@ -0,0 +1,142 @@
import sys
Contributor:

This runs for a bit, but classifies me as a red car when I'm wearing a blue sweatshirt.

After a minute or so, it crashes with:

```
ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
Traceback (most recent call last):
  File "example_run_vehicle_attributes_recognition_barrier.py", line 121, in <module>
    data0 = data[0,:,:]
IndexError: too many indices for array
Stopping threads: ...
```
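A defensive guard on the host side might avoid the hard crash until the root cause is found. A sketch, assuming (based on the traceback) that the expected packet layout is planar (channels, height, width):

```python
data = packet.getData()
# Guard: the crash suggests packets occasionally arrive flattened or
# truncated rather than in the expected planar (C, H, W) layout; the
# expected shape here is an assumption based on the traceback above.
if data is None or data.ndim < 3:
    print('Skipping malformed packet, shape:', getattr(data, 'shape', None))
    continue  # inside the packet-handling loop
data0 = data[0, :, :]
```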

Contributor (Author):

This one was done by yurii.
I just merged his latest work from the multiple_outpout_tensors branch.

Contributor:

Inference still seems incorrect:

[screenshot of the incorrect inference output]

...I am pale/white for sure, but not a car :).

Good news is it hasn't crashed for me over a period of a couple of minutes ... going to let it run longer.

Contributor:

I'd be curious whether, if we fed that same image into the same network running on a host computer for inference, it would have the same output.
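One way to run that comparison is to push the same frame through the IR the blob was compiled from, using the 2020-era OpenVINO Inference Engine Python API on the host. A rough sketch; the model filenames and the preprocessing details are assumptions:

```python
import cv2
from openvino.inference_engine import IECore

# Hypothetical paths: the IR the device .blob was compiled from.
ie = IECore()
net = ie.read_network(model='vehicle-attributes-recognition-barrier-0039.xml',
                      weights='vehicle-attributes-recognition-barrier-0039.bin')
exec_net = ie.load_network(network=net, device_name='CPU')

input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape

image = cv2.imread('frame.png')                             # same frame the device saw
blob = cv2.resize(image, (w, h)).transpose(2, 0, 1)[None]   # HWC -> NCHW

results = exec_net.infer({input_name: blob})
for name, tensor in results.items():
    print(name, tensor.flatten()[:8])                       # compare against device output
```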

"output_properties_type": "f16"
}
],
"mappings":
Contributor:

Why are these labels included?

],
"mappings":
{
"labels":
Contributor:

Why are these labels included?

Contributor (Author):

I don't know. It was done by yurii.

@iamsg95 (Contributor, Author) commented Mar 3, 2020

@itsderek23 I just updated the branch to match master. I also merged yurii's multiple output tensors branch, which might contain fixes for the models that are not working?

@itsderek23 (Contributor):

@GergelySzabolcs - test.py still freezes for me on the 1097 after showing the first frame, w/ the data queue full for metaout/previewout. I haven't yet dug through the host-side Python code to see if we've introduced any performance changes there.

@itsderek23 (Contributor) commented Mar 3, 2020 via email

@Luxonis-Brandon (Contributor) commented Mar 3, 2020

Yes. Could be.

I think we completely punt on it (example_run_vehicle_attributes_recognition_barrier.py) for now and remove it from any and all current efforts and planned future efforts and/or tests.

And also remove example scripts which run anything other than an object detector from the repository.

With the focus on the following instead:

  1. Object detector models that don't have any additional/special metadata - so ones that normally just return class and bounding box locations.
  2. Augmentation of these object detector models with disparity depth results - to allow 3D object localization (i.e. the object class, and the XYZ location of the centroid of the bounding box).
  3. A check of calibration - so the capability to visualize the depth map on a Raspberry Pi 3B+ with a JET color map (see the sketch after this list). The check of calibration is necessary for users to enable 2.
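For item 3, the JET visualization can be done on the host with OpenCV. A minimal sketch, assuming the depth/disparity stream arrives as a uint16 frame; the normalization strategy is a placeholder choice:

```python
import cv2
import numpy as np

def colorize_depth(depth_u16: np.ndarray) -> np.ndarray:
    """Map a uint16 depth/disparity frame to a JET-colored BGR image.

    Normalization over the valid (non-zero) range is an assumption;
    invalid (zero) pixels end up at the cold end of the colormap.
    """
    d = depth_u16.astype(np.float32)
    valid = d > 0
    if valid.any():
        span = max(float(d[valid].max() - d[valid].min()), 1e-6)
        d = (d - d[valid].min()) / span
    d8 = (np.clip(d, 0.0, 1.0) * 255).astype(np.uint8)
    return cv2.applyColorMap(d8, cv2.COLORMAP_JET)
```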

So I'm thinking we focus all PRs/efforts going forward on covering those until we get that dialed.

Once we have that dialed, we can move on to support of other network types, multi-output-tensors, etc. But I'm thinking we should really make it a conscious decision, with a check-off list that those are dialed.

Thoughts?

@iamsg95 closed this Mar 5, 2020
@itsderek23 (Contributor):

Superseded by #41.

@SzabolcsGergely deleted the colorized_depth branch April 9, 2020 21:00
jdavidberger pushed a commit to constructiverealities/depthai that referenced this pull request May 26, 2022
ImageManip: rotated/arbitrary cropping, add camera controls