
Hard Sync of NN result, color, and depth for accurate 3D position of moving objects #138

Closed
Luxonis-Brandon opened this issue Jun 30, 2020 · 2 comments
Labels
enhancement New feature or request

Comments

@Luxonis-Brandon (Contributor):

Start with the why:

As it stands now, the depth data, color stream, and neural bounding boxes are not synced (although they are time-stamped, so the host could synchronize them after the fact).

They are combined in a best-effort fashion (which maximizes framerate and reduces latency for each individual stream), but for quick-moving objects the XYZ location of a detected object's centroid can be wrong, because the latency of the neural network results is higher than that of the depth estimation. The neural network results lag behind the color and depth streams. So when the depth corresponding to the bounding-box location is pulled, that depth frame is actually from the future relative to the neural inference results: the object may no longer be at that position, and the depth will be pulled from whatever happens to be there at that time (i.e. it will be wrong for the object).

An example of these being out of alignment is here. Note that the example also shows a position of 0 meters for everything; that has since been fixed and was a result of a max distance of 5 meters being hard-coded in the firmware (fix/control from the host here).

The ‘how’:

Implement an optional hard-sync option in the DepthAI pipeline which forces alignment of depth and neural inference results. This will come at some cost of reduced framerate and increased latency, as a result of the interdependent/blocking nature of syncing one stream (e.g. depth) with the others (neural inference, color, etc.).

So having this as an option is desirable, as it allows the user to choose between:

  1. Hard-sync on DepthAI itself, with some cost to latency and framerate. No load on the host.
  2. Sync on the host (using the existing timestamps; see the sketch after this list). Higher framerate, but higher load on the host.
  3. No sync (in cases where this slight offset doesn't matter). Higher framerate. No load on the host.
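
For option 2, host-side pairing can be as simple as buffering recent depth frames and picking the one whose timestamp is nearest to each NN result. A minimal sketch, where the `TimestampSync` helper and its tolerance are illustrative and not part of the DepthAI API:

```python
from collections import deque

# Illustrative helper, not part of the DepthAI API: buffer recent depth
# frames and pick the one whose timestamp is nearest to each NN result.
class TimestampSync:
    def __init__(self, tolerance_s=0.033, buffer_size=30):
        self.tolerance_s = tolerance_s  # max accepted gap (~1 frame @ 30 FPS)
        self.depth_frames = deque(maxlen=buffer_size)  # (timestamp_s, frame)

    def add_depth(self, timestamp_s, frame):
        self.depth_frames.append((timestamp_s, frame))

    def match(self, nn_timestamp_s, nn_result):
        if not self.depth_frames:
            return None
        # Closest buffered depth frame in time to the NN result
        ts, frame = min(self.depth_frames,
                        key=lambda item: abs(item[0] - nn_timestamp_s))
        if abs(ts - nn_timestamp_s) > self.tolerance_s:
            return None  # nothing close enough; skip this result
        return frame, nn_result
```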

The ‘what’:

Optional hard synchronization between the neural inference results (the bounding box), the color stream (for display purposes), and the depth stream (for positional accuracy).

Luxonis-Brandon added the enhancement label on Jun 30, 2020
@Luxonis-Brandon (Contributor, Author):

Initially implemented in #157

@Luxonis-Brandon (Contributor, Author):

Was recently merged to master.

Below is a demo run with `python3 test.py -nce 2 -sh 10 -cmx 10 -sync` (in the gen1 demo script, `-nce` sets the number of neural compute engines, `-sh` the SHAVE cores, `-cmx` the CMX memory slices, and `-sync` enables the hard sync):

[Video: "Spatial AI" hard-sync demo]

The hard-sync solution modifies much of the internal pipeline to optimize for synchronization while reducing latency and the number of connections. You can see how much this improves latency and sync in the video above.

It's worth noting that for larger neural models (which cannot run at 30 FPS), it is advisable to reduce the camera FPS to match the rate at which the NN can run, or just below it, to reduce latency and jitter.
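
With the current (gen2) DepthAI Python API, which postdates this issue, capping the camera FPS to NN throughput looks roughly like the sketch below; the 10 FPS figure and the blob path are placeholders:

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
# If the network only sustains ~10 inferences/s, cap the camera at the
# same rate so frames don't queue up and add latency/jitter.
cam.setFps(10)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("model.blob")  # placeholder path to a compiled model blob
cam.preview.link(nn.input)
```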

jdavidberger pushed a commit to constructiverealities/depthai that referenced this issue May 26, 2022
Update FW with fix for random crashes (kernel crash on RPI/jetson)