The ‘why’:
As it stands now, the depth data, color stream, and neural bounding boxes are not synced (although they are time-stamped, so the host could synchronize them after the fact).
They are combined in a best-effort fashion (which maximizes framerate and reduces latency for each individual stream), but for quick-moving objects the XYZ location of a detected object's centroid can be wrong, because the latency of the neural network results is higher than that of the depth estimation. The neural network results lag behind the color and depth streams, so when the depth corresponding to the bounding box location is pulled, it is actually in the future relative to the neural inference results. The object may no longer be at that position, and the depth will be pulled from whatever happens to be there at that time (i.e. it will be wrong for the object).
An example of these being out of alignment is here. Note that this also shows a position of 0 meters for everything; that has since been fixed and was a result of a max distance of 5 meters being hard-coded in the firmware (fix/control from host here).
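To make the host-side after-the-fact option concrete, here is a minimal sketch of pairing NN results with depth frames by nearest timestamp. The message layout (`ts`/`frame`/`bbox` dicts) and the tolerance value are illustrative assumptions, not the actual DepthAI message format:

```python
def pair_by_timestamp(depth_msgs, nn_msgs, tolerance=0.033):
    """Pair each NN result with the depth frame whose timestamp is closest,
    rejecting pairs further apart than `tolerance` seconds (~one 30 FPS frame)."""
    pairs = []
    depth = sorted(depth_msgs, key=lambda m: m["ts"])
    for nn in nn_msgs:
        best = min(depth, key=lambda d: abs(d["ts"] - nn["ts"]))
        if abs(best["ts"] - nn["ts"]) <= tolerance:
            pairs.append((best, nn))
    return pairs

# Hypothetical streams: depth frames every 33 ms, NN results lagging ~70 ms.
depth_stream = [{"ts": 0.033 * i, "frame": i} for i in range(10)]
nn_stream = [{"ts": 0.033 * i + 0.070, "bbox": i} for i in range(0, 9, 3)]
matched = pair_by_timestamp(depth_stream, nn_stream)
```

Note this only aligns the metadata after the fact; the depth sample was still captured later than the frame the NN ran on, which is exactly the latency-induced error described above.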
The ‘how’:
Implement an optional hard-sync option in the DepthAI pipeline which forces alignment of depth and neural inference results. This will come at some cost of reduced framerate and increased latency, as a result of the interdependency/blocking nature of syncing one stream (e.g. depth) with others (neural inference, color, etc.).
As such, having this as an option is desirable, as it allows the user to choose between:
Hard-sync on DepthAI itself, with some cost to latency and framerate. No load to host.
Sync on the host (using the existing timestamps). Higher framerate but higher load to host.
No sync (in cases where this slight offset doesn't matter). Higher framerate. No load to host.
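The framerate/latency cost of option 1 can be illustrated with a small sketch of blocking hard-sync: the consumer waits until every stream has produced a message with the same sequence number, discarding what the slower stream missed. The queue/message shapes here are hypothetical, not the DepthAI API:

```python
import queue

def hard_sync(queues):
    """Block until every stream holds a message with the same sequence number.
    The fastest stream waits on the slowest - which is why hard-sync trades
    framerate and latency for guaranteed alignment."""
    latest = {name: q.get() for name, q in queues.items()}  # blocks per stream
    while len({m["seq"] for m in latest.values()}) > 1:
        target = max(m["seq"] for m in latest.values())
        for name, q in queues.items():
            while latest[name]["seq"] < target:
                latest[name] = q.get()  # drop frames the slow stream missed
    return dict(latest)

# Hypothetical run: depth produced frames 0-3, but the NN (slower) starts at 2.
dq, nq = queue.Queue(), queue.Queue()
for s in range(4):
    dq.put({"seq": s, "src": "depth"})
for s in (2, 3):
    nq.put({"seq": s, "src": "nn"})
synced = hard_sync({"depth": dq, "nn": nq})  # both messages have seq 2
```

Depth frames 0 and 1 are dropped rather than delivered, which is the framerate reduction the option list above refers to.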
The ‘what’:
Optional hard-synchronization between the neural inference stream (the bounding boxes), the color stream (for display purposes), and the depth stream (for positional accuracy).
Below is a demo run with python3 test.py -nce 2 -sh 10 -cmx 10 -sync:
The hard-sync solution modifies much of the internal pipeline to optimize for synchronization while reducing latency and the number of connections. You can see how much this improves latency and sync in the video above.
It's worth noting that for larger neural models (which cannot run at 30FPS), it is advisable to reduce the camera FPS to match the speed at which the NN can operate, or just below it, to reduce latency and jitter.
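The FPS-matching advice above is simple arithmetic; here is a hedged sketch (the helper name and 5% headroom margin are illustrative choices, not part of DepthAI):

```python
def camera_fps_for_nn(nn_inference_ms, margin=0.95):
    """Pick a camera FPS just below what the network can sustain, so frames
    never queue up in front of the NN (queuing adds latency and jitter).
    `margin` leaves ~5% headroom; the exact value is a tuning choice."""
    nn_fps = 1000.0 / nn_inference_ms
    return nn_fps * margin

# e.g. a model needing 80 ms per inference sustains 12.5 FPS at best,
# so run the camera slightly below that:
fps = camera_fps_for_nn(80)  # 11.875
```

Running the camera above this rate cannot increase NN throughput; it only makes frames wait in the queue, which is the latency and jitter the note above warns about.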