What is the latency and positioning accuracy in the ZED camera? #7
Hi,

I am thinking about possibly integrating support for some 3D cameras into https://github.com/AlexeyAB/darknet

It's interesting that we can use the ZED camera on a Jetson TX1/TX2/Xavier (L4T) to get the 3D coordinates (in meters) of detected objects relative to either the Camera Frame or the World Frame (a stationary point in the world).

I have a few questions:

What is the latency (ms) of the camera, for the RGB stream and for the depth data?

How do the length of the USB cable and the performance of the GPU (on a Jetson Xavier) affect the latency?

What is the average cumulative positioning error of the camera? Are there any measurements or tests? For example, what is the average jump size (0.1, 1, or 10 meters?) generated when we close a loop of 100 meters?

Comments
Hi Alexey,

The latency is the same for RGB and depth, as the processing is synchronized by the grab() function. The length of the USB cable doesn't have an impact on the latency. However, good-quality and preferably shorter cables are more reliable, since the required bandwidth is very high. For long distances, an optical fiber can be used; you can check out our help center article for more info.

The positional error is highly dependent on many elements, mainly the type of motion (fast, aggressive motion is harder to track) and the parameters. A higher framerate is always better, to avoid motion blur and large motions between frames. For positional tracking error, our tests on an automotive scenario (e.g. KITTI-like sequences, with a quite low framerate) show that the drift is around 2 m at 100 m. Loop closure can detect loops at 10 m+, but the jump is closer to ~1 m on average in this configuration. On MAV sequences (EuRoC-medium-like, with a good framerate and fast motions), the cumulative drift is around 1.2 m at 80 m.
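For reference, a minimal sketch of how RGB and depth come out of the same grab() call, and of one way to roughly estimate per-frame latency from the SDK timestamps, using the ZED Python API (SDK 3.x-style names). The HD720 @ 60 FPS configuration is an illustrative assumption, not a recommendation from the thread:

```python
import pyzed.sl as sl

# Open the camera; HD720 @ 60 FPS is an assumed configuration,
# chosen because a higher framerate reduces motion blur (see above).
zed = sl.Camera()
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720
init_params.camera_fps = 60
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open ZED camera")

image = sl.Mat()
depth = sl.Mat()
runtime_params = sl.RuntimeParameters()

# grab() blocks until a new frame has been processed, so the image
# and depth retrieved afterwards belong to the same capture instant.
if zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_image(image, sl.VIEW.LEFT)        # rectified RGB
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)  # depth map

    # Rough host-side latency estimate: time elapsed between the
    # image capture timestamp and "now" on the host clock.
    t_image = zed.get_timestamp(sl.TIME_REFERENCE.IMAGE).get_milliseconds()
    t_now = zed.get_timestamp(sl.TIME_REFERENCE.CURRENT).get_milliseconds()
    print("approx. capture-to-host latency: {} ms".format(t_now - t_image))

zed.close()
```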
Hi Aymeric,

Thanks! I answered your email.
For mixed reality applications, we recommend the ZED Mini, which has a baseline matching the human eye and can therefore be comfortably used for stereo pass-through. However, you can also build a textured mesh of the environment using the spatial mapping API available in the ZED SDK, with either a ZED or a ZED Mini; it will output a mesh that can be saved and used for visualization.

NB: You can also save the positional tracking spatial memory and then reload it with the ZED SDK. However, this is intended for relocalizing in the same environment; it only affects positional tracking performance and cannot be used for visualization.
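A minimal sketch of both of those paths with the ZED Python API: extracting a mesh via spatial mapping, and saving/reloading the tracking area memory. File names, the frame count, and the commented-out reload line are placeholder assumptions:

```python
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open ZED camera")

# Spatial mapping requires positional tracking to be enabled first.
tracking_params = sl.PositionalTrackingParameters()
# To reload a previously saved area memory ("spatial memory"),
# point to it here; "my_room.area" is a placeholder path.
# tracking_params.area_file_path = "my_room.area"
zed.enable_positional_tracking(tracking_params)
zed.enable_spatial_mapping(sl.SpatialMappingParameters())

# Map the environment for a few hundred frames (arbitrary number).
runtime_params = sl.RuntimeParameters()
for _ in range(500):
    zed.grab(runtime_params)

# Extract the mesh built so far and save it (e.g. as an .obj file).
mesh = sl.Mesh()
zed.extract_whole_spatial_map(mesh)
mesh.save("my_room_mesh.obj")

# Save the area memory for later relocalization in the same
# environment (affects tracking only, not visualization, as noted above).
zed.save_area_map("my_room.area")

zed.disable_spatial_mapping()
zed.disable_positional_tracking()
zed.close()
```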
Hi Alexey,

What is the accuracy for static object detection and tracking? And what is the accuracy for dynamic object detection and tracking? I'm most interested in the Jetson Nano + ZED Mini + darknet + OpenCV combination. I'll be waiting for your valuable feedback.
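No accuracy figures to add here, but for context, a rough sketch of how one might combine 2D detections from an external detector (such as darknet) with the ZED point cloud to get the 3D position of a detected object. The bbox_center argument is a placeholder for whatever pixel the detector reports; nothing here is the darknet API itself:

```python
import math
import pyzed.sl as sl

def object_xyz(zed, runtime_params, bbox_center):
    """Return the (X, Y, Z) position in meters, in the camera frame,
    of the pixel at bbox_center = (u, v) -- e.g. the center of a
    bounding box produced by an external detector such as darknet."""
    point_cloud = sl.Mat()
    if zed.grab(runtime_params) != sl.ERROR_CODE.SUCCESS:
        return None
    # XYZ point cloud aligned with the left rectified RGB image,
    # so 2D pixel coordinates from the detector index it directly.
    zed.retrieve_measure(point_cloud, sl.MEASURE.XYZ)
    u, v = bbox_center
    err, xyz = point_cloud.get_value(u, v)
    if err != sl.ERROR_CODE.SUCCESS or math.isnan(xyz[2]):
        return None  # no valid depth at that pixel (occlusion, range, etc.)
    return xyz[0], xyz[1], xyz[2]
```

To express the position in the World Frame instead of the Camera Frame, one would additionally enable positional tracking and transform the point with the camera pose returned by zed.get_position().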
This issue is stale because it has been open for 30 days with no activity. Remove the stale label or comment, otherwise it will be automatically closed in 5 days.