What is the latency and positioning accuracy in the ZED camera? #7

Closed
AlexeyAB opened this issue Feb 8, 2019 · 5 comments
Labels: closed_for_stale (issue closed for inactivity), Stale

Comments

@AlexeyAB
Contributor

AlexeyAB commented Feb 8, 2019

Hi,
I'm thinking about possibly integrating support for some 3D cameras into https://github.com/AlexeyAB/darknet
It's interesting that we can use a ZED camera on a Jetson TX1/TX2/Xavier (L4T) to get the 3D coordinates (in meters) of detected objects relative to either the Camera or the World Frame (a stationary point in the world):

  • depth range: 0.5 m - 20 m (32-bit depth), depth error 1% - 9%; tracking errors: +/- 1 mm position, 0.1° orientation
  • resolutions: 1920x1080 @ 30 fps, 1280x720 @ 60 fps, 672x376 @ 100 fps
  • field of view: 90° (H) x 60° (V) x 110° (D) max
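
For reference, here is a minimal sketch of how those 3D coordinates could be read for a detected object's pixel. It assumes the ZED Python API (pyzed) with SDK 3.x-style names, which may differ between SDK versions; the (u, v) pixel is a hypothetical bounding-box center coming from darknet:

```python
# Hedged sketch: assumes the ZED SDK Python bindings (pyzed), SDK 3.x-style names.
import pyzed.sl as sl

zed = sl.Camera()
init = sl.InitParameters()
init.camera_resolution = sl.RESOLUTION.HD720   # 1280x720 @ 60 fps
init.camera_fps = 60
init.depth_mode = sl.DEPTH_MODE.ULTRA
init.coordinate_units = sl.UNIT.METER          # 3D coordinates in meters

if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open ZED camera")

runtime = sl.RuntimeParameters()
image = sl.Mat()
point_cloud = sl.Mat()

if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_image(image, sl.VIEW.LEFT)                # RGB frame for the detector
    zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)  # per-pixel XYZ in the camera frame

    # (u, v) would be the center of a darknet bounding box; hypothetical values here.
    u, v = 640, 360
    err, xyz = point_cloud.get_value(u, v)
    if err == sl.ERROR_CODE.SUCCESS:
        print("Object at X={:.2f} Y={:.2f} Z={:.2f} m".format(xyz[0], xyz[1], xyz[2]))

zed.close()
```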

I have a few questions:

  • What is the latency (ms) of the camera: for the RGB and for distances?

  • How does the length of the USB cable and the performance of the GPU (on Jetson Xavier) affect the latency?

  • What is the average cumulative camera positioning error, and are there any measurements or tests? For example, how large is the correction jump (0.1, 1, or 10 meters?) generated when closing a loop of 100 meters?

@adujardin
Member

adujardin commented Feb 8, 2019

Hi Alexey,

The latency is the same for RGB and depth, since the processing is synchronized by the grab() function.
The latency depends on the load, because the computation has to finish before the images and depth data are returned.
You can expect around 56 ms on Xavier and 65 ms (60 fps) on TX2 for RGB and depth with light settings, and around 190 ms (24 fps) on TX2 with the highest settings (DEPTH_MODE_ULTRA, with depth stabilizer and tracking).
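
To see how the per-frame cost varies with the depth settings, a rough sketch that times grab() under different depth modes (pyzed assumed; note this measures per-frame processing time under load, not the full sensor-to-application latency):

```python
# Rough sketch: times grab() for different depth modes (pyzed assumed, SDK 3.x names).
# This measures per-frame processing time, not the full capture-to-application latency.
import time
import pyzed.sl as sl

def average_grab_time(depth_mode, n_frames=100):
    zed = sl.Camera()
    init = sl.InitParameters()
    init.camera_resolution = sl.RESOLUTION.HD720
    init.camera_fps = 60
    init.depth_mode = depth_mode
    if zed.open(init) != sl.ERROR_CODE.SUCCESS:
        raise RuntimeError("Failed to open ZED camera")
    runtime = sl.RuntimeParameters()
    start = time.time()
    grabbed = 0
    while grabbed < n_frames:
        if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
            grabbed += 1
    elapsed = time.time() - start
    zed.close()
    return 1000.0 * elapsed / n_frames  # milliseconds per frame

for mode in (sl.DEPTH_MODE.PERFORMANCE, sl.DEPTH_MODE.ULTRA):
    print(mode, "{:.1f} ms/frame".format(average_grab_time(mode)))
```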

The length of the USB cable doesn't have an impact on the latency. However, good-quality and preferably shorter cables are more reliable, since the required bandwidth is very high. For long distances, an optical-fiber extension can be used; you can check out our help center article for more info.

The positional error depends heavily on many factors, mainly the type of motion (fast, aggressive motion is harder to track) and the parameters. A higher framerate is always better, to avoid motion blur and large inter-frame motion.

For positional tracking error, our tests on an automotive scenario (KITTI-like sequences, with a fairly low framerate) show a drift of around 2 m after 100 m. Loop closure can detect loops at 10 m+, but the correction jump is closer to ~1 m on average in this configuration. On a MAV sequence (EuRoC-medium-like, with a good framerate and fast motions), the cumulative drift is around 1.2 m after 80 m.
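
For context, a minimal sketch of enabling positional tracking and reading the camera pose in the World frame, which is the pose that accumulates the drift discussed above (pyzed assumed, SDK 3.x-style names):

```python
# Minimal sketch: positional tracking with the ZED Python API (pyzed assumed).
import pyzed.sl as sl

zed = sl.Camera()
init = sl.InitParameters()
init.coordinate_units = sl.UNIT.METER
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open ZED camera")

tracking_params = sl.PositionalTrackingParameters()  # SDK 3.x name; older SDKs use TrackingParameters
zed.enable_positional_tracking(tracking_params)

runtime = sl.RuntimeParameters()
pose = sl.Pose()
for _ in range(300):
    if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
        state = zed.get_position(pose, sl.REFERENCE_FRAME.WORLD)
        if state == sl.POSITIONAL_TRACKING_STATE.OK:
            t = pose.get_translation(sl.Translation()).get()
            print("Camera at X={:.2f} Y={:.2f} Z={:.2f} m".format(t[0], t[1], t[2]))

zed.disable_positional_tracking()
zed.close()
```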

@AlexeyAB
Contributor Author

Hi Aymeric,

Thanks! I answered your email.
Also, can we use the ZED camera together with an Oculus/Vive (mixed reality) to build a spatial map (SLAM), and then use the saved spatial map to move inside it using only the Oculus/Vive, without the ZED camera (virtual reality)? And what software should we use for this?

@adujardin
Member

For mixed reality applications, we recommend using the ZED-Mini, which has a baseline matching the human eyes and can therefore be comfortably used for stereo pass-through.
We provide integrations with the Unity and Unreal Engine 4 engines.

However, you can also build a textured mesh of the environment using the spatial mapping API available in the ZED SDK, with a ZED or a ZED-Mini; it will output an .obj file. You can then load it as a standard mesh (containing geometry and textures) for VR visualization.
This entire process can be done in Unity or Unreal alongside our plugins for the ZED SDK.
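
A rough sketch of that spatial mapping flow with the ZED Python API, exporting the textured mesh to an .obj file (pyzed assumed, SDK 3.x-style names; exact attribute names may differ by SDK version):

```python
# Rough sketch: spatial mapping and .obj export (pyzed assumed, SDK 3.x-style names).
import pyzed.sl as sl

zed = sl.Camera()
init = sl.InitParameters()
init.coordinate_units = sl.UNIT.METER
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open ZED camera")

zed.enable_positional_tracking(sl.PositionalTrackingParameters())
mapping_params = sl.SpatialMappingParameters()
mapping_params.save_texture = True             # keep textures for the exported mesh
zed.enable_spatial_mapping(mapping_params)

runtime = sl.RuntimeParameters()
for _ in range(500):                           # walk around the scene while grabbing
    zed.grab(runtime)

mesh = sl.Mesh()
zed.extract_whole_spatial_map(mesh)            # blocking call, returns the full mesh
mesh.filter(sl.MeshFilterParameters())         # optional cleanup/decimation
mesh.apply_texture()
mesh.save("scene.obj")                         # loadable as a standard mesh in Unity/Unreal

zed.disable_spatial_mapping()
zed.disable_positional_tracking()
zed.close()
```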

NB: You can also save the positional tracking spatial memory and then reload it using the ZED SDK. However, this is intended for relocalizing in the same environment; it only affects positional tracking performance and cannot be used for visualization.
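
And a small sketch of saving that positional-tracking spatial memory and reloading it in a later session (pyzed assumed, SDK 3.x-style names; as noted above, the .area file only helps relocalization, not visualization):

```python
# Small sketch: save, then reload, the positional-tracking area memory (pyzed assumed).
import pyzed.sl as sl

zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open ZED camera")

# First session: track the environment, then export the learned spatial memory.
zed.enable_positional_tracking(sl.PositionalTrackingParameters())
runtime = sl.RuntimeParameters()
for _ in range(500):
    zed.grab(runtime)
zed.save_area_map("office.area")   # asynchronous export; poll get_area_export_state() before closing
zed.disable_positional_tracking()
zed.close()

# Later session: reload the memory so tracking relocalizes in the same environment.
zed = sl.Camera()
zed.open(sl.InitParameters())
tracking_params = sl.PositionalTrackingParameters()
tracking_params.area_file_path = "office.area"
zed.enable_positional_tracking(tracking_params)
```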

@pkr97

pkr97 commented Jul 12, 2019

Hi Alexey,
Have you tried implementing your idea of using the ZED camera with darknet on a Jetson module for object detection?
How does it perform? In particular, how does it perform for static object detection versus dynamic object detection?

What is the accuracy during static object detection and tracking?

What is the accuracy in case of dynamic object detection and tracking?

I'm more interested in Jetson nano + ZED mini + darknet + OpenCV.

I shall be waiting for your valuable feedback.

@github-actions

This issue is stale because it has been open for 30 days with no activity. Remove the stale label or comment, otherwise it will be automatically closed in 5 days.

github-actions bot added the Stale and closed_for_stale (Issue closed for inactivity) labels on Apr 21, 2022