
Tutorial 6: 3D Object Detection with ZED 2

This tutorial shows how to use the object detection module with the ZED 2.
We assume that you have followed previous tutorials.


Code overview

Create a camera

As in previous tutorials, we create, configure and open the ZED 2. Please note that the ZED 1 is not compatible with the object detection module.

This module uses the GPU to run deep neural network computations. On platforms with a limited amount of memory, such as the Jetson Nano, it is advised to disable the GUI to improve performance and avoid memory overflow.

# Create a Camera object
zed = sl.Camera()

# Create an InitParameters object and set configuration parameters
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720  # Use HD720 video mode
init_params.depth_mode = sl.DEPTH_MODE.PERFORMANCE
init_params.coordinate_units = sl.UNIT.METER
init_params.sdk_verbose = True

# Open the camera
err = zed.open(init_params)
if err != sl.ERROR_CODE.SUCCESS:
    print(repr(err))
    exit(1)

Enable Object detection

We will now define the object detection parameters. Notice that object tracking needs positional tracking to be enabled in order to track objects in the world reference frame.

# Define the object detection module parameters
obj_param = sl.ObjectDetectionParameters()

# Object tracking requires the positional tracking module to be enabled
obj_param.enable_tracking = True
if obj_param.enable_tracking:
    zed.enable_positional_tracking()

Then we can start the module; it will load the detection model. This operation can take a few seconds. The first time the module is used, the model is optimized for the hardware, which takes more time. This optimization is done only once.

err = zed.enable_object_detection(obj_param)
if err != sl.ERROR_CODE.SUCCESS:
    print(repr(err))

The object detection is now activated.

Capture data

The object confidence threshold can be adjusted at runtime to keep only the relevant objects depending on the scene complexity. Since the parameters have been set to image_sync, for each grab call the image is fed into the AI module, which outputs the detections for each frame.

# Detection Output
objects = sl.Objects()
# Detection runtime parameters: keep only detections above a confidence of 40
obj_runtime_param = sl.ObjectDetectionRuntimeParameters()
obj_runtime_param.detection_confidence_threshold = 40
while zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed_error = zed.retrieve_objects(objects, obj_runtime_param)
    if objects.is_new:
        print(str(len(objects.object_list)) + " Object(s) detected (" + str(zed.get_current_fps()) + " FPS)")
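To illustrate what the runtime confidence threshold does, here is a minimal plain-Python sketch of the filtering behavior. The `Detection` tuple and `filter_by_confidence` helper are hypothetical stand-ins written for this illustration; they are not part of the ZED SDK, which applies this filtering internally from the runtime parameters:

```python
from collections import namedtuple

# Hypothetical stand-in for an SDK detection: a label and a confidence in [0, 100]
Detection = namedtuple("Detection", ["label", "confidence"])

def filter_by_confidence(detections, threshold):
    """Keep only detections whose confidence meets the threshold,
    mimicking how a runtime confidence threshold prunes the output list."""
    return [d for d in detections if d.confidence >= threshold]

detections = [
    Detection("Person", 92.0),
    Detection("Person", 35.0),   # low-confidence detection, likely noise
    Detection("Vehicle", 61.0),
]

# Raising the threshold at runtime discards the low-confidence hit
print(len(filter_by_confidence(detections, 50)))  # 2 objects kept
print(len(filter_by_confidence(detections, 20)))  # all 3 kept
```

Tuning this value per scene is the trade-off the tutorial mentions: a higher threshold suppresses spurious detections in cluttered scenes, while a lower one keeps distant or partially occluded objects.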

Disable modules and exit

Once the program is over, the modules can be disabled and the camera closed. This step is optional, since zed.close() takes care of disabling all the modules. This function is also called automatically by the destructor if necessary.

# Disable object detection and close the camera
zed.disable_object_detection()
zed.close()
return 0

And this is it!

You can now detect objects in 3D with the ZED 2.
