Framerate drops as soon as subscribing pointcloud on Jetson Xavier #850

Closed
SiBensberg opened this issue Jul 22, 2022 · 12 comments

@SiBensberg

SiBensberg commented Jul 22, 2022

Preliminary Checks

  • This issue is not a duplicate. Before opening a new issue, please search existing issues.
  • This issue is not a question, feature request, or anything other than a bug report directly related to this project.

Description

Hi,
I have installed ZED SDK 3.7.6 on my Jetson Xavier with JetPack 5.0. When I start the wrapper, the framerate is a stable 60 Hz, but as soon as I subscribe to the point cloud with a simple rostopic hz /zed/zed_node/point_cloud/cloud_registered, the framerate drops to about 3 Hz. The whole node appears to slow down, yet the load on the Jetson is normal. The problem was also reproducible on an Ubuntu laptop with a 1660 Ti, SDK 3.7.5 and CUDA 11.0.
CPU core load is about 10-30% and GPU load about 10%.
The power mode is MAXN.
Jetson clocks are activated.
I also see the following warning:

Elaboration takes longer (0.302286 sec) than requested by the FPS rate (0.016666667 sec). Please consider to lower the 'frame_rate' setting or to reduce the power requirements reducing the resolutions.

There seems to be an old issue with a similar problem: #227

Steps to Reproduce

1. Start the ROS wrapper
2. Monitor the framerate
3. Subscribe to the point cloud (see the command sketch below)
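
For reference, a minimal reproduction sketch; the launch file and topic names are assumptions based on a default zed_wrapper install with camera_name set to zed, so adjust them to your setup:

# Terminal 1: start the ROS wrapper (launch file depends on the camera model)
roslaunch zed_wrapper zed2i.launch

# Terminal 2: monitor the image framerate, e.g. on the rectified left image
rostopic hz /zed/zed_node/left/image_rect_color

# Terminal 3: subscribing to the point cloud is what triggers the drop
rostopic hz /zed/zed_node/point_cloud/cloud_registered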

Expected Result

60 Hz

Actual Result

About 3 Hz

ZED Camera model

ZED2i

Environment

Nvidia Jetson Xavier
Ubuntu 20.04
Jetpack 5.0
SDK 3.7.6

Anything else?

No response

@SiBensberg SiBensberg added the bug label Jul 22, 2022
@Myzhar
Member

Myzhar commented Jul 22, 2022

Hi @SiBensberg,
is it an Nvidia Jetson Xavier AGX or NX?
It is expected that subscribing to the point cloud lowers the frequency of the topics, but 3 Hz is far too low and not expected.

Can you provide more information about the configuration of the node (resolution, depth mode, etc.)?
If you can share common.yaml and zed2i.yaml, even better.
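
If it helps, you can dump both files straight from the zed_wrapper package (a rough sketch assuming a standard catkin install with the package on your ROS_PACKAGE_PATH):

# Go to the params folder of the wrapper package and print both files
roscd zed_wrapper/params
cat common.yaml zed2i.yaml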

@SiBensberg
Author

It's an AGX. The depth viewer works well and gives me a proper framerate with no drop.

common.yaml:

# params/common.yaml
# Common parameters to Stereolabs ZED and ZED mini cameras
---

# Dynamic parameters cannot have a namespace
brightness:                 4                                   # Dynamic
contrast:                   4                                   # Dynamic
hue:                        0                                   # Dynamic
saturation:                 4                                   # Dynamic
sharpness:                  4                                   # Dynamic
gamma:                      8                                   # Dynamic - Requires SDK >=v3.1
auto_exposure_gain:         true                                # Dynamic
gain:                       100                                 # Dynamic - works only if `auto_exposure_gain` is false
exposure:                   100                                 # Dynamic - works only if `auto_exposure_gain` is false
auto_whitebalance:          true                                # Dynamic
whitebalance_temperature:   42                                  # Dynamic - works only if `auto_whitebalance` is false
depth_confidence:           30                                  # Dynamic
depth_texture_conf:         100                                 # Dynamic
pub_frame_rate:             15.0                                # Dynamic - frequency of publishing of video and depth data
point_cloud_freq:           10.0                                # Dynamic - frequency of the pointcloud publishing (equal or less to `grab_frame_rate` value)

general:
    camera_name:                zed                             # A name for the camera (can be different from camera model and node name and can be overwritten by the launch file)
    zed_id:                     0
    serial_number:              0
    resolution:                 2                               # '0': HD2K, '1': HD1080, '2': HD720, '3': VGA
    grab_frame_rate:            15                              # Frequency of frame grabbing for internal SDK operations
    gpu_id:                     -1
    base_frame:                 'base_link'                     # must be equal to the frame_id used in the URDF file
    verbose:                    false                           # Enable info message by the ZED SDK
    svo_compression:            2                               # `0`: LOSSLESS, `1`: AVCHD, `2`: HEVC
    self_calib:                 true                            # enable/disable self calibration at starting
    camera_flip:                false

video:
    img_downsample_factor:      1.0                             # Resample factor for images [0.01,1.0] The SDK works with native image sizes, but publishes rescaled image.
    extrinsic_in_camera_frame:  true                            # if `false` extrinsic parameter in `camera_info` will use ROS native frame (X FORWARD, Z UP) instead of the camera frame (Z FORWARD, Y DOWN) [`true` use old behavior as for version < v3.1]

depth:
    quality:                    4                               # '0': NONE, '1': PERFORMANCE, '2': QUALITY, '3': ULTRA, '4': NEURAL
    sensing_mode:               0                               # '0': STANDARD, '1': FILL (not use FILL for robotic applications)
    depth_stabilization:        1                               # `0`: disabled, `1`: enabled
    openni_depth_mode:          false                           # 'false': 32bit float meters, 'true': 16bit uchar millimeters
    depth_downsample_factor:    1.0                             # Resample factor for depth data matrices [0.01,1.0] The SDK works with native data sizes, but publishes rescaled matrices (depth map, point cloud, ...)

pos_tracking:
    pos_tracking_enabled:       true                            # True to enable positional tracking from start
    publish_tf:                 true                            # publish `odom -> base_link` TF
    publish_map_tf:             true                            # publish `map -> odom` TF
    map_frame:                  'map'                           # main frame
    odometry_frame:             'odom'                          # odometry frame
    area_memory_db_path:        'zed_area_memory.area'          # file loaded when the node starts to restore the "known visual features" map. 
    save_area_memory_db_on_exit: false                          # save the "known visual features" map when the node is correctly closed to the path indicated by `area_memory_db_path`
    area_memory:                true                            # Enable to detect loop closure
    floor_alignment:            false                           # Enable to automatically calculate camera/floor offset
    initial_base_pose:          [0.0,0.0,0.0, 0.0,0.0,0.0]      # Initial position of the `base_frame` -> [X, Y, Z, R, P, Y]
    init_odom_with_first_valid_pose: true                       # Enable to initialize the odometry with the first valid pose
    path_pub_rate:              2.0                             # Camera trajectory publishing frequency
    path_max_count:             -1                              # use '-1' for unlimited path size
    two_d_mode:                 false                           # Force navigation on a plane. If true the Z value will be fixed to "fixed_z_value", roll and pitch to zero
    fixed_z_value:              0.00                            # Value to be used for Z coordinate if `two_d_mode` is true

mapping:
    mapping_enabled:            false                           # True to enable mapping and fused point cloud publication
    resolution:                 0.05                            # maps resolution in meters [0.01f, 0.2f]
    max_mapping_range:          -1                              # maximum depth range while mapping in meters (-1 for automatic calculation) [2.0, 20.0]
    fused_pointcloud_freq:      1.0                             # frequency of the publishing of the fused colored point cloud

zed2i.yaml

# params/zed2i.yaml
# Parameters for Stereolabs ZED2 camera
---

general:
    camera_model:               'zed2i'

depth:
    min_depth:                  0.3             # Min: 0.2, Max: 3.0 - Default 0.7 - Note: reducing this value will require more computational power and GPU memory
    max_depth:                  20.0            # Max: 40.0

pos_tracking:
    imu_fusion:                 true            # enable/disable IMU fusion. When set to false, only the optical odometry will be used.

sensors:
    sensors_timestamp_sync:     false           # Synchronize Sensors messages timestamp with latest received frame
    publish_imu_tf:             true            # publish `IMU -> <cam_name>_left_camera_frame` TF

object_detection:
    od_enabled:                 false           # True to enable Object Detection [not available for ZED]
    model:                      1               # '0': MULTI_CLASS_BOX - '1': MULTI_CLASS_BOX_ACCURATE - '2': HUMAN_BODY_FAST - '3': HUMAN_BODY_ACCURATE - '4': MULTI_CLASS_BOX_MEDIUM - '5': HUMAN_BODY_MEDIUM - '6': PERSON_HEAD_BOX
    confidence_threshold:       50              # Minimum value of the detection confidence of an object [0,100]
    max_range:                  15.             # Maximum detection range
    object_tracking_enabled:    true            # Enable/disable the tracking of the detected objects
    body_fitting:               false           # Enable/disable body fitting for 'HUMAN_BODY_X' models
    mc_people:                  true            # Enable/disable the detection of persons for 'MULTI_CLASS_BOX_X' models
    mc_vehicle:                 true            # Enable/disable the detection of vehicles for 'MULTI_CLASS_BOX_X' models
    mc_bag:                     true            # Enable/disable the detection of bags for 'MULTI_CLASS_BOX_X' models
    mc_animal:                  true            # Enable/disable the detection of animals for 'MULTI_CLASS_BOX_X' models
    mc_electronics:             true            # Enable/disable the detection of electronic devices for 'MULTI_CLASS_BOX_X' models
    mc_fruit_vegetable:         true            # Enable/disable the detection of fruits and vegetables for 'MULTI_CLASS_BOX_X' models
    mc_sport:                   true            # Enable/disable the detection of sport-related objects for 'MULTI_CLASS_BOX_X' models

@Myzhar
Member

Myzhar commented Jul 22, 2022

OK, I'm going to test the same configuration on the Xavier AGX and let you know.

@SiBensberg
Author

An error from my earlier tests sneaked in; these are the actual values:

pub_frame_rate:             60.0                                # Dynamic - frequency of publishing of video and depth data
point_cloud_freq:           60.0                                # Dynamic - frequency of the pointcloud publishing (equal or less to `grab_frame_rate` value)
grab_frame_rate:            60                              # Frequency of frame grabbing for internal SDK operations

and pos tracking is disabled.

@Myzhar
Member

Myzhar commented Jul 22, 2022

pos tracking is disabled.

Is there any other node publishing the map -> odom -> base_link -> zed2i_base_link TF chain?
If that's not the case, then the slowdown is caused by disabling the positional tracking: if the map and odom frames are not published, the zed nodelet waits for valid transforms at each iteration...
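
A rough way to check this (frame names as in your configuration; the static publishers are placeholders for testing only):

# Inspect the current TF tree
rosrun rqt_tf_tree rqt_tf_tree
# or generate a frames.pdf snapshot:
rosrun tf view_frames

# If no other node provides map -> odom -> base_link, you can publish
# temporary identity transforms just to verify that the rate recovers:
rosrun tf2_ros static_transform_publisher 0 0 0 0 0 0 map odom &
rosrun tf2_ros static_transform_publisher 0 0 0 0 0 0 odom base_link &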

@SiBensberg
Copy link
Author

With a proper TF chain I was able to obtain 24 Hz. Thank you. Is 24 Hz an expected drop?
Was the map transform required in older versions too? Because I do not remember using them before.

@Myzhar
Member

Myzhar commented Jul 22, 2022

24 Hz is OK because you are using the NEURAL depth mode.

Was the map transform required in older versions too? Because I do not remember using them before.
Yes, it was added a long time ago to match the requirements of the ROS REP 105 standard.

You can read about a similar problem in an issue submitted in 2020: #525

@Myzhar Myzhar self-assigned this Jul 22, 2022
@SiBensberg
Author

Ah ok, thank you very much for the quick help!

@Song-Jingyu

Hi,

I have exactly the same issue. Is it expected that the RGB image has the same frame rate as the point cloud? In my mind, if processing the depth needs more time, it should only affect the fps of the depth topic. However, on my end, when the depth topic is subscribed, both the depth and RGB topics have a decreased fps. Any suggestion would be highly appreciated!
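
For reference, this is roughly how I am measuring it, in two terminals (the topic names assume camera_name: zed as in the config above; adjust the namespace to your setup):

# Terminal 1: RGB framerate
rostopic hz /zed/zed_node/rgb/image_rect_color

# Terminal 2: depth framerate - starting this one is when the RGB rate drops
rostopic hz /zed/zed_node/depth/depth_registered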

@SiBensberg
Author

Hi @Song-Jingyu,
did you check the TF tree as described above?

@Song-Jingyu

Thanks so much for your reply. I don't think my TF tree has an issue. Just curious: did the RGB topics on your end have the same fps as the depth topic (24 Hz), or a different fps?

@Myzhar any suggestion on how to get the maximum fps for the RGB topics would be really appreciated!

@Song-Jingyu

To provide more details: I'm setting the grab fps to 60 at 720p and using the NEURAL mode for depth. The ROS internal publish rate is set to 60. When the node is launched I get ~60 Hz on the RGB topics. However, when I run rostopic hz on the depth topics, the fps of the RGB topics drops to match the depth topic (~15 Hz, close to the point cloud rate I set).
