
Multiple publishers to a transient local topic gives Problem reserving CacheChange in reader #2799

Closed
1 task done
Aposhian opened this issue Jun 30, 2022 · 6 comments
Labels
triage Issue pending classification

Comments

@Aposhian

Aposhian commented Jun 30, 2022

Is there an already existing issue for this?

  • I have searched the existing issues

Expected behavior

No errors when publishing to the same topic from multiple publishers with TRANSIENT_LOCAL durability

Current behavior

The listener receives data, but prints an error.

Steps to reproduce

Use the following Docker Compose file:

services:
  talker:
    image: ros:humble
    command: ros2 topic pub /data std_msgs/msg/Int32 "{}" --qos-depth=1 --qos-history=keep_last --qos-reliability=reliable --qos-durability=transient_local
    ipc: host
    network_mode: host
  talker2:
    image: ros:humble
    command: ros2 topic pub /data std_msgs/msg/Int32 "{}" --qos-depth=1 --qos-history=keep_last --qos-reliability=reliable --qos-durability=transient_local
    ipc: host
    network_mode: host
  listener:
    image: ros:humble
    command: ros2 topic echo /data std_msgs/msg/Int32 --qos-depth=1 --qos-history=keep_last --qos-reliability=reliable --qos-durability=transient_local
    ipc: host
    network_mode: host

Fast DDS version/commit

2.6.0-3jammy.20220520.002055

Platform/Architecture

Other. Please specify in Additional context section.

Transport layer

Default configuration, UDPv4 & SHM

Additional context

amd64 arch on Ubuntu 20.04 Host.

XML configuration file

No response

Relevant log output

talker2_1   | publishing #11: std_msgs.msg.Int32(data=0)
talker2_1   | 
talker_1    | publishing #11: std_msgs.msg.Int32(data=0)
talker_1    | 
listener_1  | 2022-06-30 20:53:18.284 [RTPS_MSG_IN Error] (ID:139967146595904) Problem reserving CacheChange in reader: 01.0f.0a.33.01.00.68.11.01.00.00.00|0.0.5.4 -> Function processDataMsg
listener_1  | data: 0
listener_1  | ---
listener_1  | data: 0
listener_1  | ---

Network traffic capture

No response

Aposhian added the triage (Issue pending classification) label on Jun 30, 2022
@SteveMacenski

I've seen this too. I assumed it was some odd artifact of running programs through valgrind, but I've also seen it in the controller server of Nav2.

@jsan-rt
Contributor

jsan-rt commented Jul 8, 2022

Hi @Aposhian @SteveMacenski

What we are seeing here is not faulty behaviour of the library. With KEEP_LAST history, the maximum number of samples the reader can hold is determined by the configured depth. When additional samples arrive while the history is full, they are rejected; the log trace you are seeing is part of that process.
Rejected samples are notified accordingly and, since reliability is set to RELIABLE, they will be sent again.
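The reject-then-resend behaviour described above can be sketched in plain Python. This is a toy model, not the Fast DDS API: the class name `KeepLastCache` and its methods are hypothetical, invented only to illustrate why a reader with depth 1 rejects a second sample and later recovers it via reliable retransmission.

```python
from collections import deque

class KeepLastCache:
    """Toy model of a reader history with KEEP_LAST and a fixed depth.

    A new sample is rejected when the history is full, i.e. the
    previous sample has not yet been taken by the application.
    """

    def __init__(self, depth):
        self.depth = depth
        self.samples = deque()
        self.rejected = []  # sequence numbers the writer must re-send

    def add(self, seq):
        if len(self.samples) >= self.depth:
            # This is the situation behind the
            # "Problem reserving CacheChange in reader" log trace.
            self.rejected.append(seq)
            return False
        self.samples.append(seq)
        return True

    def take(self):
        return self.samples.popleft() if self.samples else None

cache = KeepLastCache(depth=1)
cache.add(1)            # accepted: history was empty
cache.add(2)            # rejected: depth is 1 and sample 1 not taken yet
print(cache.take())     # application reads sample 1 -> prints 1
# RELIABLE reliability: the writer re-sends the rejected sample
for seq in list(cache.rejected):
    if cache.add(seq):
        cache.rejected.remove(seq)
print(cache.take())     # sample 2 is eventually delivered -> prints 2
```

The key point the model captures is that the rejection is transient: no data is lost, it is only delayed until the history has room again.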

Upon further review, there are some things we could do:

  • Lower the severity of this log entry to a warning, since it is not an error but an expected consequence of the QoS settings.
  • Modify the log message itself to explain why this is happening.
  • Expand the Fast DDS documentation on how the History QoS and Resource Limits QoS interact in this particular case.
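As an illustration of the documentation point above: one way to give the reader more room is a larger history depth in an XML profile. The fragment below is a sketch only; the profile name and depth value are placeholders and are not taken from this issue.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<dds xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
    <profiles>
        <data_reader profile_name="transient_local_reader">
            <topic>
                <historyQos>
                    <kind>KEEP_LAST</kind>
                    <!-- A depth greater than 1 leaves room for samples
                         that arrive before the previous one is taken -->
                    <depth>10</depth>
                </historyQos>
            </topic>
            <qos>
                <durability>
                    <kind>TRANSIENT_LOCAL</kind>
                </durability>
                <reliability>
                    <kind>RELIABLE</kind>
                </reliability>
            </qos>
        </data_reader>
    </profiles>
</dds>
```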

@Aposhian
Author

Aposhian commented Jul 8, 2022

Thank you for the explanation! Yes, updating the log message and lowering the severity would be very helpful. I assumed something was fundamentally wrong, and the log message is too arcane to recognize this as normal history-depth behaviour.

@fujitatomoya
Contributor

@jsantiago-eProsima, do you have any update on this? Especially on #2799 (comment)?

@jsan-rt
Contributor

jsan-rt commented Sep 6, 2022

The Pull Requests that modify this log entry's severity and message have already been merged on Fast DDS's 2.6.x (#2942) and master (#2824) branches.

@fujitatomoya
Contributor

@Aposhian This should be fixed in the Rolling and Humble ROS 2 distros. It has been addressed by updating the log message and severity, as you can see in https://github.com/eProsima/Fast-DDS/pull/2942/files. Can we close this issue?
