I've noticed that when one node creates two publishers on the same topic with different types, we run into undefined behavior; that is, the behavior varies significantly across RMWs.
For instance, with Fast-DDS everything seems to work fine.
We can publish different types to the same topic and subscribe to them in another node without any apparent errors.
But with CycloneDDS, we hit a runtime error at the time of publisher creation:
terminate called after throwing an instance of 'rclcpp::exceptions::RCLError'
what(): could not create publisher: failed to create topic, at /home/jacob/ws/ros/latest/src/ros2/rmw_cyclonedds/rmw_cyclonedds_cpp/src/rmw_node.cpp:1904, at /home/jacob/ws/ros/latest/src/ros2/rcl/rcl/src/rcl/publisher.c:114
With Connext it gets weird. We can create the publishers and subscriptions without an error, but the data arriving at the subscriber's end contains garbage, and sometimes we see the following error logged:
[ERROR] [1614734253.894871361] [rclcpp]: executor taking a message from topic '/chatter' unexpectedly failed: can't convert cdr stream to ros message, at /home/jacob/ws/ros/latest/src/ros2/rmw_connext/rmw_connext_cpp/src/rmw_take.cpp:193, at /home/jacob/ws/ros/latest/src/ros2/rcl/rcl/src/rcl/subscription.c:219
Perhaps this is more of a question than a bug report, but what is the expected behavior?
It's not clear if this is something that should be fixed in rmw_connext and rmw_cyclonedds, or if we should disallow this use-case and return proper error codes everywhere. In any case, I think we should certainly document what the expected behavior is somewhere (e.g. in rmw or separately for each implementation).
Required Info:
- Operating System: Ubuntu 20.04
- Installation type: from source or binaries
- Version or commit hash: Rolling
- DDS implementation: CycloneDDS, Fast-DDS, and Connext
Steps to reproduce issue
Apply the following patch to the talker demo, build, and run it:
--- a/demo_nodes_cpp/src/topics/talker.cpp
+++ b/demo_nodes_cpp/src/topics/talker.cpp
@@ -21,6 +21,7 @@
#include "rclcpp_components/register_node_macro.hpp"
#include "std_msgs/msg/string.hpp"
+#include "std_msgs/msg/int64.hpp"
#include "demo_nodes_cpp/visibility_control.h"
@@ -43,15 +44,19 @@ public:
[this]() -> void
{
msg_ = std::make_unique<std_msgs::msg::String>();
+ msg2_ = std::make_unique<std_msgs::msg::Int64>();
msg_->data = "Hello World: " + std::to_string(count_++);
+ msg2_->data = static_cast<long>(count_);
RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", msg_->data.c_str());
// Put the message into a queue to be processed by the middleware.
// This call is non-blocking.
pub_->publish(std::move(msg_));
+ pub2_->publish(std::move(msg2_));
};
// Create a publisher with a custom Quality of Service profile.
rclcpp::QoS qos(rclcpp::KeepLast(7));
pub_ = this->create_publisher<std_msgs::msg::String>("chatter", qos);
+ pub2_ = this->create_publisher<std_msgs::msg::Int64>("chatter", qos);
// Use a timer to schedule periodic message publishing.
timer_ = this->create_wall_timer(1s, publish_message);
@@ -60,7 +65,9 @@ public:
private:
size_t count_ = 1;
std::unique_ptr<std_msgs::msg::String> msg_;
+ std::unique_ptr<std_msgs::msg::Int64> msg2_;
rclcpp::Publisher<std_msgs::msg::String>::SharedPtr pub_;
+ rclcpp::Publisher<std_msgs::msg::Int64>::SharedPtr pub2_;
rclcpp::TimerBase::SharedPtr timer_;
};
Apply the following patch to the listener demo, build, and run it:
--- a/demo_nodes_cpp/src/topics/listener.cpp
+++ b/demo_nodes_cpp/src/topics/listener.cpp
@@ -15,6 +15,7 @@
#include "rclcpp/rclcpp.hpp"
#include "rclcpp_components/register_node_macro.hpp"
+#include "std_msgs/msg/int64.hpp"
#include "std_msgs/msg/string.hpp"
#include "demo_nodes_cpp/visibility_control.h"
@@ -38,15 +39,23 @@ public:
{
RCLCPP_INFO(this->get_logger(), "I heard: [%s]", msg->data.c_str());
};
+ auto callback2 =
+ [this](const std_msgs::msg::Int64::SharedPtr msg) -> void
+ {
+ RCLCPP_INFO(this->get_logger(), "I also heard: [%ld]", msg->data);
+ };
+
// Create a subscription to the topic which can be matched with one or more compatible ROS
// publishers.
// Note that not all publishers on the same topic with the same type will be compatible:
// they must have compatible Quality of Service policies.
sub_ = create_subscription<std_msgs::msg::String>("chatter", 10, callback);
+ sub2_ = create_subscription<std_msgs::msg::Int64>("chatter", 10, callback2);
}
private:
rclcpp::Subscription<std_msgs::msg::String>::SharedPtr sub_;
+ rclcpp::Subscription<std_msgs::msg::Int64>::SharedPtr sub2_;
};
} // namespace demo_nodes_cpp
Expected behavior
I'm not sure what to expect.
Actual behavior
It depends on the RMW. See description above.
Additional information
Here is an old issue that is somewhat related: ros2/rmw_connext#234
From the discussion there, it sounds like this use-case is not supported by the DDS spec.
jacobperron changed the title from "One node and multiple publishers on same topic with different type" to "One node and multiple publishers on same topic with different types" on Mar 3, 2021.