
invalid_argument when using intra-process comms with queue size 0 #727

Closed
cottsay opened this issue May 21, 2019 · 5 comments · Fixed by #729
Labels
bug Something isn't working

Comments

@cottsay
Member

cottsay commented May 21, 2019

Bug report

Required Info:

  • Operating System:
    • Ubuntu 16.04
  • Installation type:
    • source
  • Version or commit hash:
    • 005131
  • DDS implementation:
    • Fast-RTPS
  • Client library (if applicable):
    • rclcpp

Steps to reproduce issue

  1. Set use_intra_process_comms(true)
  2. Set QoS queue size to 0 with "keep all" behavior (maybe even SystemDefaultsQoS??)
  3. Create a publisher

Expected behavior

At the very least an actionable error message, but preferably a functioning node and publisher

Actual behavior

terminate called after throwing an instance of 'std::invalid_argument'
  what():  size must be a positive, non-zero value
...
#6  0xXXXXXXXXXXXXXXXX in rclcpp::mapped_ring_buffer::MappedRingBuffer<geometry_msgs::msg::Twist_<std::allocator<void> >, std::allocator<geometry_msgs::msg::Twist_<std::allocator<void> > > >::MappedRingBuffer (this=0xXXXXXXXXXXXX, size=0, 
    allocator=std::shared_ptr<std::allocator<geometry_msgs::msg::Twist_<std::allocator<void> > >> (use count 2, weak count 0) = {...}) at /opt/ros/master/install/include/rclcpp/mapped_ring_buffer.hpp:82
...

Additional information

The workaround is to set a non-zero queue size and use "keep last" behavior.

@cottsay cottsay added the bug Something isn't working label May 21, 2019
@cottsay
Member Author

cottsay commented May 21, 2019

@rotu - feel free to add anything I might have missed.

@ivanpauno
Member

The problem here is that the MappedRingBuffer used for intra-process communication is sized using the history depth value. When using KeepAll, we should rely on a different value for sizing the MappedRingBuffer.
I think DDS internally uses a value based on RESOURCE_LIMITS to size its buffers under the KeepAll QoS policy, so we should do something similar. However, that information isn't accessible in rclcpp, and I'm not completely sure what the "correct" value to use is.
Maybe just throwing a clear error saying "KEEP_ALL history policy not supported with intra process communication" is enough for the moment.
@wjwwood any thoughts?

@wjwwood
Member

wjwwood commented May 21, 2019

I asked @cottsay to open it, so I'm familiar with the cause of the issue.


maybe even SystemDefaultsQoS??

SystemDefaultsQoS shouldn't be keep all unless you configure it as such with an external XML file using the configuration mechanism provided by the middleware (not via ROS), though perhaps the default is that. Either way, I don't recommend using it unless you explicitly want the external overrides.

I think internally DDS use a value based on RESOURCE_LIMITS for sizing its buffers with KeepAll qos policy, so we should do something similar. Actually, that information isn't accessible in rclcpp and I'm not completely sure what is the "correct" value to use.

We need to add the resource limits to our QoS, otherwise KEEP_ALL cannot be configured at all. Now that we support IDL and therefore keyed messages (and therefore we could have instances), we need the resource limits as well as that's how you can configure the number of allowed instances on a subscription (DataReader).

I don't know if there's a ticket for that or not.

Maybe just throwing a clear error saying "KEEP_ALL history policy not supported with intra process communication" is enough for the moment.

That plus an error if it is KEEP_LAST with depth 0. I think better errors are the right way to go for now.

@ivanpauno
Member

That plus an error if it is KEEP_LAST with depth 0. I think better errors are the right way to go for now.

Ok, mind if I open a PR with that?

@wjwwood
Member

wjwwood commented May 21, 2019

Sure.

nnmm pushed a commit to ApexAI/rclcpp that referenced this issue Jul 9, 2022
* Add fault injection macros to rcl functions

Signed-off-by: Stephen Brawner <brawner@gmail.com>

* PR fixup

Signed-off-by: Stephen Brawner <brawner@gmail.com>
DensoADAS pushed a commit to DensoADAS/rclcpp that referenced this issue Aug 5, 2022
* Add set_rate to PlayerClock

Signed-off-by: Emerson Knapp <eknapp@amazon.com>