
Integrate nodelet support for capabilities and apps #115

Closed
bit-pirate opened this issue Dec 9, 2013 · 7 comments
@bit-pirate
Collaborator

Implement nodelet manager support for capabilities on the app manager side. Changes required on the capability server side are already in place: osrf/capabilities#31

@ghost ghost assigned bit-pirate Dec 9, 2013
@bit-pirate
Collaborator Author

Test this using turtlebot apps and make sure it solves this issue: turtlebot/turtlebot_apps#48

@bit-pirate
Collaborator Author

Looking into this, I realised it's not a trivial issue, since there are multiple use cases:

  1. App
    1. wants to load one or more of its nodelets into one or more of its own nodelet managers.
      • need to support this, since the nodelet manager's name changes when being started by the app manager (namespace!)
    2. wants to load one or more of its nodelets into one or more nodelet managers being part of one or more capabilities
      • need to differentiate this from the above case and assign the nodelets to the correct nodelet manager
  2. Cap
    • wants to load one or more nodelets into one or more nodelet managers being part of capabilities it depends on.
      • capability server's responsibility

Am I missing any use cases? @stonier @jihoonl @wjwwood

I'll tackle case 1.ii in this issue, create a new one for case 1.i (#119), and discuss case 2 over at osrf/capabilities#31.

@bit-pirate
Collaborator Author

The TB follower app is a good example of case 1.ii: it starts up two nodelets to be loaded into the mobile base's nodelet manager and one nodelet to be loaded into the RGBD sensor's nodelet manager.

Here is my current idea on how to handle this case:

To load a nodelet into the nodelet manager of the right capability, we use the capability's name as a placeholder. Here is an example of how this would look in the rapp's launcher:

```xml
<launch>
  <!-- Load controllers into the base's nodelet manager -->
  <include file="$(find turtlebot_follower)/launch/includes/velocity_smoother.launch.xml">
    <arg name="nodelet_manager_name" value="std_capabilities/DifferentialMobileBase/NodeletManager"/>
  </include>
  <include file="$(find turtlebot_follower)/launch/includes/safety_controller.launch.xml">
    <arg name="nodelet_manager_name" value="std_capabilities/DifferentialMobileBase/NodeletManager"/>
  </include>

  <!-- Load the turtlebot follower into the 3D sensor's nodelet manager to avoid point cloud serialisation -->
  <node pkg="nodelet" type="nodelet" name="turtlebot_follower"
        args="load turtlebot_follower/TurtlebotFollower std_capabilities/RGBDSensor/NodeletManager">
    <remap from="turtlebot_follower/cmd_vel" to="follower_velocity_smoother/raw_cmd_vel"/>
    <remap from="depth/points" to="camera/depth/points"/>
    <param name="enabled" value="true" />
    <param name="x_scale" value="7.0" />
    <param name="z_scale" value="2.0" />
    <param name="min_x" value="-0.35" />
    <param name="max_x" value="0.35" />
    <param name="min_y" value="0.1" />
    <param name="max_y" value="0.5" />
    <param name="max_z" value="1.2" />
    <param name="goal_z" value="0.6" />
  </node>
</launch>
```

So, this solution would work for capabilities containing only one nodelet manager. However, a more complicated capability like navigation might contain more than one manager. For that case, the only solution I can come up with is standardising the manager names, which would then be used instead of NodeletManager in the example above.
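For illustration, here is a minimal sketch (not the actual app manager code) of how such a placeholder could be resolved at launch time. The `running_capabilities` lookup table mapping capability interfaces to the namespaces they were launched in is an assumption; in practice that information would have to come from the capability server:

```python
def resolve_nodelet_manager(placeholder, running_capabilities):
    """Resolve '<capability_interface>/<manager_name>' to the manager's real name.

    placeholder          -- e.g. 'std_capabilities/RGBDSensor/NodeletManager'
    running_capabilities -- dict mapping a capability interface to the
                            namespace its provider was launched in
    """
    parts = placeholder.rsplit('/', 1)
    if len(parts) != 2:
        raise ValueError("expected '<capability_interface>/<manager_name>'")
    interface, manager = parts
    if interface not in running_capabilities:
        raise KeyError("capability '%s' is not running" % interface)
    # Prepend the namespace the capability was actually launched in.
    return '%s/%s' % (running_capabilities[interface].rstrip('/'), manager)

# Example: the RGBD sensor capability was launched under /camera.
running = {'std_capabilities/RGBDSensor': '/camera'}
print(resolve_nodelet_manager('std_capabilities/RGBDSensor/NodeletManager', running))
# -> /camera/NodeletManager
```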

Thoughts?

@wjwwood

wjwwood commented Dec 11, 2013

For case 2, I think it is up to the robot developer to ensure capabilities can share nodelet managers by way of the provider implementations.

One other thought is that each capability server could set up one well known nodelet manager. There isn't a good reason to have more than one nodelet manager, other than fault isolation, that I can see.

@stonier
Member

stonier commented Dec 12, 2013

Fault isolation...and I'd also say ease of introspection. Nodelets can be a little perverse on a complicated robot sometimes.

I could work with both ideas: 1) a single capability nodelet manager, or 2) a guideline for naming a capability's nodelet managers, as Marcus suggested (with one per capability).

Perhaps we could start simple with 1) and move to 2) if we start feeling like 1) is unduly limiting, uncomfortable or insufficient?

@bit-pirate
Collaborator Author

> For case 2, I think it is up to the robot developer to ensure capabilities can share nodelet managers by way of the provider implementations.

AFAIK this can only be done by using absolute namespaces, which are sometimes hard to implement and maintain.
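To make that concrete, sharing a manager across providers means every provider's launch file has to hard-code the same absolute manager name. A sketch of what that looks like (the manager name and `my_pkg/MyNodelet` are made up for illustration):

```xml
<launch>
  <!-- Every provider that wants to share this manager must agree on the
       absolute name /shared_nodelet_manager, regardless of the namespace
       its own launch file is pushed into. -->
  <node pkg="nodelet" type="nodelet" name="my_nodelet"
        args="load my_pkg/MyNodelet /shared_nodelet_manager"/>
</launch>
```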

> One other thought is that each capability server could set up one well known nodelet manager. There isn't a good reason to have more than one nodelet manager, other than fault isolation, that I can see.

I remember Daniel having weird issues with nodelets when they processed a lot of traffic. Increasing the number of threads solved it, if I remember right. Unfortunately, this number would be fixed when using only one nodelet manager for all capabilities. Would setting a really high number of threads by default cause any issues?

@wjwwood

wjwwood commented Dec 12, 2013

> I remember Daniel having weird issues with nodelets when they processed a lot of traffic. Increasing the number of threads solved it, if I remember right. Unfortunately, this number would be fixed when using only one nodelet manager for all capabilities. Would setting a really high number of threads by default cause any issues?

It depends: if the nodelets are using getNodeHandle, then the number of threads you give the nodelet manager doesn't matter, because each nodelet gets its own thread. If the nodelets are using getMTNodeHandle, then those nodelets share a callback queue thread pool, which is affected by the number of threads you give the nodelet manager.

I don't think there is any risk in having a higher number of threads than you need, unless your system really can parallelise them all, at which point you go into thrashing.

My suggestion would be to start with the default thread setting and adjust it if there are problems.
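For reference, the thread pool size of a nodelet manager can be adjusted at launch time via its `num_worker_threads` parameter, so a single shared manager could still be tuned per robot. A sketch (the manager name is made up):

```xml
<launch>
  <!-- A standalone nodelet manager; num_worker_threads sizes the pool that
       services the shared (getMTNodeHandle) callback queues. -->
  <node pkg="nodelet" type="nodelet" name="capability_nodelet_manager" args="manager">
    <param name="num_worker_threads" value="16"/>
  </node>
</launch>
```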
