
Actions proposal #193

Open · wants to merge 70 commits into base: gh-pages
Conversation

@sloretz
Contributor

sloretz commented Oct 12, 2018

This PR contains proposed changes and additions to the actions design doc started by @gbiggs in PR #183. It is targeted at gh-pages instead of gbiggs:gh-pages so the PR appears on this repo and the waffle board.

@sloretz


Contributor

sloretz commented on articles/actions.md in 95f6ad3 Oct 11, 2018

Recommend auto future = action_client->async_send_goal(goal_msg, feedback_callback) to match service client->async_send_request(...) https://github.com/ros2/examples/blob/3b60ee1b2e4712d329511531107ec8d88eb03cd7/rclcpp/minimal_client/main.cpp#L38
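For illustration, a side-by-side sketch (the service call is the existing rclcpp API; the action call is the proposal, not an existing interface):

// Existing rclcpp service client pattern:
auto request = std::make_shared<AddTwoInts::Request>();
auto response_future = client->async_send_request(request);

// Proposed action client analogue (hypothetical API, shown for symmetry):
auto goal_future = action_client->async_send_goal(goal_msg, feedback_callback);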

sloretz and others added some commits Oct 11, 2018

It is responsible for
* advertising the action to other ROS entities
* accepting or rejecting requests from one or more action clients


@jacobperron

jacobperron Oct 15, 2018

Member

For consistency, we may consider using the term "goal" here (and elsewhere in the doc) when referring to requests from action clients.

* *Aborted*
* The action server failed to reach the goal
### Feedback Topic


@paulbovbel

paulbovbel Oct 15, 2018

It looks like this is mirroring the concept of a single 'feedback' topic from ROS1. This is really suboptimal in terms of bandwidth and processing. If you have one busy action server sending feedback to many clients, the server ends up sending out data about M goals to N clients.

Is there any reasonable mechanism that would allow the clients to only receive/process the feedback they care about?


@jacobperron

jacobperron Oct 15, 2018

Member

The only thing that comes to mind is using topic namespaces to reduce overhead. For example, we could use some unique information about the client as part of the feedback/status topic names:

_action/fibonacci/<client_id>/feedback
_action/fibonacci/<client_id>/status

But, then there should be some mechanism for communicating this information to the server...
Alternatively, the goal IDs could be used instead of client IDs.

Any other thoughts/suggestions on the matter are welcome.


@jacobperron

jacobperron Oct 15, 2018

Member

It's possible this solution increases overhead in the middleware, but reduces bandwidth and processing for the nodes?


@wjwwood

wjwwood Oct 15, 2018

Member

This sounds like an ideal case for the "keyed data" feature in DDS, which we're hopefully going to expose once we support IDL. The idea is that in your message definition you annotate that one of the fields is a key; when subscribing, each different value for the key produces a separate "instance", which essentially means a separate message queue. But also, importantly for this case, you can do what's called "content filtered subscription", which allows subscribers to express something like "I only want messages where the key field ID is X", and if the middleware supports it then it will do publisher-side filtering based on that request.

The other option, as @jacobperron pointed out, is to include the goal id into the topic name, which will have its own overhead.

In both cases, I'd guess that small numbers of simultaneous goals would be better off filtering unwanted messages on the subscriber side. Only in extreme cases where the feedback topic's content is especially large or when many, many goals are in progress simultaneously (and from different clients) would there actually be an incentive to have a separate mechanism. So I fear we might be optimizing for the uncommon cases at this point.
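For small numbers of goals, subscriber-side filtering would amount to something like this sketch (the message and field names are illustrative, not a settled design):

// Track the goals this client sent; drop feedback for everything else.
std::set<std::array<uint8_t, 16>> my_goal_ids;
auto feedback_cb = [&my_goal_ids](const FibonacciFeedback::SharedPtr msg) {
    if (my_goal_ids.count(msg->goal_id.uuid) == 0) {
      return;  // feedback for another client's goal; ignore it
    }
    process_feedback(*msg);  // application-specific handling
  };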


@paulbovbel

paulbovbel Oct 15, 2018

You could specify the feedback/status endpoint in the goal I suppose. Per-process endpoints (e.g. client_id, or node_name) could be the right balance between bandwidth overhead and middleware overhead.

What is middleware overhead in this context? The number of topics?


@wjwwood

wjwwood Oct 15, 2018

Member

Allowing the client to specify a custom feedback topic would be useful to alleviate the extreme cases I pointed out, but there's also value in being able to know what the feedback topic will be without fail (for things like recording). It would also complicate the ROS interface between action client and server, as well as complicating the implementation of the action server.

That being said it might be worth doing.

What is middleware overhead in this context? The number of topics?

Yes, but I actually think of it as two dimensional. First, the number of topics obviously increases the memory overhead and potentially the number of sockets used. Second, the creation and destruction of new topics has overhead (think of it like threads). This might be alleviated with "topic pools" which are reused over time, but that would not be possible if the client chose the topic.

Basically, every time you create a pub or sub on a topic there is discovery information and QoS/type matching that needs to occur, which is asynchronous and relatively expensive (sometimes involving broadcasting information to the whole graph, requiring the creation of new sockets, etc.). It would also likely increase the latency between "goal accepted" and first "goal feedback received", assuming feedback would be immediately available.


@gbiggs

gbiggs Oct 30, 2018

Contributor

I'm not keen on allowing clients to specify the feedback topic as a solution to the feedback fire hose problem. I think this requires a design decision in the implementation of the client that may not be decidable when that client is implemented.

To be specific, let's say I have an existing action server I want to use and it is popular in my system - its actions are called frequently from many clients. Furthermore, some of those clients are developed by me and some are not. I can make my clients declare a personal feedback topic to avoid the information overload, but I cannot guarantee that the other clients do. Their developers may have assumed that the action would be used only by one or a couple of clients at a time, for example. This may mean that those other clients get overloaded with probably irrelevant feedback because the assumption made when they were developed is no longer valid.

So to summarise, if clients specifying the feedback topic is how the feedback volume problem will be solved, then it needs to be a requirement for all clients, not an option.

I am not opposed to allowing clients to specify the feedback topic in general, just as a solution to this particular problem.

I prefer using keyed data as the ideal solution. Features like that are why ROS2 uses DDS in the first place, right? If keyed data is not a satisfactory solution, then either assume this is a rare enough case to not bother with, or create feedback topics based on the goal ID.

* **Content**: Goal id, user defined feedback message
This topic is published by the server to send application specific progress about the goal.
It is up to the author of the action server to decide how often to publish the feedback.


@paulbovbel

paulbovbel Oct 15, 2018

It may also be interesting to be able to specify, either client or server-side, whether the feedback can be lossy. There are definitely applications for either option, and it would be nice if it wasn't a hard-coded internal QoS setting.


@wjwwood

wjwwood Oct 15, 2018

Member

I think the intention was to expose the feedback topic's QoS settings, but that might be complicated if different clients ask for different QoS settings, so it would likely need to be set by the server and it would be up to the clients to use compatible QoS settings.


@jacobperron

jacobperron Oct 15, 2018

Member

Yes, I can add a section about QoS settings.

In ROS 1, action clients are responsible for creating a goal ID when submitting a goal.
In ROS 2, the action server will be responsible for generating the goal ID and notifying the client.
The server is better equipped to generate a unique goal ID than the client because there may be multiple clients who could independently generate the same goal ID.


@paulbovbel

paulbovbel Oct 15, 2018

using a UUID may mitigate this, and still allow the client to 'know' the ID of a goal without ever talking to a server. This can be helpful when correlating 'actions' with other libraries/systems.
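For example, a client could generate a random version-4 UUID locally; a minimal sketch using only the standard library:

#include <array>
#include <cstdint>
#include <random>

std::array<uint8_t, 16> generate_goal_id()
{
  std::random_device rd;
  std::uniform_int_distribution<int> dist(0, 255);
  std::array<uint8_t, 16> uuid;
  for (auto & byte : uuid) {
    byte = static_cast<uint8_t>(dist(rd));
  }
  // Set the version (4, random) and variant bits per RFC 4122.
  uuid[6] = (uuid[6] & 0x0F) | 0x40;
  uuid[8] = (uuid[8] & 0x3F) | 0x80;
  return uuid;
}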


@jacobperron

jacobperron Oct 15, 2018

Member

This seems reasonable. The original reason for having the server generate the ID is to avoid a race condition where the client may try to use the ID to cancel a goal before it is accepted. But in retrospect, by including the feature to "cancel all goals" or "cancel all goals before timestamp" we can still have a scenario where the cancel request arrives before a goal request. I guess this scenario can be resolved by processing requests as they arrive at the server; if a cancel request arrives first, then any subsequent goal requests are not retroactively canceled. Clients will still be informed that the goal is accepted via the goal response.

In the event that a client tries to use the explicit ID to cancel a goal before it is accepted, the server can reject the request if there are no goals with that ID in a valid state (PENDING or EXECUTING).

Therefore, I see no issue with allowing clients to generate a UUID.


@mikeferguson

mikeferguson Oct 15, 2018

You used the name "PENDING/EXECUTING" here -- I would suggest we actually use "ACCEPTED" and "EXECUTING" as the two state names -- it really tells you what stage the goal is in (accepted seems to have a better semantic meaning than pending; pending could mean that handle_goal() has yet to be called or something else, while accepted really does state that the action server has acknowledged the goal but has not started executing it).


@sloretz

sloretz Oct 16, 2018

Contributor

But in retrospect, by including the feature to "cancel all goals" or "cancel all goals before timestamp" we can still have a scenario where the cancel request arrives before a goal request. I guess this scenario can be resolved by processing requests as they arrive at the server; if a cancel request arrives first, then any subsequent goal requests are not retroactively canceled

I wouldn't expect a goal to be cancellable until it has been accepted. Even if the goal is cancelled while the goal acceptance is being transmitted from server to client, the client would still receive a response to its goal request indicating the goal was accepted, followed by a response to its result request indicating the goal was cancelled.

I guess the real problem in ROS 1 isn't the goal id generation, but that actionlib doesn't notify the client when a cancel request fails. It seems like the race condition is actually solved by cancel becoming a service.

using a UUID may mitigate this, and still allow the client to 'know' the ID of a goal without ever talking to a server. This can be helpful when correlating 'actions' with other libraries/systems.

A UUID generated by a client would work.

* **Direction**: Server publishes
* **Content**: List of in-progress goals with: Goal ID, time accepted, and an enum indicating the status
This topic is published by the server to broadcast the status of goals it has accepted.


@paulbovbel

paulbovbel Oct 15, 2018

Same problem as feedback (https://github.com/ros2/design/pull/193/files#r225264321) with clients receiving data they don't care about. Additionally, there are issues where a 'lost on the wire' status message could cause a failure to transition properly client side.

I think it's really important to consider how this respin of actions is going to work on unreliable networks, as that's one of the major applications (and goals?) of ROS2/DDS.

Reading over this and the 'get_result' section above, would it make sense to have the action server 'push' results and statuses to clients via a service? It would be up to the client to specify a callback where to process the results for any submitted goals.


@sloretz

sloretz Oct 16, 2018

Contributor

Same problem as feedback (https://github.com/ros2/design/pull/193/files#r225264321) with clients receiving data they don't care about.

I don't think a client will subscribe to this topic. It is meant for introspection only.

Additionally, there are issues where a 'lost on the wire' status message could cause a failure to transition properly client side.

I think it's really important to consider how this respin of actions is going to work on unreliable networks, as that's one of the major applications (and goals?) of ROS2/DDS.

Agreed. The QoS settings definitely need to be exposed to the user.

Reading over this and the 'get_result' section above, would it make sense to have the action server 'push' results and statuses to clients via a service? It would be up to the client to specify a callback where to process the results for any submitted goals.

Since ROS 2 services are asynchronous (in rcl and rclpy, exposing this in rclcpp is tracked in ros2/rclcpp#491) I think the result is the same. Calling get_result is like asking the server to push the result to the action client in the future. It just happens the action result is communicated as the response to a service call.
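To illustrate with the existing rclcpp service API (the GetResult type here is a placeholder):

// Requesting the result is effectively scheduling a push:
auto request = std::make_shared<GetResult::Request>();
request->goal_id = goal_id;
result_client->async_send_request(
  request,
  [](rclcpp::Client<GetResult>::SharedFuture future) {
    // Invoked from spin() when the server sends the response,
    // i.e. when the result is ready.
    process_result(*future.get());  // application-specific handling
  });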


@paulbovbel

paulbovbel Oct 16, 2018

I don't think a client will subscribe to this topic. It is meant for introspection only.

Thanks for clarifying that, I assumed too much carried over from ROS1 actionlib. This picture should be updated (https://github.com/ros2/design/blob/5de40c7802fb39d71ea3489cc6a6ac65e17c61aa/img/actions/interaction_overview.png).

I think the result is the same

If the result is the same, then unless I'm missing something, the point of having the client call a service to retrieve the result is to avoid having the result proliferated to every client via topic; otherwise the 'request' portion of get_result seems redundant.

If that assumption is correct, then there exists a similar problem for the feedback comms (proliferation to all connected clients). It seems like it would make sense to use the same approach for both - i.e. the client repeatedly calling a get_feedback service.

Based on https://github.com/ros2/design/pull/193/files#r225325429, it may then make sense to migrate to a 'keyed' topic based on goal id for both feedback and result once the mechanism is available.


@sloretz

sloretz Oct 16, 2018

Contributor

If the result is the same, then unless I'm missing something, the point of having the client call a service to retrieve the result is to avoid having the result proliferated to every client via topic; otherwise the 'request' portion of get_result seems redundant.

That case wasn't considered. The assumption is that there are usually only a handful of clients (in most cases just one) per server. Are you currently using many action clients per server in ROS 1? If so, how well does it perform?

Services make some parts of the client implementation simpler. The client doesn't need to track the server's state machine. wait_for_action_server could reuse wait_for_service. It does have a drawback that for clients who never fetch the result, the server will have to decide how long to hold onto it before discarding it.


@paulbovbel

paulbovbel Oct 16, 2018

Starting to get our 'results' mixed up :)

I've seen client counts per server in the low-to-mid double digits. To some degree that's probably because of how inflexible 'services' were in ROS1, but regardless...

I don't want to advise for over-optimization, and I appreciate that with a central rcl implementation of action comms, you have the option to restructure 'later', but it may be worthwhile to make certain scaling-friendly decisions up front. Transmitting N results/feedbacks to M clients over a potentially limited comms channel feels like one of those decisions. If get_result is an acceptable strategy, then maybe get_feedback is as well.


@jacobperron

jacobperron Oct 17, 2018

Member

I don't think a client will subscribe to this topic. It is meant for introspection only.

Even though the internal workings of an action client don't need to know the status of a goal, I can see it being convenient for application developers to have access to the status via the action client API. This means the client would subscribe to the status topic.

Otherwise, assuming the topic is in some hidden namespace, it seems awkward for a developer to create a subscriber to _action/fibonacci/status to get the status.

If bandwidth/processing is a concern, we could make the status subscriber optional at the C API level. The same could be done for the feedback topic. But I'm leaning towards waiting for the DDS "keyed data" feature that was mentioned in #193 (comment) to be exposed and make use of that to address resource usage concerns, which seems like a more elegant solution IMO.

@jacobperron


Member

jacobperron commented Oct 15, 2018

Let's assume I'm porting something that was previously implemented with a SimpleActionServer:

  • I presume that I would set a thread pool of 1 to process the execute() callback. This will make sure we only process one goal at a time.
  • I presume there are some other thread(s) involved that call handle_goal(). Thus, multiple goals may be "accepted", each is added to some internal queue.
  • The single thread calls execute() with each of my accepted goals. Just before the execute() is called, the caller marks the goal as having state "Executing". The execute() callback then eventually returns a result at which point the goal is either Succeeded, Cancelled, or Aborted.

Thus anytime I have more goals than execute threads, I need both "Executing" and "Accepted" as valid states, so that someone can determine which goals are actually being executed, versus which are queued up.

I think this logic is correct, and there is value in an additional state, "ACCEPTED". Another way to go about it is to have handle_goal() defer the result (which should be possible with ROS 2 services). In this scenario, the goal service requests could be queued and responded to when ready, but I guess this leaves some state tracking to the client. Originally, I was trying to avoid a preliminary state to EXECUTING to reduce the complexity of the state machine. Adding an ACCEPTED state means that it is possible for a goal to be "recalled" before it transitions to EXECUTING. This starts to become more like the original ROS 1 action server state machine, where ACCEPTED is analogous to PENDING.

But even if we add an ACCEPTED (or PENDING) state, we still might be able to avoid the extra states implemented in ROS 1:

  • REJECTED does not have to be explicitly modeled since a service call must be completed to accept a goal before a client is allowed to cancel.
  • Unless people feel strongly that a RECALLING state is valuable, we could instead make a transition directly from ACCEPTED to CANCELLING (or CANCELED) when a client makes a cancel request.

In summary, I think adding a new state is a good idea, but I would call it PENDING.
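To make that concrete, here is a sketch of the resulting state machine as I read the discussion (not a settled design):

enum class GoalState {PENDING, EXECUTING, CANCELING, SUCCEEDED, CANCELED, ABORTED};

bool transition_allowed(GoalState from, GoalState to)
{
  switch (from) {
    case GoalState::PENDING:
      // No RECALLING state: a pending goal can be canceled directly.
      return to == GoalState::EXECUTING || to == GoalState::CANCELING ||
        to == GoalState::CANCELED;
    case GoalState::EXECUTING:
      return to == GoalState::CANCELING || to == GoalState::SUCCEEDED ||
        to == GoalState::ABORTED;
    case GoalState::CANCELING:
      return to == GoalState::CANCELED || to == GoalState::SUCCEEDED ||
        to == GoalState::ABORTED;
    default:
      return false;  // SUCCEEDED, CANCELED, and ABORTED are terminal
  }
}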


I'd suggest we might want to add some additional information on the assumed threading model in this document, since that was a common headache with ROS1 actionlib.

Agreed. We were thinking that the user could have control over the threading model (e.g. use some predefined policies or implement their own). Maybe a section on this topic and some common use cases we imagine is warranted.

@mikeferguson


mikeferguson commented Oct 15, 2018

@jacobperron I concur that ACCEPTED is useful, I also concur that we should try to avoid RECALLING/RECALLED and just go straight to CANCELLED.

@jacobperron referenced this pull request Oct 15, 2018: Actions #306 (open, 4 of 8 tasks complete)
@paulbovbel


paulbovbel commented Oct 15, 2018

#193 (comment) makes me wonder:

_action/fibonacci/<client_id>/feedback
_action/fibonacci/<client_id>/status

those look like 'internal' topics of some sort. Will actions play nicely with ROS2 bagging? In ROS1 actions were bagged (allowing introspection, logging, etc.), but ROS2 actions will likely involve services. Can services be bagged natively in ROS2?

@wjwwood


Member

wjwwood commented Oct 15, 2018

those look like 'internal' topics of some sort.

The _ at the beginning of a "token" implies it is hidden by default. All this means in practice is that command line tools will not show it unless you ask them to do so (e.g. ros2 topic --include-hidden-topics ...), but otherwise they are just normal topics and can be subscribed to, echoed, or recorded. See:

https://design.ros2.org/articles/topic_and_service_names.html#hidden-topic-or-service-names

In ROS1 actions were bagged (allowing introspection, logging, etc.), but ROS2 actions will likely involve services. Can services be bagged natively in ROS2?

The plan is to have some way of recording service calls and responses in rosbags, but there are some additional constraints we'd have to put on services in ROS 2 to make that happen (currently it's not required to use pub/sub to implement services, but something like that would be required in order to have a recording service snoop on the traffic between requester and replier). If you want to push on that issue, I'd bring it up in the rosbag discussions (though they may be too busy to discuss it in detail atm).

Also, we could have rosbag be aware of actions as a communication primitive and record the right things depending on your preferences (feedback and status only, requests and replies, etc...).

@sloretz


Contributor

sloretz commented Oct 16, 2018

I'd suggest we might want to add some additional information on the assumed threading model in this document, since that was a common headache with ROS1 actionlib.

@mikeferguson the threading model is controlled by the executor, which means a composable node with an action server doesn't have control over it. If an action server blocks in execute, then goal requests will be blocked too if a user runs it in a single-threaded executor.

I'm not sure if the execute callback is a good idea for most action servers. Instead, composable nodes should avoid blocking so they are reusable in more executors. An action server author would need to use some form of asynchronous programming with timers or guard condition callbacks to provide feedback and return a result. On the other hand, an execute callback that blocks forces an action server implementation to eventually return a result. The current plan is to offer both an execute callback and a way to provide a result later.
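For example, a non-blocking server could do a bounded amount of work per timer tick instead of blocking in execute(); a sketch, where the goal handle accessors are hypothetical:

using namespace std::chrono_literals;

// Do one step of the goal per tick; never block the executor.
auto timer = node->create_wall_timer(
  100ms,
  [goal_handle]() {
    if (goal_handle->is_cancel_requested()) {  // hypothetical accessor
      goal_handle->set_canceled();             // hypothetical
      return;
    }
    auto feedback = do_one_step();             // bounded, non-blocking work
    goal_handle->publish_feedback(feedback);   // hypothetical
    if (goal_handle->is_done()) {              // hypothetical
      goal_handle->set_succeeded();            // hypothetical
    }
  });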

Thus anytime I have more goals than execute threads, I need both "Executing" and "Accepted" as valid states, so that someone can determine which goals are actually being executed, versus which are queued up.

With just Executing, the only way to tell which goals are actually in progress is by the presence of feedback. An Accepted state before Executing would make it clearer when an action server is queuing goals. Adding Accepted also means handle_goal() needs a way to indicate that a goal is accepted but should not be executed yet.

@ruffsl


Member

ruffsl commented Oct 16, 2018

#193 (comment) makes me wonder:

_action/fibonacci/<client_id>/feedback
_action/fibonacci/<client_id>/status

That same comment makes me alarmed! ☠️ ☢️ ☣️

I would like to formally petition against the use of dynamic namespacing, or the use of namespaces for ROS2 subsystems that are not deterministically defined before runtime and outside of user control, due in part to the fact that doing otherwise would significantly complicate security -- specifically both the Policy Decision Point (PDP) and Policy Enforcement Point (PEP) implemented in the transport security.

Let's play with a toy example:
Say we want to amend a policy profile to account for a simple action server and client pair. From my understanding of the current proposal, such a profile might look vaguely something like this:

fibonacci server

<subject_name>CN=fibonacci_server</subject_name>
...
<allow_rule>
  <publish>
    <topics>
      <topic>ra/fibonacciFeedback</topic>
      <topic>ra/fibonacciStatus</topic>
      <topic>rr/fibonacciCancelReply</topic>
      <topic>rr/fibonacciGoalReply</topic>
      <topic>rr/fibonacciResultReply</topic>
    </topics>
  </publish>
  <subscribe>
    <topics>
      <topic>rq/fibonacciCancelRequest</topic>
      <topic>rq/fibonacciGoalRequest</topic>
      <topic>rq/fibonacciResultRequest</topic>
    </topics>
  </subscribe>
</allow_rule>
<default>DENY</default>
...

fibonacci client

<subject_name>CN=fibonacci_client</subject_name>
...
<allow_rule>
  <publish>
    <topics>
      <topic>rq/fibonacciCancelRequest</topic>
      <topic>rq/fibonacciGoalRequest</topic>
      <topic>rq/fibonacciResultRequest</topic>
    </topics>
  </publish>
  <subscribe>
    <topics>
      <topic>ra/fibonacciFeedback</topic>
      <topic>ra/fibonacciStatus</topic>
      <topic>rr/fibonacciCancelReply</topic>
      <topic>rr/fibonacciGoalReply</topic>
      <topic>rr/fibonacciResultReply</topic>
    </topics>
  </subscribe>
</allow_rule>
<default>DENY</default>
...

However, should the client_id be used in namespacing the action, then users would already have to start resorting to permissive wildcards, like * in fnmatch, for basic first-class ROS2 functionality; a much more dangerous prospect than static, non-interpreted string matching.
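For instance, with client IDs in the topic names, the server's publish rule above could no longer enumerate its topics statically and might degrade to something like (illustrative):

<publish>
  <topics>
    <topic>ra/fibonacci/*/feedback</topic>
    <topic>ra/fibonacci/*/status</topic>
  </topics>
</publish>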

This sounds like an ideal case for the "keyed data" feature in DDS which we're hopefully going to expose once we support IDL. The idea is that in your message definition you annotate that one of the fields is a key and then when subscribing each different value for the key produce a separate "instance" which essentially means a separate message queue.

As @wjwwood mentioned in his comment, keys are perhaps much more suited for this, and by incorporating such meta information into the message itself, one could then think of more fine-grained application oriented security features.

E.g. limiting client influence over other peer clients' requests: nodes provisioned access to certain specified client ID ranges should only be able to modify server requests from other clients within their own range, like with GIDs in *nix. For example, an e-stop client could cancel all MoveIt goals for the assembly line, but robot-arm-1 should only be able to cancel goals that pertain to the robot-arm-1 platform.

Summarizing:
To keep the security implementation for actions well defined, I'd discourage such dynamic elements (e.g. client/session/goal IDs, etc.) from being inserted into the resolvable action subsystem namespaces.

The QoS settings of this service should be set so that the client is guaranteed to receive a response; otherwise an action could be executed without the client being aware of it.
### Cancel Request Service


@sservulo

sservulo Oct 16, 2018

Wouldn't it be more intuitive (from a user perspective) to split this service into a cancel (for single actions) and a cancel_all?
cancel_all could keep the same timestamp logic (without a timestamp, cancel everything; with one, cancel everything before the timestamp).
As for cancel, it would just cancel the one action with the provided ID. The behavior in cell [1,1] of the table assumes the action to be canceled is surrounded by actions you want to keep (sequentially speaking); otherwise you would just call the behavior in [0,1] using the action timestamp and cancel everything up to that action. That is syntactic sugar for cancel_all(timestamp) + cancel(id), which I think might be easier to call but counter-intuitive given the all-vs-one mix.


@wjwwood

wjwwood Oct 16, 2018

Member

Combining the functionality is likely an attempt to limit the number of services (for ROS 2, or the number of topics in ROS 1).


@jacobperron

jacobperron Oct 16, 2018

Member

We can always add syntactic sugar in the client libraries (and even at the C API level).
At the core of actions, I think a single cancel service suffices for canceling one or more goals. We don't necessarily have to adhere to the odd cancel policy from ROS 1 actions and could make a new policy. But unless there is a good reason not to, it seems better to keep it the same for porting any ROS 1 code that made use of the policy.

@jacobperron


Member

jacobperron commented Oct 16, 2018

Thanks for the review, everyone!

Updates to the design doc:

  • Added ACCEPTED state to the goal state machine.
    • This allows users to queue/list goals and know which ones have begun executing.
  • Removed transition EXECUTING -> CANCELED. This lets the core handle transitions after an accepted cancel request and the implementer confirm with 'set_canceled'.
  • Minor typo, grammar, consistency corrections.
  • Updated the dishwasher example to alleviate any confusion with IDs.
  • Client is now the sole entity responsible for generating goal IDs as UUIDs.
  • Added section in "Alternatives" regarding using multiple topics for feedback/status (instead of the proposed single topic per action server).

TODO:

  • Add section summarizing differences between ROS 1 and ROS 2
    • Any differences in terminology
    • API differences side-by-side
    • Example from MoveIt!
  • More sequence diagram examples
* **Response**: status of goal and user defined result
The purpose of this service is to get the final result of a goal.
After a goal has been accepted the client should call this service to receive the result.


@mkhansen-intel

mkhansen-intel Oct 22, 2018

So can I assume that like other services, there will be a callback to the client when the result is ready?


@jacobperron

jacobperron Oct 22, 2018

Member

Yes. The idea is to follow a similar pattern as services. Although it is not shown in the proposed minimal example,

https://github.com/ros2/examples/blob/01f24c9c64003be894584c53085c3910ec2268b7/rclcpp/minimal_action_client/not_composable.cpp#L36-L38

I believe we should be able to also provide a callback (like the service API) for responses to goal requests, feedback, and results:

auto goal_handle = action_client->async_send_goal(goal_msg, goal_response_cb, feedback_cb, result_cb);
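where the callbacks might be defined along these lines (the signatures are speculative):

auto goal_response_cb = [node](bool accepted) {
    RCLCPP_INFO(node->get_logger(), "goal %s", accepted ? "accepted" : "rejected");
  };
auto feedback_cb = [node](const Fibonacci::Feedback & feedback) {
    RCLCPP_INFO(node->get_logger(), "got feedback");
  };
auto result_cb = [node](const Fibonacci::Result & result) {
    // Invoked from spin() once the result response arrives.
    RCLCPP_INFO(node->get_logger(), "got result");
  };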


@gbiggs

gbiggs Oct 27, 2018

Contributor

Would this need an additional topic, or would the client side set up a background thread that calls get_result() to wait for completion, then calls the callback as its final action?


@jacobperron

jacobperron Oct 29, 2018

Member

I don't think this needs an additional topic. The client side can make an async result request (e.g. after getting a goal response) and during spin the client can check if a result response has been received, making any user-defined callbacks as needed.

An action server provides an action.
Like topics and services, an action server has a name and a type.
There is only one server per name.


@gbiggs

gbiggs Oct 27, 2018

Contributor

Should we add a note that the name may be namespaced to clarify that it is the name that is unique, whereas an action type may have multiple simultaneous instances?


@jacobperron

jacobperron Oct 31, 2018

Member

Added more detail to clarify.

- executing the action when a goal is received and accepted
- optionally providing feedback about the progress of all executing actions
- optionally handling requests to cancel one or more actions
- sending the result of an action, including whether it succeeded, failed, or was canceled, to the client when the action completes


@gbiggs

gbiggs Oct 27, 2018

Contributor

This implies the server sends the result when the action completes in whatever way, which conflicts with the proposed middleware implementation, which requires the client to query the result using a service. Change "when the action completes" to "when the client requests the result of a completed action".


@jacobperron

jacobperron Oct 31, 2018

Member

Updated wording.

### Action Client
An action client sends a goal (an action to be performed) and monitors its progress.
There may be multiple clients per server; however, it is up to the server to decide how goals from multiple clients will be handled.


@gbiggs

gbiggs Oct 27, 2018

Contributor

"handled simultaneously"?


@jacobperron

jacobperron Oct 31, 2018

Member

Added.

- sending a goal to the action server
- optionally monitoring the feedback for a goal from the action server
- optionally monitoring the status for a goal from the action server


@gbiggs

gbiggs Oct 27, 2018

Contributor

What is the difference between this and the above line?


@jacobperron

jacobperron Oct 30, 2018

Member

Feedback is user-defined and status reflects the goal state. They are separate topics (like in ROS 1). It is possible that an application with an action client only cares about one of the topics, and so they are "optional".


@gbiggs

gbiggs Oct 31, 2018

Contributor

I think it is worth clarifying that in the document. Perhaps something like:

- optionally monitoring the user-defined feedback for a goal from the action server
- optionally monitoring the current state of the goal in the action state machine on the action server

(That last one is a bit of a mouthful though.)

This was done to avoid increasing the work required to create a client library in a new language, but actions turned out to be very important to packages like the [Navigation Stack](http://wiki.ros.org/navigation) and [MoveIt!](https://moveit.ros.org/)<sup>[1](#separatelib)</sup>.
In ROS 2, actions will be included in the client library implementations.
The work of writing a client library in a new language will be reduced by creating a common implementation in C.


@gbiggs

gbiggs Oct 27, 2018

Contributor

rcl is usable as a client library for users of ROS. It would be good to see an example of the proposed C API (unless there is one and I missed it?).


@sloretz

sloretz Oct 29, 2018

Contributor

PR proposing C API ros2/rcl#307


@gbiggs

gbiggs Oct 30, 2018

Contributor

Thanks, I missed that (still catching up on a lot of stuff).

The purpose of this service is to request the cancellation of one or more goals on the action server.
The result indicates which goals will be attempted to be canceled.
Whether or not a goal is actually canceled is indicated by the status topic and the result service.


@gbiggs

gbiggs Oct 27, 2018

Contributor

Doesn't this conflict with the content of the response defined immediately above?


@sloretz

sloretz Oct 29, 2018

Contributor

What's the conflict?

Maybe the meaning of "actually canceled" is ambiguous. The result indicates if the cancel_goal transition is taken. This sentence is trying to say that the response to the service does not guarantee the goal will go through the set_canceled transition. An action server could complete a goal after the server accepts a cancellation request but before the code doing the execution is notified.


@gbiggs

gbiggs Oct 30, 2018

Contributor

That's the conflict I was referring to.

Change the sentence to "Whether or not a goal transitions to the CANCELED state is indicated by the status topic and the result service."


@jacobperron

jacobperron Oct 31, 2018

Member

Sentence changed.

- **Direction**: client calls server
- **Request**: goal ID and timestamp
- **Response**: list of goals that have transitioned to the CANCELING state


@gbiggs

gbiggs Oct 27, 2018

Contributor

How can the client distinguish between a goal that failed to cancel, and a goal that was not present on the action server (e.g. wrong timestamp for the goal ID, or no goals with the ID when no timestamp given)?


@jacobperron

jacobperron Oct 31, 2018

Member

Good question. I don't think we've considered the idea of communicating failure codes for cancel requests. It sounds like a nice thing to add to the cancel service definition, e.g.:

int8 RESULT_OK = 0
int8 RESULT_REJECTED = 1
int8 RESULT_INVALID_GOAL_ID = 2

# Goal info containing an ID and timestamp
GoalInfo goal_info
---
# Goals that accepted the cancel request
GoalInfo[] goals_canceling
# Result code
int8 result
#### Example 2
This example is almost identical to the first, but this time the action client requests for the goal to be canceled mid-execution.


@gbiggs

gbiggs Oct 27, 2018

Contributor

No message goes from handle_cancel to the user-defined execution method.


@jacobperron

jacobperron Oct 29, 2018

Member

The handle_cancel and handle_goal are meant to represent potential user-defined callbacks. The "user-defined execution method" represents the actual execution of the goal (that is triggered upon accepting a goal request). E.g. the execute method from the proposed rclcpp example.


@gbiggs

gbiggs Oct 30, 2018

Contributor

Nevertheless, the diagram needs to have some indication that the user-defined code receives notification about the goal being cancelled for it to match the text above:

Note that the user defined method is allowed to perform any shutdown operations after the cancel request before returning with the cancellation result.

This is what is happening in the example with the check for whether the goal has been cancelled or not.

@gbiggs


Contributor

gbiggs commented Oct 30, 2018

Also, we could have rosbag be aware of actions as a communication primitive and record the right things depending on your preferences (feedback and status only, requests and replies, etc...).

I think this is a requirement of making actions a first-class citizen in ROS2. All the tools, not just ros2 action, should be action-aware.

@sloretz


Contributor

sloretz commented Oct 31, 2018

Would this need an additional topic, or would the client side set up a background thread that calls get_result() to wait for completion, then calls the callback as its final action?

@gbiggs If I understand correctly, I don't think a topic or a background thread is needed. At the rcl layer the client would send the "get result" service request as soon as it learns the goal is accepted. When the "get result" service response is received, the service becomes ready in the wait set. The executor in rclcpp or rclpy would then call a callback in spin() to handle the result.
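Roughly, from the client library's perspective (all names here are hypothetical):

// 1. The goal response arrives: immediately request the result.
void on_goal_response(const GoalResponse & response)
{
  if (response.accepted) {
    result_client->async_send_request(
      make_result_request(response.goal_id), on_result_response);
  }
}

// 2. Later, inside spin(): the result response makes the service client
// ready in the wait set and the executor invokes this callback.
void on_result_response(ResultResponse::SharedPtr response)
{
  user_result_callback(*response);
}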

abstract:
Actions are one of the three core types of interaction between ROS nodes.
This article specifies the requirements for actions, how they've changed from ROS 1, and how they're communicated.
author: '[Geoffrey Biggs](https://github.com/gbiggs)'


@gbiggs

gbiggs Nov 1, 2018

Contributor

Don't forget to correct the author line.
