Document features of Context::stop_all #157

Closed · thomaseizinger opened this issue Jul 27, 2022 · 7 comments · Fixed by #201

thomaseizinger (Collaborator) commented Jul 27, 2022

Document the findings of #119.

bobdebuildr (Contributor) commented

I'm in the process of "migrating" code to the changes that were pushed in recent days. Is it correct that there is currently no stop_all function on Mailbox or Address? I couldn't quite follow what the outcome of the discussion in #119 was.

In my old code, I kept a context which I used to stop a certain actor, thus triggering the remaining actors in the chain to get dropped. I can't do the same thing with Mailbox, since there's no such function.
What is the recommended way to solve this problem?

Restioson (Owner) commented Sep 16, 2022

There is still stop_all, but there is no longer a stop function: you must choose between stop_all and stop_self when invoking it. These functions live on Context.

> In my old code, I kept a context which I used to stop a certain actor

I would suggest making a message StopAll which you send to an address (maybe a weak address?) which causes it to call stop_all.
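
As a rough sketch of that pattern (the `async_trait`-style `Actor`/`Handler` signatures are assumptions about the xtra version in play, and `MyActor`/`StopAll` are illustrative names; adjust to the traits your version exposes):

```rust
use xtra::prelude::*;

// Illustrative control message; the name StopAll is just a convention here.
struct StopAll;

// Hypothetical actor standing in for whatever runs on the address.
struct MyActor;

#[async_trait]
impl Actor for MyActor {
    type Stop = ();

    async fn stopped(self) {}
}

#[async_trait]
impl Handler<StopAll> for MyActor {
    type Return = ();

    async fn handle(&mut self, _msg: StopAll, ctx: &mut Context<Self>) {
        // Stops every actor running on this address, not only this one.
        ctx.stop_all();
    }
}
```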

bobdebuildr (Contributor) commented

Yes, I saw the stop_{all,self} functions on Context, but those aren't accessible outside of a handler, if I understand correctly.

> I would suggest making a message StopAll which you send to an address (maybe a weak address?) which causes it to call stop_all.

In #119 it was mentioned that stop_all has infinite priority, but a StopAll message would still have to sit in the queue until it is handled. This isn't a problem for my use case; I just want to confirm that there's no way to bypass the message queue.

Restioson (Owner) commented

No, not from the outside currently. I would recommend just setting the priority to max when you send it.
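
For example (a sketch only: the `priority` combinator on the send future and the `Disconnected` error type are assumptions about the exact API, so check the docs of the version you are on):

```rust
use xtra::{Address, Disconnected};

// Hypothetical helper: send the StopAll control message ahead of everything
// else waiting in the queue. The `.priority(...)` call is an assumption based
// on this thread, not a confirmed signature.
async fn shut_down(address: &Address<MyActor>) -> Result<(), Disconnected> {
    address.send(StopAll).priority(u32::MAX).await
}
```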

bobdebuildr (Contributor) commented

Thanks for the clarification! I'm really liking this library; it's so simple, yet it can do everything I want.

thomaseizinger (Collaborator, Author) commented

> In my old code, I kept a context which I used to stop a certain actor, thus triggering the remaining actors in the chain to get dropped. I can't do the same thing with Mailbox, since there's no such function. What is the recommended way to solve this problem?

You could also implement your own event loop (see the examples), which lets you keep ownership of the Mailbox so you can drop it at any time and thus shut down the actor!
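
Roughly, such a loop might look like this (a sketch under assumptions: `xtra::yield_once` receiving and handling exactly one message is an assumption about the API at the time, and the cancellation channel is an illustrative addition; the custom event loop example in the repository is the authoritative reference):

```rust
use xtra::{Actor, Mailbox};

// Hand-rolled event loop that keeps ownership of the Mailbox. Breaking out of
// the loop drops the Mailbox, which shuts the actor down - the same lever the
// old keep-a-Context code was pulling. Check the examples directory for the
// real signatures in your version.
async fn run_until_cancelled<A: Actor>(
    mut actor: A,
    mailbox: Mailbox<A>,
    mut cancel: tokio::sync::oneshot::Receiver<()>,
) {
    loop {
        tokio::select! {
            // Cancellation wins: leaving the loop drops `mailbox`.
            _ = &mut cancel => break,
            _ = xtra::yield_once(&mailbox, &mut actor) => {}
        }
    }
}
```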

Restioson (Owner) commented

Does this seem appropriate?

```rust
/// Stop all actors on this address. This bypasses the message queue, so it will always be
/// handled as soon as possible by all actors, and will not wait for other messages to be
/// enqueued if the queue is full. In other words, it will not wait for an actor which is
/// lagging behind on broadcast messages to catch up before other actors can receive the
/// shutdown message. Therefore, each actor is guaranteed to shut down as its next action
/// immediately after it finishes processing its current message, or as soon as its task is
/// woken if it is currently idle.
///
/// This is similar to calling [`Context::stop_self`] on all actors active on this address,
/// but a broadcast message that would cause [`Context::stop_self`] to be called may have to
/// wait for other broadcast messages, during which time other messages may be handled by
/// actors (i.e. the shutdown may be delayed by a lagging actor).
```
