Why do we need Context/Address::stop_all
#119
Comments
The behaviour is not quite the same, since the built-in stop message has priority over regular messages. Fundamentally, it also feels a bit strange not to have a built-in stop_all.
Is this enough to justify it as a dedicated feature? As soon as people use a multi-threaded executor, it is pretty hard to make strict ordering guarantees and unless I am missing something, we only get a slightly better latency for stopping but no strict guarantees about what does and does not get processed.
I see the symmetry aspect, but at the same time it feels redundant to have one feature if it can also be expressed through a different one. Put differently, I think orthogonality of features is more important than providing all combinations out of the box.
I wonder how small an implementation of this using the broadcast mechanism would be.
By symmetry, you mean exposing a built-in stop_all? I find it odd that one actor out of a fleet of actors should shut down all the others, but then I am generally not a fan of choreography and would implement such a requirement using an orchestration actor that manages the lifecycle of a fleet of worker actors.
One use case I'd imagine is if you are using the actors to abstract over a few handles to one resource, e.g. in …. This is just a contrived example, but generally it could be used any time all the actors need to stop because the common state resource they are accessing encounters an error.
Interesting. I think I'd model this with one actor actually owning the resource and spawning more worker actors. In other words: orchestration instead of choreography.
What would the worker actors do themselves?
Whatever the actual unit of work is? I don't know much about the domain, but my thinking would have been that instead of owning the resource, you can instantiate the workers with their unit of work already, or send it to them in a message.

To spawn multiple actors, you need the resource to be a shared reference, right? Some Arc-Mutex or something. I'd find that a bit odd if I could at the same time have one actor fully owning the resource, grabbing units of work and dispatching them to workers in a loop.

But I am not sure I'd use actors for this generally. Threadpools are useful because spawning a new thread takes significant resources, so it makes sense to reuse them. Tasks are incredibly cheap to spawn, so the same thing could be achieved by spawning tasks in a loop and joining on the handles afterwards.

I am struggling to really see the use case for spawning multiple actors on the same address, but I can also understand that if we don't have this feature natively in xtra, it is almost impossible to build on top, or only possible with a significant performance penalty by double-dispatching through a proxy actor.
Yea. It's the case for tantivy that getting a searcher from the pool blocks, unfortunately, so that's why I used multiple actors on the same address there.
"At this point, why not spawn a task?" I agree. It's also nice to have this for those use cases, and to get parity with libraries like actix which do support this, which might encourage more use.

I don't think it should cause too much of a runtime performance penalty, especially at async scales; what I mean is that polling the channel is going to be a small amount of the time spent, as I'd imagine orders of magnitude more overhead is incurred from the runtime itself. For reference, flume is natively MPMC, and IIRC that change didn't make it any slower than when it was MPSC. The main extra cost is having to check the broadcast mailbox in poll, but I'd imagine this doesn't take long when broadcasts aren't used, since a spin lock is very fast to unlock without contention (hundreds of nanoseconds or less).
Would it make sense to retrieve the searcher in the orchestration actor and send it to a worker actor once it is retrieved to actually perform the search? (Not sure if this is how things work)
I am not against the feature per se. Especially ever since we shipped #95, the feature is offered in an extremely lean way from an API perspective.
In the specific case of tantivy, I'm not 100% sure, to be honest! It might involve an explicit async semaphore to synchronize when there are not enough searchers left (as the method to acquire one is synchronous and will block), which is less elegant to me. It would be interesting to see if there are any other users of this feature; I wonder how much use it gets in actix?
To get back on topic, I am gonna try and summarise: apart from having infinite priority, the internal stop message offers nothing over a user-defined Stop message.

My final position is that I don't think the improved latency of "better" priority is worth the complexity in the implementation and the additional API. If users care about this, they can choose a priority for a regular Stop message.

For now, I'd suggest removing this feature; in the long run, we can provide an xtra-stop-handler crate instead.

Thoughts?
I've realised that this is not exactly true. The one thing it does offer is that shutdown is guaranteed to be dispatched without waiting for a lagging actor to catch up.
I agree, if it's okay that the shutdown would not be dispatched immediately.
Ah, that is indeed interesting. I guess the one is a graceful shutdown and the other one isn't?
Not really. This does raise the interesting question of how an actor would be able to call a non-privileged stop_all.
With lagging actor, you mean an actor that is currently busy processing a certain message, i.e. not returning from a handler?

I think there should be a way of shutting such an actor down. This might also be useful for a supervisor implementation. The question then is, should we maybe have a privileged stop_all?
I mean the actor which is at the tail of the broadcast queue, i.e. if the bound is 10, it has 10 messages in its queue. When it tries to broadcast stop-all, it will wait forever and deadlock.
That sounds like a strong enough reason to keep this functionality. Good thing that we fleshed it out. I'd say we keep this issue open and re-purpose it to updating the docs to reflect the findings. In summary: …
Closing in favor of #157. |
With #118, Context::stop_all is moved to Address::stop_all.

Whilst doing this change and updating the docs, I was wondering why we need to provide this functionality within xtra. If the behaviour is exactly the same as sending a Stop message to each actor, writing yourself a custom Address::stop_all function is pretty trivial.

If we still consider this too much boilerplate, we could provide an xtra-stop-handler crate that offers:

- a Stop message type
- #[derive(StopHandler)]

Is there a reason why the implementation of stop_all needs to live in xtra? There is a fair bit of code involved in providing it, like a custom envelope etc. We could save a good few lines of code by deleting it if it can really be expressed with existing building blocks.