Changed EventQueue::cancel to return boolean value #10750
In some cases it's important to know whether EventQueue::cancel succeeded. The cancel method now returns a boolean value indicating whether the cancel succeeded.
Pull request type
EventQueue::cancel now returns a boolean value indicating whether the cancel was successful.
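To make the semantics concrete, here is a minimal sketch (not the actual Mbed OS implementation; `MiniQueue` and its members are illustrative stand-ins): `cancel` returns true only if the event was found and removed before dispatch, and false if it had already run or was being dispatched.

```cpp
#include <list>

struct MiniQueue {
    std::list<int> pending;          // ids of events not yet dispatched

    int call(int id) {               // queue an event, return its id
        pending.push_back(id);
        return id;
    }

    bool cancel(int id) {            // true only if the event was still pending
        for (auto it = pending.begin(); it != pending.end(); ++it) {
            if (*it == id) {
                pending.erase(it);
                return true;
            }
        }
        return false;                // already dispatched, or never queued
    }

    void dispatch_one() {            // simulate running the oldest event
        if (!pending.empty()) pending.pop_front();
    }
};
```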
Thanks for the PR @jarvte, sorry about the delay.
I think it would be better to use the event's destructor:
Some of the reason for not returning an error is that it's deceptive and tricky. With multiple threads you can't rely on an error to mean that the event is still running.
Though the real reason is that I haven't seen a use case for cancel errors that isn't handled better by relying on the event's destructor. (Though IMO the destructor hook is a bit cleaner in C). But I'd be happy to be wrong.
Can't you? Assuming the event has some synchronisation mechanism with the canceller (here a mutex), the canceller can know it has been queued but hasn't reached its synchronised section yet. So it either hasn't been dispatched yet, or is currently being dispatched and will soon reach its synchronised section.
If cancel returns an error, that means it wasn't found in the event queue, so we can conclude it is in the process of being dispatched, which in turn means it must be on another thread, so we can block this thread and wait for it to run.
Is there a flaw in that reasoning?
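The reasoning above can be sketched roughly as follows (names are hypothetical, and `cancelled` stands in for the return value of `EventQueue::cancel`): if cancel returns false, the routine must be mid-dispatch on another thread, so the canceller blocks on the shared synchronisation state until the routine has finished.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

struct SharedState {
    std::mutex m;
    std::condition_variable cv;
    bool routine_done = false;
};

// Runs in event-queue context.
void routine(SharedState &s) {
    std::lock_guard<std::mutex> lk(s.m);
    // ... the synchronised section of the work ...
    s.routine_done = true;
    s.cv.notify_all();
}

// Runs in the canceller's context.
void cancel_or_wait(SharedState &s, bool cancelled) {
    if (cancelled) {
        return;                      // never dispatched; nothing to wait for
    }
    // Not found in the queue: it is running (or about to run) elsewhere,
    // so wait for it to pass its synchronised section.
    std::unique_lock<std::mutex> lk(s.m);
    s.cv.wait(lk, [&] { return s.routine_done; });
}
```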
I think it's a fair assumption that anyone trying to cancel their events should have some synchronisation mechanism with the event itself. They must be sharing some sort of state, or they wouldn't be needing/wanting to cancel it, and caring whether it had expired or not, hence they should have some way of synchronising with it. (So probably a mutex, an atomic, or knowledge that they're in the same event queue.)
In the example we're aiming for here, there is a mutex, and the routine clears the mutex-protected "event_queued" flag.
If we were to also clear in the destructor, the destructor would also need to take the mutex. But then we've got the problem that we've got a new state to worry about - between the routine and the destructor. If the routine doesn't clear the flag, we can have "event_queued" true, but the routine's already run. If the routine does clear the flag, then someone can queue another routine, but the destructor for the first clears it.
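The overlap problem can be shown sequentially (all names hypothetical): if the first event's destructor clears the flag unconditionally, it can wipe out state that now belongs to a second, freshly queued event.

```cpp
struct Subsystem {
    bool event_queued = false;
    int queued_id = 0;               // id of the event currently queued
};

// Destructor-style cleanup for event `id`, clearing unconditionally:
// this may clear a flag that belongs to a *newer* event.
void naive_event_destroyed(Subsystem &s, int /*id*/) {
    s.event_queued = false;          // wrong if a second event was queued
}

// Guarded version: only clear if this is still the current event.
void guarded_event_destroyed(Subsystem &s, int id) {
    if (s.queued_id == id) {
        s.event_queued = false;
    }
}
```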
This is making my head hurt a bit. I would like to think about improving Event+EventQueue to forward better, to maybe make more complex objects with destructors usable as the functors, but I'm not sure borrowing the destructor is the answer for the routine synchronisation, at least in this example where we have to synchronise an immediate delete. If the whole thing was reference counted with shared pointers, it might be the answer.
Hi, sorry for my terribly bad response times. If this is blocking something don't let me stop it.
I still believe using the equeue destructor is the correct solution. Cleaning up an event's state is what the destructor was designed for. It is less risky than relying on knowledge of when the event is alive.
If you need a temporary solution that doesn't create an API change, consider using the SharedPtr.
The unfortunate fact is this is the C++03 solution to lifetime problems until we adopt C++11. Just look at the STL functions before emplace_back and friends.
Isn't the sharing of state what encapsulation solves?
Keep in mind the destructor runs if either 1. the event is canceled, or 2. the routine runs. So I don't believe you will run into the overlapping issue.
For #10684, I think you just need the destructor to set the _event_id to 0 in any situation.
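A sketch of that suggestion (names hypothetical, not the actual #10684 code): since equeue destroys the functor both when the event runs and when it is cancelled, the functor's destructor can zero the stored event id in either case.

```cpp
struct Owner {
    int _event_id = 0;               // nonzero while an event is outstanding
};

struct Functor {
    Owner *owner;
    void operator()() { /* ... the work ... */ }
    ~Functor() { owner->_event_id = 0; }   // runs on dispatch *or* cancel
};
```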
It's pretty heavy though. There's a possible optimisation in
Ah, well the handy thing there is we have adopted C++11. So if you can cook up a C++11 answer, I'm all for it.
The overlap I'm considering is for where the event is not encapsulating information, but is just a trigger to start work. (Most common case?)
I can't figure out how to make that work. Failure scenario:
That's assuming the event running clears the event ID inside its synchronisation logic. If it doesn't, then you get this failure scenario:
I'm sure there's some trick to make that work by putting logic into the destructor to retrigger a copy of itself, but this is getting incredibly complicated.
Another way of looking at it - we're not at all interested in the lifetime of the event object itself. It could linger on forever as far as we care. It's only a trigger. The only thing we care about is whether the event routine's synchronised section is due to run. And the event object itself can't determine that - only the event queue has that information available.
It seems like it would make sense to use the destructor of the event object to deal with any lifetime issues it has itself - eg destroying a shared pointer inside it. But we need synchronisation with the event queue and in the event routine.
Certainly you could make this work by using a shared (or weak) pointer in the event object, but that's forcing even more dynamic allocation into the design, which I want to avoid. In this scenario, ideally both the network interface and the event itself would be entirely static. At the minute we can at least have the interface be entirely static, and there's still potential to permit static events.
Oh ok, so if I understand correctly you're dealing with a chain of multiple events, where the thing you need to clean up persists across the chain?
This does actually seem like a case where using reference counting would protect everything quite nicely, but I'll give you that it does get more complicated.
Let's go with adding the boolean return value to cancel if it makes life easier. I don't have that strong an argument against it. As long as it comes in through the normal API change runway in Mbed OS's release schedule (minor release?); public API changes shouldn't be easy.
Then you just need to modify EventQueue to do perfect forwarding :)
Well, it's even simpler than that, really. The event is just a singleton trigger from "arbitrary thread context" to start "work on event queue". It contains no information, it's just the queued request to "start work".
The model (which is very typical in various Mbed components) is
Notionally, this model would settle for a totally static handler routine and a squashable "event flag" type trigger - we don't really need to queue an object in the event queue.
We need to cancel in the destructor of the subsystem - the destructor has to (synchronously) cancel any pending events. (Or otherwise ensure they would be no-ops that don't touch our destroyed object.)
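A hedged sketch of that pattern (all names illustrative; `FakeQueue` stands in for the real EventQueue): the subsystem's destructor synchronously cancels its pending event so the routine can never touch a destroyed object.

```cpp
#include <mutex>

struct FakeQueue {
    bool pending = false;
    bool cancel() { bool was = pending; pending = false; return was; }
};

struct Subsystem {
    FakeQueue &queue;
    std::mutex m;
    bool event_queued = false;

    explicit Subsystem(FakeQueue &q) : queue(q) {}

    void trigger() {
        std::lock_guard<std::mutex> lk(m);
        if (!event_queued) {
            queue.pending = true;    // stand-in for EventQueue::call
            event_queued = true;
        }
    }

    ~Subsystem() {
        std::lock_guard<std::mutex> lk(m);
        if (event_queued && !queue.cancel()) {
            // Cancel failed: the routine is mid-dispatch. A real subsystem
            // would block here until the routine has passed its
            // synchronised section (see the wait logic discussed above).
        }
        event_queued = false;
    }
};
```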
Yes, if the "event queued flag" and a "cancel flag" were actually accessed via a shared pointer, rather than being embedded in the subsystem object, and that shared pointer was passed in the event object (as weak?), then it would work, but I'd rather avoid the dynamic allocation (and the need to explain the mechanism).
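For completeness, the weak-pointer variant being weighed here could look roughly like this (hypothetical names, and note the `make_shared` allocation it forces): the flags live in a shared block, the event body holds a weak_ptr, and once the subsystem is gone `lock()` fails so the routine becomes a no-op.

```cpp
#include <memory>

struct Flags {
    bool event_queued = false;
    bool cancelled = false;
};

struct EventBody {
    std::weak_ptr<Flags> flags;

    void operator()() {
        if (auto f = flags.lock()) {     // subsystem still alive?
            if (!f->cancelled) {
                // ... do the work ...
            }
            f->event_queued = false;
        }
        // else: subsystem destroyed; silently do nothing
    }
};
```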
I don't think perfect forwarding is what we're thinking about here - we have to capture.
But we could upgrade to move capture rather than copy. (But I note
But if the events are not chained, does the class need to know whether or not canceling the events succeeded? (well, it knows no events will run after cancel).
I'm not sure how dynamic memory got involved. If the flags are static/class allocated, then in theory so can the reference count + pointer.
Both work, from what I understand. With perfect forwarding you can construct the object directly in the event queue without needing a copy.
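The difference can be sketched like this (illustrative only, not the Mbed API): with perfect forwarding the queue constructs the functor in place from the caller's arguments, so no copy of an already-built object is ever made.

```cpp
#include <new>
#include <utility>

struct Counter {
    static int copies;
    int value;
    explicit Counter(int v) : value(v) {}
    Counter(const Counter &other) : value(other.value) { ++copies; }
};
int Counter::copies = 0;

template <typename T>
struct Slot {
    alignas(T) unsigned char storage[sizeof(T)];
    T *obj = nullptr;

    // Perfect forwarding: arguments go straight into placement-new,
    // constructing T in the queue's own storage with no intermediate copy.
    template <typename... Args>
    void emplace(Args&&... args) {
        obj = new (storage) T(std::forward<Args>(args)...);
    }

    ~Slot() { if (obj) obj->~T(); }
};
```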
I think this PR is good to go
@bulislaw Based on the discussion above, is this going in for 5.14?