Adds a persist flag in FileDescriptorEvent. Fixes #695 #763
Conversation
What I wonder is if we actually need to keep backwards compatibility. Since …
The flag (and the difference between a persisted and a non-persisted event) mainly has to do with edge-triggered events. What I found with zmq was that when the socket is ready for read, it keeps firing the event, pegging the CPU. The patch was written that way to keep the option open between level-triggered and edge-triggered events. The backwards-compatibility preservation is mostly a side effect, chosen so as not to disrupt other codebases.
Right, but would the user of a …
It depends on the "thing" that raises the events. By detaching, some events may be lost, which may not be desirable. There are cases (like mine with zmq) where we only care about the initial triggering and we know how the socket will behave until the next event. For the latter case, someone could just destroy/create FileDescriptorEvents, but that's kind of inefficient.
Considering that the event is never created as edge-triggered, do you have any concrete example where events would get lost? All use cases that I've had so far always behaved socket-like, so that the following principle should work correctly with and without the changes:
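(The snippet itself did not survive the page capture; below is a minimal reconstruction of the pattern being described, in D. `readLoop`, `done`, and `process` are illustrative placeholders, and the module paths and exact `wait()` signature vary between vibe.d versions, so treat this as a sketch rather than the original code.)

```d
import core.sys.posix.sys.socket : recv;
import vibe.core.core : createFileDescriptorEvent, FileDescriptorEvent;

// Sketch of the socket-like usage pattern: block until the FD signals
// readability, then drain it completely before waiting again. Draining
// until the call would block makes the loop behave the same whether the
// underlying event is level- or edge-triggered.
void readLoop(int fd, scope bool delegate() done,
              scope void delegate(ubyte[]) process)
{
    auto event = createFileDescriptorEvent(fd, FileDescriptorEvent.Trigger.read);
    ubyte[4096] buf;
    while (!done()) {
        event.wait(); // assumed: blocks until the FD became ready at least once
        for (;;) {
            auto n = recv(fd, buf.ptr, buf.length, 0);
            if (n <= 0) break; // 0 = peer closed, -1 = EAGAIN or error
            process(buf[0 .. n]);
        }
    }
}
```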
There are events which are edge-triggered (zmq). The thing with zmq (I don't know if this happens in other libraries) is that, if you want to integrate it with an existing event loop, it gives you a file descriptor which behaves a bit like a notification mechanism (not the actual socket and not quite socket-like) that notifies you when the socket is ready for read/write. I don't have a concrete case of the changes leading to lost events, and most probably with a socket-like fd there would be no problem. My (theoretical) concern was that I should preserve the original behavior in case there is some code assuming that the event is persisted and the callback is fired for every event (whereas with a non-persisted event, an event could be lost).
Would you use anything else than a loop like the one above to work with such kinds of events? I see how someone might depend on the old behavior (although I think the number of users of this particular feature is pretty small, so it's hopefully not that likely), but it's a bit unfortunate that it hasn't been working more like an edge-triggered event from the beginning. If we could reasonably rule out the chance that someone is realistically relying on the old behavior, I'd rather like to completely switch over and properly document the actual guarantees of …
I'm currently working on this one, and I would have assumed that these FDs would be edge-triggered, with the option of unregistering them automatically from the event loop after the first shot. If you're monitoring a custom socket, for example, you can also put a pointer in the event, and you need the event type, so wait alone isn't really going to cut it.
I agree that the semantics of a non-persistent event fit wait() better. Regarding existing usages, I'd guess that there will be some subtle breakages in code with certain assumptions, but that could be communicated if the wait behavior is specified.
I would have thought there would be a CLOSED trigger when the descriptor went away, and some way of throwing on the task if there was an error (errno for POSIX, GetLastError() for Windows, etc.).
What about a hybrid approach - …

This flag would probably have to be per "trigger bit", so it would be a bit more complicated in practice. But otherwise?

BTW, is there a reason why the event is recreated in every call to …?

@etcimon: Not sure what you mean with the pointer, but the …
Regarding the hybrid approach, I find it a bit complex with no great benefit. I would prefer simpler semantics. Now, the new event in _armEvent is a bug; it should reuse the event.
Well... the benefit would be to have no high CPU usage, but still have the guarantee not to lose any events.
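(A backend-agnostic sketch of what those hybrid semantics amount to - nothing here is actual vibe.d API, and `LatchedEvent` is purely illustrative. The descriptor would stay registered the whole time, so no readiness edge is missed, but repeated triggers collapse into a single flag that the next wait() consumes, which avoids the busy loop:)

```d
import core.sync.condition : Condition;
import core.sync.mutex : Mutex;

// Illustration only: the FD stays registered with the backend, so edges
// between two wait() calls are never lost - they are merely latched into
// a flag that the next wait() consumes.
final class LatchedEvent
{
    private Mutex m_mutex;
    private Condition m_cond;
    private bool m_ready;

    this()
    {
        m_mutex = new Mutex;
        m_cond = new Condition(m_mutex);
    }

    /// Called by the event backend every time the FD becomes ready.
    /// Repeated triggers before the next wait() collapse into one.
    void trigger()
    {
        synchronized (m_mutex) {
            m_ready = true;
            m_cond.notify();
        }
    }

    /// Blocks until at least one trigger happened since the last wait().
    void wait()
    {
        synchronized (m_mutex) {
            while (!m_ready) m_cond.wait();
            m_ready = false; // consume the latched edge
        }
    }
}
```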
For sure, it would enable both. My comment was more of a vote/preference. I'd just prefer the simpler semantics of "the event is reattached on wait". Thinking about the matter abstractly, the only things that make sense to persist are timer events. For file descriptor events, either the descriptor is ready for action when you need it (read/write) or it isn't.
Well, there are other event types, like listening sockets, where persistent events make sense. For normal I/O it also usually makes sense (as long as there are no …).

But let's focus on the issue here for now. What I'd like to have, if possible, is a single set of semantics that works for all platforms/underlying implementations and for all practical use cases. Adding optional legacy behavior is something that will just cause problems and more work down the line. The proposed hybrid approach is just one possibility to fix the reported issue without causing any potential hangs in legacy applications. At the same time, the documentation could explicitly mention that it is not guaranteed that events between calls to …
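(For illustration, the listening-socket case mentioned above could look like the following sketch - all names are placeholders, and the `wait()` call stands in for whatever a persistent event would expose:)

```d
import core.sys.posix.sys.socket : accept;
import vibe.core.core : FileDescriptorEvent;

// Why a persistent registration is natural for a listening socket:
// readiness just means "at least one connection is pending", and the
// handler consumes exactly the pending backlog on each wakeup.
void acceptLoop(FileDescriptorEvent event, int listenFd,
                scope void delegate(int) handleConnection)
{
    for (;;) {
        event.wait(); // assumed: blocks until the FD is (probably) readable
        for (;;) {
            int conn = accept(listenFd, null, null);
            if (conn < 0) break; // EAGAIN/EWOULDBLOCK: backlog is empty
            handleConnection(conn);
        }
    }
}
```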
You are right, my abstract thinking missed a couple of cases :). I was just skimming through the libev documentation, and it doesn't seem to have the notion of non-persistent callbacks. If there are going to be different backend implementations, I guess the abstraction shouldn't say anything about persistency. Is that true? On the other hand, given how vibe.d code is written and run, FileDescriptorEvent.wait() (as an abstraction over the actual implementation) means "wait until this file descriptor can be operated on, then do something with it". Even if there are intermediate events, there isn't much that can be done with them, so wait() can be considered a wait for a singular event. Am I missing something?
Yes "wait for a singular event" would be the kind of guaranteed semantics that I have in mind. It might return for other reasons, so it wouldn't guarantee that the FD is ready after the wait. What I would imagine as the documentation for
|
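(The proposed wording itself is missing from the capture; judging from the guarantees discussed here - "wait for a singular event", possible spurious returns - it would presumably read along these lines. The signature and default parameter are assumptions:)

```d
/** Waits until the file descriptor has (probably) become ready for one of
    the monitored triggers at least once since the last call to wait().

    A return from wait() is not a guarantee that the descriptor is actually
    ready - spurious wakeups are allowed, and the following read/write must
    be prepared to fail with EAGAIN. Multiple readiness changes between two
    calls to wait() may be collapsed into a single notification.
*/
void wait(Trigger which = Trigger.any);
```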
Disabling edge triggering is preferable for applications that fail to use a buffer. In my experience, emptying the socket right after receiving the event is the most efficient approach, rather than calling wait() and read() sequentially every time data is needed, because the latter breeds a huge number of system calls. So I'm not sure whether allowing this behavior in favor of flexibility will end up backfiring on code quality.
I meant using delegates to get event notifications; it's irrelevant and was a bad idea I had because I was exhausted and frustrated by the directory watcher implementation I was working on at the time.
I'm not quite sure what you mean there - do you mean the example I posted? That would typically generate a sequence of …
Can you state exactly the behavior you mean? My idea is to keep the requirements on the implementation as slim as possible, to avoid possible future roadblocks as far as possible.
Well, no, I guess there's no problem with that syntax and the example. I'm just thinking the freeing/re-adding of the event could be avoided if edge-triggered …
Oh okay, yeah, that was my first idea, too.
Got implemented with a more general API as part of #1596.
This fixes the high CPU usage when using FileDescriptorEvent.

It adds a persist flag (true by default, so as to preserve the original behavior) that controls whether the event stays persisted with libevent or is reattached after each callback.

I found it useful with zmq and createFileDescriptorEvent.
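(For instance, integrating a zmq socket might look like the following sketch. The deimos libzmq bindings and the exact placement/signature of the persist parameter are assumptions, not taken from this PR's diff:)

```d
import deimos.zmq.zmq; // assumption: the deimos libzmq bindings
import vibe.core.core : createFileDescriptorEvent, FileDescriptorEvent;

// libzmq exposes an edge-triggered notification FD for exactly this kind
// of event-loop integration; fetch it and register it as non-persistent
// so it does not keep re-firing. The socket/handler names are placeholders.
void watchZmqSocket(void* zmqSocket)
{
    int fd;
    size_t len = fd.sizeof;
    zmq_getsockopt(zmqSocket, ZMQ_FD, &fd, &len);

    // The trailing argument is the persist flag added by this PR; its exact
    // position in the final API is an assumption here.
    auto event = createFileDescriptorEvent(fd,
        FileDescriptorEvent.Trigger.read, /*persist:*/ false);
    event.wait(); // returns once the notification FD fires
    // ...check ZMQ_EVENTS and drain the zmq socket here...
}
```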