Buffered client and server examples #11
Conversation
Nice, that's an inspiring way to do it. There are a few hiccups, one of them being that the same read callback will be called for every packet. It might be good for <4096-byte messages with atomic writes, though. I would make an abstraction based on futures and promises instead, where you basically register sequential callbacks, like this: https://gist.github.com/Matthias247/c2188248ddc5b597a897. Since this is a series of callbacks in a non-blocking framework, there's no need for a worker thread. The only other option besides pre-registered callback sequences would be sequential fiber-blocking, like in vibe.d :-p
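For illustration, a pre-registered callback sequence could look something like the sketch below; Future, then and fulfill are made-up names for the example, not the gist's or libasync's API.

```d
// Sketch only: each step is registered up front and runs when the previous
// async result arrives, so no worker thread is involved.
import std.stdio : writeln;

struct Future(T)
{
    private void delegate(T)[] callbacks;

    // Register the next step of the sequence.
    void then(void delegate(T) cb) { callbacks ~= cb; }

    // Called by the driver's read callback once the value is available.
    void fulfill(T value)
    {
        foreach (cb; callbacks)
            cb(value);
    }
}

void main()
{
    Future!(ubyte[]) headerRead;          // "read the 4-byte header"
    headerRead.then((ubyte[] header) {
        writeln("got header of ", header.length, " bytes");
        // ...register the body read here, and so on...
    });

    // The event loop would call this when the bytes are in:
    headerRead.fulfill(new ubyte[4]);
}
```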
I think this can be commented on further. There can be a single callback in your TCPConnection, registered only for reading the bytes into a buffer, and from there you call the message handlers.
After giving it some thought, your solution only needs a few adjustments and it would deal with Nagle's algorithm and packet fragmenting very well.
The worker threads and loops will not be necessary due to the async nature of the event loop. If data is received before it can be consumed, it simply accumulates in the buffer. So basically, just a level of indirection that accumulates bytes between reads.
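As a rough sketch of that indirection (the 4-byte length prefix and the onRead/onMessage names are assumptions made up for the example, not the actual code in this PR):

```d
// The connection's single read callback dumps whatever arrived into an
// accumulation buffer, and a message handler only fires once a complete
// message is available, regardless of how Nagle/fragmentation split the
// packets. A big-endian uint length prefix is assumed for the example.
import std.bitmanip : peek;

struct BufferedReader
{
    ubyte[] pending;                      // bytes accumulated so far
    void delegate(ubyte[]) onMessage;     // called once per whole message

    // Feed every raw chunk from the read callback in here.
    void onRead(const(ubyte)[] chunk)
    {
        pending ~= chunk;

        while (pending.length >= 4)
        {
            immutable len = pending.peek!uint(0);
            if (pending.length < 4 + len)
                break;                    // wait for the rest of the message
            onMessage(pending[4 .. 4 + len].dup);
            pending = pending[4 + len .. $];
        }
    }
}
```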
Cool. I'll see if I can implement this when I have the time.
What is the purpose of the output buffer?
When you call send, not everything is necessarily written out at once. When creating a buffer for this purpose, you don't want to care about packet congestion; you just want the entire thing sent and to be advised when it's done. It could even complete after several write events.
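The shape is roughly the following; tryWrite stands in for whatever the connection's non-blocking send call is, and onDrained is the "advised when it's done" notification, both made up for the sketch.

```d
// The caller hands over the whole message and gets notified once everything
// has actually been written, without caring how many partial writes the
// socket needed.
struct BufferedWriter
{
    ubyte[] outgoing;                          // not-yet-sent bytes
    size_t delegate(const(ubyte)[]) tryWrite;  // returns bytes accepted
    void delegate() onDrained;                 // "everything was sent"

    void send(const(ubyte)[] message)
    {
        outgoing ~= message;
        flush();
    }

    // Call this again from the connection's write-ready event.
    void flush()
    {
        while (outgoing.length)
        {
            immutable n = tryWrite(outgoing);
            if (n == 0) return;                // kernel buffer full; wait
            outgoing = outgoing[n .. $];
        }
        if (onDrained !is null) onDrained();
    }
}
```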
I think the best way to understand this is by looking at the driver I wrote for vibe.d: https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/core/drivers/libasync.d#L948 The only thing you really need to change is the event handler.
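In the spirit of that driver, the dispatch could look like the sketch below. TCPEvent and AsyncTCPConnection are libasync types, but the exact recv signature follows the README example from memory, and the onChunk/onWritable hooks are assumptions; treat it as a shape, not the driver's code.

```d
import libasync : AsyncTCPConnection, TCPEvent;

void handleEvent(AsyncTCPConnection conn, TCPEvent ev,
                 void delegate(ubyte[]) onChunk,   // feeds the read buffer
                 void delegate() onWritable)       // drains the write buffer
{
    switch (ev)
    {
    case TCPEvent.READ:
    {
        ubyte[] chunk = new ubyte[4096];
        uint n = conn.recv(chunk);
        while (n > 0)
        {
            onChunk(chunk[0 .. n]);
            if (n < chunk.length)
                break;                             // socket drained
            n = conn.recv(chunk);
        }
        break;
    }
    case TCPEvent.WRITE:
        onWritable();                              // socket writable again
        break;
    default:
        break;                                     // CONNECT / CLOSE / ERROR
    }
}
```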
I've pushed the initial implementation.
I've squashed commits to only have recent changes.
Works great, this is really useful. You might want to put the circular buffer's payload in a separate allocation, though.
I think there's also an issue with running this example on Windows, though I'm not sure why yet: it doesn't read the server's write, and the problem disappears if I enable certain options.
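For example, the ring's bookkeeping can stay in the struct while the storage is its own allocation; this CircularBuffer is purely illustrative, not the one in the PR.

```d
// The payload array is a separate heap allocation, so the struct itself
// only holds a slice plus indices and can be moved/copied cheaply.
struct CircularBuffer
{
    private ubyte[] payload;   // separately allocated storage
    private size_t head, tail, count;

    this(size_t capacity) { payload = new ubyte[capacity]; }

    @property size_t length() const { return count; }
    @property size_t freeSpace() const { return payload.length - count; }

    // Append as much of `data` as fits; returns how many bytes were taken.
    size_t put(const(ubyte)[] data)
    {
        size_t written;
        foreach (b; data)
        {
            if (count == payload.length) break;
            payload[tail] = b;
            tail = (tail + 1) % payload.length;
            ++count;
            ++written;
        }
        return written;
    }

    // Pop up to dest.length bytes into dest; returns how many were read.
    size_t get(ubyte[] dest)
    {
        size_t read;
        while (read < dest.length && count > 0)
        {
            dest[read++] = payload[head];
            head = (head + 1) % payload.length;
            --count;
        }
        return read;
    }
}
```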
Buffered TCP, with client and server examples
Did you get the Windows hangs as well?
Also, I'm going to be replacing all of the internal allocations with memutils.
I've only tested on Fedora 21 x64 so far. I'll check it out on Windows.
Wouldn't using memutils prevent libasync from being integrated into Phobos, by the way? Or is there a plan to integrate memutils at some point as well?
Integrating into Phobos is a utopian idea right now. It doesn't even have a circular buffer (yet), and I'd have to get a few permissions here and there for re-licensing the memory management code that Sönke wrote. Maybe I'll come up with something eventually, but right now I'm not going to put obstacles in my way based on the idea of an eventual Phobos integration.
I've written these simple async client and server examples to evaluate replacing vibe.d with libasync. Both the client and the server use a single TCPConnection implementation that uses message passing to handle incoming messages in a separate thread (roughly the shape sketched below). However, I'm not sure this is the best way to go about handling long-running connections in an async way.
Apart from that, destroyAsyncThreads is not called, since I haven't figured out how to handle signals in a platform-agnostic way. Do you have any suggestions on how to improve them?
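For reference, the message-passing arrangement is roughly the following shape (std.concurrency; the Incoming/Shutdown/worker names are illustrative, not the actual example code):

```d
// The event-loop thread forwards each complete message to a worker thread
// via send, and the worker processes them in order.
import std.concurrency : spawn, send, receive, Tid;
import std.stdio : writeln;

struct Incoming { string payload; }
struct Shutdown {}

void worker()
{
    bool running = true;
    while (running)
    {
        receive(
            (Incoming msg) { writeln("handling: ", msg.payload); },
            (Shutdown _)   { running = false; }
        );
    }
}

void main()
{
    Tid handler = spawn(&worker);

    // In the real examples this would happen from the read callback:
    handler.send(Incoming("hello"));
    handler.send(Incoming("world"));
    handler.send(Shutdown());
}
```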