Buffered client and server examples #11

Merged: 2 commits from GeorgeSapkin:examples into etcimon:master on Feb 25, 2015
Conversation

@GeorgeSapkin (Contributor)
I've written these simple async client and server examples to evaluate replacing vibe.d with libasync. Both the client and the server use a single TCPConnection implementation that uses message passing to hand incoming messages to a separate thread.
However, I'm not sure this is the best way to handle long-running connections asynchronously.
Apart from that, destroyAsyncThreads is never called, since I haven't figured out how to handle signals in a platform-agnostic way.
Do you have any suggestions on how to improve them?
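(For reference, a minimal sketch of the message-passing pattern described above, using std.concurrency; messageWorker and the message shapes are illustrative, not the actual example code.)

```d
import std.concurrency : spawn, send, receive, Tid;
import std.stdio : writeln;

// Illustrative worker thread: receives messages forwarded from the
// event-loop thread and handles them off the main thread.
void messageWorker()
{
    bool running = true;
    while (running)
    {
        receive(
            (immutable(ubyte)[] msg) { writeln("got ", msg.length, " bytes"); },
            (string cmd) { if (cmd == "stop") running = false; }
        );
    }
}

void main()
{
    Tid worker = spawn(&messageWorker);
    // In the real examples this would happen inside the TCP read handler:
    immutable(ubyte)[] data = [1, 2, 3];
    send(worker, data);
    send(worker, "stop");
}
```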

@etcimon (Owner) commented Feb 12, 2015

Nice, that's an inspiring way to do it. There are a few hiccups though, one of them being that the same read callback will be called for every packet. It might be good for <4096-byte messages written atomically through .write(ubyte[]), but it gets a little hard for large data, where Nagle's algorithm splits the packets unpredictably over time.

I would make an abstraction on futures and promises instead, where you basically register sequential callbacks like this:

https://gist.github.com/Matthias247/c2188248ddc5b597a897

Since this is a series of callbacks in a non-blocking framework, there's no need for a worker thread. The only alternative to pre-registered callback sequences would be sequential fiber-blocking, as in vibe.d :-p
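(As a concrete, if simplified, illustration of such a pre-registered callback sequence: the Future type below is hypothetical, not part of libasync.)

```d
import std.stdio : writeln;

// A hypothetical one-shot Future; just a sketch of the callback
// sequence idea from the gist linked above.
class Future
{
    private void delegate(ubyte[]) _cb;

    // Register the callback to run when the promised bytes are ready.
    void then(void delegate(ubyte[]) cb) { _cb = cb; }

    // Called by the transport once the requested length has accumulated.
    void fulfill(ubyte[] data) { if (_cb !is null) _cb(data); }
}

void main()
{
    auto headerRead = new Future; // what a read(4) would hand back
    headerRead.then((ubyte[] header) {
        writeln("header: ", header);
        // The next read in the sequence would be registered here, e.g.
        // conn.read(lengthFrom(header)).then(&onBody);
    });
    headerRead.fulfill([0xDE, 0xAD, 0xBE, 0xEF]); // simulate data arriving
}
```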

@etcimon (Owner) commented Feb 12, 2015

To expand on that: there can be a single callback in your TCPConnection, registered only for reading bytes into a buffer; from there, you call the Future.then-registered callbacks once the buffer reaches the specified length. Thus .read() returns a future, so reads can be chained as in the link above. =)
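(A sketch of that accumulate-until-length idea; PendingRead and Accumulator are assumed names, not libasync API.)

```d
// One outstanding read promise: fires when `len` bytes are available.
struct PendingRead
{
    size_t len;                     // bytes promised to the caller
    void delegate(ubyte[]) onRead;  // callback registered via .then
}

struct Accumulator
{
    ubyte[] buffer;
    PendingRead pending;

    // Called from the single TCP read callback for every incoming packet.
    void onData(ubyte[] chunk)
    {
        buffer ~= chunk;
        if (pending.onRead !is null && buffer.length >= pending.len)
        {
            auto msg = buffer[0 .. pending.len].dup;
            buffer = buffer[pending.len .. $];
            auto cb = pending.onRead;
            pending.onRead = null; // one-shot, like a promise
            cb(msg);
        }
    }
}

void main()
{
    Accumulator acc;
    acc.pending = PendingRead(4, (ubyte[] msg) { assert(msg.length == 4); });
    acc.onData([1, 2]);    // not enough yet; nothing fires
    acc.onData([3, 4, 5]); // 4 bytes reached; callback fires, 1 byte kept
}
```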

@etcimon (Owner) commented Feb 18, 2015

After giving it some thought, your solution only needs a few adjustments to deal with Nagle's algorithm and packet fragmentation very well.

TCPConnection must:

  • Handle the collection of data with a circular buffer
  • Implement a connect(NetworkAddress, void delegate() onConnect)
  • Implement a read(size_t len, void delegate(ubyte[]) onRead)
  • Implement a write(size_t len, void delegate() onWrite)
  • Implement a onError(void delegate() error) to signal anything else
  • Accumulate the bytes in the buffer until the requested len is reached (just like a contract, or a promise)
  • Call the onRead callback with the data

The worker threads and loops will not be necessary due to the async nature of the event loop.

If data is received without an onRead callback set, you have an assertion error.

So basically, just a level of indirection that accumulates bytes between AsyncTCP and your code, and you're good to go.
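(Taken together, the list reads as roughly the interface below; a sketch only, with parameter names assumed and the import path a guess at libasync's layout.)

```d
import libasync.types : NetworkAddress; // assumed import path

// Sketch of the proposed buffered connection; not the final API.
interface BufferedConnection
{
    // Open the connection and invoke onConnect once established.
    void connect(NetworkAddress addr, void delegate() onConnect);

    // Promise semantics: onRead fires once exactly `len` bytes have
    // accumulated in the internal circular buffer.
    void read(size_t len, void delegate(ubyte[]) onRead);

    // Signature as listed above; see the discussion below on whether the
    // payload itself should be the argument instead of a length.
    void write(size_t len, void delegate() onWrite);

    // Report disconnects and socket errors out of band.
    void onError(void delegate() error);
}
```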

@GeorgeSapkin (Author)
Cool. I'll see if I can implement this when I have the time.

@GeorgeSapkin (Author)

What is the purpose of the len and delegate args in write? Shouldn't it just be something like write(in ubyte[] msg)? Or is it supposed to take both a message and a callback, and defer sending until TCPEvent.WRITE is triggered?

@etcimon (Owner) commented Feb 24, 2015

When you call AsyncTCP.read() or AsyncTCP.write(), it proxies through to the kernel send, which can end up sending less than you wanted; in that case it has to try again later. If it needs to wait, AsyncTCP.status will return Status.ASYNC and the send function will return a uint value smaller than the length of your array. You then need to manually send the rest once TCPEvent.WRITE is triggered, and loop until everything is sent.

When creating a buffer for this purpose, you don't want to care about packet congestion; you just want the entire thing sent and to be advised when it's done. That could even be after TCPEvent.WRITE has triggered 10 times, depending on how many megabytes you passed through this BufferedTCP, or whatever you'd like to call it.
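(A sketch of that retry loop, using the send/Status.ASYNC behavior described above; WriteState and drain are illustrative names, and the import is assumed to expose the listed symbols.)

```d
import libasync; // assumed to expose AsyncTCPConnection, Status

// Illustrative pending-write state for a BufferedTCP-style wrapper.
struct WriteState
{
    ubyte[] pending;        // bytes not yet accepted by the kernel
    void delegate() onDone; // fired once everything has been sent
}

// Call on the initial write() and again on every TCPEvent.WRITE.
void drain(AsyncTCPConnection conn, ref WriteState w)
{
    while (w.pending.length > 0)
    {
        uint sent = conn.send(w.pending);
        w.pending = w.pending[sent .. $];
        if (conn.status.code == Status.ASYNC)
            return; // kernel buffer full; resume on the next TCPEvent.WRITE
    }
    if (w.onDone !is null)
    {
        auto cb = w.onDone;
        w.onDone = null; // one-shot
        cb();
    }
}
```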

@etcimon (Owner) commented Feb 24, 2015

I think the best way to understand this is by looking at the driver I wrote for vibe.d:

https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/core/drivers/libasync.d#L948
https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/core/drivers/libasync.d#L1352

The only thing you really need to change is the Waiter. Instead of resuming a Task, you'll want to call a delegate. Take that delegate as a parameter in the read and write methods and store it in the Waiter structure in acquireReader and acquireWriter, so you can call it once the requested data length has been fully received.
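(In other words, roughly this change; the real Waiter in the driver holds a Task, and these field names are assumptions.)

```d
// Sketch: instead of parking and resuming a vibe.d Task, the Waiter
// stores the caller's delegate and fires it when enough data is in.
struct Waiter
{
    size_t wanted;                  // bytes the caller asked for
    void delegate(ubyte[]) notify;  // replaces Task resumption
}

// Inside the read handler, once the buffer holds enough bytes:
// if (buffer.length >= waiter.wanted)
//     waiter.notify(buffer[0 .. waiter.wanted]);
```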

@GeorgeSapkin (Author)

I've pushed an initial BufferedTCPConnection implementation and updated the examples to use it. I'm using CircularBuffer from memutils for the internal buffers.
I've also removed the old TCPConnection helper class and all the threading code.

@GeorgeSapkin GeorgeSapkin changed the title Async client and server examples Bufferd client and server examples Feb 25, 2015
@GeorgeSapkin GeorgeSapkin changed the title Bufferd client and server examples Buffered client and server examples Feb 25, 2015
@GeorgeSapkin (Author)

I've squashed the commits so only the recent changes remain.

@etcimon (Owner) commented Feb 25, 2015

Works great, this is really useful. You might want to put the circular buffer's payload in a separate allocation though, because the Buffer struct is going to be marked for scanning by the GC, and it will scan the entire circular buffer payload for pointers if it's inlined.
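(The D-specific reason: a ubyte[] allocated on its own is typed pointer-free, so the GC marks that block NO_SCAN and skips it during collection, while bytes inlined into a pointer-bearing struct's allocation get scanned along with it. A sketch:)

```d
// Inlined payload: if this struct is GC-allocated, the whole block,
// including all 64 KiB of bytes, gets scanned for pointers, because
// the struct also contains a real pointer (the delegate below).
struct InlineBuffer
{
    ubyte[65536] payload;
    void delegate() onRead;
}

// Separate allocation: new ubyte[](n) is typed as pointer-free, so the
// GC marks that block NO_SCAN and never walks its contents.
struct IndirectBuffer
{
    ubyte[] payload;
    void delegate() onRead;

    this(size_t n) { payload = new ubyte[](n); }
}
```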

I think there's also an issue with running this example on Windows, though I'm not sure why yet: it doesn't read the server write, and the problem disappears if I enable LOG. I'll look into this when I have more time if you don't take care of it =)

etcimon pushed a commit referencing this pull request on Feb 25, 2015: "Buffered TCP, with client and server examples"
@etcimon etcimon merged commit e893bc9 into etcimon:master on Feb 25, 2015
@etcimon (Owner) commented Feb 25, 2015

Did you get the Windows hangs as well?

@etcimon (Owner) commented Feb 25, 2015

Also, I'm going to be replacing all new allocations with alloc! in the future. Look into ScopedPool to override the GC using scopes.

@GeorgeSapkin (Author)

I've only tested on Fedora 21 x64 so far. I'll check out alloc and ScopedPool.

@GeorgeSapkin (Author)

Wouldn't using memutils prevent libasync from being integrated into Phobos, by the way? Or is there a plan to integrate memutils at some point as well?

@etcimon (Owner) commented Feb 26, 2015

Wouldn't using memutils prevent libasync from being integrated into Phobos, by the way? Or is there a plan to integrate memutils at some point as well?

Integrating into Phobos is a utopian idea right now. It doesn't even have a circular buffer (yet), and I'd have to get a few permissions here and there to re-license the memory management code that Sönke wrote.

Maybe I'll come up with something eventually, but right now I'm not going to put obstacles in my way based on the idea of an eventual Phobos integration.

@GeorgeSapkin GeorgeSapkin deleted the examples branch February 28, 2015 09:26