
HTTP client leaks memory. #1321

Closed
japplegame opened this Issue Nov 5, 2015 · 33 comments

japplegame (Contributor) commented Nov 5, 2015

Configuration: CentOS 7 x86_64, vibe.d-0.7.26, libevent driver

There is something wrong with the HTTP client's error handling.
Client code:

import vibe.d;

shared static this() {
    runTask({
        while(true) {
            try {
                requestHTTP("http://127.0.0.1:8080", (scope req) {
                }, (scope res) {
                    res.dropBody();
                });
            } catch(Exception e) {

            }
            sleep(1.msecs);
        }
    });
}

If you run this code without an HTTP server running, the client starts leaking memory very quickly.

But if you start a simple HTTP server:

import vibe.d;

shared static this()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["127.0.0.1"];

    listenHTTP(settings, (req, res) {
        res.writeBody("Hello, World!", "text/plain");
    });
}

The client immediately stops leaking. Stop the server and you will see the leak again.
If you change the server code to throw an error:

listenHTTP(settings, (req, res) {
    throw new HTTPStatusException(HTTPStatus.badRequest);
});

the client still leaks, but much more slowly than without a server.

etcimon (Contributor) commented Nov 5, 2015

Does it also occur with libasync?

japplegame (Contributor) commented Nov 5, 2015

No. Looks like libasync doesn't leak.

japplegame (Contributor) commented Nov 5, 2015

But we can't switch to libasync now, because it causes very strange behaviour and we are unable to understand what exactly is happening.

etcimon (Contributor) commented Nov 5, 2015

Strange behavior?

yannick (Contributor) commented Nov 6, 2015

Might be the same as #1122.

japplegame (Contributor) commented Nov 6, 2015

Thanks. That fixes the first case, but not the second (the server throwing an error).

etcimon (Contributor) commented Nov 6, 2015

I've had suspicions that Exceptions leaked. Are you talking about a leak on the client when both client and server are in the same process?

japplegame (Contributor) commented Nov 6, 2015

No. The client and server are separate processes. The server doesn't leak.

japplegame (Contributor) commented Nov 6, 2015

Which tool should I use for debugging memory leaks?

etcimon (Contributor) commented Nov 6, 2015

I generally use valgrind --tool=massif ./app and then ms_print report.out, but you need to use MallocAllocator without any wrappers here: https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/utils/memory.d#L43

It's much more difficult to find a GC memory leak, though.
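
Concretely, the suggestion above amounts to something like this sketch (not the actual code in vibe/utils/memory.d, which is more elaborate; this only shows the idea):

// Sketch only: have vibe.d's default allocator hand back a bare
// MallocAllocator, with no free-list or debug wrappers, so that
// valgrind/massif can see and attribute every allocation.
Allocator defaultAllocator()
{
    static __gshared Allocator alloc;
    if (!alloc) alloc = new MallocAllocator;
    return alloc;
}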

japplegame (Contributor) commented Nov 6, 2015

Could the leak be caused by forced task interruption (Task.interrupt()) during a download or while establishing a connection?

japplegame (Contributor) commented Nov 6, 2015

Also, interruption sometimes causes an assertion failure. Just got one (libasync driver):

#0  0x0000000000ad94e0 in rt.monitor_.ensureMonitor() ()
#1  0x0000000000ad9377 in _d_monitorenter ()
#2  0x0000000000a8eaa5 in libasync.dns.AsyncDNS.cmdInfo() (this=0x7f951e90dd00, __HID72=0x7f953effb3f0) at ../../.dub/packages/libasync-0.7.5/source/libasync/dns.d:132
#3  0x0000000000aada98 in libasync.threads.CmdProcessor.process() (this=0x7f95480a9600, ctxt=0x7f951e90dd00) at ../../.dub/packages/libasync-0.7.5/source/libasync/threads.d:51
#4  0x0000000000aae4d2 in libasync.threads.CmdProcessor.process() (this=0x7f95480a9600) at ../../.dub/packages/libasync-0.7.5/source/libasync/threads.d:211
#5  0x0000000000aae340 in libasync.threads.CmdProcessor.run() (this=0x7f95480a9600) at ../../.dub/packages/libasync-0.7.5/source/libasync/threads.d:171
#6  0x0000000000b25642 in core.thread.Thread.run() ()
#7  0x0000000000b2532a in thread_entryPoint ()
#8  0x00007f9547531df5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007f9546d591ad in clone () from /lib64/libc.so.6
etcimon (Contributor) commented Nov 6, 2015

If the client does this, it shouldn't currently cause a leak; your connection will be recycled. If the server runs into an exception, keep-alive is disabled and the connection is closed.

etcimon (Contributor) commented Nov 6, 2015

> Also, interruption sometimes causes an assertion failure. Just got one (libasync driver):

Looks like the task was interrupted and destroyed/freed the AsyncDNS object while the DNS resolver thread still held a copy of its mutex/condition. I might have to put that object on the GC, thanks.

etcimon (Contributor) commented Nov 6, 2015

> If the client does this, it shouldn't currently cause a leak; your connection will be recycled.

Scratch that: the connection will not be recycled after an exception in the response handler.
https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/http/client.d#L426
The HTTPClient itself will be recycled, but the connection will be closed. It shouldn't leak either way.

Here's the reference to the keep-alive behavior on the server side:
https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/http/server.d#L1735

japplegame (Contributor) commented Nov 6, 2015

Our third-party data source servers are very buggy: dropped connections, half-open TCP connections, malformed JSON, slow downloads, various HTTP errors and so on.
Is there a way to abort a download after some fixed timeout? We are currently using Task.interrupt(), but it looks unsafe.

etcimon (Contributor) commented Nov 6, 2015

> Is there a way to abort a download after some fixed timeout? We are currently using Task.interrupt(), but it looks unsafe.

Here are a few (untested) ideas that may or may not suit your purpose.

client.request("http://....",
    (scope req) { },
    (scope res) {
        // disconnect if the whole download takes longer than 5 seconds
        Timer tm = setTimer(5.seconds, { res.disconnect(); });
        scope(exit) { if (tm.pending) { tm.stop(); tm.destroy(); } }
        writeFile("myFile.txt", res.bodyReader.readAll()); // ...
    });

or

client.request("http://....",
    (scope req) { },
    (scope res) {
        import std.algorithm : min;
        auto fstream = openFile("myfile.txt", FileMode.createTrunc);
        ubyte[] buf = new ubyte[4096];
        // give up if no data arrives for 2 seconds
        while (res.bodyReader.waitForData(2.seconds)) {
            auto chunk = min(buf.length, res.bodyReader.leastSize);
            res.bodyReader.read(buf[0 .. chunk]);
            fstream.write(buf[0 .. chunk]); // ...
        }
    });

If you'd like a timeout on the connect() itself, you'd need to implement it in the driver directly. I did it like this: https://github.com/etcimon/vibe.d/blob/master/source/vibe/core/drivers/libasync.d#L316

For DNS, you'd be better off adding the address to the hosts file if you think it's never going to change, or avoiding the DNS resolver altogether.

japplegame (Contributor) commented Nov 6, 2015

I'd like to time out the request regardless of its stage.

etcimon (Contributor) commented Nov 6, 2015

> I'd like to time out the request regardless of its stage.

It would need to be implemented at every stage. Currently interrupt() is a reliable shortcut, although it doesn't work with the AsyncDNS resolver.
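
A minimal sketch of that interrupt-based shortcut, assuming the vibe.d APIs discussed above (the helper name fetchWithDeadline and the log message are invented for illustration; untested):

import vibe.d;

// Hypothetical helper: abort a request at whatever stage it is in
// (DNS, connect, download) by interrupting the calling task from a timer.
void fetchWithDeadline(string url, Duration deadline)
{
    Task self = Task.getThis();
    Timer tm = setTimer(deadline, { self.interrupt(); });
    scope (exit) if (tm.pending) tm.stop();

    try {
        requestHTTP(url,
            (scope req) { },
            (scope res) { res.dropBody(); });
    } catch (InterruptException e) {
        logInfo("request to %s timed out after %s", url, deadline);
    }
}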

japplegame (Contributor) commented Nov 10, 2015

Gzipped data decompression also leaks, and it's actually more critical for us than the previous leaks.

Code: https://github.com/japplegame/vibe-http-client-gzip-test
Valgrind massif output: https://github.com/japplegame/vibe-http-client-gzip-test/blob/master/client/massif.out.txt

Comment out https://github.com/japplegame/vibe-http-client-gzip-test/blob/master/server/source/app.d#L16 and the leaks disappear.

The server works without leaking in both cases.
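
For reference, a minimal server along the lines of the linked test repo might look like the sketch below (whether useCompressionIfPossible is what the referenced line 16 toggles is an assumption; the port and body are made up):

import vibe.d;
import std.array : replicate;

shared static this()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["127.0.0.1"];
    // Disabling this corresponds to commenting out the referenced line:
    // without gzip compression the client-side leak disappears.
    settings.useCompressionIfPossible = true;

    listenHTTP(settings, (req, res) {
        res.writeBody("x".replicate(64 * 1024), "text/plain");
    });
}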

etcimon (Contributor) commented Nov 10, 2015

Yes, I fixed this in my fork and I proposed a fix here:

#1116

@japplegame japplegame closed this Nov 11, 2015

@s-ludwig s-ludwig reopened this Nov 11, 2015

s-ludwig (Member) commented Nov 11, 2015

The HTTP client leak fix hasn't been merged/done yet.

@s-ludwig s-ludwig closed this in #1322 Nov 11, 2015

s-ludwig added a commit that referenced this issue Nov 11, 2015

Merge pull request #1322 from etcimon/patch-27
Free TCPContext on connection failure - fixes #1321
japplegame (Contributor) commented Nov 11, 2015

> The HTTP client leak fix hasn't been merged/done yet.

Sorry, I mistakenly thought that ca3e23c and the zlib patches had fixed it.

s-ludwig (Member) commented Nov 11, 2015

It has been auto-closed again by #1322, but #1324 is still open.

@s-ludwig s-ludwig reopened this Nov 11, 2015

japplegame (Contributor) commented Nov 27, 2015

Ping

japplegame (Contributor) commented Feb 29, 2016

Closing (temporarily?), because I need to award the bounty to @etcimon, and the HTTP client no longer leaks (at least for our project).
It looks like the bounty can't be awarded unless the issue is closed.

@japplegame japplegame closed this Feb 29, 2016

etcimon (Contributor) commented Apr 22, 2016

@s-ludwig you should claim this; I only changed some three lines of code, although it prompted you to do a lot of refactoring and overall improvements.

s-ludwig (Member) commented May 3, 2016

@etcimon: IMO, you should definitely claim it! You've invested quite some time in diagnosis and preparing pull requests, and my work isn't really directly related. I also still really need to take a deeper look at #1324.

BTW, I remember now that you asked somewhere about libasync in conjunction with the new vibe-core package; I left a tab open, but it got lost before I got back to answering. So I'm not yet fully decided on this one. There are some issues with libasync currently:

  • DMD frontend compatibility is less than what vibe.d guarantees
  • Considerably slower than the prototype abstraction that I made for vibe-core
  • I'm still experimenting with ways to improve memory consumption and allocation performance, which is a lot easier in a small, focused code base, and it could turn out that libasync would require major refactorings
  • If I continue with the eventcore abstraction, I'm planning to keep it simple enough to get it into Phobos eventually (keeping it to a minimum to avoid making it a lifetime task)
  • Some things I'd do differently regarding the API, but that's a secondary concern

eventcore's downsides:

  • Still missing quite a few things (most of it is simple or already exists elsewhere, though)
  • Not as well tested as the other abstractions
etcimon (Contributor) commented May 3, 2016

OK, thanks! I need to look into each of these points to be sure, but I think the benchmark could be given a second try. There was a lot of accidental GC string appending in low-level code, due to log() calls that I didn't think would execute; I've put them behind static ifs now.
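
A sketch of that static-if pattern (the flag name is invented for illustration): the log call is compiled out entirely, so no GC string formatting can happen on the hot path.

import vibe.core.log : logTrace;

// Hypothetical hot-path callback: with the flag off, the logging
// statement does not exist in the compiled code at all.
enum bool verboseDriverLogs = false;

void onBytesReceived(size_t n)
{
    static if (verboseDriverLogs)
        logTrace("received %s bytes", n);
}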

etcimon (Contributor) commented May 16, 2016

> DMD frontend compatibility is less than what vibe.d guarantees

I'm sure vibe.d will stop supporting 2.066/2.067 eventually.

> Considerably slower than the prototype abstraction that I made for vibe-core

That can be fixed with some micro-optimizations. For example, catchError is being called in many low-level areas without being inlined (I think).

> I'm still experimenting with ways to improve memory consumption and allocation performance, which is a lot easier in a small, focused code base, and it could turn out that libasync would require major refactorings

This would belong in the micro-optimizations step. I think a good strategy would be, e.g., using an array for fewer than 10 concurrent connections and an rbtree beyond that, and inlining as many of these operations as possible. Redis does this for its hashmap (hset) implementation to speed it up on small data. (A rough sketch of such a hybrid container follows.)
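
Purely for illustration, such a hybrid might look like the sketch below (the type, threshold, and names are invented; nothing like this exists verbatim in libasync):

import std.container.rbtree : RedBlackTree;

// Illustrative hybrid set: linear scan over a small array while the
// collection is tiny, one-time migration to a red-black tree once it grows.
struct HybridSet(T, size_t threshold = 10)
{
    private T[] small;
    private RedBlackTree!T large;

    void insert(T v)
    {
        if (large !is null) { large.insert(v); return; }
        small ~= v;
        if (small.length > threshold) {
            large = new RedBlackTree!T(small); // migrate once we outgrow the array
            small = null;
        }
    }

    bool contains(T v)
    {
        if (large !is null) return v in large;
        foreach (e; small) if (e == v) return true;
        return false;
    }
}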

> If I continue with the eventcore abstraction, I'm planning to keep it simple enough to get it into Phobos eventually (keeping it to a minimum to avoid making it a lifetime task)

Phobos integration was brought up at the time I launched libasync, and the idea was to also integrate it into std.concurrency:
http://forum.dlang.org/post/lvuns3$2c09$1@digitalmars.com

I think this will also trickle down to std.socket and require some major work to merge. That's something I haven't been ready to do myself yet (because of time), but you might agree it'll be considerably harder to maintain something that produces so much interlinking, unless you find someone else to make it ready and stay committed to it.

> Some things I'd do differently regarding the API, but that's a secondary concern

I've always liked your APIs better; I just couldn't go through any back and forth on the design at the time, because I was time-constrained.

> eventcore's downsides:

> Still missing quite a few things (most of it is simple or already exists elsewhere, though)

That's the definition of a WIP ;)

> Not as well tested as the other abstractions

That's also because it's a WIP.

Honestly, I'm overburdened with development projects in production (using my/your tools), and I feel like these tools are mature right now. Some performance optimizations would help, and I think it would be easier to run perf on libasync; but my bottlenecks are currently in postgres (70%+ of CPU time is spent there, 20%+ in botan/openssl), so it's not to my advantage to do this at the moment. Still, I'd drop it anytime for eventcore once it's complete, because I've always admired your code quality as being far superior in the way it's planned from the start.

etcimon (Contributor) commented May 16, 2016

> using an array for fewer than 10 concurrent connections and an rbtree beyond that

This is for Windows, because the events there don't carry the object pointer. It could also be useful to introduce a custom TLS bucket specifically for the event objects.

etcimon (Contributor) commented May 17, 2016

OK, I just added inlining for AsyncTCPConnection.send, recv, catchSocketError, etc., which means it should inline in the driver all the way down to the system calls.
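
For illustration, the D mechanism involved is pragma(inline, true); the wrapper below is hypothetical (not libasync's actual code) and POSIX-only:

import core.sys.posix.sys.socket : send;

// Hypothetical hot-path wrapper: pragma(inline, true) asks the compiler
// to inline it, collapsing the call chain down to the system call.
pragma(inline, true)
ptrdiff_t sendBytes(int fd, const(ubyte)[] buf)
{
    return send(fd, buf.ptr, buf.length, 0);
}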
