
Create a more idiomatic Finagle setup #245

Merged

merged 1 commit into TechEmpower:master on May 6, 2013

Conversation
@sprsquish
Contributor

sprsquish commented May 3, 2013

The previous version was effectively serializing all incoming requests by doing all work on the event loop. This patch provides a setup closer to what one might see in production code.

The serialization benchmark now ships serialization off to a thread pool. Previously, the work was done on the event loop and only the result was handed to the pool.

We have a (still experimental) MySQL client built on Finagle that we're using for the DB benchmarks. This ensures the bottleneck is never how the JDBC adapter is tuned. The previous setup also ran all queries on the event loop and provided the result; this patch provides true parallelization of queries.

Finally, we have Jackson emit byte arrays and use wrapped buffers instead of copied buffers to avoid the extra work of copying.
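
(For readers of the diff: a minimal sketch of the offloading pattern described above, assuming 2013-era twitter-util and Netty 3 APIs; the object and method names here are illustrative, not the actual code in this patch.)

```scala
import com.fasterxml.jackson.databind.ObjectMapper
import com.twitter.util.{Future, FuturePool}
import java.util.concurrent.Executors
import org.jboss.netty.buffer.{ChannelBuffer, ChannelBuffers}

object OffloadExample {
  private val mapper = new ObjectMapper()

  // A dedicated pool keeps serialization work off the Netty event loop,
  // leaving the loop free to accept and dispatch requests.
  private val pool = FuturePool(
    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors()))

  // Serialize on the pool, then wrap the bytes without copying:
  // wrappedBuffer shares the array where copiedBuffer would duplicate it.
  def jsonBody(value: AnyRef): Future[ChannelBuffer] =
    pool { ChannelBuffers.wrappedBuffer(mapper.writeValueAsBytes(value)) }
}
```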

@bhauer
Member

bhauer commented May 3, 2013

Fantastic. I've not reviewed the code, but I like the sound of these changes, especially if it pushes the test closer to idiomatic Finagle. Thanks very much for the contribution, @sprsquish.

@Skamander

Skamander commented May 4, 2013

Oh cool, a contribution from one of the Twitter guys. 👍 Maybe the db test can now get into the same region as the json test (6th on EC2, 1st on dedicated hardware).

@esiegel

esiegel commented May 5, 2013

You should probably also note in the final results that Finagle isn't using JDBC. I'd imagine this will make a huge difference, as the finagle-mysql driver doesn't require a thread per connection.

@Xorlev

Xorlev commented May 5, 2013

That's correct. I was actually coming over here to do the same thing.

The platform offers scalable access to databases; it shouldn't be limited to single-threaded JDBC. Nor should any platform that has futures or continuation-style responses.

I do have some feedback here, though. I'm not sure how Finagle schedules work, but should that `val random = new Random()` really be a call out to `ThreadLocalRandom`? Or is it guaranteed to be on a single thread?
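
(For context, a hedged sketch of the two options being discussed; the 1 to 10000 range is just an illustrative stand-in for the benchmark's random-row lookup.)

```scala
import java.util.Random
import java.util.concurrent.ThreadLocalRandom

// Shared instance: thread safe, but every nextInt() does a CAS on a
// single AtomicLong seed, so heavily parallel callers can contend on it.
val shared = new Random()
def randomIdShared(): Int = shared.nextInt(10000) + 1

// Per-thread instance (JDK 7+): no shared state, hence no contention.
def randomIdLocal(): Int = ThreadLocalRandom.current().nextInt(10000) + 1
```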

@sprsquish
Contributor

sprsquish commented May 5, 2013

@Xorlev It's not guaranteed to be on a single thread. Why would we need `ThreadLocalRandom`? `Random` is thread safe.

@Xorlev

Xorlev commented May 5, 2013

I haven't figured out where the contention begins (in terms of threads), but if benchmark performance is the goal then reducing any code that could potentially result in contention seems like a good idea.

@sprsquish
Contributor

sprsquish commented May 5, 2013

A quick test on my local machine shows plain `Random` to be faster than its thread-local counterpart. (Clarification: this was at a somewhat low concurrency setting and may turn out differently at higher settings.)

Benchmark performance isn't my ultimate goal here. My aim is to provide an honest setup that mimics production-level code. If someone looks deeper at Finagle after seeing these benchmarks, I want them to be able to use this code as a template for how they might write their own service.
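
(A sketch of the kind of quick local test described above: a crude single-run harness with no JIT warm-up, so treat any numbers it prints with suspicion. The thread and iteration counts are arbitrary.)

```scala
import java.util.Random
import java.util.concurrent.{CountDownLatch, ThreadLocalRandom}

// Time `threads` workers each drawing `iters` random ints; returns millis.
def time(threads: Int, iters: Int)(draw: () => Int): Long = {
  val start = new CountDownLatch(1)
  val done  = new CountDownLatch(threads)
  for (_ <- 1 to threads) {
    new Thread(new Runnable {
      def run(): Unit = {
        start.await()
        var i = 0
        while (i < iters) { draw(); i += 1 }
        done.countDown()
      }
    }).start()
  }
  val t0 = System.nanoTime()
  start.countDown() // release all workers at once
  done.await()
  (System.nanoTime() - t0) / 1000000
}

val shared = new Random()
println("shared Random:     " + time(8, 5000000)(() => shared.nextInt(10000)) + " ms")
println("ThreadLocalRandom: " + time(8, 5000000)(() => ThreadLocalRandom.current().nextInt(10000)) + " ms")
```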

@bhauer
Member

bhauer commented May 6, 2013

@sprsquish Thank you. That is what we'd prefer all tests to be, to whatever extent that proves to be possible. We'd prefer each test to demonstrate the idioms used in production code for the platform and framework.

@pfalls1
Contributor

pfalls1 commented May 6, 2013

@sprsquish When I run the tests, I'm getting the following error. It doesn't seem to happen on every request, maybe on about 1% of them. Any thoughts?

May 06, 2013 8:21:59 AM com.twitter.finagle.builder.SourceTrackingMonitor handle
SEVERE: A server service unspecified threw an exception
java.util.concurrent.CancellationException
    at com.twitter.util.ExecutorServiceFuturePool$$anonfun$apply$1.applyOrElse(FuturePool.scala:96)
    at com.twitter.util.ExecutorServiceFuturePool$$anonfun$apply$1.applyOrElse(FuturePool.scala:92)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at com.twitter.util.Promise.raise(Promise.scala:367)
    at com.twitter.util.Promise.raise(Promise.scala:372)
    at com.twitter.util.Promise$Chained.raise(Promise.scala:158)
    at com.twitter.util.Promise.raise(Promise.scala:372)
    at com.twitter.util.Promise$Chained.raise(Promise.scala:158)
    at com.twitter.finagle.dispatch.SerialServerDispatcher$$anonfun$1.apply$mcV$sp(ServerDispatcher.scala:31)
    at com.twitter.util.Future$$anonfun$ensure$1.apply(Future.scala:500)
    at com.twitter.util.Future$$anonfun$ensure$1.apply(Future.scala:500)
    at com.twitter.util.Promise$Monitored.apply(Promise.scala:40)
    at com.twitter.util.Promise$Monitored.apply(Promise.scala:31)
    at com.twitter.util.Promise$$anon$1.run(Promise.scala:262)
    at com.twitter.concurrent.Scheduler$LocalScheduler.run(Scheduler.scala:60)
    at com.twitter.concurrent.Scheduler$LocalScheduler.submit(Scheduler.scala:40)
    at com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:26)
    at com.twitter.util.Promise.runq(Promise.scala:248)
    at com.twitter.util.Promise.updateIfEmpty(Promise.scala:492)
    at com.twitter.finagle.transport.ChannelTransport.fail(ChannelTransport.scala:35)
    at com.twitter.finagle.transport.ChannelTransport.handleUpstream(ChannelTransport.scala:46)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:786)
    at org.jboss.netty.channel.SimpleChannelHandler.channelClosed(SimpleChannelHandler.java:216)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:106)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:786)
    at org.jboss.netty.channel.SimpleChannelHandler.channelClosed(SimpleChannelHandler.java:216)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:106)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:786)
    at org.jboss.netty.channel.SimpleChannelHandler.channelClosed(SimpleChannelHandler.java:216)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:106)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:786)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.channelClosed(SimpleChannelUpstreamHandler.java:225)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:88)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:786)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.channelClosed(SimpleChannelUpstreamHandler.java:225)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:88)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:786)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.cleanup(ReplayingDecoder.java:573)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.channelClosed(FrameDecoder.java:372)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:88)
    at org.jboss.netty.handler.codec.http.HttpServerCodec.handleUpstream(HttpServerCodec.java:56)
    at com.twitter.finagle.http.SafeHttpServerCodec.handleUpstream(Codec.scala:40)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:786)
    at org.jboss.netty.channel.SimpleChannelHandler.channelClosed(SimpleChannelHandler.java:216)
    at com.twitter.finagle.channel.ChannelStatsHandler.channelClosed(ChannelStatsHandler.scala:88)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:106)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
    at org.jboss.netty.channel.Channels.fireChannelClosed(Channels.java:476)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:736)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:471)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:332)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
Caused by: com.twitter.finagle.CancelledRequestException
    at com.twitter.finagle.NoStacktrace(Unknown Source)
@sprsquish
Contributor

sprsquish commented May 6, 2013

Are these coming during the run or at the end? I've noticed that wrk will simply shut down all connections when it's done with its run. Any requests still in flight will see CancelledRequestExceptions.
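
(If the end-of-run noise ever needed silencing, one hypothetical option would be to rescue the cancellation at the service boundary. `Future.handle` is real twitter-util API; the helper name and fallback parameter are made up for illustration.)

```scala
import com.twitter.finagle.CancelledRequestException
import com.twitter.util.Future

// Hypothetical helper: if the client has already hung up, map the
// cancellation to a fallback value instead of logging a SEVERE error.
def ignoreCancelled[A](f: Future[A], fallback: A): Future[A] =
  f.handle { case _: CancelledRequestException => fallback }
```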

@pfalls1
Contributor

pfalls1 commented May 6, 2013

@sprsquish Yes, that makes sense; they are coming at the end of the run.

pfalls1 merged commit 9b6bd83 into TechEmpower:master on May 6, 2013

@pfalls1
Contributor

pfalls1 commented May 6, 2013

Thanks @sprsquish for the contribution!
