This repository has been archived by the owner on Apr 23, 2019. It is now read-only.

this seed project can't handle high concurrent requests, any suggestion? #80

Closed
gaplo917 opened this issue Mar 16, 2018 · 4 comments


gaplo917 commented Mar 16, 2018

I am running benchmarks (RPS & latency) on different web frameworks, using an HTTP benchmark client written in Go.

Other frameworks like Vert.x, Express, and Spring Boot have no problem handling high-concurrency requests (at least 1000 connections) in their DEFAULT seed projects. But this DEFAULT seed project does not.

Benchmark Env

The benchmark was run on Google Cloud Platform with:

  • Server n1-standard-1 (1 vCPU, 3.75GB RAM)
  • Client n1-highcpu-8 (8 vCPU, 7.6GB RAM)

BOTH the client & server were set to ulimit -n 10000
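As an aside (not from the original report): `ulimit -n` only affects the current shell session, and the soft limit cannot be raised above the hard limit. To make a 10000-descriptor limit stick across logins on Linux, something like the following in `/etc/security/limits.conf` is typically needed (the values here are illustrative, matching the benchmark setup above):

```conf
# /etc/security/limits.conf -- illustrative values for this benchmark
# soft limit: the default ceiling a process starts with
# hard limit: the maximum a non-root process may raise the soft limit to
*    soft    nofile    10000
*    hard    nofile    10000
```

Note that services launched by systemd ignore this file and need `LimitNOFILE=` in their unit instead, and a re-login is usually required for the change to apply to interactive shells.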

The server ran the following commands:

sbt stage

./play-scala-starter-example-1.0-SNAPSHOT/bin/play-scala-starter-example -J-Xmx3G -J-server -Dplay.http.secret.key='justbenchmark' -Dhttp.port=8080

The client ran the following command:

bombardier --http2 -o json -p result -c 50 -d 30s -l http://10.148.0.2:8080

Benchmark Result

The benchmark results are very disappointing.

When concurrency = 50, it is fine:

{
  "spec": {
    "numberOfConnections": 50,
    "testType": "timed",
    "testDurationSeconds": 30,
    "method": "GET",
    "url": "http://10.148.0.2:8080",
    "body": "",
    "stream": false,
    "timeoutSeconds": 2,
    "client": "net/http.v2"
  },
  "result": {
    "bytesRead": 131204824,
    "bytesWritten": 33499104,
    "timeTakenSeconds": 30.002270454,
    "req1xx": 0,
    "req2xx": 348949,
    "req3xx": 0,
    "req4xx": 0,
    "req5xx": 0,
    "others": 0,
    "latency": {
      "mean": 4294.770914947457,
      "stddev": 4960.572371647603,
      "max": 704656,
      "percentiles": {
        "50": 3640,
        "75": 4746,
        "90": 7414,
        "99": 12850
      }
    },
    "rps": {
      "mean": 11631.913864126573,
      "stddev": 2058.378627341629,
      "max": 17422.439420870218,
      "percentiles": {
        "50": 11675.413833,
        "75": 13163.333799,
        "90": 14114.374785,
        "99": 15581.722049
      }
    }
  }
}

When concurrency = 250, roughly a third of the requests already fail:

{
  "spec": {
    "numberOfConnections": 250,
    "testType": "timed",
    "testDurationSeconds": 30,
    "method": "GET",
    "url": "http://10.148.0.2:8080",
    "body": "",
    "stream": false,
    "timeoutSeconds": 2,
    "client": "net/http.v2"
  },
  "result": {
    "bytesRead": 27744288,
    "bytesWritten": 7291392,
    "timeTakenSeconds": 30.650683353,
    "req1xx": 0,
    "req2xx": 73788,
    "req3xx": 0,
    "req4xx": 0,
    "req5xx": 0,
    "others": 34786,
    "errors": [
      {
        "description": "Get http://10.148.0.2:8080: dial tcp 10.148.0.2:8080: socket: too many open files",
        "count": 32621
      },
      {
        "description": "Get http://10.148.0.2:8080: net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
        "count": 2164
      },
      {
        "description": "Get http://10.148.0.2:8080: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)",
        "count": 1
      }
    ],
    "latency": {
      "mean": 69328.29279569695,
      "stddev": 150725.60828889467,
      "max": 2020753,
      "percentiles": {
        "50": 11765,
        "75": 23937,
        "90": 39886,
        "99": 2000143
      }
    },
    "rps": {
      "mean": 3671.3305974733885,
      "stddev": 14898.84396681227,
      "max": 296065.584448267,
      "percentiles": {
        "50": 2592.91667,
        "75": 3706.683892,
        "90": 4745.656064,
        "99": 6919.851323
      }
    }
  }
}

When concurrency = 2000, only ~3% of requests succeed:

{
  "spec": {
    "numberOfConnections": 2000,
    "testType": "timed",
    "testDurationSeconds": 30,
    "method": "GET",
    "url": "http://10.148.0.2:8080",
    "body": "",
    "stream": false,
    "timeoutSeconds": 2,
    "client": "net/http.v2"
  },
  "result": {
    "bytesRead": 41538600,
    "bytesWritten": 10904736,
    "timeTakenSeconds": 30.416049812,
    "req1xx": 0,
    "req2xx": 110456,
    "req3xx": 0,
    "req4xx": 0,
    "req5xx": 0,
    "others": 3010985,
    "errors": [
      {
        "description": "Get http://10.148.0.2:8080: dial tcp 10.148.0.2:8080: socket: too many open files",
        "count": 3007838
      },
      {
        "description": "Get http://10.148.0.2:8080: net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
        "count": 3135
      },
      {
        "description": "Get http://10.148.0.2:8080: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)",
        "count": 12
      }
    ],
    "latency": {
      "mean": 19210.09268828083,
      "stddev": 87254.75049662581,
      "max": 2698506,
      "percentiles": {
        "50": 31,
        "75": 35,
        "90": 33352,
        "99": 346430
      }
    },
    "rps": {
      "mean": 120471.92026549476,
      "stddev": 74269.11622126713,
      "max": 232277.97314570108,
      "percentiles": {
        "50": 153998.922111,
        "75": 186300.207008,
        "90": 193309.714519,
        "99": 212481.367175
      }
    }
  }
}

My Question

Any suggestions for tuning Play Framework (and its underlying Akka HTTP backend) to handle high-concurrency requests?
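For reference, here is a sketch of the kind of knobs that might be worth experimenting with in `conf/application.conf`, assuming the Play Akka HTTP backend picks up the standard `akka.http.server` settings from the application config (the values below are illustrative starting points, not tested recommendations):

```conf
# conf/application.conf -- illustrative Akka HTTP server tuning (not verified)
akka.http.server {
  # Maximum number of concurrently accepted connections
  # (the Akka HTTP default is 1024, which is close to the failing load here)
  max-connections = 4096

  # TCP accept backlog passed to the listening socket (Akka HTTP default: 100)
  backlog = 1024

  # How long an idle connection is kept open before the server closes it
  idle-timeout = 60s
}
```

Whether these settings take effect, and what values are safe on a 1-vCPU machine, would need to be confirmed against the Play and Akka HTTP documentation for the exact version in use.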

Play-scala Benchmark Source Code:

https://github.com/gaplo917/play-scala-starter-example/

Modification for the benchmark
gaplo917@5ad7cae

Other Benchmark results:

https://github.com/gaplo917/web-framework-benchmark

https://rawgit.com/gaplo917/web-framework-benchmark/master/charting/rps/index.html
https://rawgit.com/gaplo917/web-framework-benchmark/master/charting/latency/index.html

@wsargent (Member)

Play is able to handle massive concurrency just fine: there are projects that have reduced their number of servers by 80% because Play is more efficient at handling load.

There are projects like https://github.com/tenorviol/play-c100k-test that test out Play's scalability, and the REST API example has a load test built into it, which you may find more helpful: https://github.com/playframework/play-scala-rest-api-example/

Since the starter project is explicitly here to demo Play to first-time users, and the general report is that Play is fast, the most likely conclusion is that your benchmark is flawed in some way. This is not the right forum for that discussion, as this issue tracker is specifically for fixing bugs in this project -- you can discuss the details on https://discuss.lightbend.com/c/play/ for more feedback.

Alternatively, if you can point to a bottleneck in Play's default configuration that is slowing it down, we can open a bug in playframework/playframework.


gaplo917 commented Mar 16, 2018

Hi @wsargent, thank you for your response. I have used Play with Scala for two years, since 2.4 (which used Netty). I know that it is much more efficient than PHP at handling high-concurrency requests, which is why we picked it for our API servers at the time.

If you check my benchmark results, another JVM-based framework, Vert.x (Netty + Kotlin coroutines), has nearly 20x the throughput of PHP Lumen, and at concurrency = 50 it has 2x the throughput of this Play Scala project.

So I am curious: is it that the default Akka HTTP configuration (which replaced Netty as the default server backend) can't handle high concurrency well? It is 100% reproducible on this starter project using the steps above.

I think a starter project is extremely important: it illustrates how good a framework is, and it is what people benchmark before picking a framework. That's why I opened the issue here.

Do you have any suggested configuration for a Google Cloud Platform n1-standard-1 machine (1 vCPU, 3.75GB RAM)?


wsargent commented Mar 16, 2018

Again, this is not the place for back and forth suggestions -- I would recommend asking on https://discuss.lightbend.com/c/play/ and referring back to this issue.

gaplo917 changed the title from "this seed project can't handle high concurrency request, any suggestion?" to "this seed project can't handle high concurrent requests, any suggestion?" on Mar 17, 2018