Monitor performance in CI #70

Closed
radu-matei opened this issue Feb 11, 2022 · 9 comments

@radu-matei
Member

Very early tests for #54 show approximately 1ms of additional per-request latency compared to the same application running in Wagi, which, for multiple concurrent requests, can be significant.

We should explore where that extra latency is coming from.

radu-matei added the bug and P0 labels Feb 11, 2022
radu-matei added this to the v0.1.0 milestone Feb 11, 2022
@radu-matei
Member Author

Using a Spin build based on #76, with the following Spin configuration (running components that use the Wagi executor, so this is as close as possible to running Wagi):

name = "fermyon.com"
version = "1.0.0"
description = "A simple application showing both Spin and Wagi components."
authors = ["Radu Matei <radu@fermyon.com>"]
trigger = { type = "http", base = "/" }


[[component]]
source = "bartholomew.wasm"
id = "bartholomew"
files = [ "content/**/*" , "templates/*", "scripts/*", "config/*"]
[component.trigger]
route = "/..."
executor = "wagi"

[[component]]
source = "fileserver.gr.wasm"
id = "fileserver"
files = ["static/**/*"]
environment = { PATH_PREFIX = "static/" }
[component.trigger]
route = "/static/..."
executor = "wagi"

This is the result of running a load test locally with Spin:

# application running using Spin, with components that use the Wagi executor
 ➜ bombardier -c 100 -n 10000 http://localhost:3000/blog/2022-02-08-hello-world
Bombarding http://localhost:3000/blog/2022-02-08-hello-world with 10000 request(s) using 100 connection(s)
 100.00% 371/s 26s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec       374.55     104.01     912.78
  Latency      266.33ms   107.40ms      0.97s
  HTTP codes:
    1xx - 0, 2xx - 10000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:     6.81MB/s

And the same application, this time run with Wagi:

# application running using Wagi
➜ bombardier -c 100 -n 10000 http://localhost:3000/blog/2022-02-08-hello-world
Bombarding http://localhost:3000/blog/2022-02-08-hello-world with 10000 request(s) using 100 connection(s)
 100.00% 358/s 27s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec       361.79     132.12     844.97
  Latency      276.70ms   111.16ms   800.17ms
  HTTP codes:
    1xx - 0, 2xx - 10000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:     6.56MB/s

For reference, updating the application to use components that target the Spin HTTP executor (as opposed to the Wagi executor):

[[component]]
source = "spin-bartholomew.wasm"
id = "bartholomew"
files = [ "content/**/*" , "templates/*", "scripts/*", "config/*"]
[component.trigger]
route = "/..."
executor = "spin"

[[component]]
source = "spin_static_fs.wasm"
id = "fileserver"
files = ["static/**/*"]
environment = { PATH_PREFIX = "static/" }
[component.trigger]
route = "/static/..."
executor = "spin"

The performance is slightly better still:

# application running using Spin, with Spin HTTP components
➜ bombardier -c 100 -n 10000 http://localhost:3000/blog/2022-02-08-hello-world
Bombarding http://localhost:3000/blog/2022-02-08-hello-world with 10000 request(s) using 100 connection(s)
100.00% 377/s 26s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec       381.55     115.34     897.43
  Latency      261.63ms   102.31ms   782.53ms
  HTTP codes:
    1xx - 0, 2xx - 10000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:     6.92MB/s

These results are on my machine, for 100 concurrent connections.

@radu-matei
Member Author

Running the tests in the same order as before, this time with a single concurrent connection, the results are almost identical for Wagi and Spin, regardless of the executor and component types used: ~1MB/s throughput and ~17ms average latency:

# application running using Spin, with components that use the Wagi executor
➜ bombardier -c 1 -n 1000 http://localhost:3000/blog/2022-02-08-hello-world
Bombarding http://localhost:3000/blog/2022-02-08-hello-world with 1000 request(s) using 1 connection(s)
100.00% 57/s 17s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec        58.43      21.06     289.53
  Latency       17.24ms     1.39ms    47.11ms
  HTTP codes:
    1xx - 0, 2xx - 1000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:     1.06MB/s
# application running using Wagi
➜ bombardier -c 1 -n 1000 http://localhost:3000/blog/2022-02-08-hello-world
Bombarding http://localhost:3000/blog/2022-02-08-hello-world with 1000 request(s) using 1 connection(s)
100.00% 57/s 17s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec        57.80      19.25     105.59
  Latency       17.37ms     1.08ms    39.69ms
  HTTP codes:
    1xx - 0, 2xx - 1000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:     1.05MB/s
# application running using Spin, with Spin HTTP components
➜ bombardier -c 1 -n 1000 http://localhost:3000/blog/2022-02-08-hello-world
Bombarding http://localhost:3000/blog/2022-02-08-hello-world with 1000 request(s) using 1 connection(s)

100.00% 56/s 17s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec        56.78      19.86     314.47
  Latency       17.74ms     1.49ms    37.87ms
  HTTP codes:
    1xx - 0, 2xx - 1000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:     1.02MB/s

Of course, as we've seen with 100 concurrent connections, the overhead can add up to significant latency, but for an application that makes heavy use of IO, the results are almost identical.

@radu-matei
Member Author

radu-matei commented Feb 14, 2022

These test results are purely informational; we need to run them in a reproducible manner, and my machine isn't currently the most reliable for that.

@radu-matei
Member Author

A good resolution for this issue would be a suite of tests that we run as part of CI.
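
As a sketch of what such a suite could look like (illustrative only: the benchmark name, the target URL, and the ureq client are assumptions, not a description of any eventual implementation), a Criterion benchmark could drive requests against a locally running Spin instance and track latency across runs:

use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical benchmark: measures end-to-end request latency against a
// Spin instance assumed to already be listening on localhost:3000.
fn bench_http_trigger(c: &mut Criterion) {
    let url = "http://localhost:3000/blog/2022-02-08-hello-world";
    c.bench_function("http-trigger/request-latency", |b| {
        b.iter(|| {
            // Simple blocking GET via the ureq crate; any HTTP client works.
            let resp = ureq::get(url).call().expect("request failed");
            assert_eq!(resp.status(), 200);
        })
    });
}

criterion_group!(benches, bench_http_trigger);
criterion_main!(benches);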

radu-matei changed the title from Investigate performance to Monitor performance in CI Feb 23, 2022
lann self-assigned this Mar 1, 2022
@lann
Collaborator

lann commented Mar 1, 2022

I'm going to put this on hold until I get my desktop delivered.

@lann
Collaborator

lann commented Mar 21, 2022

A first step toward this is the basic HTTP trigger benchmarks merged in #185. They run nightly via the spin-benchmarks repo, and the results are published to https://fermyon.github.io/spin-benchmarks/criterion/reports/

radu-matei removed the P0 label Mar 21, 2022
lann added the enhancement label and removed the bug label Mar 21, 2022
@lann
Collaborator

lann commented Mar 21, 2022

@vdice: maybe a Slack bot approach that shows the results and/or diff week-over-week could be neat, kinda like the comms website interaction diff messages we get

The cargo-criterion tool can dump events to stdout that would be perfect for this:

{"reason":"benchmark-complete","id":"startup/spin-executor", [...CLIP...], "change":"NoChange"}

@radu-matei
Member Author

Perfect reason to build a Spin component that writes to Slack.
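
A rough sketch of what that could look like against the Spin Rust SDK of this era (the webhook URL is a placeholder, and the exact SDK types and the spin_sdk::outbound_http API are assumptions that have changed across Spin versions):

use anyhow::Result;
use bytes::Bytes;
use spin_sdk::{
    http::{Request, Response},
    http_component,
};

// Placeholder webhook URL; a real component would read this from config,
// and the manifest would need the Slack host in allowed_http_hosts.
const SLACK_WEBHOOK: &str = "https://hooks.slack.com/services/CHANGE-ME";

/// Takes a benchmark summary in the request body and forwards it to Slack.
#[http_component]
fn post_to_slack(req: Request) -> Result<Response> {
    let summary = req.body().clone().unwrap_or_default();
    // Wrap the summary in Slack's {"text": "..."} payload shape.
    let payload = format!(
        r#"{{"text":{}}}"#,
        serde_json::to_string(std::str::from_utf8(&summary)?)?
    );

    spin_sdk::outbound_http::send_request(
        http::Request::builder()
            .method("POST")
            .uri(SLACK_WEBHOOK)
            .header("content-type", "application/json")
            .body(Some(Bytes::from(payload)))?,
    )?;

    Ok(http::Response::builder().status(200).body(None)?)
}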

radu-matei removed this from the v0.1.0 milestone Mar 23, 2022
@radu-matei
Member Author

The benchmarks we have already been publishing for a while now address this.
See https://fermyon.github.io/spin-benchmarks/criterion/reports.
