
How to set concurrent request processing #161

Closed · lunemec opened this issue Jan 30, 2017 · 13 comments
Labels: question (a question; converts to discussion)

Comments

lunemec commented Jan 30, 2017

I created a simple HTTP service:
https://github.com/lunemec/rust-birkana-http

It's just an experiment with HTTP frameworks in Rust. It performs OK, but it can't handle concurrent requests. This shows up when serving static files: if you open the dev/release version in two browsers and load them at the same time, one of them waits for the other to finish loading resources.

I saw a mention in #21 of some workers setting, but I couldn't find it in the API docs or by googling. Does Rocket have this option, or is it up to me to create a thread pool and somehow tie it into the HTTP serving?

Thank you.

@SergioBenitez (Member)

In version 0.1, Rocket sets the number of threads to the number of cores on the machine, and it cannot be changed. Presumably your machine has a single core.

In the soon-to-be-released v0.2, Rocket sets the default number of threads to max(2, number of CPU cores), and the number can be changed via the workers configuration parameter. I'm hoping to release v0.2 in a few days, and certainly by the end of the week.
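For illustration, a minimal Rocket.toml sketch that overrides the worker count for the development environment (a sketch based on the v0.2 configuration format; a later comment in this thread shows the same key under a [global] section, and the value 16 is just an example):

# Rocket.toml
[development]
workers = 16   # overrides the default of max(2, number of CPU cores)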

@SergioBenitez added the question label Jan 30, 2017
@lunemec (Author) commented Jan 30, 2017

Thank you for the answer. I'm using version 0.1.5, but my machine is a MacBook Pro (Retina, 13-inch, Early 2015), which has a dual-core Core i5 with hyper-threading, so 4 logical cores. I wonder if there are some shenanigans with the FS locking the files being read? You can easily clone the repo I posted and run it to see the result of the concurrent access.

@SergioBenitez (Member) commented Jan 30, 2017

I tried this out on my machine, and Rocket is properly serving requests from multiple workers. (Neat app, by the way!)

Perhaps you can check how many workers Rocket is using by temporarily switching to the master branch? Just change your Cargo.toml to:

[package]
name = "rust-birkana-http"
version = "0.1.0"
authors = ["Nemec Lukas <lukas.nemec2@firma.seznam.cz>"]

[dependencies]
rocket = { git = "https://github.com/SergioBenitez/Rocket" }
rocket_codegen = { git = "https://github.com/SergioBenitez/Rocket" }
rust-birkana = "1.1.1"
serde = "0.8"
serde_derive = "0.8"

[dependencies.rocket_contrib]
git = "https://github.com/SergioBenitez/Rocket"
default-features = false
features = ["tera_templates"]

Rocket will log how many workers it's using at launch:

🔧  Configured for development.
    => address: localhost
    => port: 8080
    => log: normal
    => workers: 12
    => [extra] template_dir: "templates/"

@lunemec (Author) commented Jan 31, 2017

This shows that it is running with 5 workers.

$ target/debug/rust-birkana-http
🔧  Configured for development.
    => address: 0.0.0.0
    => port: 8080
    => log: normal
    => workers: 5
    => [extra] template_dir: "templates/"
🛰  Mounting '/':
    => GET /
    => GET /static/<file..>
    => POST /generate
🚀  Rocket has launched from http://0.0.0.0:8080...

However, the reason I thought it can't serve files concurrently is that some of my CSS and JS files take over 5 s to load in the browser, and the network monitor shows the time is spent in TTFB (time to first byte), which is server-related.

If you run this code on your machine and load it several times (two separate windows at the same time), you'll sometimes get loading times over 6 s.

That was the primary reason I noticed: 102 KB of CSS shouldn't take 6 s to load (and I have Apple's uber-fast SSD). The most curious thing is that it is not consistent; sometimes it loads instantly.

I even got a 2 s loading time on the index, which is a template-rendered page (basically static).
[screenshot of the browser network monitor]

@mehcode (Contributor) commented Jan 31, 2017

Some quick benchmarking of your code:

cargo run
$ wrk -c 100 -d 1m -t 8 http://localhost:8080/static/css/bootstrap.css
Running 1m test @ http://localhost:8080/static/css/bootstrap.css
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    55.23ms    9.05ms 113.23ms   90.90%
    Req/Sec    36.20     19.49    80.00     59.98%
  8688 requests in 1.00m, 1.19GB read
Requests/sec:    144.56
Transfer/sec:     20.21MB
cargo build --release && ./target/release/rust-birkana-http
$ wrk -c 100 -d 1m -t 8 http://localhost:8080/static/css/bootstrap.css
Running 1m test @ http://localhost:8080/static/css/bootstrap.css
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.63ms   20.94ms  49.10ms   62.87%
    Req/Sec   145.17    158.85     1.85k    94.33%
  17369 requests in 1.00m, 2.37GB read
Requests/sec:    289.05
Transfer/sec:     40.38MB

I also opened your site and hard-refreshed it a dozen times. Loading times never went over 100ms. I'm not sure what you're seeing. Perhaps try running it in release mode (if you haven't)?

@lunemec (Author) commented Jan 31, 2017

Hmm, this is really strange behavior. I'll try to get wrk results tomorrow. I tried the release version; no difference. I'll also try on another system (a Linux server); there may be some macOS FS shenanigans...

@SergioBenitez (Member)

I tried your code on OS X 10.11, but I'm not seeing the behavior you describe. What happens if you try bumping the number of workers to something large like 50? Just add a workers = 50 line to Rocket.toml.

@lunemec (Author) commented Feb 1, 2017

Well, 50 workers does work: with 50 I can't reproduce the slowness. However, when I change back to 5 workers, I can still reproduce it. Below are wrk results (both with 5 workers):

debug

wrk -c 100 -d 1m -t 8 http://localhost:8080/static/css/bootstrap.css
Running 1m test @ http://localhost:8080/static/css/bootstrap.css
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    28.37ms   16.82ms 258.46ms   92.97%
    Req/Sec   177.41     40.23   232.00     80.74%
  10575 requests in 1.00m, 1.44GB read
Requests/sec:    175.96
Transfer/sec:     24.58MB

release

wrk -c 100 -d 1m -t 8 http://localhost:8080/static/css/bootstrap.css
Running 1m test @ http://localhost:8080/static/css/bootstrap.css
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.68ms    2.28ms 186.32ms   95.44%
    Req/Sec     1.45k     0.87k    2.78k    54.59%
  173606 requests in 1.00m, 23.68GB read
Requests/sec:   2888.28
Transfer/sec:    403.37MB

I tried both Firefox and Chrome, and they both show the same behavior. I'll try to record it with a screen recorder...

@SergioBenitez (Member)

Can you hop on IRC or Matrix to talk about this a bit more? I'm fairly certain I know what's going on, but I'd like to have a bit of back-and-forth to confirm. There are links in the README on how to join the chat.

@lunemec (Author) commented Feb 1, 2017

Sure! Thank you for the help. Here is a video of the "slowness"

https://drive.google.com/drive/folders/0Bwv_8TwNErXzRmRELThUTU1LOTA?usp=sharing

You can see the browser there taking over 4 s to start receiving data from the server.

@lunemec (Author) commented Feb 1, 2017

We discovered it is Hyper's fault: the synchronous backend ties up a worker thread for each keep-alive connection until it times out, so a browser opening several persistent connections can exhaust a small worker pool.

lunemec closed this as completed Feb 1, 2017
@zxvfxwing

OK, I want to add that I had the same error/behavior while trying to load multiple JavaScript files.

Rocket.toml:

[global]
address = "localhost"
port = 8000
workers = 4
template_dir = "www/templates/"

CPU: Intel i5-6600K (4 cores) @ 4.400GHz

Code:

use std::path::{Path, PathBuf};
use rocket::response::NamedFile;

// Serve any file under the local www/ directory.
#[get("/www/<file..>")]
fn www(file: PathBuf) -> Option<NamedFile> {
    NamedFile::open(Path::new("www/").join(file)).ok()
}

Some screenshots to reflect the behavior:

  • First, load two scripts in a particular order;
  • Then, load only one script (try it with each of the two different scripts);
  • Finally, load the two scripts in reverse order.

[four screenshots of the browser network timings, 2017-11-03]

As you can see, Rocket seems to be bottlenecked somewhere (5 s to load a JS file is huge).

Increasing the number of workers to 50, as suggested above, resolved my issue. But I'm curious and would like to understand why it doesn't work as expected with 4 workers.

Maybe this issue has already been fixed with another workaround, but I didn't see it.

Thanks for reading.

@jordanmack

Increasing the number of workers also worked for me. On a multi-core machine I didn't have any problem, but on a single-core machine the TTFB would be 5+ seconds whenever the browser made 3+ requests in quick succession. Manually setting workers to 8 immediately cleared up the issue.
