
Keep alive doesn't work as expected + performance problems #580

Closed
vertexclique opened this issue Mar 6, 2018 · 4 comments
Labels
upstream An unresolvable issue: an upstream dependency bug

Comments

vertexclique commented Mar 6, 2018

During Apache Benchmark tests with the configuration below:

    let rocket_config = Config::build(Environment::Production)
        .address("127.0.0.1")
        .port(8081)
        .workers(200)
        .keep_alive(3)
        .finalize().unwrap();

Keep-Alive doesn't work, and after some time ab receives:

Test aborted after 10 failures

apr_socket_connect(): Operation already in progress (37)

Apache Benchmark works fine with the parameters below against other frameworks (Akka Http, Finagle):
ab -k -t 90 -c 100 -n 1000000000 http://localhost:8081/data

To add more information, here is the Rocket ab benchmark:

Server Software:        Rocket
Server Hostname:        127.0.0.1
Server Port:            8081

Document Path:          /data
Document Length:        82 bytes

Concurrency Level:      100
Time taken for tests:   94.146 seconds
Complete requests:      33238
Failed requests:        0
Keep-Alive requests:    0

and the Akka Http benchmark:

Server Software:        akka-http/10.1.0-RC2
Server Hostname:        127.0.0.1
Server Port:            8081

Document Path:          /data
Document Length:        Variable

Concurrency Level:      100
Time taken for tests:   90.001 seconds
Complete requests:      412095
Failed requests:        0
Keep-Alive requests:    412095
SergioBenitez added the triage (a bug report being investigated) label on Jul 8, 2018
SergioBenitez (Member) commented:

You're right that the keep-alive parameter appears to do nothing. We do tell hyper to use the configured value:

https://github.com/SergioBenitez/Rocket/blob/69683dddd7d672fb1211845944aac4c921182a24/core/lib/src/rocket.rs#L688-L690

Unfortunately, hyper either ignores this or is buggy. Other hyper-related "waiting" issues have been reported, so I'm inclined to believe the issue lies with hyper.
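For context, the forwarding amounts to roughly the following (a minimal sketch, not the exact Rocket source; it assumes hyper 0.10's blocking Server::keep_alive(Option<Duration>) API and a keep_alive config value expressed in seconds):

    use std::time::Duration;

    // Bind the blocking hyper 0.10 server and forward Rocket's configured
    // keep-alive timeout to it. Passing None disables keep-alive entirely.
    let mut server = hyper::Server::http("127.0.0.1:8081").expect("failed to bind");
    let timeout = config.keep_alive.map(|secs| Duration::from_secs(secs as u64));
    server.keep_alive(timeout);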

SergioBenitez added the upstream (an unresolvable issue: an upstream dependency bug) label on Jul 8, 2018
SergioBenitez (Member) commented:

I took a closer look at this today. On the latest master (though I don't think this has changed), setting a keep_alive of 0 (previously "false") results in Rocket/hyper sending a Connection: close header, as expected. The client should respond to that header by closing the connection.

Clients don't always take the hint, so servers tend to close the connection themselves after all of the data has been sent/received. I've patched hyper so that it does this when keep_alive is disabled.

I don't think this is related to what you're seeing. Instead, this appears to be an ab bug on BSD-like systems. As such, I'm closing out this issue. If you're not using BSD or macOS, do let me know and we can take a closer look.
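For what it's worth, a quick way to check the header behavior described above is to watch curl's verbose output for the response's Connection header (the URL here is simply the one from the original report; adjust to your own route):

    curl -sv -o /dev/null http://127.0.0.1:8081/data 2>&1 | grep -i '^< connection'

With keep_alive set to 0 the response should include Connection: close; with keep-alive enabled, an HTTP/1.1 response normally omits the header and the connection stays open for reuse.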

strohel commented Sep 24, 2020

I think the bug is still present in all 0.4.x releases; it is reproducible on Linux with, e.g., the hello_world example from the 0.4 branch and curl:

strohel@mat480s ~/projekty/Rocket/examples/hello_world $ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.20s
     Running `/home/strohel/projekty/Rocket/target/debug/hello_world`
🔧 Configured for development.
    => address: localhost
    => port: 8000
    => log: normal
    => workers: 16
    => secret key: generated
    => limits: forms = 32KiB
    => keep-alive: 5s
    => tls: disabled
🛰  Mounting /:
    => GET / (hello)
🚀 Rocket has launched from http://localhost:8000
strohel@mat480s ~/projekty/Rocket/examples/hello_world $ curl -v http://localhost:8000/ http://localhost:8000/
*   Trying ::1:8000...
* Connected to localhost (::1) port 8000 (#0)
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.72.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain; charset=utf-8
< Server: Rocket
< Content-Length: 13
< Date: Thu, 24 Sep 2020 11:16:44 GMT
< 
* Connection #0 to host localhost left intact
Hello, world!
* Found bundle for host localhost: 0x560ac3bc55c0 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host localhost
* Connected to localhost (::1) port 8000 (#0)
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.72.0
> Accept: */*
> 
* Connection died, retrying a fresh connect(retry count: 1)
* Closing connection 0
* Issue another request to this URL: 'http://localhost:8000/'
* Hostname localhost was found in DNS cache
*   Trying ::1:8000...
* Connected to localhost (::1) port 8000 (#1)
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.72.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain; charset=utf-8
< Server: Rocket
< Content-Length: 13
< Date: Thu, 24 Sep 2020 11:16:44 GMT
< 
* Connection #1 to host localhost left intact
Hello, world!

^^^ Notice the "* Connection died, retrying a fresh connect (retry count: 1)" message and the fact that curl made 3 GET requests to fetch 2 URLs.

I've traced the problem to a bug in hyper 0.10.x's BufReader and submitted a fix in hyperium/hyper#2288, but I don't know whether the old 0.10.x branch is still maintained.

strohel commented Sep 24, 2020

I've traced the problem down to a bug in hyper 0.10.x BufReader and submitted a fix in hyperium/hyper#2288

...which has been closed, as the maintainers (understandably) don't want to release a new 0.10.x version.

Then I'd suggest changing the default keep-alive = 5 to 0 in the 0.4 branch, and perhaps adding a note somewhere that keep-alive doesn't work in 0.4 without patching hyper.
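Until that happens, an application on the 0.4 branch can apply the same workaround itself by disabling keep-alive explicitly, e.g. with the same Config builder used in the original report (a sketch; the hello route stands in for the hello_world example, and setting keep_alive = 0 in Rocket.toml should have the same effect):

    #![feature(proc_macro_hygiene, decl_macro)]

    #[macro_use] extern crate rocket;

    use rocket::config::{Config, Environment};

    #[get("/")]
    fn hello() -> &'static str {
        "Hello, world!"
    }

    fn main() {
        // keep_alive(0) disables keep-alive, so Rocket sends Connection: close
        // and the buggy hyper 0.10 connection-reuse path is never exercised.
        let config = Config::build(Environment::Development)
            .address("localhost")
            .port(8000)
            .keep_alive(0)
            .finalize()
            .unwrap();

        rocket::custom(config).mount("/", routes![hello]).launch();
    }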
