
net/http: Can the http.Server add set read/write buffer size? #13870

Open
felixhao opened this issue Jan 8, 2016 · 6 comments
@felixhao felixhao commented Jan 8, 2016

Hi, in most of our use cases the HTTP response body is larger than 4<<10 bytes, so we need to set the bufio read/write buffer sizes and the connection's SNDBUF/RCVBUF.

We also think this change is appropriate. Does Go plan to support it?


@bradfitz bradfitz commented Jan 8, 2016

Can you report any performance numbers with different buffer sizes?

@bradfitz bradfitz added this to the Unplanned milestone Jan 8, 2016

@felixhao felixhao commented Jan 9, 2016

Yeah, sample test results are below; we write 10KB per response.
env: Debian GNU/Linux 8.2, 4 cores, 4GB RAM

test code

// imports: log, testing, net/http, net/http/httptest, io/ioutil
var bigBs = make([]byte, 10<<10) // 10KB payload

func BenchmarkBigWrite(b *testing.B) {
    s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write(bigBs) // bigBs is 10KB
    }))
    defer s.Close()
    b.SetParallelism(100)
    b.ResetTimer()
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            res, err := http.Get(s.URL)
            if err != nil {
                log.Fatal(err)
            }
            _, err = ioutil.ReadAll(res.Body)
            res.Body.Close()
            if err != nil {
                log.Fatal(err)
            }
        }
    })
}

standard net/http, three runs

go test -test.bench=".*" -benchmem -benchtime=5s
BenchmarkBigWrite-4   100000         79756 ns/op       37929 B/op         72 allocs/op
ok      mytest/httpbuf  8.857s
BenchmarkBigWrite-4   100000         78570 ns/op       37948 B/op         72 allocs/op
ok      mytest/httpbuf  8.665s
BenchmarkBigWrite-4   100000         79072 ns/op       37876 B/op         72 allocs/op
ok      mytest/httpbuf  8.718s

change net/http/server.go line 479

bw := newBufioWriterSize(checkConnErrorWriter{c}, 4<<10)

to

bw := newBufioWriterSize(checkConnErrorWriter{c}, 10<<10)

go test -test.bench=".*" -benchmem -benchtime=5s
BenchmarkBigWrite-4   100000         69645 ns/op       39890 B/op         73 allocs/op
ok      mytest/httpbuf  7.692s
BenchmarkBigWrite-4   100000         69816 ns/op       39961 B/op         73 allocs/op
ok      mytest/httpbuf  7.702s
BenchmarkBigWrite-4   100000         67768 ns/op       39856 B/op         73 allocs/op
ok      mytest/httpbuf  7.516s

@nhooyr nhooyr commented Sep 29, 2018

Related: #22618


@nhooyr nhooyr commented May 17, 2019

You could also just wrap the response writer in your own bufio.Writer. That costs an extra allocation, but it keeps the exposed API smaller, which I think is a fair tradeoff.


@nhooyr nhooyr commented Jun 7, 2020

Also, the ability to adjust the Transport's buffer sizes was only added because the Transport performs the io.Copy itself, so you can't adjust that buffer size on your own.

With the server, however, you can just wrap the writer as I mentioned above.


@bolkedebruin bolkedebruin commented Sep 9, 2020

Hello! I'm the implementer of a server that tunnels remote desktop connections over WebSocket (in this case Gorilla). I was facing performance challenges, particularly when high-latency, high-bandwidth connections were involved.

I fired up Wireshark to see on which end the data flow was restricted. It turned out that

  • the bandwidth of each connection was quite steady
    
  • there was no significant amount of dropped packets / retries, so probably not limited by congestion control
    
  • the advertised window in the ACK packets coming back from a client was sufficiently generous (around 300KB)
    
  • however, when sending there were always only around ~4KB of data in flight. Given the high latency to the clients, the server spent most of its time waiting for data to be ACKed, then immediately sent out a burst of new packets, then went back to waiting for outstanding bytes to be ACKed.
    

This is very likely due to a small TCP send buffer, since that would limit the amount of outstanding bytes the TCP stack can keep track of.

For high-bandwidth, high-latency connections it is extremely beneficial to have a large(r) TCP recv buffer at the OS level, and I need to be able to set it per client (i.e. skip it when the connection is not high latency). Wrapping in bufio as suggested is not sufficient, as the OS will just do as it wants.

So the need is to be able to set the OS-level receive/send buffers (not the internal bufio buffer) per connection, which is not exposed in the API.
