
Slow transferring files to client - HOL blocking? #228

Closed
smithrobert opened this issue Mar 13, 2017 · 7 comments

Comments

smithrobert commented Mar 13, 2017

I noticed significant delays when repeatedly transferring a file (10-100 kB) from server to client over the same connection/socket. For example, transferring a 100 kB file 100 times takes up to 7 seconds.

My first hunch is that the connection is susceptible to HOL blocking, or have I missed something?

Server code


package main

import (
  "io/ioutil"
  "log"
  "net/http"

  "github.com/gorilla/websocket"
)

var file []byte

var upgrader = websocket.Upgrader{
    ReadBufferSize:  1024,
    WriteBufferSize: 1024,
}

func handler(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Fatal(err)
    }

    for i := 0; i < 100; i++ {
      // websocket.BinaryMessage == 2; check the error instead of dropping it
      if err := conn.WriteMessage(websocket.BinaryMessage, file); err != nil {
        log.Println(err)
        return
      }
    }
}

func main() {
  file, _ = ioutil.ReadFile("<path-to-file>")

  http.HandleFunc("/", handler)
  err := http.ListenAndServeTLS(":443", "<path-to-cert>", "<path-to-key>", nil)
  if err != nil {
    panic("ListenAndServe: " + err.Error())
  }
}
fschr commented Mar 13, 2017

I think this issue has less to do with HOL blocking on the lower-level networking side of things and more to do with the fact that conn.WriteMessage is blocking: the second call can't begin until the first one completes.

Try replacing conn.WriteMessage(... with go conn.WriteMessage(... and add some kind of barrier after the for loop.

garyburd commented Mar 13, 2017

@smithrobert

  • How are you measuring the time on the client?
  • What is the time to return the same amount of data in an HTTP response?
  • There is HOL blocking. Data messages in the websocket protocol are not multiplexed with other data messages.

@fschr Concurrent calls to WriteMessage are not supported.

smithrobert commented Mar 14, 2017

@garyburd I'm measuring the time for the client to establish a connection and receive 100 messages (over WAN). Example in Chrome:

var count = 0;
console.time('ws');
var webSocket = new WebSocket('wss://mywebsite.com');
webSocket.onmessage = function(event) {
  if (++count == 100) {
    console.timeEnd('ws');
  }
};

I made an HTTP/2 server in Node.js for comparison. Establishing a connection and pushing the same amount of data takes ~600 ms, compared to multiple seconds with WebSocket.

Server code
const spdy = require('spdy');
const fs = require('fs');
const crypto = require('crypto');

const data = crypto.randomBytes(100 * 1024);
const options = {
  key: fs.readFileSync('keys/privkey.pem'),
  cert: fs.readFileSync('keys/fullchain.pem')
};

spdy.createServer(options, function(request, response) {
  for (let i = 0; i < 100; i++) {
    response.push('./' + i, {}, function(err, stream) {
      stream.end(data);
    });
  }
}).listen(443);
Client code
const spdy = require('spdy');
const https = require('https');
const url = 'website.me';

console.time('100 * 100 KiB');
var agent = spdy.createAgent({
  host: url,
  port: 443,
});

var request = https.get({
  host: url,
  agent: agent
});

var count = 0;
request.on('push', () => {
  if (++count == 100) { console.timeEnd('100 * 100 KiB'); }
});

garyburd commented Mar 14, 2017

The issue may be with the client code. Please report the results using this Go client code:

start := time.Now()
c, _, err := websocket.DefaultDialer.Dial("wss://mywebsite.com", nil)
if err != nil {
	log.Fatal(err)
}
for i := 0; i < 100; i++ {
	_, r, err := c.NextReader()
	if err != nil {
		log.Fatal(err)
	}
	_, err = io.Copy(ioutil.Discard, r)
	if err != nil {
		log.Fatal(err)
	}
}
log.Println(time.Since(start))
c.Close()

What is the time to return the same amount of data in an HTTP response?

marwankhalili commented Mar 14, 2017

@garyburd Each run took between 2.5 and 3.5 seconds using the Go client code you provided and the same Go server as described in the first post. It's definitely an improvement over the JS client, but still a fair bit slower than the HTTP/2 implementation.

I've added timestamps for when each message is being sent from the server (see code and output below). It seems like there are sometimes 100-300ms gaps between consecutive messages. Any ideas why this occurs?

Server code
package main

import (
  "log"
  "time"
  "net/http"
  "io/ioutil"
  "github.com/gorilla/websocket"
)

var file []byte

var upgrader = websocket.Upgrader{
  ReadBufferSize:  1024,
  WriteBufferSize: 1024,
  CheckOrigin: func(r *http.Request) bool {
    return true
  },
}

func handler(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Fatal(err)
    }

    start := time.Now()
    for i := 0; i < 100; i++ {
      if err := conn.WriteMessage(websocket.BinaryMessage, file); err != nil {
        log.Println(err)
        return
      }
      log.Printf("Message %d sent at: %s", i, time.Since(start))
    }
}

func main() {
  filepath := "./assets/100k"
  file, _ = ioutil.ReadFile(filepath)

  http.HandleFunc("/", handler)
  err := http.ListenAndServeTLS(":443", "./keys/fullchain.pem", "./keys/privkey.pem", nil)
  if err != nil {
    log.Fatal(err)
  }
}
Server output
Message 0 sent at: 205.125108ms
Message 1 sent at: 205.83766ms
Message 2 sent at: 206.375369ms
Message 3 sent at: 309.399315ms  <-- +100ms
Message 4 sent at: 309.831542ms
Message 5 sent at: 310.147346ms
Message 6 sent at: 418.227973ms  <-- +100ms
Message 7 sent at: 418.722916ms
Message 8 sent at: 419.046083ms
Message 9 sent at: 423.94252ms
Message 10 sent at: 424.316021ms
Message 11 sent at: 424.676405ms
Message 12 sent at: 424.96414ms
Message 13 sent at: 528.002ms    <-- +100ms
Message 14 sent at: 528.532574ms
Message 15 sent at: 528.877379ms
Message 16 sent at: 529.22013ms
Message 17 sent at: 529.637906ms
Message 18 sent at: 529.923054ms
Message 19 sent at: 530.189899ms
Message 20 sent at: 603.792526ms
Message 21 sent at: 604.215678ms
Message 22 sent at: 604.49196ms
Message 23 sent at: 604.750724ms
Message 24 sent at: 605.004802ms
Message 25 sent at: 605.32248ms
Message 26 sent at: 605.669734ms
Message 27 sent at: 605.963008ms
Message 28 sent at: 606.277992ms
Message 29 sent at: 606.565655ms
Message 30 sent at: 714.323014ms  <-- +100ms
Message 31 sent at: 714.742665ms
Message 32 sent at: 715.063984ms
Message 33 sent at: 715.455991ms
Message 34 sent at: 715.773733ms
Message 35 sent at: 716.065214ms
Message 36 sent at: 716.406493ms
Message 37 sent at: 716.787211ms
Message 38 sent at: 717.112049ms
Message 39 sent at: 717.455316ms
Message 40 sent at: 717.769398ms
Message 41 sent at: 718.058518ms
Message 42 sent at: 1.069511332s  <-- +300ms
Message 43 sent at: 1.07004007s
Message 44 sent at: 1.070363553s
Message 45 sent at: 1.070708279s
Message 46 sent at: 1.071010829s
Message 47 sent at: 1.071328194s
Message 48 sent at: 1.071619561s
Message 49 sent at: 1.07193124s
Message 50 sent at: 1.072223293s
Message 51 sent at: 1.072571132s
Message 52 sent at: 1.072887854s
Message 53 sent at: 1.073226665s
Message 54 sent at: 1.073523184s
Message 55 sent at: 1.073838903s
Message 56 sent at: 1.268652839s  <-- +200ms
Message 57 sent at: 1.269451142s
Message 58 sent at: 1.269811979s
Message 59 sent at: 1.270123388s
Message 60 sent at: 1.270415128s
Message 61 sent at: 1.270776693s
Message 62 sent at: 1.271535412s
Message 63 sent at: 1.271941582s
Message 64 sent at: 1.272259086s
Message 65 sent at: 1.27260683s
Message 66 sent at: 1.272913412s
Message 67 sent at: 1.273252732s
Message 68 sent at: 1.273562944s
Message 69 sent at: 1.508620491s  <-- +100ms
Message 70 sent at: 1.509017061s
Message 71 sent at: 1.509343415s
Message 72 sent at: 1.509706312s
Message 73 sent at: 1.510022251s
Message 74 sent at: 1.510335284s
Message 75 sent at: 1.510632027s
Message 76 sent at: 1.510940075s
Message 77 sent at: 1.511241557s
Message 78 sent at: 1.511551011s
Message 79 sent at: 1.511878236s
Message 80 sent at: 1.512200224s
Message 81 sent at: 1.512498215s
Message 82 sent at: 1.725973543s  <-- +200ms
Message 83 sent at: 1.726407513s
Message 84 sent at: 1.726749035s
Message 85 sent at: 1.727068437s
Message 86 sent at: 1.727369752s
Message 87 sent at: 1.727681181s
Message 88 sent at: 1.728149953s
Message 89 sent at: 1.728502857s
Message 90 sent at: 1.728841882s
Message 91 sent at: 1.729250203s
Message 92 sent at: 1.729541104s
Message 93 sent at: 1.729850962s
Message 94 sent at: 1.730144625s
Message 95 sent at: 1.73043139s
Message 96 sent at: 1.916190789s  <-- +200ms
Message 97 sent at: 1.916659576s
Message 98 sent at: 1.916982074s
Message 99 sent at: 1.917300404s

@garyburd
Contributor

The gaps are caused by writes to the network connection.

What is the time to return the same amount of data in an HTTP response using a Go server and client?

Does the node client read the data?

smithrobert commented Mar 15, 2017

@marwankhalili, @garyburd I noticed that the Node client measured the time to open 100 streams rather than the time for all streams to finish. With the corrected client code below, the HTTP/2 version actually performs worse than WebSocket.

Fixed HTTP/2 client
const spdy = require('spdy');
const https = require('https');
const fs = require('fs');
const url = 'mywebsite.com';

console.time('100 * 100 KiB');
var agent = spdy.createAgent({
  host: url,
  port: 443,
});

var request = https.get({
  host: url,
  agent: agent
});

var count = 0;
request.on('push', (stream) => {
  stream.on('data', (chunk) => {});
  stream.on('end', () => {
    if (++count == 100) { console.timeEnd('100 * 100 KiB'); }
  });
});

@gorilla gorilla locked and limited conversation to collaborators Feb 14, 2018