This repository has been archived by the owner. It is now read-only.

Round7 #24

Closed
wants to merge 13 commits

Conversation

5 participants
@jasnell
Member

jasnell commented Dec 7, 2016

Beginnings of round7...
/cc @mcollina

lib/internal/http2.js
@@ -1102,7 +1093,7 @@ function sessionOnStreamClose(stream, code) {
setImmediate(maybeDestroyStream, stream);
}
-function sessionOnError() {
+function sessionOnError(error) {

@italoacasas

italoacasas Dec 7, 2016

Member

the error param is unused

@italoacasas

italoacasas Dec 7, 2016

Member

my bad, I was looking only at the commit code.

@jasnell

Member

jasnell commented Dec 7, 2016

Just fyi... Actively working on a number of improvements to the internal buffering and writes...

lib/internal/http2.js
@@ -791,8 +785,7 @@ class Http2Outgoing extends Writable {
chunk = Buffer.from(chunk, encoding);
if (this.stream) {
this[kBeginSend]();
- this.bufferedCallback = callback;
- this.stream.write(chunk, encoding, outWriteResume);
+ this.stream.write(chunk, encoding, callback);

@mcollina

mcollina Dec 7, 2016

Member

you should be able to get a boost here by doing:

const res = this.stream.write(chunk, encoding)
if (!res) {
  this.stream.once('drain', cb)
} else {
  cb() // or process.nextTick(cb)
}

In this way we are doing as many writes as we can before pausing for I/O to happen.

lib/internal/http2.js
+ // Turn off the Nagle's for this socket...
+ // highly recommended for http/2 implementations
+ // TODO(jasnell): May want to make this a configuration option?
+ socket.setNoDelay();

@mcollina

mcollina Dec 7, 2016

Member

Avoiding this would cause problems for tiny packets, which are very common in h2.
The standard within node is for Nagle's algorithm to be on. Because disabling Nagle's is the way to go for HTTP/2, we disable it. I don't think we need a config option. A user can always listen for the socket event and call socket.setNoDelay(false).

IMHO providing a configuration option with an "inverse" default compared to the rest of node would be counterintuitive.

@mcollina

Member

mcollina commented Dec 7, 2016

@jasnell using this, can I already start working on _writev?

@jasnell

Member

jasnell commented Dec 7, 2016

@mcollina ... yes but I'm working on revising some of the internals to avoid a segfault that you'll definitely run into. If you'd like to get started on it go for it and I'll try to get the internals fixed today

@mcollina

Member

mcollina commented Dec 7, 2016

I can work on those on Friday or Monday, no rush :).

@jasnell

Member

jasnell commented Dec 8, 2016

@mcollina ... ok, writev is working without segfault but I'm still going to be working on some internal improvements to make things more efficient. Once done it will alter the timing of write callbacks and likely the close stream callback.

@jasnell

Member

jasnell commented Dec 9, 2016

@mcollina: Ok, there are some definite improvements here but the writing state management is still an issue. Piping doesn't work entirely, for instance.

There are several challenges here: The Http2Stream object _write and _writev methods get the data down into the node::http2::Http2Stream DoWrite method with no problem. This data is buffered into a NodeBIO and the WriteWrap instances are queued up to be invoked. Currently, those get invoked when the on_frame_send callback gets invoked after the DATA frame has been constructed.

The flow is essentially:

  • Response is initiated by calling the Response method on node::http2::Http2Session. This method passes the request headers and the data source that will provide the data. This will cause a HEADERS frame to be sent and will queue up the DATA frame(s) for processing.

  • The actual data held in the nghttp2 queue is not sent until the node::http2::Http2Session object's SendIfNecessary method is called. This method will repeatedly pull data from nghttp2's buffer until it is empty, then will write that out to the external stream. When this method is called, nghttp2 will repeatedly call its callbacks based on whatever frame is being sent. If nghttp2 is attempting to send a DATA frame, it will begin calling the node::http2::Http2Stream object's on_read callback to fetch chunks of data for the one or more DATA frames that are to be sent. It is important to note that the on_read callback is ONLY called synchronously within the context of the SendIfNecessary function.

  • When on_read is called, it will pull data from the node::http2::Http2Stream object's outbound queue (str_out_). This is the part that is a bit tricky:

    • If there is no data available, but the writable side of the Http2Stream Duplex is still writable, then on_read must return a DEFERRED response -- telling nghttp2 that data is not yet available to be sent and to remove the DATA frames for that stream from the queue (pausing the stream).

    • If the writable side of the Http2Stream Duplex is not writable (end has been called) and there is no more data pending in the outbound queue to be sent, then on_read must either set a flag that tells nghttp2 to set the END_STREAM bit on the DATA frame or queue up the batch of trailing headers to be sent. In order for this to work correctly, this method needs to know that the writable side of the Duplex has ended.

    • If there is still data pending to be sent, on_read simply returns the amount written in this chunk and on_read will be called repeatedly until it either returns DEFERRED or sets the END_STREAM flag.

  • In the Http2ServerResponse object in http2.js, the Response method on the underlying node::http2::Http2Session is triggered by calling the res[kBeginSend] method. This is called implicitly when the first chunk of data is written to the Http2ServerResponse. This initiates the flow described above. The SendIfNecessary method will be invoked and the first call to on_read will return DEFERRED because there is not yet any data written to the internal outbound buffer.

  • Writing to the Http2ServerResponse needs to write to the Http2Stream, which in turn calls in to the internal node::http2::Http2Stream DoWrite method. The DoWrite method drops the written data into the outbound queue then calls the node::http2::Http2Stream Resume function. Calling Resume tells nghttp2 that there is now data available for the DATA frames for that stream and to put the frames back on the send queue to be processed the next time SendIfNecessary is called. The WriteWrap objects are queued and only get invoked when SendIfNecessary is called.
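
The on_read decision described in the bullets above can be modeled as a small JavaScript function (purely illustrative — the real callback is C++ against the nghttp2 API, and DEFERRED/END_STREAM here are symbolic stand-ins for nghttp2's signals):

```javascript
'use strict';

// Symbolic stand-ins for nghttp2's deferred / end-of-stream signals.
const DEFERRED = Symbol('deferred');
const END_STREAM = Symbol('end-stream');

// Model of the on_read decision table: given how many bytes sit in
// the outbound queue (str_out_) and whether the writable side has
// ended, decide what on_read should report back to nghttp2.
function onReadDecision(queuedBytes, writableEnded) {
  if (queuedBytes === 0 && !writableEnded) {
    // No data yet but more may come: defer, pausing this stream's
    // DATA frames until DoWrite resumes the stream.
    return DEFERRED;
  }
  if (queuedBytes === 0 && writableEnded) {
    // end() was called and the queue is drained: finish the stream
    // (or queue trailers) via the END_STREAM flag.
    return END_STREAM;
  }
  // Data available: return the amount consumed; nghttp2 keeps
  // calling back until DEFERRED or END_STREAM.
  return queuedBytes;
}
```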

So... the biggest trick here is knowing when to call SendIfNecessary and how to let the node::http2::Http2Stream object know when the Writable side has ended.

  • In terms of knowing when the writable side has ended, I currently have the Http2Stream object set an ending property when end() is called. Both the write and writev methods check for that property and set an ending property on the WriteWrap that is sent down into node::http2::Http2Stream. This appears to work reasonably well.

  • In terms of knowing when to call SendIfNecessary, there are a couple of places in the code where it is called implicitly (such as when sending SETTINGS frames, etc). I also currently have an unref'd libuv idler (uv_idle_t) configured to call SendIfNecessary on each tick of the event loop so long as the node::http2::Http2Session is active. Essentially, send whatever data is pending on each tick.

  • Again, the WriteWrap callbacks are not currently triggered until SendIfNecessary is called and they will be triggered synchronously while SendIfNecessary is being processed. Any WriteWrap callbacks that have not been called by the time the Http2Stream is closed and the resources are being released will be invoked with a UV_ECANCELED status.

A few more thoughts:

  • The Http2StreamResponse and Http2StreamRequest objects on the js side extend from Http2Outgoing and Http2Incoming. Currently, Http2Outgoing extends from Writable and Http2Incoming extends from Readable. However, this additional indirection in the streams flow causes both a bottleneck and unnecessary extra buffering. What we likely should do is make each pseudo-Writable and pseudo-Readable. That is, expose the appropriate methods and accessors but have them defer directly to the Writable and Readable sides of the underlying Http2Stream Duplex as opposed to managing their own readable and writable state. This, I believe, is similar to what the http/1 Request and Response implementations currently do. We will just need to make sure that the @@species check appropriately identifies Http2Outgoing and Http2Incoming as Writable and Readable. This ought to significantly improve performance and state management.

  • There is currently no way of directly triggering SendIfNecessary from JS. We can add the method if necessary.

  • node::http2::Http2Stream is a subclass of StreamBase. It is currently configured not to expose the DoShutdown function but that can be changed.

node::http2::Http2Session is initialized; the SendIfNecessary
idler (uv_idle_t) is scheduled to run on every loop.
  |
Request is received, JS-side Http2Stream object is created,
internally the node::http2::Http2Stream handle is created
and passed off to the JS-side Http2Stream. The Duplex is
Writable.
  |
write first chunk of data to res
  |
res[kBeginSend]()
  |
  |--> node::http2::Http2Session::Respond
  |       | 
  |       |--> queue and send response HEADERS frame
  |       |--> queue the first DATA frame
  |
  |--> node::http2::Http2Stream::DoWrite
  |       |
  |       |--> data stored into outbound buffer
  |       |--> WriteWrap callback queued
  |       |--> node::http2::Http2Stream tells nghttp2 to resume this stream
  |
res.end()
  |
  |--> final node::http2::Http2Stream::DoWrite
  |--> need to let node::http2::Http2Stream know that we're done writing

When SendIfNecessary is called, it essentially prepares a Write
request for the underlying socket stream.
  |
  |--> Repeatedly grab each chunk of pending frame data from nghttp2
  |       |
  |       |--> nghttp2 invokes callbacks to collect information to send
  |       |         |
  |       |         |--> for any pending DATA frame, call on_read repeatedly
  |       |              until either DEFER response removes it from queue,
  |       |              or END_STREAM detected.
  |       |
  |       |--> once a complete DATA frame has been sent, pending WriteWrap
  |       |    callbacks are invoked. This is imperfect currently because
  |       |    ALL pending queued callbacks are invoked, not just the ones
  |       |    whose DATA was written in that frame. This needs to be fixed.
  |       |    We need to know when the data for a specific WriteWrap has been
  |       |    processed.
  |
  |--> Write the chunks of data collected out to the socket stream
@mcollina

Member

mcollina commented Dec 9, 2016

The Http2StreamResponse and Http2StreamRequest objects on the js side extend from Http2Outgoing and Http2Incoming. Currently, Http2Outgoing extends from Writable and Http2Incoming extends from Readable. However, this additional indirection in the streams flow causes both a bottle neck and unnecessary extra buffering. What we likely should do is make each pseudo-Writable and pseudo-Readable. That is, expose the appropriate methods and accessors but have those being directly deferred to the Writable and Readable sides of the underlying Http2Stream Duplex as opposed to managing their own readable and writable state. This, I believe, is similar to what the http/1 Request and Response implementations currently do. We will just need to make sure that the @@species check appropriately identifies Http2Outgoing and Http2Incoming as Writable and Readable. This ought to significantly improve performance and state management.

It is way more complicated than that, and not so simple. To avoid any buffering issue we can make them PassThrough at the moment, and pipe in and out of the main stream. That should solve multiple issues that are there in the current approach.
Another option might be to leverage the prototype chain and extend the stream with the relevant Incoming and Outgoing methods.

How about we defer this and focus just on the Stream API? That is the low-level component we should make super-solid (standalone), and then we can focus on the compat-layer.


@jasnell

Member

jasnell commented Dec 9, 2016

@mcollina ... ok, some fairly significant progress today. I've got the timing issue worked out. Perf on the writes still isn't fully optimized, but writes to the underlying StreamBase should work without problem now, so we can focus on buttoning up the JS streams part. Once we're sure the streams part is working, I'll go back in and keep working on the internals. I've also refactored the headers but still need to do the trailers; I'll work on that next. Finally, I've gone through and cleaned up a number of optimization killers. Fairly significant perf boost out of this, so I'm pretty happy, but we've got a long way to go.

@jasnell

Member

jasnell commented Dec 9, 2016

Just looking at the memory profile... the heap usage looks good. There are a couple of things there that I'll need to investigate but overall we're pretty solid. The RSS profile is significantly better than what it was but there's still definitely something holding on to that memory. Pretty certain it's the buffer allocations not freeing up completely. Will keep investigating that.

@mcollina

Member

mcollina commented Dec 9, 2016

Just pulled, I'm getting a segfault with:

h2load -m 10000000 -n 10000 https://localhost:8000

And a massive slowdown :(

@jasnell

Member

jasnell commented Dec 9, 2016

hmmm...

$ h2load -m 100000000 -n 10000 https://localhost:8000
starting benchmark...
spawning thread #0: 1 total client(s). 10000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 567.02ms, 17636.22 req/s, 757.87KB/s
requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 429.72KB (440037) total, 29.33KB (30031) headers (space savings 95.31%), 136.72KB (140000) data
                     min         max         mean         sd        +/- sd
time for request:     6.58ms    554.00ms    548.01ms     54.41ms    99.00%
time for connect:     2.74ms      2.74ms      2.74ms         0us   100.00%
time to 1st byte:    14.04ms     14.04ms     14.04ms         0us   100.00%
req/s           :   17641.61    17641.61    17641.61        0.00   100.00%
@jasnell

Member

jasnell commented Dec 16, 2016

@mcollina ... btw, heads up... I've been spending time this week on a major refactoring of the internals to bring even more performance. The changes make the interaction with the underlying nghttp2_session fully asynchronous, closely following libuv's existing handle/request pattern. So far the results have been extremely promising but there is quite a bit to work through. Expect an update next week!

@sebdeckers

Contributor

sebdeckers commented Dec 20, 2016

@jasnell I'm trying to build this PR to help work on the ServerResponse and ran into a missing file. Is node_http2_core.h supposed to be in the working tree?

./configure
[...]
make
[...]
  c++ '-D_DARWIN_USE_64_BIT_INODE=1' '-DNODE_ARCH="x64"' '-DNODE_WANT_INTERNALS=1' '-DV8_DEPRECATION_WARNINGS=1' '-DNODE_USE_V8_PLATFORM=1' '-DNODE_HAVE_I18N_SUPPORT=1' '-DNODE_HAVE_SMALL_ICU=1' '-DHAVE_INSPECTOR=1' '-DHAVE_OPENSSL=1' '-DHAVE_DTRACE=1' '-D__POSIX__' '-DNODE_PLATFORM="darwin"' '-DUCONFIG_NO_SERVICE=1' '-DUCONFIG_NO_REGULAR_EXPRESSIONS=1' '-DU_ENABLE_DYLOAD=0' '-DU_STATIC_IMPLEMENTATION=1' '-DU_HAVE_STD_STRING=0' '-DUCONFIG_NO_BREAK_ITERATION=0' '-DUCONFIG_NO_LEGACY_CONVERSION=1' '-DHTTP_PARSER_STRICT=0' '-D_LARGEFILE_SOURCE' '-D_FILE_OFFSET_BITS=64' -I../src -I../tools/msvs/genfiles -I../deps/uv/src/ares -I/Users/seb/Code/nodejs/http2/out/Release/obj/gen -I../deps/nghttp2/lib/includes -I../deps/v8_inspector/include -I/Users/seb/Code/nodejs/http2/out/Release/obj/gen/include -I../deps/v8/include -I../deps/icu-small/source/i18n -I../deps/icu-small/source/common -I../deps/openssl/openssl/include -I../deps/zlib -I../deps/http_parser -I../deps/cares/include -I../deps/uv/include  -Os -gdwarf-2 -mmacosx-version-min=10.7 -arch x86_64 -Wall -Wendif-labels -W -Wno-unused-parameter -std=gnu++0x -stdlib=libc++ -fno-rtti -fno-exceptions -fno-threadsafe-statics -fno-strict-aliasing -MMD -MF /Users/seb/Code/nodejs/http2/out/Release/.deps//Users/seb/Code/nodejs/http2/out/Release/obj.target/node/src/node_http2.o.d.raw   -c -o /Users/seb/Code/nodejs/http2/out/Release/obj.target/node/src/node_http2.o ../src/node_http2.cc
../src/node_http2.cc:5:10: fatal error: 'node_http2_core.h' file not found
#include "node_http2_core.h"
         ^
1 error generated.
make[1]: *** [/Users/seb/Code/nodejs/http2/out/Release/obj.target/node/src/node_http2.o] Error 1
rm 013dc124f141acbe16f45b353441ae539096dda4.intermediate
make: *** [node] Error 2
@@ -2,6 +2,7 @@
#include "node_buffer.h"
#include "nghttp2/nghttp2.h"
#include "node_http2.h"
+#include "node_http2_core.h"

@sebdeckers

sebdeckers Dec 20, 2016

Contributor

This file appears to be missing from the commit.

@jasnell

Member

jasnell commented Dec 20, 2016

@jasnell

Member

jasnell commented Jan 8, 2017

Closing this PR in favor of #31

@jasnell jasnell closed this Jan 8, 2017
