Keep alive #450

Merged
merged 3 commits into from Dec 28, 2013

@gittywithexcitement
Contributor

gittywithexcitement commented Dec 27, 2013

Fix for #448
The logic is:

  1. `Connection: close` takes priority.
  2. If the server tells us `timeout` and `max`, we use those.
  3. Otherwise we go with the HTTP/1.1 default of assuming the connection remains open. It's unclear (at least from my googling around) what the default timeout value should be, so I went with 1 minute.
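That decision order could be sketched roughly like this (illustrative Python, not vibe.d's actual code; the function name and the one-minute default are just the assumption from point 3):

```python
from datetime import datetime, timedelta

# Assumed default from point 3 above; purely illustrative.
DEFAULT_KEEP_ALIVE = timedelta(minutes=1)

def keep_alive_limit(headers, now):
    """Return the time until which the connection may be reused,
    or None if it must be closed."""
    # 1. "Connection: close" takes priority.
    if headers.get("Connection", "").lower() == "close":
        return None
    # 2. Honor an explicit Keep-Alive header, e.g. "timeout=5, max=100".
    for part in headers.get("Keep-Alive", "").split(","):
        key, _, value = part.strip().partition("=")
        if key == "timeout" and value.isdigit():
            return now + timedelta(seconds=int(value))
    # 3. HTTP/1.1 default: assume the connection stays open, bounded
    #    here by the assumed one-minute default.
    return now + DEFAULT_KEEP_ALIVE
```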

Also, I changed `if (m_client.m_timeout > 0 && max > 1)` because `keepAliveLimit += timeout` is already a conservative value: the client sets `keepAliveLimit` to a moment in time before the request was transmitted. So even without subtracting 2 seconds, the client's notion of the connection timeout will fall before the server's.
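The timing argument can be made concrete with a toy timeline (the numbers are hypothetical; the point is only the ordering of the two deadlines):

```python
# Illustrative timeline in seconds on a shared clock.
timeout = 5.0            # server-reported keep-alive timeout
t_client_stamp = 0.0     # client sets keepAliveLimit = now + timeout here,
                         # *before* the request is transmitted
network_latency = 0.1
t_server_receives = t_client_stamp + network_latency

client_deadline = t_client_stamp + timeout      # when the client stops reusing
server_deadline = t_server_receives + timeout   # when the server actually closes

# The client's deadline already precedes the server's,
# even without subtracting an extra 2 seconds.
assert client_deadline < server_deadline
```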

gittywithexcitement added some commits Dec 27, 2013

don't subtract 2 seconds from keepAliveLimit
keepAliveLimit+=timeout is already a conservative value, as the client set keepAliveLimit to a moment in time before the request was transmitted. So without subtracting 2s, the client's notion of connection timeout will be before the server's notion of it.

Member

s-ludwig commented Dec 27, 2013

Thank you for the investigation and the fix!

About the timeout value, though, I'm not sure. For small requests (fitting in one packet), the effective client timeout will be roughly the server timeout minus the network latency. With even a small increase in network latency (just slightly worse than before), or a larger request header, the server timeout can easily be exceeded before the request reaches the server. The 2 seconds were supposed to provide some headroom for such variations in latency. Also, for the case that no Keep-Alive header was sent, shouldn't the connection lifetime logically be unlimited (e.g. `SysTime.max`) rather than one minute or similar?

BTW, regarding #448 and not really important here: TIME_WAIT doesn't really indicate that the server is still waiting on the connection, but rather that the OS/TCP stack still waits after the connection was actively terminated. It will still occur regardless of the keep-alive timeout used (but of course just for one or a few connections instead of thousands).


Contributor

gittywithexcitement commented Dec 28, 2013

  • I agree that connection PDV (packet delay variation) could cause the server to timeout the connection while the 2nd request is on the wire, if the second request experiences much more latency than the first. I suppose this could be on the order of a second in extreme cases.
  • Your second point about larger headers is correct, but I feel like its effect is a few orders of magnitude smaller than the delay due to PDV. For example, a 1 megabit connection that transmits a header that's 100 bytes longer than the previous header will take 0.0008 seconds more to transmit the header. I consider this negligible compared to the effect of PDV.
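The transmission-time figure in the second bullet is easy to check directly:

```python
# Extra time to send 100 additional header bytes over a 1 Mbit/s link.
extra_bytes = 100
link_bits_per_second = 1_000_000
extra_seconds = extra_bytes * 8 / link_bits_per_second
print(extra_seconds)  # 0.0008
```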

We have a competing motivation for not making the "PDV compensation" too high: some servers use a small timeout. For example, Apache 2.2 defaults to 5 seconds (http://httpd.apache.org/docs/2.2/mod/core.html#keepalivetimeout). Cutting away 2 of the 5 available seconds to guard against the uncommon case of 2 seconds of PDV seems like a lot.

I guess my suggestion is to make this 1 second, to allow for some PDV while still interacting well with servers that have a low timeout value.

In the end I don't have a strong opinion on this, so I'm going to revert the change so that you can accept the pull request to fix the original bug.


Contributor

gittywithexcitement commented Dec 28, 2013

As for using an unlimited timeout:

Theoretically, yes: if the server doesn't report a timeout, then perhaps it's infinite. In practice, I wonder if some servers have a finite timeout but don't report it to the client. I think it's reasonable for the client to assume a maximum timeout of a few minutes, and it wouldn't really hurt client performance. My experience here is quite limited though, so I don't have a strong opinion on this either. Let me know what you'd like and I'll update the pull request so you can merge it.
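Capping the assumed timeout, as suggested, amounts to clamping whatever the server reports to a client-side maximum. A minimal sketch, assuming a hypothetical cap of a few minutes (names are illustrative, not vibe.d's API):

```python
# Assumed client-side cap of a few minutes, in seconds.
CLIENT_MAX_KEEP_ALIVE = 180.0

def effective_timeout(server_timeout=None):
    """server_timeout is the value from the Keep-Alive header, or None
    if the server did not report one (logically unlimited)."""
    if server_timeout is None:
        return CLIENT_MAX_KEEP_ALIVE
    return min(server_timeout, CLIENT_MAX_KEEP_ALIVE)
```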


Member

s-ludwig commented Dec 28, 2013

> Your second point about larger headers is correct, but I feel like its effect is a few orders of magnitude smaller than the delay due to PDV

You are right, it will usually not be noticeable - but at least it is a consistent delay, so it would happen even on a completely variation-free connection. BTW, I was thinking more about fragmented packets due to the MTU or similar when writing this.

I think the best would be a 2 step solution:

  1. lower the `2.seconds` default and make it configurable, as well as the default timeout
  2. implement automatic request retry if sending the request header on an existing keep-alive connection has failed

The only issue with 2 is that, in cases where a request body was written, the user-supplied request callback would have to be invoked twice if the error occurs late (maybe while the response header is being read). This might cause some unexpected things if the callback has side effects.
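Step 2 could be sketched roughly as follows (illustrative Python, not vibe.d's API; `StaleConnection` and the callback shape are hypothetical). The side-effect caveat shows up directly: a late failure means the request callback runs twice.

```python
# Hypothetical error raised when writing to a pooled connection that the
# server has already closed.
class StaleConnection(Exception):
    pass

def request_with_retry(connect, send_request):
    """connect() returns a (possibly pooled) connection;
    send_request(conn) writes the request and returns the response.
    Retries exactly once on a fresh connection if the pooled one is stale."""
    conn = connect()
    try:
        return send_request(conn)
    except StaleConnection:
        # Caveat from above: if send_request has side effects (e.g. it
        # streamed a request body), they happen a second time here.
        conn = connect()
        return send_request(conn)
```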

s-ludwig added a commit that referenced this pull request Dec 28, 2013

Merge pull request #450 from gittywithexcitement/keepAlive
Assume keep alive if no "Connection: close" header is present.

@s-ludwig s-ludwig merged commit 23e5739 into vibe-d:master Dec 28, 2013

@gittywithexcitement gittywithexcitement deleted the gittywithexcitement:keepAlive branch Dec 28, 2013
