
Discuss the longevity of the httpclient handler #42219

Open
EngRajabi opened this issue Aug 26, 2020 · 11 comments


EngRajabi commented Aug 26, 2020

Discuss the longevity of the httpclient handler. #70770

Currently the lifetime of the HttpClient handler is 2 minutes. Given that 99% of packages never change the default lifetime, that we use those packages in our projects, and that microservice-based architectures are now widespread, wouldn't it be better to extend the handler's default lifetime to improve system performance?
Any application that needs a different value can still change the lifetime to suit its needs, but extending this default would improve performance out of the box.
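
For illustration, a minimal sketch of the typical registration that leaves the default in place (the client name here is a placeholder, not from any real project):

    // Typical registration in Startup.ConfigureServices. There is no
    // SetHandlerLifetime call, so HttpClientFactory recycles the underlying
    // handler every 2 minutes (the default).
    services.AddHttpClient("my-service");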


Pilchie commented Aug 27, 2020

@karelz ?


karelz commented Aug 27, 2020

@EngRajabi where is the default coming from? Is it from HttpClientFactory or from HttpClientHandler?
Do you see the timeout affecting your current perf? Can you share some details / numbers / graphs for us to understand it better?

Thanks!

EngRajabi commented

I mean the case where developers use HttpClientFactory in different projects and register it in Startup.
Most never use the form below, which costs application performance.
On the other hand, most architectures today use microservices.
Wouldn't it be better to raise the handler's default lifetime from 2 minutes to, for example, 30 minutes? Beyond that, everyone can still change the time themselves.

services.AddHttpClient("extendedhandlerlifetime")
    .SetHandlerLifetime(TimeSpan.FromMinutes(5));


karelz commented Aug 27, 2020

@EngRajabi what kind of decrease in program performance do you / they see in such a case? (comparing 2 min vs. 30 min)
Is it measurable, does it show up in latencies in a measurable way? (Just trying to understand the scope of the impact -- is it on a hot path and clearly visible, or is it more like saving a couple of instructions at application startup, i.e. "unmeasurable savings"?)

In general, I don't have a strong opinion on what the timeout should be. In .NET Core 2.1 we need to keep it small, otherwise it will prevent reacting to DNS changes, which is a problem. To my knowledge that is the only platform with this issue.
If we decide to change it, either the timeout will differ per platform, or we should wait for 2.1 to drop out of support.


EngRajabi commented Aug 28, 2020

By performance reduction I mean the cost of rebuilding the handler during periods of very high app traffic.
The DNS-change problem exists in all versions. But I think there are a few points:

  1. Before ASP.NET Core, programmers often made HttpClient static to avoid the cost of construction and TIME_WAIT sockets, and they restarted the application whenever DNS changed (see the sketch after this list).
  2. Today the microservice architecture is the most common, and microservices can address each other by IP rather than by domain. Beyond that, service discovery also solves this problem.
  3. Most published NuGet packages never change the handler lifetime, so it stays at the 2-minute default.
     We should ask whether DNS changes really happen so often that the lifetime must be as short as 2 minutes.
     Alternatively, pick a longer default such as 1 hour and let programmers know that if a DNS change is likely, they should reduce this time.
  4. In the real world, how likely is it that DNS changes within such a short window? If there is a special scenario, the user can reduce this time.
  5. It avoids frequent repetition of the expensive TCP handshake.
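
As a minimal sketch of the pre-factory pattern from point 1 (the endpoint URL is a placeholder):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class Api
    {
        // One shared instance for the whole process. This avoids per-request
        // handler construction and TIME_WAIT socket buildup, but the client
        // never reacts to DNS changes without an application restart.
        private static readonly HttpClient Client = new HttpClient
        {
            BaseAddress = new Uri("https://example.com/") // placeholder endpoint
        };

        public static Task<string> GetAsync(string path) => Client.GetStringAsync(path);
    }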

BrennanConroy transferred this issue from dotnet/extensions Sep 14, 2020
Dotnet-GitSync-Bot added the area-System.Net.Http and untriaged (New issue has not been triaged by the area owner) labels Sep 14, 2020

ghost commented Sep 14, 2020

Tagging subscribers to this area: @dotnet/ncl
See info in area-owners.md if you want to be subscribed.


JamesNK commented Sep 14, 2020

I think the issue here is every 2 minutes a new handler will be created, which means requests on the new handler will need to:

  • Open new sockets
  • Start TCP connection
  • TLS handshakes
  • HTTP2 negotiation (if using HTTP2)

There will be a brief period of high latency as the new handler and its connections are established.

Does a 2 minute start time make sense with SocketsHttpHandler? Is there a good reason to not use a handler longer, or even for the lifetime of the app?
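
For reference, handler recycling can already be disabled per client today; a minimal sketch (the client name is a placeholder):

    // Keep the same handler (and its connection pool) for the lifetime of
    // the application by disabling handler expiry.
    services.AddHttpClient("long-lived")
        .SetHandlerLifetime(Timeout.InfiniteTimeSpan);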

halter73 commented

Does a 2 minute start time make sense with SocketsHttpHandler? Is there a good reason to not use a handler longer, or even for the lifetime of the app?

I think DNS changes are the main reason for not making the default unlimited. From the SetHandlerLifetime doc comments:

"Some handlers also keep connections open indefinitely which can prevent the handler from reacting to DNS changes. The value of should be chosen with an understanding of the application's requirement to respond to changes in the network environment"


stephentoub commented Sep 14, 2020

SocketsHttpHandler itself is going to close out connections by default after two minutes of inactivity on a given connection. That setting can be changed on the handler, though, as can a max lifetime for a connection, e.g. such that the connection won't be reused for new requests after X period of time (which defaults to infinite).
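
A minimal sketch of the two settings described above; the values are illustrative, not recommendations:

    using System;
    using System.Net.Http;

    var handler = new SocketsHttpHandler
    {
        // How long an idle pooled connection is kept before being closed
        // (this is the ~2-minute inactivity default discussed above).
        PooledConnectionIdleTimeout = TimeSpan.FromMinutes(5),

        // Maximum lifetime of a pooled connection regardless of activity.
        // Defaults to infinite; a finite value forces periodic reconnects,
        // which also lets the client pick up DNS changes.
        PooledConnectionLifetime = TimeSpan.FromMinutes(15),
    };
    var client = new HttpClient(handler);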

EngRajabi commented

I think the issue here is every 2 minutes a new handler will be created, which means requests on the new handler will need to:

  • Open new sockets
  • Start TCP connection
  • TLS handshakes
  • HTTP2 negotiation (if using HTTP2)

There will be a brief period of high latency as the new handler and its connections are established.

Does a 2 minute start time make sense with SocketsHttpHandler? Is there a good reason to not use a handler longer, or even for the lifetime of the app?

Exactly. My point is: is 2 minutes really enough for all of this work?
Couldn't this time be made much longer? I believe it could.


wfurt commented Sep 15, 2020

That may depend on your particular situation @EngRajabi. Do you have an environment where DNS changes rapidly - and old nodes are still up but refusing to serve content? If not, I think it would be quite safe to bump up the various limits.
The other angle is how rapidly you want to close (and possibly reopen) connections. For some people this is critical because of SNAT or other front-end limitations. The trade-off is between keeping idle connections and cleaning them up aggressively. AFAIK there is really no problem with long-lived connections; if it works for you, you can set the lifetime to an hour or more and everything should still work. I would, however, be careful about increasing the idle timeout much. If a connection is busy handling traffic, everything is fine. But if it is merely open, upstream firewalls sometimes drop it without sending any notification. That leads to situations where random requests take a long time: first trying the dead connection and failing with a timeout, and then going through a new handshake.

Depending on your OS and settings, TLS may use session resumption - that is significantly cheaper than a full handshake. But for most people, port reuse is more important.

scalablecory removed the untriaged (New issue has not been triaged by the area owner) label Sep 17, 2020
scalablecory added this to the Future milestone Sep 17, 2020
EngRajabi mentioned this issue Jun 15, 2022