
Reuse persistent HTTP connections #13345

Closed · 2 tasks
cben opened this issue Jan 4, 2017 · 7 comments

Comments

cben (Contributor) commented Jan 4, 2017

I noticed recently that we don't reuse HTTP connections. Opening a new TCP/TLS/HTTP connection for every request can add up to an order-of-magnitude difference in speed:
https://bibwild.wordpress.com/2012/04/30/ruby-http-performance-shootout-redux/

  • HawkularClient creates a new RestClient instance for every single call.

  • Kubeclient does cache its RestClient instance, but RestClient itself doesn't support connection reuse!
    See rest-client/rest-client#453 (Connection reuse).

  • [?] Net::HTTP, when used directly, does support persistent connections in limited scenarios (passing a block to .start; see the sketch after this list). I believe we use it through MiqFs for image scanning, though I'm not sure exactly how.

  • [?] @yaacov also told me one of the client libs (maybe in python though?) does API discovery anew before every single action.
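
A minimal sketch of that Net::HTTP block form, assuming a generic endpoint (host, port, paths, and token below are illustrative placeholders, not our real provider endpoints): every request issued inside the block reuses the same TCP/TLS connection.

```ruby
require 'net/http'

# Hedged sketch: Net::HTTP.start with a block keeps one TCP/TLS connection
# open and reuses it for every request made inside the block.
# Host, port, paths, and token are placeholders.
Net::HTTP.start('openshift.example.com', 8443, use_ssl: true) do |http|
  %w[/api/v1/pods /api/v1/services /api/v1/nodes].each do |path|
    request = Net::HTTP::Get.new(path)
    request['Authorization'] = 'Bearer <token>'
    response = http.request(request) # same connection as the previous request
    puts "#{path}: #{response.code}"
  end
end # the connection is closed when the block exits
```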

P.S. New connections are probably even more expensive when going through proxies, e.g. to Hawkular running in a pod.

After some reading: for fresh development, httpclient or excon could be better than rest-client in several respects.
For our existing code, https://github.com/drbrain/net-http-persistent (co-authored by tenderlove :-) sounds like a good start; see the sketch after the next bullet. I propose trying a quick-and-dirty patch to use it in RestClient (and MiqFs), measuring how much we stand to gain, and working with RestClient upstream to add at least an opt-in option.

  • Further ahead, given connection reuse, we could gain even more from HTTP/2; we should keep that in mind when choosing gems.
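
For illustration, a hedged sketch of what net-http-persistent usage looks like (the URL is a placeholder; this is not yet wired into RestClient or MiqFs):

```ruby
require 'net/http/persistent'

# Hedged sketch of the net-http-persistent gem: it keeps a connection per
# host open and reuses it across requests. The URL is a placeholder.
http = Net::HTTP::Persistent.new

uri = URI('https://openshift.example.com:8443/api/v1/pods')
2.times do
  response = http.request(uri) # the second call reuses the open connection
  puts response.code
end

http.shutdown # close idle connections when done
```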

@miq-bot add-label providers/containers, providers/hawkular, performance
/cc @lucasponce

This probably happens with other providers too; a quick search through Gemfile.lock shows only excon (used by fog & OpenstackHandle) and httpclient (used by WinRM).

yaacov (Member) commented Jan 4, 2017

> does API discovery anew before every single action.

@cben 👍 The Ruby and Python clients ask for the Hawkular version each time they create a new connection object, and since we don't have a good way to reuse it (and stay updated, and not time out, and ...), the connection object is recreated for almost(*) every new query.

(*) We do try to reuse when we can; clients are usually global and created using `@client ||= Client.new(...)`.
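
For illustration, a hedged sketch of that memoization pattern, extended to also cache the discovered version so the version lookup does not repeat on every query (the class and method names here are placeholders, not the actual hawkular-client-ruby API):

```ruby
# Hedged sketch: memoize both the client object and the discovered version.
# HawkularClient and fetch_version are illustrative placeholders.
class MetricsCapture
  def initialize(entrypoint, credentials)
    @entrypoint  = entrypoint
    @credentials = credentials
  end

  # Created once, then reused for every call.
  def client
    @client ||= HawkularClient.new(@entrypoint, @credentials)
  end

  # Version discovery runs once, not before every query.
  def hawkular_version
    @hawkular_version ||= client.fetch_version
  end
end
```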

cben (Contributor, Author) commented Jan 4, 2017

(*) https://github.com/hawkular/hawkular-client-ruby/blob/02041e319eabc6851d0a5adf2319af3b83643ad9/lib/hawkular/base_client.rb#L40-L41
This would be an easy fix, but it's not important without persistence in RestClient...
(and depending on the implementation, connections could become cached across RestClient instances too)

cben (Contributor, Author) commented Jan 4, 2017

=> Refresh is much better than I feared :-). There's not much point in optimizing it (until we get into targeted / watching refresh).

  • We do one GET per entity type, e.g. get_pods without args => /api/v1/pods, returning one huge JSON.
  • We do one /api/v1 and one /oapi/v1 discovery at the start of refresh, probably from ems.with_provider_connection. Perfect.

Tip: running with RESTCLIENT_LOG=/tmp/rest.log in the environment is handy!

@yaacov when you have a chance to work on metrics collection / live metrics, would you mind setting RESTCLIENT_LOG and getting an idea of how many requests total we do in that area?
@moolitayer same, when you get a chance to run Hawkular events/alerts polling?
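
A hedged sketch of enabling that logging from code, equivalent to running with the RESTCLIENT_LOG environment variable (the URL below is a placeholder, not one of our endpoints):

```ruby
require 'rest-client'
require 'logger'

# Hedged sketch: same effect as RESTCLIENT_LOG=/tmp/rest.log.
# Each request/response is appended to the log, so counting the requests
# made during a refresh or metrics run is just a matter of reading the file.
RestClient.log = Logger.new('/tmp/rest.log')

# Placeholder request; any RestClient call made by the providers is logged too.
RestClient.get('https://openshift.example.com:8443/api/v1/pods')
```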

yaacov (Member) commented Jan 4, 2017

@cben 👍

P.S. You can look at the VCR recordings:
https://github.com/ManageIQ/manageiq/blob/master/spec/vcr_cassettes/manageiq/providers/kubernetes/container_manager/metrics_capture/capture_context_pod_metrics.yml

miq-bot (Member) commented Sep 30, 2017

This issue has been automatically marked as stale because it has not been updated for at least 6 months.

If you can still reproduce this issue on the current release or on master, please reply with all of the information you have about it in order to keep the issue open.

Thank you for all your contributions!

miq-bot added the stale label on Sep 30, 2017

cben (Contributor, Author) commented Oct 1, 2017

@miq-bot move-issue manageiq-providers-kubernetes

miq-bot (Member) commented Oct 1, 2017

This issue has been moved to ManageIQ/manageiq-providers-kubernetes#124
