ClientManager #23

Merged 2 commits from dwieeb:master into sideshow:master on Oct 4, 2016

Conversation

@imhoffd (Contributor) commented May 11, 2016

Here's an implementation of a potential client pool for this library. The request originated from #21. It has a few features one might expect in a pool, such as a maximum size and age (both of which can be disabled), as well as a way to create clients automatically if not found in the pool. Clients are looked up by doing a sha1 hash of the credential's bytes.

The implementation is basically an LRU cache with an optional time constraint. Lookups are O(1) using a map, and LRU ordering is maintained with a doubly linked list. These data structures are not thread-safe on their own, but the implementation should be thread-safe, since a simple mutex is wrapped around each potentially unsafe method.

I have not used this version in production; I wanted to submit this early to open discussion. What are some likes/dislikes, etc.?

@sideshow @c3mb0 @nordicdyno

Credit to bradfitz: https://github.com/golang/groupcache/blob/master/lru/lru.go
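
For context, here is a minimal sketch of the approach described above: a map for O(1) lookups keyed by a SHA-1 hash of the credential bytes, a doubly linked list for LRU ordering, and a mutex for thread safety. The names ClientManager, Get, MaxSize, MaxAge, and Factory are illustrative and may not match the code in this PR.

```go
// Minimal sketch of an LRU-based client manager. Assumes this lives inside the
// apns2 package so that *Client refers to the library's client type; names are
// illustrative, not necessarily the API in this PR.
package apns2

import (
	"container/list"
	"crypto/sha1"
	"sync"
	"time"
)

type managerItem struct {
	key      [sha1.Size]byte // SHA-1 of the credential bytes
	client   *Client
	lastUsed time.Time
}

// ClientManager caches Clients, evicting the least recently used entry when
// MaxSize is exceeded and dropping entries older than MaxAge on lookup.
type ClientManager struct {
	MaxSize int           // maximum number of cached clients; 0 disables the limit
	MaxAge  time.Duration // maximum age of a cached client; 0 disables the limit
	Factory func(credential []byte) *Client

	mu    sync.Mutex
	ll    *list.List // front = most recently used
	cache map[[sha1.Size]byte]*list.Element
}

// Get returns the cached client for the credential, creating one via Factory
// if it is missing or expired, and promotes it to most recently used.
func (m *ClientManager) Get(credential []byte) *Client {
	m.mu.Lock()
	defer m.mu.Unlock()

	if m.cache == nil {
		m.cache = make(map[[sha1.Size]byte]*list.Element)
		m.ll = list.New()
	}
	key := sha1.Sum(credential)

	if ele, ok := m.cache[key]; ok {
		item := ele.Value.(*managerItem)
		if m.MaxAge == 0 || time.Since(item.lastUsed) < m.MaxAge {
			item.lastUsed = time.Now()
			m.ll.MoveToFront(ele)
			return item.client
		}
		// Expired: evict and fall through to create a fresh client.
		m.ll.Remove(ele)
		delete(m.cache, key)
	}

	client := m.Factory(credential)
	ele := m.ll.PushFront(&managerItem{key: key, client: client, lastUsed: time.Now()})
	m.cache[key] = ele

	if m.MaxSize > 0 && m.ll.Len() > m.MaxSize {
		// Evict the least recently used client.
		if oldest := m.ll.Back(); oldest != nil {
			m.ll.Remove(oldest)
			delete(m.cache, oldest.Value.(*managerItem).key)
		}
	}
	return client
}
```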

@coveralls commented May 11, 2016

Coverage remained the same at 100.0% when pulling d810dcf on dwieeb:master into 729daff on sideshow:master.

@imhoffd (Contributor, Author) commented May 11, 2016

I don't think the Go 1.4.3 failure in Travis is my doing, but I could be wrong.

@sideshow (Owner) commented

Thanks for the pull request @dwieeb. Looks pretty cool; I will have a play with this tomorrow.

@imhoffd (Contributor, Author) commented May 16, 2016

@sideshow You're welcome. Does everything look good?

@kirk91 commented May 26, 2016

The behavior of the pool is more like a session manager: it supports multiple applications, but each application only holds one TCP connection. @dwieeb

@imhoffd (Contributor, Author) commented May 26, 2016

@AlphaPigger Right, it's not a "connection" pool in the sense one might be used to. What would a better name be? ClientSessionManager? ClientManager?

@kirk91 commented May 26, 2016

Either ClientSessionManager or ClientManager is fine; I just want to make sure the code matches the name. @dwieeb

@imhoffd changed the title from ClientPool to ClientManager on May 31, 2016
@coveralls commented May 31, 2016

Coverage remained the same at 100.0% when pulling fc2cbac on dwieeb:master into 729daff on sideshow:master.

@imhoffd (Contributor, Author) commented May 31, 2016

Okay, that's renamed. All ready.

@NeoCN commented Jul 19, 2016

Is there any update on this?

@sideshow (Owner) commented

@dwieeb Sorry for the delay on reviewing this. We are doing something very similar in an internal project which is used in production, so my way of testing this was to try swapping out our current implementation for yours, which looks superior. I think this will be an awesome addition to the library. Thanks again and I will try to review shortly.

@imhoffd (Contributor, Author) commented Jul 21, 2016

@sideshow Great to hear! I look forward to pulling this in.

@imhoffd (Contributor, Author) commented Aug 6, 2016

@sideshow Have you had time to review it again? Really looking forward to having this in the library.

@nordicdyno commented

Am I the only one seeing a potential connection leak (after eviction) in this code?

@sideshow (Owner) commented

Hey @dwieeb, we are currently trying to merge this into a large project to test.

@sideshow (Owner) commented

@nordicdyno, can you be more specific? Thanks.

@nordicdyno commented Aug 15, 2016

@sideshow There is no explicit disconnection before m.removeElement(ele), so I suspect that system sockets could be held open by Clients that have already been removed from the manager.

@sideshow (Owner) commented Sep 20, 2016

@nordicdyno I ran some tests and this does leak/hold connections open after they are removed from the ConnectionManager. We need some kind of explicit CloseConnections method on the client which we can call before removing them. @dwieeb, with the exception of this, it has been tested and is ready to merge. I just want to fix this before I merge it in (any help appreciated). Thanks again for the code!
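
For discussion, here is a hedged sketch of what such an explicit close step might look like, reusing the illustrative ClientManager/managerItem names from the sketch earlier in this thread and assuming the client's underlying *http.Client and HTTP/2 transport are reachable (the real apns2.Client may not expose them this way):

```go
// Hedged sketch only, not the PR's actual code: close idle connections before
// dropping an evicted client so its sockets are not held open after it has
// left the manager. Assumes Client has an HTTPClient *http.Client field whose
// Transport is a *http2.Transport; adjust to the real client structure.
package apns2

import (
	"container/list"

	"golang.org/x/net/http2"
)

func (m *ClientManager) removeElement(ele *list.Element) {
	item := ele.Value.(*managerItem)
	m.ll.Remove(ele)
	delete(m.cache, item.key)

	// Drop idle (keep-alive) HTTP/2 connections held by the evicted client.
	// Requests that are currently in flight are not interrupted.
	if tr, ok := item.client.HTTPClient.Transport.(*http2.Transport); ok {
		tr.CloseIdleConnections()
	}
}
```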

@imhoffd (Contributor, Author) commented Sep 28, 2016

@nordicdyno @sideshow To me, this is normal behavior. See https://golang.org/src/net/http/transport.go#L54

    // By default, Transport caches connections for future re-use.
    // This may leave many open connections when accessing many hosts.
    // This behavior can be managed using Transport's CloseIdleConnections method
    // and the MaxIdleConnsPerHost and DisableKeepAlives fields.

I believe this PR has merely revealed a larger issue with this library, in that anyone who is "accessing many hosts" (by creating many Clients) will have connections left open. I also believe this is fine (it is the normal behavior of the Go HTTP libraries).

If we want to address this, we can provide configuration options for http2.Transport (https://github.com/sideshow/apns2/blob/master/client.go#L60-L65), but I believe the fine-tuning of this should be left to the library consumer.
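
As a hedged illustration of leaving this to the consumer: one could wire up the HTTP/2 transport oneself and periodically close idle connections. The helper below is hypothetical and not part of apns2; how this http.Client gets wired into the library's clients is left to the consumer.

```go
// Hypothetical consumer-side helper, not part of apns2: build an http.Client
// over an http2.Transport and periodically drop idle (keep-alive) connections,
// leaving the fine-tuning to the library consumer as suggested above.
package apnsutil

import (
	"crypto/tls"
	"net/http"
	"time"

	"golang.org/x/net/http2"
)

// NewManagedHTTPClient returns an http.Client whose idle HTTP/2 connections
// are closed at the given interval by a background goroutine.
func NewManagedHTTPClient(tlsConfig *tls.Config, interval time.Duration) *http.Client {
	transport := &http2.Transport{TLSClientConfig: tlsConfig}
	go func() {
		for range time.Tick(interval) {
			// Only idle connections are closed; in-flight requests continue.
			transport.CloseIdleConnections()
		}
	}()
	return &http.Client{Transport: transport}
}
```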

@sideshow (Owner) commented Oct 4, 2016

@dwieeb "I believe this PR has merely revealed a larger issue with this library" Agree with this. I will go ahead and merge and we can address this as a seperate issue. Thanks again for the PR!

@sideshow merged commit e42f05d into sideshow:master on Oct 4, 2016