
x/net/http2: add a MarkComplete method to ClientConnPool #17776

Open · rs opened this issue Nov 3, 2016 · 25 comments · 8 participants

@rs (Contributor) commented Nov 3, 2016

This method would be useful in the context of a custom connection pool, for example to implement a lower maximum number of concurrent streams per connection than the one advertised by the server.

@gopherbot commented Nov 3, 2016

CL https://golang.org/cl/32327 mentions this issue.

@bradfitz added this to the Unreleased milestone Nov 4, 2016

@rs (Contributor, Author) commented Aug 15, 2017

@tombergan any interest in this PR?

In the same vein as #17775, the idea is that if we want to create a custom connection pool, as permitted by the ClientConnPool interface, we need some visibility into the connections' stats.

This method is also useful for reporting.

@tombergan (Contributor) commented Aug 15, 2017

From the CL:

This method can be useful in the context of a custom connection pool, to implement lower max concurrent streams per connection than advertised by the server.

I'm not sure InFlightStreams is the best way to implement that. I suspect you will have atomicity problems unless you can prevent new streams from being added to a connection concurrently, just after your call to InFlightStreams. Or maybe you won't ... I don't have enough information to say.

Along with #17775, I am sympathetic to the idea that net/http has privileged access to golang.org/x/net/http2, which makes it hard to write ClientConnPools of equivalent power to what is used internally by net/http. However, it would help me if you could state the request by: (1) saying what you want to do at a high level; (2) explaining why you cannot do that with the current API; then (3) if you have a proposal, explaining how that proposal solves your problem.

In this issue, you've mostly jumped directly to (3).

I'm also wondering if this is effectively a dup of #13957. Wanting to limit the number of requests per connection is a reasonable goal, and basically what #13957 is asking for. Do you want to use the same limit for all connections? Different limit for different connections (based on some arbitrary function)? i.e., does this have to be implemented with a ClientConnPool, or could it be a knob on http2.Transport?

@rs (Contributor, Author) commented Aug 15, 2017

We are building an HTTP proxy that takes HTTP/1 and HTTP/2 connections from clients and multiplexes their requests onto a set of HTTP/2 connections to the origin. As we only speak HTTP/2 with the origin, we use x/net/http2 directly instead of net/http. This also gives us more control over the http2 library, control we need to improve the way we manage our connections to the origin.

Here are a few examples of control we want on the pool:

  1. Pre-establish the connections before adding them to the pool (i.e.: perform the TLS handshake out of band).
  2. Grow the pool preemptively as the number of streams per connection increases.
  3. Shrink the pool as the number of concurrent streams goes down again.
  4. Gracefully change the origin without affecting in flight requests.

The current x/net/http2 API does not give any insight into the number of streams per connection. This insight is mainly necessary to implement 2 and 3. #13957 would not solve our issue, because our pool needs to know the current number of streams in order to implement soft/hard limits so it can grow without blocking requests (I still +1 #13957 though).

To implement 3 we also need #17775 or #17292 or any other way to gracefully dispose of a client connection. For 4 we need #17292.
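The grow/shrink policy in points 2 and 3 above could be sketched as a pure decision function. Everything below is hypothetical (the names `scalePolicy`, `poolAction`, and the thresholds are not part of x/net/http2); it only illustrates why the pool needs per-connection in-flight counts, which is exactly the visibility this issue asks for.

```go
package main

import "fmt"

// poolAction says whether the pool should grow, shrink, or stay as-is.
type poolAction int

const (
	keep poolAction = iota
	grow
	shrink
)

// scalePolicy decides based on per-connection in-flight stream counts.
// softLimit is a per-connection target kept below the server-advertised
// MaxConcurrentStreams; growing early keeps requests from being queued.
func scalePolicy(inFlight []int, softLimit int) poolAction {
	if len(inFlight) == 0 {
		return grow
	}
	total := 0
	for _, n := range inFlight {
		total += n
	}
	// Grow preemptively once the average load crosses the soft limit.
	if total > softLimit*len(inFlight) {
		return grow
	}
	// Shrink when the load would comfortably fit on fewer connections.
	if len(inFlight) > 1 && total <= softLimit*(len(inFlight)-1)/2 {
		return shrink
	}
	return keep
}

func main() {
	// Two connections averaging 155 streams against a soft limit of 100.
	fmt.Println(scalePolicy([]int{150, 160}, 100) == grow)
}
```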

@rs (Contributor, Author) commented Aug 16, 2017

There is another reason that comes to mind for exposing this kind of information to custom connection pools: with the introduction of #13774, if a pool is not able to grow, or to ensure it does not return a connection that has reached its max streams, the request will be queued. That is not acceptable behavior for the kind of proxy we are working on.

As the max streams limit is no longer checked by CanTakeNewRequest, I would also advise exposing maxConcurrentStreams in addition to the current number of streams.

@tombergan (Contributor) commented Aug 16, 2017

Thanks! That is helpful. I am still hesitant to expose the number of inflight, pending, or max streams because of potential atomicity problems, e.g., it might look OK to accept a request but then some concurrent action might take that slot between the time your conn pool assigns that request to a conn and the time that request actually reaches conn.RoundTrip, meaning you end up queuing the request even though you didn't want to.

I need to think about this in more detail. (I'm also about to go on vacation for a week so don't interpret lack of response as lack of interest.)

@rs (Contributor, Author) commented Aug 16, 2017

If you lock the pool while you make this decision it should be OK, as no other request could obtain a connection in the interval. The only thing that could happen is connections releasing streams, but that is not a big issue in this context.

I would also rethink the method, and perhaps have a Stats method returning a struct with all those metrics at once so we get a consistent (atomic) view of all of them.
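The Stats idea suggested above could take roughly the following shape. This is a hypothetical sketch, not x/net/http2 API: `conn` stands in for `http2.ClientConn`, and the point is simply that one method reading both fields under one lock yields a consistent (atomic) snapshot, unlike separate accessors.

```go
package main

import (
	"fmt"
	"sync"
)

// ConnStats is a consistent snapshot of one connection's stream state.
type ConnStats struct {
	InFlightStreams      int // streams currently open
	MaxConcurrentStreams int // limit advertised by the server
}

// conn stands in for http2.ClientConn (hypothetical fields).
type conn struct {
	mu            sync.Mutex
	streams       map[uint32]struct{}
	maxConcurrent int
}

// Stats takes the connection lock once, so both fields are read from
// the same state rather than through separate, racy accessors.
func (c *conn) Stats() ConnStats {
	c.mu.Lock()
	defer c.mu.Unlock()
	return ConnStats{
		InFlightStreams:      len(c.streams),
		MaxConcurrentStreams: c.maxConcurrent,
	}
}

func main() {
	c := &conn{streams: map[uint32]struct{}{1: {}, 3: {}}, maxConcurrent: 100}
	fmt.Printf("%+v\n", c.Stats())
}
```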

@tombergan (Contributor) commented Aug 16, 2017

Is locking the pool enough? The issue is that locking the pool does not lock the full call to RoundTrip. For example:

  1. Request A calls RoundTrip
  2. Request B calls RoundTrip
  3. A goes to the pool, we lock the pool, check the conn, 1 slot available, let A through
  4. B goes to the pool, we lock the pool, check the conn, 1 slot still available, let B through
  5. A gets to ClientConn.RoundTrip, takes the available slot
  6. B gets to ClientConn.RoundTrip, stalls waiting for a slot

Note that no lock guards the duration between transport.go lines 345 and 351.

@rs (Contributor, Author) commented Aug 16, 2017

That is a good point. What you're saying is that we should probably track the connections we gave to the transport ourselves, instead of relying on the internal state of the client connection. The problem then is that we have no way to know when the round trip is done (i.e., the stream is closed) without doing some crazy tricks. What about adding a new (optional?) method to the ConnPool to let the transport put the connection back in the pool?

@sideshow commented Aug 20, 2017

Hi guys, I came here to ask the same thing. I am working on an HTTP/2 push library for Apple APNs.

The use case is similar to @rs's:

  • We want to write a custom client conn pool that can have a pre-established set of connections and can autoscale the connections as the load increases up to a limit, then shrink back down as load decreases.
  • We also need more metrics around how many active connections/requests there are, and a custom conn pool seems a natural place for this.

Why is this important?

@tombergan Your recent change committed to master ("Block RoundTrip when the Transport hits MaxConcurrentStreams") fixes a longstanding issue for us (thanks), but unfortunately changes the behaviour of the default connection pool.

I may be wrong, but in my testing after commit 1c05540, the default connection pool won't ever dial a new connection (unless the stream ID hits MaxInt32 or the connection receives a GOAWAY frame). Instead, all pending requests are queued on the active connection. Prior to this commit, there would be multiple active TCP connections; after it, there will only ever be one. This doesn't seem like a bad approach for the default http2 lib, because pending requests will execute or time out, but we need more visibility into and control over what is in flight so we know whether or not to open a connection.

Exposing a method like GetInFlightStreams would be really helpful in giving more control to those who have outgrown the default client connection pool, and it does not change the core behaviour.

Lastly, @tombergan, to your point:

It might look OK to accept a request but then some concurrent action might take that slot between the time your conn pool assigns that request to a conn and the time that request actually reaches conn.RoundTrip, meaning you end up queuing the request even though you didn't want to.

The point above is not such an issue for us: if a few overflow requests are queued, they will still execute under your new commit.

The main thing is that the connection pool has the information to know how many requests are assigned/active on each connection and can make the call to assign the request to a connection, open a new connection, or queue the request.
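That per-request decision could be sketched as below. This is illustrative only (nothing here is x/net/http2 API; `route`, `decision`, and the parameters are hypothetical): given each connection's assigned-request count and the advertised stream limit, the pool either assigns to an existing connection, dials a new one, or queues.

```go
package main

import "fmt"

// decision is the pool's call for one incoming request.
type decision int

const (
	assign decision = iota // use the existing connection at the returned index
	dial                   // open a new connection
	queue                  // all conns full and pool at its size limit
)

// route picks the least-loaded connection with a free slot. If every
// connection is at maxStreams, it dials a new connection unless the
// pool already holds maxConns connections, in which case it queues.
func route(assigned []int, maxStreams, maxConns int) (decision, int) {
	best, bestIdx := maxStreams, -1
	for i, n := range assigned {
		if n < best {
			best, bestIdx = n, i
		}
	}
	if bestIdx >= 0 {
		return assign, bestIdx
	}
	if len(assigned) < maxConns {
		return dial, -1
	}
	return queue, -1
}

func main() {
	d, i := route([]int{5, 2, 9}, 100, 4)
	fmt.Println(d == assign, i) // least-loaded connection wins
}
```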

Finally, if you consider adding this, please also expose something like GetMaxConcurrentStreams for the same reason.

Thanks!

@rs (Contributor, Author) commented Aug 21, 2017

I think you're right; ideally we would need both insight into each connection's streams and a more advanced pool API.

@rs (Contributor, Author) commented Sep 11, 2017

@tombergan any thoughts on this?

@tombergan (Contributor) commented Nov 2, 2017

Sorry for dropping this. #22537 was just filed and is related. That is on the main repo, so it won't happen until Go 1.11 at the earliest due to the feature freeze, but since this is off the main repo we could potentially do something before then.

I don't know how I missed this comment from @rs, as I think it solves the concurrency problem I raised earlier:

What about adding a new (optional?) method to the ConnPool to let the transport put the connection back in the pool?

Something like this sounds like the right approach. The pool would have the following methods (names and comments are just for illustration):

type Pool interface {
	// GetClientConn returns a conn for a new request.
	GetClientConn(req *http.Request, addr string) (*ClientConn, error)

	// MarkComplete is called when the request is complete
	// (response.Body has been closed). This would be new.
	MarkComplete(req *http.Request, conn *ClientConn)

	// MarkDead is called when the connection should no longer be used,
	// i.e., when the connection is aborted or when the server sends GOAWAY.
	MarkDead(conn *ClientConn)
}

You can then implement GetInFlightStreams internally, since you have a notification each time a connection is used (GetClientConn) and each time a request completes (MarkComplete).

A question for @rs and @sideshow: Does the above API completely support what you need to do? I think the answer is yes but I may have missed some subtleties in the above comments.
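A pool built on those two notifications could track in-flight streams itself, along these lines. This is a hedged sketch, not the proposed implementation: `connKey` stands in for `*http2.ClientConn`, and the `*http.Request`/addr parameters of the real interface are elided to keep it self-contained.

```go
package main

import (
	"fmt"
	"sync"
)

// connKey is a placeholder for *http2.ClientConn in this sketch.
type connKey int

// countingPool maintains its own per-connection in-flight counts,
// incremented on checkout and decremented when a request completes.
type countingPool struct {
	mu       sync.Mutex
	inFlight map[connKey]int
}

func newCountingPool() *countingPool {
	return &countingPool{inFlight: make(map[connKey]int)}
}

// getClientConn would normally pick or dial a connection; here it
// just records the checkout (the GetClientConn notification).
func (p *countingPool) getClientConn(c connKey) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.inFlight[c]++
}

// markComplete models the proposed new callback: a request on c
// finished (its response body was closed).
func (p *countingPool) markComplete(c connKey) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.inFlight[c] > 0 {
		p.inFlight[c]--
	}
}

// streams reports the pool's own view of in-flight streams on c,
// i.e. the GetInFlightStreams functionality implemented pool-side.
func (p *countingPool) streams(c connKey) int {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.inFlight[c]
}

func main() {
	p := newCountingPool()
	p.getClientConn(1)
	p.getClientConn(1)
	p.markComplete(1)
	fmt.Println(p.streams(1))
}
```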

@rs (Contributor, Author) commented Nov 2, 2017

It would solve most of our issues yes.

Access to the max concurrent streams would be another great addition.

@rs (Contributor, Author) commented Nov 9, 2017

@tombergan will you be able to work on this soonish, or should I give it a try?

@gopherbot commented Nov 11, 2017

Change https://golang.org/cl/77091 mentions this issue: http2: add MarkComplete to http2.ClientConnPool interface

@rs (Contributor, Author) commented Nov 21, 2017

@tombergan any chance to have this merged?

@rs (Contributor, Author) commented Nov 28, 2017

As this PR is not yet merged, I'd like to add a few parameters to the new MarkComplete method: the response status and the response error, if any.

For pools that want to implement outlier detection via passive health checks, those parameters would be very helpful.

@rs (Contributor, Author) commented Dec 8, 2017

This issue celebrated its first anniversary a month ago (happy birthday). Could you please help me figure out what needs to be done to move it forward? Would it be more context, a different approach, more unit tests? Please don't let it die; we need this, and apparently we are not alone.

@andybons (Member) commented Dec 8, 2017

/cc @bradfitz

@bradfitz (Member) commented Dec 8, 2017

@rs, if you're talking about https://go-review.googlesource.com/c/net/+/32327, Matt Layher noted you needed tests in January but they're not there yet, so it definitely isn't ready to be merged.

But to @tombergan's point: it's not even clear this is a useful primitive that can be used properly (without races). Work towards a coherent design for #22537 would be most helpful. I'd like to get the minimal set of interfaces that solve the largest set of use cases. We don't want to just add little hooks everywhere without understanding what they're being used for.

Code is relatively easy. This will hit its two-year anniversary as well if there's no design.

@rs (Contributor, Author) commented Dec 8, 2017

@bradfitz I'm talking about the second PR attached to this issue: https://go-review.googlesource.com/c/net/+/77091. As discussed with @tombergan above, exposing InFlightStreams is not the right solution; a better one is to extend the connection pool's interface to track when streams end (kind of what #22537 proposes, but limited to the current http2.ClientConnPool).

Let me close the previous PR and change the title to make this issue less misleading. If what @tombergan proposed is not the right design, or is not considered "a design", please help me understand what you need to move forward.

@rs changed the title from "x/net/http2: add an InFlightStreams method to ClientConn" to "x/net/http2: add a MarkComplete method to ClientConnPool" Dec 8, 2017

@odeke-em (Member) commented Jan 19, 2018

Alright, thanks for the feedback everyone and for addressing the concerns @rs with your CL.

@bradfitz, when you return from vacation: regarding the design and API suitability, I believe that @tombergan's suggestion to implement MarkComplete(*http.Request, string, *ClientConn) is a simple and elegant solution that addresses the need of this issue: the ability to track the number of in-flight requests on connections, by getting a notification when a connection is retrieved with GetClientConn and a corresponding end notification when MarkComplete is invoked. Also, as you noted and requested in the CL review, the design is now backwards compatible and less invasive; more importantly, it naturally complements ClientConnPool's GetClientConn.

@rs (Contributor, Author) commented Mar 20, 2018

@theckman (Contributor) commented Aug 21, 2018

@bradfitz where do we stand with this specific issue? Interested in seeing how it can help with #27044.
