PoolInterface buffer size regression issue in akka-http 10.2.10 #139
As described in the original PR, the buffer size used to be calculated as `val targetBufferSize = settings.maxOpenRequests - settings.maxConnections`. The reasoning was that there can be one request going on in each of the connections, plus all the extra ones you want to have buffered by the configuration of
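To make the arithmetic concrete, here is a minimal sketch with hypothetical values; the post-change sizing is inferred from this discussion, not taken from the akka-http source:

```scala
// Hypothetical example values, not akka-http defaults.
val maxConnections  = 4
val maxOpenRequests = 32

// Pre-10.2.10: the buffer only holds requests beyond the connections,
// so maxOpenRequests is a strict upper bound on accepted requests.
val oldBufferSize    = maxOpenRequests - maxConnections
val oldTotalAccepted = oldBufferSize + maxConnections // == maxOpenRequests == 32

// Post-change (as described in this thread): the buffer itself is sized
// to maxOpenRequests, so with every connection busy the pool can accept
// up to maxConnections more requests than before.
val newBufferSize    = maxOpenRequests
val newTotalAccepted = newBufferSize + maxConnections // 32 + 4 == 36
```

Under this reading, `max-open-requests` acts as a lower bound on total accepted requests after the change, rather than the upper bound it was before.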
That was only true if After the change, we went back to a previous behavior where we again allow

In any case, I wonder how increasing the buffer can be a problem in your case, because we are handing out fewer errors than before. Can you show a test case that fails with the new behavior?

I can see how the description of the setting as "The maximum number of open requests accepted into the pool" does not really fit my description above as a lower bound (instead of an upper bound). In any case, if you really want to enforce an upper limit, you can always wrap the interface and count requests yourself if you want to err out fast. Though, it's not quite clear to me when you would really want to be so strict.

After all, the whole client-side pool interface architecture has its downsides (many for legacy reasons) and would benefit from a proper streaming interface. On the other hand, even (or maybe rather especially) with a streaming interface, you also get all the internal buffers, so we would not recommend relying on exact buffer management in any case. Also, a streaming interface with backpressure has its own challenges in the more complex cases (e.g. head-of-line blocking with super-pools, accidentally unrestricted buffers by just adding streams to a common pool, buffers needed for internal retries, etc.).
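The "wrap the interface and count requests yourself" suggestion could look roughly like the sketch below. `StrictLimit` is a made-up name, not part of the akka-http API; it wraps any `Req => Future[Res]` function (such as one backed by the pool) and rejects immediately once a strict in-flight limit is reached:

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{ExecutionContext, Future}

// Sketch: an external fail-fast limit around any request function.
// Unlike the pool's own buffer, this enforces a strict upper bound.
final class StrictLimit[Req, Res](maxOpen: Int)(underlying: Req => Future[Res])
                                 (implicit ec: ExecutionContext) {
  private val inFlight = new AtomicInteger(0)

  def apply(req: Req): Future[Res] =
    if (inFlight.incrementAndGet() > maxOpen) {
      inFlight.decrementAndGet()
      Future.failed(new IllegalStateException(
        s"rejected: more than $maxOpen requests in flight"))
    } else {
      // Release the slot whatever the outcome of the request.
      underlying(req).andThen { case _ => inFlight.decrementAndGet() }
    }
}
```

The counter is incremented before the limit check and decremented on rejection so that concurrent callers cannot race past the limit.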
Thanks @jrudolph. Our use case is a reverse-proxy API gateway service. We currently rely on max-open-requests and max-connections as a service-protection mechanism to fail fast when a target backend service is in duress but does not itself have the capability to fail fast. I'm not sure that we have a strong opinion on whether the configuration should be an upper bound or a lower bound. We just need to understand which is the expected behavior, and be able to verify that behavior with unit tests. To my knowledge, our existing unit tests (based on 10.2.9 behavior) never encountered the scenario that prompted the change. If it ever occurred in production, we didn't notice.
The Akka issue has not led to a change yet. So far, the new behaviour seems to be bedding in.
Discussed in #138
Originally posted by jphelp32 April 18, 2023
@jrudolph
Can you comment on the likelihood of pekko-http addressing this regression that was introduced in akka-http 10.2.10: akka/akka-http#4132 (comment)?
We are currently locked at akka-http 10.2.9 due to this issue. As we're planning our migration to pekko, I'm trying to plan whether to expect a fix in pekko-http, or if we should plan to adjust our configs according to the newer/broken(?) behavior that was introduced in 10.2.10.
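For reference, the settings in question live under `akka.http.host-connection-pool`; the values below are illustrative only, not recommendations:

```hocon
# Illustrative values, not recommendations.
akka.http.host-connection-pool {
  max-connections   = 4
  # Must be a power of 2. Under 10.2.9 semantics this was a strict upper
  # bound on accepted requests; from 10.2.10 it behaves more like a
  # lower bound, per the discussion above.
  max-open-requests = 32
}
```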
thanks
Original akka-http issue: akka/akka-http#4156
Summary: