
Commit 39f36dd

=doc add documentation section about max-open-requests problems
1 parent 85356f5 commit 39f36dd

File tree

5 files changed: +116 -4 lines changed

5 files changed

+116
-4
lines changed

akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolInterfaceActor.scala

Lines changed: 8 additions & 4 deletions
@@ -51,6 +51,12 @@ private class PoolInterfaceActor(gateway: PoolGateway)(implicit fm: Materializer
   private[this] val inputBuffer = Buffer[PoolRequest](hcps.setup.settings.maxOpenRequests, fm)
   private[this] var activeIdleTimeout: Option[Cancellable] = None

+  private[this] val PoolOverflowException = new BufferOverflowException( // stack trace cannot be prevented here because `BufferOverflowException` is final
+    s"Exceeded configured max-open-requests value of [${inputBuffer.capacity}]. This means that the request queue of this pool (${gateway.hcps}) " +
+      s"has completely filled up because the pool currently does not process requests fast enough to handle the incoming request load. " +
+      "Please retry the request later. See http://doc.akka.io/docs/akka-http/current/scala/http/client-side/pool-overflow.html for " +
+      "more information.")
+
   log.debug("(Re-)starting host connection pool to {}:{}", hcps.host, hcps.port)

   initConnectionFlow()
@@ -110,10 +116,8 @@ private class PoolInterfaceActor(gateway: PoolGateway)(implicit fm: Materializer
       }
       if (totalDemand == 0) {
         // if we can't dispatch right now we buffer and dispatch when demand from the pool arrives
-        if (inputBuffer.isFull) {
-          x.responsePromise.failure(
-            new BufferOverflowException(s"Exceeded configured max-open-requests value of [${inputBuffer.capacity}]"))
-        } else inputBuffer.enqueue(x)
+        if (inputBuffer.isFull) x.responsePromise.failure(PoolOverflowException)
+        else inputBuffer.enqueue(x)
       } else dispatchRequest(x) // if we can dispatch right now, do it
       request(1) // for every incoming request we demand one response from the pool
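The pre-allocated `PoolOverflowException` avoids building a fresh exception and message for every rejected request. As the inline comment notes, the stack trace itself cannot be suppressed here because `BufferOverflowException` is final; for exception types one controls, the usual trick is `NoStackTrace`. A minimal sketch of that trick (the `PoolOverflow` object is hypothetical and not part of this commit):

```scala
import scala.util.control.NoStackTrace

// A shared, pre-allocated failure instance that skips stack-trace capture:
// NoStackTrace overrides fillInStackTrace, so completing promises with this
// object is cheap no matter how often it is reused.
object PoolOverflow
  extends RuntimeException("Exceeded configured max-open-requests value")
  with NoStackTrace
```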

docs/src/main/paradox/java/http/client-side/index.md

Lines changed: 1 addition & 0 deletions
@@ -28,6 +28,7 @@ Akka HTTP will happily handle many thousand concurrent connections to a single o
 * [request-level](request-level.md)
 * [host-level](host-level.md)
 * [connection-level](connection-level.md)
+* [pool-overflow](pool-overflow.md)
 * [client-https-support](client-https-support.md)
 * [websocket-support](websocket-support.md)

docs/src/main/paradox/java/http/client-side/pool-overflow.md

Lines changed: 53 additions & 0 deletions

@@ -0,0 +1,53 @@
# Pool overflow and the max-open-requests setting

@ref[Request-Level Client-Side API](request-level.md#request-level-api) and @ref[Host-Level Client-Side API](host-level.md#host-level-api)
use a connection pool underneath. The connection pool will open a limited number of concurrent connections to one host
(see the `akka.http.client.host-connection-pool.max-connections` setting). This limits the rate of requests a pool
to a single host can handle.

When you use the @ref[stream-based host-level API](host-level.md#using-the-host-level-api-in-a-streaming-fashion),
stream semantics prevent the pool from being overloaded with requests. On the other hand, when new requests are pushed either with
`Http().singleRequest()` or by materializing too many streams using the same `Http().cachedHostConnectionPool`, requests
may start to queue up when the rate of new requests is greater than the rate at which the pool can process them.
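For comparison, here is a minimal sketch of the backpressured, stream-based usage (the host name, URIs, and request count are placeholder assumptions for this example):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }
import scala.util.{ Failure, Success }

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()

// One materialized stream through the pool flow: requests are only pulled into
// the pool as fast as it can process them, so the request queue cannot overflow.
val poolFlow = Http().cachedHostConnectionPool[Int]("example.com")

Source(1 to 100)
  .map(i => HttpRequest(uri = s"/item/$i") -> i)
  .via(poolFlow)
  .runWith(Sink.foreach {
    case (Success(response), _) => response.discardEntityBytes() // free the connection
    case (Failure(cause), id)   => system.log.warning("Request {} failed: {}", id, cause)
  })
```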
When requests queue up like this, up to `max-open-requests` requests per host connection pool will be buffered to absorb short-term peaks.
Any further requests fail immediately with a `BufferOverflowException` and a message like this:

```
Exceeded configured max-open-requests value of ...
```
This will usually happen under high load or when the pool has been running for some time with a processing speed that is
too slow to handle all incoming requests.

Note that even if the pool can handle the regular load, short-term hiccups (at the server, in the network, or at the client) can make
the queue overflow, so you need to treat this as an expected condition that your application should be able to deal with. In many cases, it
makes sense to treat pool overflow the same as a `503` answer from the server, which is usually used when the server is
overloaded. A common way to handle it is to retry the request after a while (using a sensible backoff strategy).
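As an illustration of that last point, a minimal retry sketch (the helper `singleRequestWithRetry` and its retry and backoff parameters are made up for this example; they are not part of the API):

```scala
import scala.concurrent.Future
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{ HttpRequest, HttpResponse }
import akka.pattern.after
import akka.stream.{ ActorMaterializer, BufferOverflowException }

// Retries a request a few times with exponential backoff when the pool signals
// overflow, treating it the same way as a 503 answer from the server.
def singleRequestWithRetry(request: HttpRequest, retriesLeft: Int = 3, delay: FiniteDuration = 500.millis)(
  implicit system: ActorSystem, materializer: ActorMaterializer): Future[HttpResponse] = {
  import system.dispatcher
  Http().singleRequest(request).recoverWith {
    case _: BufferOverflowException if retriesLeft > 0 =>
      // wait, then try again with a doubled delay
      after(delay, system.scheduler)(singleRequestWithRetry(request, retriesLeft - 1, delay * 2))
  }
}
```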
## Common causes of pool overload

As explained above, the general cause of pool overload is that the incoming request rate is higher than the request
processing rate. This can have all kinds of causes (hints for fixing them are given in parentheses):

* The server is too slow (improve server performance)
* The network is too slow (improve network performance)
* The client issues requests too fast (slow down the creation of requests if possible)
* There's high latency between client and server (use more concurrent connections to hide latency with parallelism)
* There are peaks in the request rate (prevent peaks by tuning the client application, or increase `max-open-requests` to buffer short-term peaks; see the sketch after this list)
* Response entities were not read or discarded (see @ref[Implications of the streaming nature of Http entities](../implications-of-streaming-http-entity.md))
* Some requests are slower than others, blocking the connections of a pool for other requests (see below)
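Where raising the limit is appropriate, it can also be set programmatically per pool. A minimal sketch combining that with discarding unused response entities (the URI and the value 64 are arbitrary assumptions; `max-open-requests` must be a power of two):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.http.scaladsl.settings.ConnectionPoolSettings
import akka.stream.ActorMaterializer

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher

// Allow more queued requests for this particular pool (64 is an arbitrary example).
val poolSettings = ConnectionPoolSettings(system).withMaxOpenRequests(64)

Http().singleRequest(HttpRequest(uri = "http://example.com/ping"), settings = poolSettings)
  .foreach { response =>
    // Always consume or discard the entity, otherwise the pooled connection
    // stays blocked and contributes to overload.
    response.discardEntityBytes()
  }
```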
The last point may need a bit more explanation. If some requests are much slower than others, e.g. if a request is
a long-running Server-Sent Events request, then it will block one of the connections of the pool for a long time. If
there are multiple such requests going on at the same time, they will lead to starvation, and other requests cannot make any
progress any more. Make sure to run long-running requests on a dedicated connection (using the
@ref[Connection-Level Client-Side API](connection-level.md#connection-level-api)) to prevent such a situation.
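A minimal sketch of such a dedicated connection (the host and path are placeholders):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{ HttpRequest, HttpResponse }
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }
import scala.concurrent.Future

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()

// A dedicated connection outside of the host connection pool, so a long-running
// request (e.g. Server-Sent Events) does not occupy a pool slot.
val connectionFlow = Http().outgoingConnection("events.example.com")

val response: Future[HttpResponse] =
  Source.single(HttpRequest(uri = "/events"))
    .via(connectionFlow)
    .runWith(Sink.head)
```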
## Why does this happen only with Akka HTTP and not with [insert other client]?

Many Java HTTP clients don't set limits by default for some of the resources used. E.g. some clients will never queue a
request but will just open another connection to the server if all the pooled connections are currently busy. However,
this might just move the problem from the client to the server. Also, using an excessive number of connections will lead to
worse performance on the network, as more connections compete for bandwidth.

docs/src/main/paradox/scala/http/client-side/index.md

Lines changed: 1 addition & 0 deletions
@@ -32,6 +32,7 @@ Akka HTTP will happily handle many thousand concurrent connections to a single o
 * [request-level](request-level.md)
 * [host-level](host-level.md)
 * [connection-level](connection-level.md)
+* [pool-overflow](pool-overflow.md)
 * [client-https-support](client-https-support.md)
 * [websocket-support](websocket-support.md)

docs/src/main/paradox/scala/http/client-side/pool-overflow.md

Lines changed: 53 additions & 0 deletions

New file; its content is identical to the Java version of pool-overflow.md shown above.
