# Pool overflow and the max-open-requests setting

@ref[Request-Level Client-Side API](request-level.md#request-level-api) and @ref[Host-Level Client-Side API](host-level.md#host-level-api)
use a connection pool underneath. The connection pool will only open a limited number of concurrent connections to one host
(see the `akka.http.host-connection-pool.max-connections` setting). This limits the rate at which a pool can process
requests to a single host.

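For reference, both settings discussed on this page live under `akka.http.host-connection-pool`. A minimal `application.conf` sketch — the values shown are only illustrative; check the `reference.conf` of your Akka HTTP version for the actual defaults:

```
akka.http.host-connection-pool {
  max-connections = 4
  max-open-requests = 32
}
```
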
When you use the @ref[stream-based host-level API](host-level.md#using-the-host-level-api-in-a-streaming-fashion),
stream semantics (backpressure) prevent the pool from being overloaded with requests. On the other hand, when new requests
are pushed using `Http().singleRequest()` or by materializing too many streams using the same `Http().cachedHostConnectionPool`,
requests may start to queue up when the rate of new requests is greater than the rate at which the pool can process them.

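To illustrate the backpressured variant, here is a minimal sketch of the streaming host-level API; the host name, request paths, and correlation ids are made up for illustration:

```
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.stream.scaladsl.{Sink, Source}

import scala.util.{Failure, Success}

implicit val system: ActorSystem = ActorSystem()

// A single pool flow, materialized once: stream backpressure ensures that no
// more requests are in flight than the pool can actually handle.
val poolFlow = Http().cachedHostConnectionPool[Int]("example.com")

Source(List(HttpRequest(uri = "/a") -> 1, HttpRequest(uri = "/b") -> 2))
  .via(poolFlow)
  .runWith(Sink.foreach {
    case (Success(response), id) =>
      response.discardEntityBytes() // always consume or discard the entity
      println(s"request $id: ${response.status}")
    case (Failure(cause), id) =>
      println(s"request $id failed: $cause")
  })
```
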
In such a situation, up to `max-open-requests` requests per host connection pool will be queued to buffer short-term peaks.
Any further requests will fail immediately with a `BufferOverflowException` carrying a message like this:

```
Exceeded configured max-open-requests value of ...
```

This usually happens under high load, or when the pool has been running for some time with a processing speed
too slow to keep up with the incoming requests.

Note that even if the pool can handle the regular load, short-term hiccups (at the server, in the network, or at the client)
can make the queue overflow, so you need to treat this as an expected condition that your application should be able to
deal with. In many cases, it makes sense to treat pool overflow the same as a `503` response from the server, which is
usually sent when the server is overloaded. A common way to handle it is to retry the request after a while, using a
sensible backoff strategy.

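A minimal retry sketch along those lines — the helper name, retry count, and delays are made up for illustration, and a production version would also add jitter and a cap on the total retry time:

```
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.pattern.after
import akka.stream.BufferOverflowException

import scala.concurrent.Future
import scala.concurrent.duration._

// Retries on pool overflow and on 503 responses, doubling the delay each time.
def singleRequestWithRetry(request: HttpRequest, retriesLeft: Int = 5, delay: FiniteDuration = 200.millis)(
    implicit system: ActorSystem): Future[HttpResponse] = {
  import system.dispatcher
  Http().singleRequest(request).flatMap {
    case response if response.status == StatusCodes.ServiceUnavailable && retriesLeft > 0 =>
      response.discardEntityBytes() // consume the entity before retrying
      after(delay, system.scheduler)(singleRequestWithRetry(request, retriesLeft - 1, delay * 2))
    case response =>
      Future.successful(response)
  }.recoverWith {
    // Pool overflow surfaces as a failed future containing a BufferOverflowException.
    case _: BufferOverflowException if retriesLeft > 0 =>
      after(delay, system.scheduler)(singleRequestWithRetry(request, retriesLeft - 1, delay * 2))
  }
}
```
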
## Common causes of pool overload

As explained above, the general reason for pool overload is that the incoming request rate is higher than the request
processing rate. This can have all kinds of causes (with hints for fixing them in parentheses):

 * The server is too slow (improve server performance)
 * The network is too slow (improve network performance)
 * The client issues requests too fast (slow down the creation of requests if possible)
 * There's high latency between client and server (use more concurrent connections to hide latency with parallelism)
 * There are peaks in the request rate (prevent peaks by tuning the client application or increase `max-open-requests` to
   buffer short-term peaks)
 * Response entities were not read or discarded (see @ref[Implications of the streaming nature of Http entities](../implications-of-streaming-http-entity.md))
 * Some requests are slower than others, blocking the connections of a pool for other requests (see below)

The last point may need a bit more explanation. If some requests are much slower than others, e.g. if a request is
a long-running Server-Sent Events request, then it will block one of the connections of the pool for a long time. If
there are multiple such requests going on at the same time, this will lead to starvation, and other requests cannot make
any progress any more. Make sure to run long-running requests on a dedicated connection (using the
@ref[Connection-Level Client-Side API](connection-level.md#connection-level-api)) to prevent such a situation.

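A minimal sketch of running such a request over its own connection — the host and path are made up for illustration:

```
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.stream.scaladsl.{Sink, Source}

import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem()

// A dedicated connection for the long-running request, so it cannot
// starve the host connection pool used by the rest of the application.
val connectionFlow = Http().outgoingConnection("example.com")

val response: Future[HttpResponse] =
  Source.single(HttpRequest(uri = "/events"))
    .via(connectionFlow)
    .runWith(Sink.head)
```
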
## Why does this happen only with Akka HTTP and not with [insert other client]

Many Java HTTP clients don't set limits for some of the resources they use by default. For example, some clients will never
queue a request but will just open another connection to the server if all the pooled connections are currently busy.
However, this might just move the problem from the client to the server. Also, using an excessive number of connections
leads to worse performance on the network, as more connections compete for bandwidth.