
Hackney pool overload #510

grepz opened this issue Jun 6, 2018 · 2 comments


@grepz grepz commented Jun 6, 2018


It is possible to overload the hackney pool, rendering it unresponsive or even crashing the Erlang node.
The case is like this:

Some process sends simple HTTP requests to a server that can handle no more than N RPS, at a rate of N * k (k >= 2). At some point the hackney pool queue size begins to grow indefinitely (with my numbers, the receiving server handles ~300 RPS while the client sends 600-700 RPS).

Process info for a pool shows this:


Pool configuration looks like this:

```
  max_conn => 100,
  timeout => 3000
```

Since the message queue size is huge, the pool becomes completely unresponsive.

My proposal: maybe it's a good idea to set up some back pressure or introduce a defensive mechanism that avoids locking the pool.
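One way such a defensive mechanism could look, as a minimal sketch (this is a hypothetical helper, not part of hackney; the threshold and function names are assumptions): before issuing a checkout, inspect the pool process's mailbox length and shed load immediately instead of letting the caller queue up behind a call timeout.

```erlang
%% Hypothetical guard module: refuse to enqueue a request when the
%% pool gen_server's mailbox is already long, returning an explicit
%% overload error instead of blocking until the call times out.
-module(pool_guard).
-export([check_overload/2]).

%% PoolPid is the pid of the pool gen_server; MaxPending is the
%% largest mailbox length we are willing to queue behind.
check_overload(PoolPid, MaxPending) ->
    case erlang:process_info(PoolPid, message_queue_len) of
        {message_queue_len, N} when N > MaxPending ->
            {error, overloaded};
        {message_queue_len, _N} ->
            ok;
        undefined ->
            {error, no_pool}
    end.
```

A caller would run this check first and fail fast with `{error, overloaded}` under load, which gives the pool a chance to drain its queue rather than accumulating tens of thousands of pending calls.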

It also seems a bit unclear that hackney_pool answers {error, connect_timeout} when the gen_server:call to the pool times out. This causes confusion in the case described above: no connection attempt was ever made, since the gen_server call itself timed out.


@seanmcevoy seanmcevoy commented Oct 11, 2018

I noticed similar issues, and when testing alleviated a lot of them with this fix:
It's not a complete fix, but it definitely helps with the symptoms here.


@indrekj indrekj commented Mar 5, 2019

I'm not sure if it's the same problem, but my pools are getting stuck as well. I'm using hackney 1.15.0. Running get_stats on the pool returns:

```
iex(transporter@> :hackney_pool.get_stats(:"client-logger")
[
  name: :"client-logger",
  max: 400,
  in_use_count: 0,
  free_count: 0,
  queue_count: 13961
]
```

in_use_count is 0, but the queue count is huge.
