high concurrency errors: "HttpConnectionPool is full, discarding connection" #4

Closed
saidimu opened this Issue · 4 comments

3 participants

@saidimu

I'm encountering quite a high number of the exceptions above while using Celery in a high concurrency environment (I'm using eventlet as a Celery worker pool). The workers subsequently get stuck in this loop and never consume the queue.

The error is due to urllib3's connection pool maxsize being exceeded; is there a recommended way to handle high concurrency with IronMQ in these situations? According to the HUD, the message rate peaked at about 500 messages/second.

It is worth noting that I previously used redis as both broker and backend with no such issues. The problem appears to lie not with IronMQ itself but with the urllib3 HTTP client used by the IronMQ python library.

Perhaps a better HTTP client (requests) should be used?

@saidimu

Turns out iron_celery already uses the requests library (via the iron_core library dependency).

These two closed issues on the requests tracker deal with the connection pool size:

@saidimu saidimu referenced this issue in iron-io/iron_core_python
Closed

Allow custom HTTP connection pool size #6

@saidimu

I submitted the pull request referenced above to iron_core_python to allow custom HTTP connection pool sizes.

The problem now is to figure out how to pass on these parameters to the IronMQ object here: https://github.com/iron-io/iron_celery/blob/master/iron_celery/iron_mq_transport.py#L16
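For context on what such parameters would control: requests raises the per-host connection pool limit by mounting a custom `HTTPAdapter` on a `Session`. A minimal sketch (the pool sizes here are illustrative, not values from this thread):

```python
import requests
from requests.adapters import HTTPAdapter

# pool_connections: number of host pools to cache;
# pool_maxsize: max connections kept per host pool.
# A pool_maxsize matching the eventlet worker pool size avoids the
# "HttpConnectionPool is full, discarding connection" warning.
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=100)

session = requests.Session()
# Mount the adapter for both schemes so all requests share the larger pool.
session.mount('http://', adapter)
session.mount('https://', adapter)
```

The remaining question is how iron_celery could expose such settings through to the `IronMQ` constructor, which is what the linked pull request addresses on the iron_core side.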

@carimura
Owner

Thanks @saidimu ... cc @iced @paddyforan for assistance if necessary

@ulandj
Collaborator

@saidimu thanks for the offer. However, we have not had complaints about pool maxsize from customers who use iron_mq_python and iron_celery. For now, your pull request will remain as a reference for us.

@ulandj ulandj closed this