client unexpectedly closed TCP connection with status check #7028
Replies: 51 comments 8 replies
-
This does not look like a bug in Celery, but I'll need more information to determine that. Also, please try setting the connect_timeout to a higher value; it should solve your issue.
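The comment does not show the exact setting; a minimal sketch, assuming the advice refers to Celery's broker_connection_timeout option (the app name and broker URL below are placeholders):

```python
from celery import Celery

# Hypothetical app name and broker URL, for illustration only
app = Celery('appname', broker='amqp://guest:guest@localhost:5672//')

# Raise the timeout used when establishing the broker TCP connection
# (the Celery default is 4 seconds).
app.conf.broker_connection_timeout = 30.0
```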
-
@unclewizard @thedrow I am getting the same issue. Was this resolved? Can you help me with the solution?
-
Without more details I can't be helpful. If any of you think you have found the root cause of the issue, please report a new issue about it.
-
Hi, I am also facing the same issue. The RabbitMQ log shows:
2018-08-30 09:59:22.323 [warning] <0.21673.0> closing AMQP connection <0.21673.0> (127.0.0.1:53706 -> 127.0.0.1:5672, vhost: '/', user: 'guest'): client unexpectedly closed TCP connection
I am using the broker URL amqp://guest:guest@localhost:5672//. The client connects to RabbitMQ but I am not getting data back. Could you please help me understand why? (alembic 0.8.6)
-
I'm getting the same issue.
My logs are flooded with this, hundreds of times in the same time frame. It always starts with this:
EDIT: Just saw #4895. Will attempt to downgrade Celery.
-
Are you using the latest Celery?
-
I'm using 4.2.1 as reported in #4895. I will now attempt a downgrade to 4.1.1 to see if that resolves the issue. Downgrading to 4.1.1 seems to have fixed the issue for me. I will test it further to see whether the issue is still prevalent.
-
Then it might be a regression.
-
I can reproduce exactly the same issue as xlanor, and downgrading to 4.1.1 fixed it for me as well.
-
So can I. Back to 4.1.1.
-
I'm having this same issue, and downgrading to 4.1.1 seems to fix it.
-
We are also running into this issue. Downgrading to 4.1.1 fixes it, but this is not a feasible workaround for us because 4.2.0 contains some critical bugfixes, including a fix for #4223. Is the 4.1.x release line still supported? Will bug fixes be backported?
-
We don't have the resources to maintain two versions currently.
-
Understandable. Let us know if there is any further information we can provide to help diagnose this issue. Thanks for all your hard work!
-
At this point, any new information would be good.
-
Unfortunately, this issue still persists in version 5.0.5. Is there any workaround for it?
-
Did you check #4355 (comment)?
-
The link mentioned in this comment, #4355 (comment), is asking us to log in to salesforce.com :(
-
I'm seeing this when I do a celery ping, e.g.
RabbitMQ:
I will try to debug Celery; it feels like the connection is not properly closed after being opened.
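For reference, the ping described above can also be issued from Python; a minimal sketch, assuming a hypothetical app name and broker URL (this mirrors the `celery inspect ping` CLI check, which opens a broker connection for each call):

```python
from celery import Celery

app = Celery('appname', broker='amqp://guest:guest@localhost:5672//')  # hypothetical values

# Ping all running workers and wait up to one second for replies.
# Each call opens a broker connection, which is the code path being debugged here.
replies = app.control.ping(timeout=1.0)
print(replies)  # e.g. [{'celery@hostname': {'ok': 'pong'}}]
```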
-
OK, I debugged a bit and found the culprit to be the connection pool in kombu:
results in a nice RabbitMQ log:
I guess the problem lies in the teardown of the pool? @auvipy, does this ring any bell? :) Anyway, I don't understand why a ping needs two connections; any idea?
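The code snippet and log excerpt were not preserved in this copy. As a rough, hedged sketch of the kind of pooled connection usage being discussed (not Celery's exact internal call path), kombu's connection pool looks like this:

```python
from kombu import Connection

# Hypothetical broker URL; the pool keeps sockets open between uses.
conn = Connection('amqp://guest:guest@localhost:5672//')
pool = conn.Pool(limit=2)

c = pool.acquire(block=True)  # take a connection from the pool (connects lazily)
c.connect()                   # force the TCP/AMQP handshake
c.release()                   # return it to the pool; the socket stays open

# If the process exits while pooled connections are still open, RabbitMQ sees the
# socket drop without a proper AMQP close and logs
# "client unexpectedly closed TCP connection".
# Closing the pool explicitly shuts its connections down gracefully:
pool.force_close_all()
```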
-
Does changing the value here improve anything?
-
Yes, the connections are closed gracefully; I no longer see the warnings in the RabbitMQ log:
-
Instead of closing, I am moving this to a discussion.
-
@auvipy any progress on this problem? I think this problem causes high CPU usage too.
-
Hello all, any solution? I'm getting the same for Celery 4.4.2.
-
IIRC, broker_pool_limit = 0 fixed this for me as well.
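A minimal sketch of that workaround, assuming a standard Celery app object (the app name and broker URL are placeholders):

```python
from celery import Celery

app = Celery('appname', broker='amqp://guest:guest@localhost:5672//')  # hypothetical values

# Workaround reported in this thread: disable the broker connection pool so that
# connections are opened and closed per use instead of lingering in a pool.
app.conf.broker_pool_limit = 0
```

Note that disabling the pool trades the warning for extra connection setup per operation, which can add overhead under load.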
-
Same here. It has been happening at least since version 4.1, when we decided to disable the broker pool as a workaround. Versions:
I'd be glad to collaborate, but the celery and kombu projects are a bit messy, and I can't find where the problem is. So... any hint is welcome.
-
I have the same error.
I'm not sure sharing my environment helps, but the following is my env just in case.
-
I am also facing a similar issue. My detailed analysis is mentioned here. Is it recommended to change the Celery version?
-
Hey, I get the same logs in RabbitMQ:
I am talking to RabbitMQ like this:

```python
import os
import sys

import pika

# Note: `email` below is the poster's own helper module providing notify(),
# not the standard-library `email` package.
import email


def main():
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='rabbitmq')
    )
    channel = connection.channel()

    def callback(chan, method, props, body):
        # Acknowledge the message only if the notification succeeds
        err = email.notify(body)
        if err:
            chan.basic_nack(delivery_tag=method.delivery_tag)
        else:
            chan.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(
        queue=os.environ.get("MP3_QUEUE"),
        on_message_callback=callback
    )

    print("Waiting for messages. To leave: CTRL+C")
    channel.start_consuming()


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("Exiting...")
        # Graceful exit
        try:
            sys.exit(0)
        except SystemExit:
            os._exit(0)
```

Can anyone help with where the problem could be? At line
-
Checklist
Steps to reproduce
Follow the "first steps with django" documentation to create a barebones celery app: http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html
For a broker, use RabbitMQ 3.6.11. I have also tested with RabbitMQ 3.6.12. I set my broker in settings.py as follows:
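The actual setting was not preserved in this copy. Following the linked Django guide (which reads Django settings under the CELERY_ namespace), a typical configuration would look roughly like this, with a placeholder URL:

```python
# settings.py -- hypothetical value; the report's exact broker URL was not preserved
CELERY_BROKER_URL = 'amqp://guest:guest@localhost:5672//'
```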
Finally, run
celery worker -A appname status
Expected behavior
No warnings in the broker logs
Actual behavior
The rabbitmq server prints the following output (depending on configuration it may show up in the log file. For me it was in /var/log/rabbitmq/rabbit@vagrant-ubuntu-trusty-64.log)
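The captured output itself was lost in this copy; judging from the issue title and the log quoted earlier in the thread, it is the same warning, roughly:

```
closing AMQP connection <0.N.0> (127.0.0.1:PORT -> 127.0.0.1:5672, vhost: '/', user: 'guest'):
client unexpectedly closed TCP connection
```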
The status check reports "OK" and the worker is consuming tasks normally, so this doesn't appear to be a connectivity or network issue.
This wouldn't be so bad, but it also seems to trigger a memory leak at the broker. If you run this thousands of times, you will notice RabbitMQ memory usage increase even without queueing any actual messages.