Received error "[Errno 104] Connection reset by peer" after 180 seconds when waiting on task result #5358
Comments
I'm also experiencing this on 4.3.0rc1, even though it has been said in other issues that this should be resolved in 4.3.0. Using broker
Could you please try to reproduce this on the solo pool to ensure this is not a problem with prefork or gevent?
The issue still occurs when using the solo pool. I used the same instructions as above, except the following command-line to start the worker:
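The exact worker command was not captured above; as an illustrative sketch only, a worker running on the solo pool is typically started like this (the app module name `repro` is an assumption, not taken from the issue):

```shell
# Hypothetical invocation — the module name "repro" is assumed.
# --pool=solo runs tasks in the main worker process, ruling out
# prefork/gevent concurrency issues.
celery -A repro worker --pool=solo --loglevel=INFO
```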
Script output/traceback
Log output
Do you have anything useful in RabbitMQ's logs?
@thedrow I am trying to reproduce this issue; it throws Errno 104 after 360 seconds in my case. And the result is consumed (I'm checking the celery queue in the RabbitMQ management web interface, and there are no messages). The RabbitMQ logs do not contain anything at all. Traceback from repro.py:
Same traceback shared via Sentry: https://sentry.io/share/issue/393ef556ff514bf9a019d858283bed49/
Could this be related to client heartbeat?
If I set broker_heartbeat to 20 seconds, it still fails after 355 seconds.
I'm seeing the same problem with RabbitMQ 3.2.4, Python 2.7.15rc1, Ubuntu 18.04, and the latest Kombu/Celery via pyamqp.
Just chiming in: I see the same thing happening on 4.3.0, but on Python 3.6. My worker is running under the following command:
@thedrow It is related to the heartbeat after all. But I was setting the heartbeat this way:
And that way the heartbeat still uses the default value of 120. So I changed the code to:
And now it fails very fast:
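The commenter's actual snippets were not captured above. As a minimal sketch of the documented way to lower the heartbeat via Celery configuration (the URLs are assumptions based on the reproduction setup; `broker_heartbeat` defaults to 120 seconds):

```python
# celeryconfig.py — illustrative sketch; URLs and values are assumptions.
broker_url = 'pyamqp://guest@localhost//'
result_backend = 'rpc://localhost'

# Celery's broker_heartbeat defaults to 120 seconds. Lowering it, as the
# commenter above did, makes a heartbeat-related disconnect show up faster.
broker_heartbeat = 20
```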
Looking at #4876, I wanted to add that I'm only seeing this issue in the client (Django server), so it does seem to be a duplicate.
What is the solution to this bug? |
We don't have one.
Well... reverted when? This bug still persists, so it seems the revert didn't work. (4.4.7)
I agree it isn't an acceptable fix, but we don't have a solution currently.
I'll try DEBUG in my free time.
It is confusing to close issues that haven't been solved yet.
Checklist

- I have included the output of `celery -A proj report` in the issue
  (if you are not able to do this, then at least specify the Celery version affected).
- I have included the contents of `pip freeze` in the issue.
- I have verified that the issue exists against the `master` branch of Celery.

Related Issues and Possible Duplicates

Related Issues

Possible Duplicates

Environment & Settings

Celery version: 4.3.0rc1 (rhubarb) (e257646)

`celery report` Output:

Steps to Reproduce

Required Dependencies

Python Packages

`pip freeze` Output:

Other Dependencies

N/A

Minimally Reproducible Test Case

1. Execute
2. Save the following as repro.py.
3. Execute
Expected Behavior
The client should send all tasks to be executed, then wait on each task in turn. Each task should succeed and output should be printed in the console.
Actual Behavior
180 seconds (3 minutes) after the start of the client, a connection reset is received.
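As background on the error code itself, "[Errno 104] Connection reset by peer" is Linux's ECONNRESET: the TCP peer (here, the broker) closed the connection abruptly while the client was still waiting. A quick stdlib check:

```python
import errno
import os

# On Linux, errno 104 is ECONNRESET — the error the client reports
# when RabbitMQ drops the connection while a result is awaited.
print(errno.errorcode[errno.ECONNRESET])   # ECONNRESET
print(os.strerror(errno.ECONNRESET))       # Connection reset by peer
```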
Traceback
Daemon logs
Combinations tested (broker / result backend):

- pyamqp:// / rpc://localhost
- redis:// / rpc://localhost
- pyamqp:// / redis://