
Celery Tasks Fail Randomly with redis.exceptions.ResponseError: wrong number of arguments for 'subscribe' command #147

Open
SHARANTANGEDA opened this issue Sep 23, 2021 · 5 comments


SHARANTANGEDA commented Sep 23, 2021

Describe the bug
We are facing this issue intermittently: Celery raises the following error on a scheduled run. I believe it is caused by a race condition related to asyncio. We run a single-pod deployment only, yet even with that configuration the issue pops up randomly.

{"stackTrace": "Traceback (most recent call last):\n File \"/code/ops/tasks/anomalyDetectionTasks.py\", line 85, in 
anomalyDetectionJob\n result = _detectionJobs.get()\n File \"/opt/venv/lib/python3.7/site-packages/celery/result.py\", line 680, in get\n on_interval=on_interval,\n File \"/opt/venv/lib/python3.7/site-packages/celery/result.py\", line 799, in 
join_native\n on_message, on_interval):\n File \"/opt/venv/lib/python3.7/site-packages/celery/backends/asynchronous.py\",
 line 150, in iter_native\n for _ in self._wait_for_pending(result, no_ack=no_ack, **kwargs):\n File 
\"/opt/venv/lib/python3.7/site-packages/celery/backends/asynchronous.py\", line 267, in _wait_for_pending\n 
on_interval=on_interval):\n File \"/opt/venv/lib/python3.7/site-packages/celery/backends/asynchronous.py\", line 54, in 
drain_events_until\n yield self.wait_for(p, wait, timeout=interval)\n File \"/opt/venv/lib/python3.7/site-
packages/celery/backends/asynchronous.py\", line 63, in wait_for\n wait(timeout=timeout)\n File 
\"/opt/venv/lib/python3.7/site-packages/celery/backends/redis.py\", line 152, in drain_events\n message = 
self._pubsub.get_message(timeout=timeout)\n File \"/opt/venv/lib/python3.7/site-packages/redis/client.py\", line 3617, in 
get_message\n response = self.parse_response(block=False, timeout=timeout)\n File \"/opt/venv/lib/python3.7/site-
packages/redis/client.py\", line 3505, in parse_response\n response = self._execute(conn, conn.read_response)\n File 
\"/opt/venv/lib/python3.7/site-packages/redis/client.py\", line 3479, in _execute\n return command(*args, **kwargs)\n File 
\"/opt/venv/lib/python3.7/site-packages/redis/connection.py\", line 756, in read_response\n raise 
response\nredis.exceptions.ResponseError: wrong number of arguments for 'subscribe' command\n", "message": "wrong 
number of arguments for 'subscribe' command"}
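
For context, here is a hypothetical reduction of the pattern implied by the traceback: a task fans out a group of subtasks and then blocks on .get() against the Redis result backend. Only anomalyDetectionJob and _detectionJobs appear in the traceback; app, detect, and batches are placeholders, so the real project code will differ.

from celery import Celery, group

app = Celery("ops", broker="redis://...", backend="redis://...")  # placeholder URLs

@app.task
def detect(batch):
    ...  # per-batch anomaly detection (placeholder)

@app.task
def anomalyDetectionJob(batches):
    _detectionJobs = group(detect.s(b) for b in batches).apply_async()
    # .get() waits for the group's results over the Redis result backend's
    # pub/sub channel; this is the call at line 85 of the traceback where the
    # ResponseError surfaces intermittently. disable_sync_subtasks=False is
    # only here so .get() can be called inside a task (placeholder detail).
    return _detectionJobs.get(disable_sync_subtasks=False)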

To Reproduce
Steps to reproduce the behavior:

  1. Create an anomaly definition
  2. Schedule it to run at specific time
  3. Some runs succeed, while others fail with the error above

Expected behavior
Is there any workaround we can use to avoid this issue? Please help.

@ankitkpandey (Contributor)

Hi @SHARANTANGEDA, could you please mention which versions of redis-py, celery, and redis-server you are using? That would help us debug the problem.

@SHARANTANGEDA (Author)

@ankitkpandey
I'm using celery==5.1.2, redis==3.5.3 and Azure Cache for Redis 4.0.14
Hope this helps

@ankitkpandey (Contributor)

@SHARANTANGEDA
I believe this is an internal issue between celery and redis-py for these versions, and it may or may not be related to Azure Cache.
I found a recent discussion of the same problem: it appears to be an argument mismatch when celery tries to subscribe to the Redis server via redis-py. I will try to find out more; the link is attached below.

Issue
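
To illustrate the kind of mismatch being described (not confirmed as the exact root cause here): if redis-py's pub/sub subscribe() is ever issued with zero channel names, the server rejects the bare SUBSCRIBE command, and the error only surfaces on the next get_message() call, which matches the traceback above. This sketch assumes a local Redis at the default address.

import redis

r = redis.Redis()  # assumes redis://localhost:6379/0
p = r.pubsub()
p.subscribe()      # sends SUBSCRIBE with no channel names
p.get_message()    # raises redis.exceptions.ResponseError:
                   #   wrong number of arguments for 'subscribe' command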

@SHARANTANGEDA (Author)

@ankitkpandey Thanks for sharing this. So I assume this is happening because Celery doesn't support asyncio yet, and the call is being made in an async fashion.

On a separate note, is there any workaround you can provide around this?

vincue (Contributor) commented Oct 13, 2021

@ankitkpandey can't this exception be handled and the task retried, with something like the Task Retry Decorator?
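
For example, something along these lines might work as a stopgap (a sketch only; the task name, broker URLs, and retry settings are placeholders, not the project's actual code):

import redis.exceptions
from celery import Celery

app = Celery("ops", broker="redis://...", backend="redis://...")  # placeholder URLs

@app.task(
    bind=True,
    autoretry_for=(redis.exceptions.ResponseError,),  # retry when the pub/sub error bubbles up
    retry_backoff=True,                                # exponential backoff between attempts
    retry_kwargs={"max_retries": 3},
)
def anomaly_detection_job(self, definition_id):
    ...  # run the detection jobs and collect results (placeholder)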
