Hi,
I wanted to know if someone has encountered this issue.
We have a Django service on the server side and Redis as the Celery broker.
We run some long-running tasks via Celery, with two queues, and only one task runs at a time in each queue's worker (we use `worker_prefetch_multiplier = 1`).
Since the tasks are important, we decided to try the `acks_late` feature, so that when Celery goes down and comes back up, the pending tasks continue running instead of all being cleared.
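For reference, here is a minimal sketch of the configuration described above, using Celery's lowercase setting names (the exact values and the visibility timeout are illustrative assumptions, not our real config):

```python
# Sketch of the described setup: acks_late + one task per worker, Redis broker.
task_acks_late = True             # acknowledge only after the task finishes
worker_prefetch_multiplier = 1    # the "fetch multiplier = 1" mentioned above

broker_transport_options = {
    # With the Redis broker, a task that has been delivered but not yet
    # acknowledged is redelivered once this visibility timeout elapses
    # (Celery's default is 3600 seconds). With acks_late, a task that runs
    # longer than the timeout can therefore be handed out again.
    "visibility_timeout": 6 * 3600,  # example value; should exceed the longest task
}
```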
We noticed an issue that started happening as soon as we enabled `acks_late`:
Tasks that were in the pending state, and even the task that was running when we killed the Celery process, enter some kind of infinite loop: even though they finish running, they start running all over again, time after time.
(It's as if, for some reason, when the tasks are done they are not removed from the Redis broker, which makes them run again and again, but this is just speculation!)
I am not sure if this is related, but the issue happens only in environments where we use uWSGI to manage some of our processes, Celery among them, and to make sure that if a process dies, uWSGI brings it back up again.
Has anyone encountered this type of issue before? Any ideas what is causing it?
Thank you,