Django-q calls task twice or more #183
You know that the code you posted is already starting the task, right? If you want to set up a task first and run it manually later, you must use the `Async` class and its `run` method, e.g.:
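For reference, the deferred pattern described here looks roughly like the example in the Django-Q docs (the `math.copysign` target is just the docs' sample function; this sketch assumes a configured Django project with a running cluster):

```python
from django_q.tasks import Async

# Build the task object first; nothing is queued at this point.
a = Async('math.copysign', 2, -2)

# Only run() actually pushes the task onto the broker.
task_id = a.run()

# Optionally poll for the result (wait is in milliseconds).
result = a.result(wait=500)
```

By contrast, calling `async()`/`async_task()` directly enqueues the task immediately, which is how one ends up queuing it twice by accident.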
Judging by what you wrote, it looks like you started the task twice.
I can see that the task is queued. On our server django-q is running non-stop (handled by circus), so I would assume that if I queue a task once, it should only run once.
I've been facing a similar problem: some of my long-running tasks are processed again and again, even after completing successfully. I eventually resolved this by configuring a longer `retry`. According to http://django-q.readthedocs.io/en/latest/configure.html#retry, a task that has not been acknowledged within the retry window is offered to the cluster again.

So I think that's why my tasks are pushed into the queue again after 60 seconds (the default `retry` value). I'm not sure if this can help you, but there might be others confused by a similar problem. Sorry for my poor English.
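To make that failure mode concrete, here is a toy model of an at-least-once broker with a `retry` visibility timeout (plain Python, not Django-Q code; the function name is invented for illustration):

```python
# A consumed task becomes visible again after `retry` seconds unless it
# has been acknowledged, so a worker that runs longer than `retry` will
# see its own task re-delivered to the cluster.
def deliveries(task_duration, retry, horizon):
    """Return the times at which the broker hands out the same task."""
    times = []
    t = 0.0
    while t < horizon:
        times.append(t)            # a worker picks the task up at time t
        if task_duration <= retry:
            break                  # finished and acked before retry fires
        t += retry                 # retry timer expires -> re-delivered
    return times

# Task takes 180 s, default retry is 60 s: re-delivered every 60 s.
print(deliveries(180, 60, 180))    # -> [0.0, 60.0, 120.0]
# With retry raised above the task duration, it is delivered exactly once.
print(deliveries(180, 300, 180))   # -> [0.0]
```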
The ORM broker is a bit experimental and unfortunately not as stable as the other brokers. It might be a problem with your specific database type. None of you mention what your database backend is; it might be that it is very slow and tasks are not locked fast enough. Let me know what backend you're using, maybe this needs to be tested for specifically. I'll look into making the dequeue function lock better, but in the meantime you can experiment by adjusting the poll settings in your configuration. The default 200ms might just be too fast, and you may need to set it to 1 second or even slower to get a reliable lock.
I've tried the AWS SQS broker and the ORM broker, with MySQL and PostgreSQL as database backends. The problem occurs with all of these brokers. I found out it is the `retry` setting that makes my task get taken by the cluster again and again: once the task is finished and unlocked, it is pushed to a worker again. I resolved my problem by setting `retry` to a larger value, so I'm wondering if the original poster can also resolve the problem by re-configuring his `retry` setting.
Yes, if your tasks will be long running you should set your `retry` accordingly. I personally run most of my servers with the SQS broker, because it's cheap and reliable. Maybe I should set the global default for `retry` higher.
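Following that advice, a hypothetical `Q_CLUSTER` configuration (the name and numbers are example values, not from this thread) would keep `retry` comfortably above `timeout` and above any task's real duration:

```python
# Example Django settings fragment for Django-Q.
Q_CLUSTER = {
    'name': 'myproject',   # example values throughout
    'workers': 4,
    'timeout': 300,        # kill a task after 5 minutes
    'retry': 600,          # re-offer the task only if unacked after 10 minutes
    'queue_limit': 50,
    'orm': 'default',      # use the Django ORM as broker
}

# The invariant this thread converges on:
assert Q_CLUSTER['retry'] > Q_CLUSTER['timeout']
```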
I use the ORM broker, and I also encountered the same problem when task execution time is longer than `Conf.RETRY`. How about adding an option such as `no_ack`: when `no_ack=True` is set, dequeue would just remove the task from the broker.
I had my `retry` at -1 and have now changed it to a higher value (1000000, for testing). I will run this in production and check if it solves the problem there as well.
I was having the same problem as you @Eagllus. I had a task which would run for about 180 seconds, so I had configured a `timeout` large enough for it, but `retry` was still at its default of 60 seconds. Configuring my `retry` to be larger than my `timeout` fixed the duplicate runs. I'm struggling to imagine a scenario where you'd actually want to have `retry` smaller than your `timeout`.
@jordanmkoncz thank you for the update! I had set up my configuration a while ago and it was never a problem until a connection timeout was triggered (triggering the 'bug' in my logic), ending up with more than 100k copies of the same task. I updated my configuration and this indeed solved the problem!
This issue has been reported many times in Django-Q's issue tracker: Koed00#183, Koed00#180, Koed00#307. All these issues have been closed, and the responses note that `retry` should be set higher than the `timeout` or duration of any task.
According to this thread, increasing the polling time should reduce multiple workers taking the same task: Koed00/django-q#183 (comment)
* Improved handling of race condition when saving setting value
* Improvements for managing pending migrations
  - Inform user if there are outstanding migrations
  - Reload plugin registry (if necessary)
* Increase django-q polling time; according to this thread, this should reduce multiple workers taking the same task: Koed00/django-q#183 (comment)
* Revert default behavior
* Better logging
* Remove comment
* Update unit test
* Revert maintenance mode behaviour
* Raise ValidationError in settings
My background process is called twice (or more), but I'm quite sure that should not be happening.
My settings for Django Q:
My test task function:
Calling it from the command line:
Starting qcluster (after that, I called the async):
And the function is called twice...
For most functions I wouldn't really care if they ran twice (or more), but I have a task that calls send_mail, and people that are invited receive 2 or more mails...
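Since every broker discussed here delivers at least once, side-effecting tasks like this mail one are safest made idempotent. A minimal sketch of the idea (plain Python; the guard function and store are hypothetical, and a real project would use a shared store such as the cache or a database row instead of an in-process set):

```python
_sent = set()  # stand-in for a shared store (cache key or DB row)

def send_invite_once(invite_id, send):
    """Call send() at most once per invite_id, even if the task is re-delivered."""
    if invite_id in _sent:
        return False           # duplicate delivery: skip the mail
    _sent.add(invite_id)
    send()
    return True

calls = []
send_invite_once(42, lambda: calls.append('mail'))  # first delivery: sends
send_invite_once(42, lambda: calls.append('mail'))  # re-delivery: skipped
print(calls)  # -> ['mail']
```

With a guard like this, a misconfigured `retry` still wastes worker time, but invitees no longer get duplicate mails.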
Is this a bug in Django Q or in my logic?