DatabaseBackend.cleanup causes OOM kill for large expired result #232
Comments
I would say we should go with that option. Additionally, the current behaviour could be kept the same, with a config flag added that enables the new "bypass ORM/signal delete" mechanism.
I will look deeply into the issue. Thanks for the report, btw.
As the docs mention, there needs to be a … In my case, we have a general "auditing middleware" which listens to this signal without any …
Workaround: the default "clean_up" task is scheduled at 4 AM, so I have a custom task scheduled at 3:50 AM which does the "raw delete", and when the …
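As a rough illustration of the workaround described above, a custom raw-delete task could be registered in the beat schedule shortly before the built-in cleanup. This is only a hedged sketch: the task name `myapp.tasks.raw_cleanup` and the schedule key are made up, not part of celery or django-celery-results.

```python
# Hypothetical sketch of the workaround above: run a custom raw-delete
# task at 3:50 AM, ten minutes before the built-in cleanup (4 AM by
# default). "myapp.tasks.raw_cleanup" is an illustrative task name.
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    "raw-cleanup-before-builtin-cleanup": {
        "task": "myapp.tasks.raw_cleanup",   # hypothetical custom task
        "schedule": crontab(hour=3, minute=50),
    },
}
```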
@auvipy any update? Should I go ahead with the PR?
Yes, please.
`queryset._raw_delete(queryset.db)` is a private API ;)
Problem
`DatabaseBackend.cleanup()` uses Django's `QuerySet.delete()` method, which can be very slow and cause a lot of memory bloat, since the operation loads all objects into memory before deleting them, as the Django docs explain.
Possible Solutions
a. Use raw SQL for deletion, with `RawQuerySet` to delete the selected rows. (Note: like `QuerySet`, these are lazy, so we would need to make sure the query actually fires.)
b. Use raw SQL for deletion, via the `connection.cursor().execute()` API.
c. Use the `QuerySet._raw_delete(using)` abstraction. The only issue is that the method is private.
Other notes/reads
A `bulk_delete` method, which is more efficient at deleting large volumes of records.
To conclude
Since the celery results table can grow very fast (on systems that execute lots of tiny Celery tasks), this behaviour could cause OOM kills on such systems, and the actual "delete/cleanup operation" is never performed.
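A framework-agnostic sketch of the difference, using plain `sqlite3` from the standard library (this is not celery or Django code, and the table and column names are made up): a single raw DELETE statement removes expired rows server-side, so client memory stays flat no matter how many rows are expired, which is what options a-c above aim for, versus loading every expired object into memory first.

```python
import sqlite3
from datetime import datetime, timedelta

# Illustrative sketch only: one raw DELETE removes all expired rows
# without the client ever materializing them, unlike an ORM delete
# that loads objects first to fire per-object signals/cascades.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE taskmeta (id INTEGER PRIMARY KEY, date_done TEXT)")

now = datetime(2020, 1, 10)
conn.executemany(
    "INSERT INTO taskmeta VALUES (?, ?)",
    [(i, (now - timedelta(days=i)).isoformat()) for i in range(100)],
)

# Delete everything older than 7 days in a single statement.
cutoff = (now - timedelta(days=7)).isoformat()
cur = conn.execute("DELETE FROM taskmeta WHERE date_done < ?", (cutoff,))
conn.commit()
print(cur.rowcount)  # 92: rows 8..99 are expired
```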
PS: I would like to contribute this fix, if given the green light here.