Performance issues with revisions and activity #17894
Comments
What database are you using?
I'm not OP, but I'm having the same issue with Postgres 14 running on AWS RDS.
Same issue here. Revisions take a really long time to load; we have about 700,000 records in the directus_revisions table. I use two fields for caching stock values from our slow ERP, which are updated every 6 hours. It would be nice to be able to disable revision logging for certain fields.
I added an index on the
On top of the index suggestion, we should also have a configuration option to limit the number of revisions kept per item, so they rotate: for example, only save the last 0, 5, 10, 20, or 50 revisions per item, probably configurable per collection. Without a limit, Directus can crash because it runs out of memory (OOMKilled). Another thing we could do is truncate
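Until such a setting exists, the rotation described above can be approximated manually. A minimal Postgres sketch, assuming the default `directus_revisions` schema (`id`, `collection`, `item`); the per-item limit of 20 is illustrative, and this should be run in a maintenance window at your own risk:

```sql
-- Keep only the newest 20 revisions per (collection, item); delete the rest.
-- ROW_NUMBER() ranks each item's revisions from newest to oldest by id.
DELETE FROM directus_revisions
WHERE id IN (
  SELECT id FROM (
    SELECT id,
           ROW_NUMBER() OVER (
             PARTITION BY collection, item
             ORDER BY id DESC
           ) AS rn
    FROM directus_revisions
  ) ranked
  WHERE rn > 20
);
```

Ordering by `id DESC` uses the auto-incrementing primary key as a recency proxy, which avoids needing a timestamp column on the revisions table itself.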
Partial fix numero uno here is to ensure there's some configuration to control the max retention on those tables. Part of the problem is that they currently grow without limit. #18105 implements two new env vars to control the max retention time for activity and revisions separately.
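The exact variable names depend on what #18105 ships; this is a hypothetical `.env` sketch with illustrative names only — check the merged PR for the real ones:

```
# Hypothetical variable names -- see #18105 for the actual ones.
ACTIVITY_RETENTION=90d
REVISIONS_RETENTION=90d
```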
Same issue here, ~1.1M records.
For those struggling with issues similar to this (but not quite exactly the one OP opened): we wanted to clean up the revision/activity tables, and the estimated time for the delete was ~3 hours; after the change below, it ran immediately. The fix is to add an index to the parent column, since the self-referencing foreign key on directus_revisions otherwise has no index to use when checking child rows during deletes. These indexes might help with selects too, but I haven't had any issues there in this particular case. For reference (use at your own risk):

```sql
CREATE INDEX custom_directus_revisions_parent_idx ON public.directus_revisions USING btree (parent);

ALTER TABLE public.directus_revisions DROP CONSTRAINT directus_revisions_parent_foreign;
ALTER TABLE public.directus_revisions ADD CONSTRAINT directus_revisions_parent_foreign
  FOREIGN KEY (parent) REFERENCES public.directus_revisions(id) ON DELETE SET NULL;

CREATE INDEX custom_directus_revisions_activity_idx ON public.directus_revisions (activity);
```
Just to confirm, we had the same issue: (i) random 504s (which I could see in the request monitor were very slow or timed-out requests for revisions) and (ii) the list of revisions in the single-item sidebar taking ages to load. I can confirm @BenoitAverty and @bevanmw's suggestion to add an index to the item column.
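For Postgres users, an index matching the sidebar query — which, per the OP, filters on both collection and item — might look like the sketch below. The index name is illustrative and this is not an official Directus migration:

```sql
-- Composite index covering "revisions for this item" lookups.
-- A query filtering WHERE collection = ? AND item = ? can use it directly.
CREATE INDEX custom_directus_revisions_collection_item_idx
  ON public.directus_revisions (collection, item);
```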
Also want to mention this became trickier with the addition of Content Versioning: when we try to delete a collection, it freezes the whole project. See directus/api/src/services/collections.ts, lines 587 to 680 at 19598eb.
I wanted to open a new issue, but saw this one is in progress. So just adding that any use of LOWER() on these columns prevents the plain indexes from being used. If possible, standardize the case of the values in these columns when data is inserted or updated; that way you can avoid using LOWER() during querying.
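As an alternative to standardizing case at write time, Postgres supports expression indexes, so a LOWER()-based filter can still be served by an index. A sketch — the index name and example value are illustrative:

```sql
-- Matches queries of the form: WHERE LOWER(collection) = 'articles'
-- The planner can only use this index when the query uses the
-- exact same expression, LOWER(collection).
CREATE INDEX custom_directus_revisions_lower_collection_idx
  ON public.directus_revisions (LOWER(collection));
```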
Describe the Bug
There is a problem with the revisions and activity queries.
When I navigate in my directus instance, I regularly get a popup that says that a request failed with status 504.
When I look in my devtools, it's the requests fetching the revisions for a particular item.
I have to purge the activity and revisions often to avoid that, even though I don't have a particularly big database (~1 million rows in activity and ~500,000 in revisions).
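A periodic purge like this is essentially a time-based delete. A hedged Postgres sketch, assuming the default column names (`directus_revisions.activity`, `directus_activity.timestamp`); the 90-day cutoff is illustrative, and revisions are deleted first so the activity delete doesn't trip over foreign-key references:

```sql
-- Delete revisions tied to activity older than 90 days, then the activity itself.
DELETE FROM directus_revisions
WHERE activity IN (
  SELECT id FROM directus_activity
  WHERE "timestamp" < NOW() - INTERVAL '90 days'
);

DELETE FROM directus_activity
WHERE "timestamp" < NOW() - INTERVAL '90 days';
```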
My guess is that there's a missing index in the revisions table, because the query is almost always fetching revisions for a particular collection and item: there should be an index on the item column.
To Reproduce
I don't think there's a reliable way to reproduce this, because it depends on the volume of revisions and the performance of the database. However, it should be possible to measure a performance boost with an added index.
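One way to measure that boost, assuming Postgres and the default schema (the collection name and item value below are illustrative): compare the query plan for the sidebar's revisions query before and after adding an index on `item`:

```sql
-- Run once before and once after creating the index, and compare
-- the plan node (Seq Scan vs. Index/Bitmap Scan) and execution time.
EXPLAIN ANALYZE
SELECT *
FROM directus_revisions
WHERE collection = 'articles' AND item = '42'
ORDER BY id DESC
LIMIT 100;
```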
Hosting Strategy
Self-Hosted (Docker Image)