fix(core): Optimize search index update queries #2808
Conversation
This approach is good. The reason I think this has not come up until now is that I suspect most projects working with very large data sets are using the ElasticsearchPlugin or some other search integration which is more memory-efficient when building the index.
@michaelbromley The main issue, I think, is with TypeORM:
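To illustrate what I mean (a minimal sketch, with assumed entity and relation names rather than the exact Vendure schema): when several *-to-many relations are loaded in a single query, TypeORM emits one SELECT with LEFT JOINs, and the raw row count grows as the product of the joined collections:

```ts
import { DataSource } from 'typeorm';

// Illustrative only: one query that joins every relation at once.
async function loadEverythingAtOnce(dataSource: DataSource) {
  return dataSource.getRepository('ProductVariant').find({
    relations: [
      'translations',
      'product',
      'product.translations',
      'product.facetValues',
      'product.facetValues.translations',
      'collections',
      'collections.translations',
    ],
  });
  // Every additional *-to-many relation here multiplies the number of raw
  // rows the database returns and TypeORM has to de-duplicate in memory.
}
```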
@michaelbromley I think I've fixed the E2E tests, but I've seen some strange behavior with the translations. I hope I've fixed it and didn't make any mistakes:
I've updated the E2E tests where necessary and ticked the remaining checkboxes (I'm not sure where to update the readme, or whether it's needed at all).
Hi!
Done!
Thank you!
Description
The current search index update procedure loads all the product variants along with all of their products, collections, facetValues, and facet relations.
This data set is huge by definition, and most of these entities carry translations as well.
In my case it was loading 40 million rows for ~4,900 products across 4 channels and 4 languages.
I was only trying to reindex a single channel with all its products, but after a while (and 16 GB of RAM) the Node process exited with a memory allocation error.
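For a rough sense of how the joins multiply (only the product, channel, and language counts above are real; the other multipliers are assumptions for illustration):

```ts
// Back-of-the-envelope estimate of raw rows returned by one fully-joined query.
const products = 4_900;            // real figure from above
const variantsPerProduct = 3;      // assumption
const channels = 4;                // real figure from above
const languages = 4;               // real figure from above
const facetValuesPerProduct = 10;  // assumption
const collectionsPerProduct = 5;   // assumption

const estimatedRows =
  products * variantsPerProduct * channels * languages *
  facetValuesPerProduct * collectionsPerProduct;

console.log(estimatedRows); // 11760000 — same order of magnitude as the ~40M rows observed
```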
What I've changed is the following: on any search index update I'm now loading the required relations explicitly (instead of relying on eager relation loading).
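A minimal sketch of the idea, assuming illustrative entity/relation names and batch size (not the exact implementation):

```ts
import { DataSource, In } from 'typeorm';

const BATCH_SIZE = 1000; // assumed value

async function reindexChannel(dataSource: DataSource, channelId: number) {
  const variantRepo = dataSource.getRepository('ProductVariant');

  // 1. Cheap query first: only the IDs of the variants to index.
  const ids = (
    await variantRepo
      .createQueryBuilder('variant')
      .select('variant.id', 'id')
      .innerJoin('variant.channels', 'channel', 'channel.id = :channelId', { channelId })
      .getRawMany<{ id: number }>()
  ).map(row => row.id);

  // 2. Hydrate the relations in fixed-size batches with separate queries,
  //    so only one batch of fully-loaded entities is in memory at a time.
  for (let i = 0; i < ids.length; i += BATCH_SIZE) {
    const batchIds = ids.slice(i, i + BATCH_SIZE);
    const variants = await variantRepo.find({
      where: { id: In(batchIds) },
      relations: ['translations', 'product', 'product.translations', 'facetValues', 'collections'],
    });
    // ...build or update the search index entries for this batch...
  }
}
```

Peak memory is then bounded by the batch size rather than by the total catalog size.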
Breaking changes
Checklist
📌 Always:
👍 Most of the time: