Make batches for the indexation #9
Currently, the plugin fetches all the data to be indexed and sends it to MeiliSearch in one big batch (gatsby-plugin-meilisearch/gatsby-node.js, lines 37 to 38 in b47111a).

In case the dataset is very large, we want to avoid a payload that is too big for the server hosting MeiliSearch. The best solution would be to split the documents into smaller chunks before sending them. For example, if your …
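The chunking step described above can be sketched as a small helper. This is an illustrative sketch, not the plugin's actual implementation; the function name and default batch size of 1000 are assumptions based on this issue.

```javascript
// Hypothetical helper: split an array of documents into fixed-size
// chunks before sending each chunk to MeiliSearch. Name and defaults
// are illustrative, not the plugin's actual code.
function chunkDocuments(documents, batchSize = 1000) {
  const batches = [];
  for (let i = 0; i < documents.length; i += batchSize) {
    batches.push(documents.slice(i, i + batchSize));
  }
  return batches;
}
```

Each chunk can then be sent in its own request, keeping every payload under the server's size limit regardless of how large the full dataset is.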
54: Implemented adding to index in batches (#9) r=bidoubiwa a=TommasoAmici Hello, please consider this a draft; I opened the MR so others can see I'm working on it. I've started implementing the changes required to add documents in batches (#9). `index.addDocumentsInBatches` appears to fail silently in some cases (e.g. 'Wrong transformer', 'Document has no id'), whereas in the existing tests `index.addDocuments` used to throw errors for them. I will investigate the behavior of `index.addDocumentsInBatches` further to see whether there are parameters to tweak; otherwise, it looks like some logic needs to be added to the plugin to handle these cases. Co-authored-by: Tommaso Amici <me@tommasoamici.com>
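One way to avoid the silent failures mentioned above is to send the batches sequentially and collect per-batch errors explicitly. The sketch below is a hedged illustration, not the PR's code: `addInBatches` is a hypothetical name, and the client is injected as any object with an `addDocuments(batch)` method (such as a MeiliSearch index) so the batching logic can be exercised without a server.

```javascript
// Hypothetical sketch: send batches one at a time and surface
// per-batch errors instead of failing silently. `client` is any
// object exposing addDocuments(batch); injected for testability.
async function addInBatches(client, documents, batchSize = 1000) {
  const errors = [];
  for (let i = 0; i < documents.length; i += batchSize) {
    const batch = documents.slice(i, i + batchSize);
    try {
      await client.addDocuments(batch);
    } catch (err) {
      // Record which batch failed so the plugin can report it.
      errors.push({ offset: i, error: err });
    }
  }
  return errors;
}
```

Returning the collected errors lets the caller decide whether to log a warning or fail the Gatsby build, rather than losing the failure inside the batching helper.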
Closing as this was done with #54
Split the indexation into batches of 1000 documents (by default)