Index-level caching is one of many knobs that can be tuned to improve Elasticsearch performance. Ironically, disabling a cache or reducing its size is often what improves performance.
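For context, shrinking or disabling the query cache today is a static, manual decision rather than anything the node adapts on its own. A minimal example of the kind of setting involved (the 5% value is illustrative):

```yaml
# elasticsearch.yml — cap the node-wide query cache budget
# (it defaults to a fixed percentage of heap)
indices.queries.cache.size: 5%
```

There is also a per-index switch (`index.queries.cache.enabled: false`), but both are set up front; neither reacts to observed hit rates.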
Elasticsearch tracks hit/miss statistics for the query cache, but it does nothing with that data beyond informing an interested user about how effective their index's caching is.
It would be interesting if the node could tune the index query cache heuristically based on actual usage (age, frequency, and value). This could likely be combined with more automatic, global tweaks based on the node's data tier.
This could ultimately give GBs of heap back to an individual ES node instead of burning it on cache storage that is rarely hit (or useful).
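To make the idea concrete, here is a minimal sketch of one possible heuristic: feed the hit/miss counters the node already tracks into a resize decision. The class name, thresholds, and method are all hypothetical, not Elasticsearch APIs:

```java
// Hypothetical sketch: resize a query cache budget based on observed hit ratio.
// All names and thresholds are illustrative, not part of Elasticsearch.
public class QueryCacheTuner {
    static final double LOW_HIT_RATIO = 0.10;  // below this, the cache is mostly wasted heap
    static final double HIGH_HIT_RATIO = 0.50; // above this, the cache is earning its keep
    static final long MIN_BYTES = 16L << 20;   // never shrink below 16 MiB
    static final long MIN_SAMPLES = 1000;      // don't act on too few lookups

    /** Suggest a new cache budget from observed hit/miss counts. */
    public static long suggestCacheSizeBytes(long hits, long misses, long currentBytes) {
        long total = hits + misses;
        if (total < MIN_SAMPLES) {
            return currentBytes; // not enough data to judge
        }
        double hitRatio = (double) hits / total;
        if (hitRatio < LOW_HIT_RATIO) {
            return Math.max(MIN_BYTES, currentBytes / 2); // give heap back
        }
        if (hitRatio > HIGH_HIT_RATIO) {
            return currentBytes * 2; // cache is effective; give it more room
        }
        return currentBytes;
    }

    public static void main(String[] args) {
        long cur = 512L << 20; // 512 MiB
        System.out.println(suggestCacheSizeBytes(50, 9950, cur));   // low hit ratio: shrink
        System.out.println(suggestCacheSizeBytes(8000, 2000, cur)); // high hit ratio: grow
    }
}
```

A real implementation would also need to weigh recency and per-query cost (the "age, frequency, and value" above), and apply hysteresis so the budget doesn't oscillate between consecutive measurement windows.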
I agree with the general observation that Elasticsearch is likely often burning GBs of heap on the filter cache for little benefit. We should more proactively give back heap to the JVM when possible.
In my opinion this is a general Lucene issue; shall we move it to Lucene's JIRA?