The default number of Elasticsearch shards per index for the Graylog charm is 2 [0]. This means that when more than two ES nodes are deployed, one or more of them get no shards assigned and sit essentially unused for the lifetime of each index. When the index is rotated a different set of nodes is typically chosen, so some of the compute capacity goes unused each cycle, and disk usage can become uneven (the volume of logs varies over time while different nodes are in use).
Another option would be to set the value to '0' so the charm handles this automatically, but it is not clear how well that works for expansions of existing clusters, etc.
index_shards:
  default: 2
  description: |
    Number of Elasticsearch shards used per index in this index set. Set this
    to '0' to let the charm automatically calculate based on how many
    Elasticsearch units.
  source: default
  type: int
  value: 2
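As a workaround today, the option can be set by hand to match the number of deployed Elasticsearch units, e.g. via the juju CLI (the application names 'graylog' and 'elasticsearch' and the unit count of 3 below are assumptions for illustration, not taken from a specific deployment):

  # Check how many Elasticsearch units are deployed, then size index_shards
  # to match so that every node can be assigned a shard of each index.
  juju status elasticsearch
  juju config graylog index_shards=3   # e.g. for a 3-unit Elasticsearch cluster

Note that the changed value only applies to indices created afterwards (i.e. on the next rotation), since the shard count of an existing Elasticsearch index cannot be changed in place.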
[0] https://jaas.ai/graylog