
Some cluster settings don't take effect if set from elasticsearch.yml, but work fine if set via API #40803

Open · gwbrown opened this issue Apr 3, 2019 · 11 comments
Labels: >bug, :Core/Infra/Settings (Settings infrastructure and APIs), help wanted, adoptme, Team:Core/Infra (Meta label for core/infra team)

Comments

gwbrown (Contributor) commented Apr 3, 2019

Some cluster-level settings don't take effect unless they're set via the Cluster Settings API; values specified in elasticsearch.yml are ignored. They show up in GET _cluster/settings?include_defaults=true but otherwise have no effect.

This is caused by reading these settings only from the cluster state. For a value from elasticsearch.yml to take effect, the setting needs to be held in memory, initialized from the node's Settings object during startup, and then kept up to date by a cluster state / settings update listener, rather than being read directly from the cluster state on each use (see the sketch after this comment).

So far, I've found these settings which don't respect values set in elasticsearch.yml:
cluster.blocks.read_only
cluster.max_shards_per_node (fixed in #57234)

/cc @joegallo who pointed this out
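(For reference, the fix described above follows the usual Elasticsearch settings pattern: declare the setting with both NodeScope and Dynamic properties, read the initial value from the node's Settings at startup, and register a settings update consumer for subsequent API changes. The sketch below is illustrative only; the class and field names are not the ones touched by the eventual fix, and the setting would also need to be registered with ClusterSettings.)

import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;

// Illustrative holder class; the real setting is defined elsewhere in the codebase.
public class MaxShardsPerNodeHolder {

    // NodeScope lets the value be read from elasticsearch.yml at startup;
    // Dynamic lets it be changed later through the Cluster Settings API.
    public static final Setting<Integer> MAX_SHARDS_PER_NODE =
        Setting.intSetting("cluster.max_shards_per_node", 1000, 1,
            Setting.Property.Dynamic, Setting.Property.NodeScope);

    private volatile int maxShardsPerNode;

    public MaxShardsPerNodeHolder(Settings settings, ClusterSettings clusterSettings) {
        // Initialize from the node's startup Settings (elasticsearch.yml and other startup config)...
        this.maxShardsPerNode = MAX_SHARDS_PER_NODE.get(settings);
        // ...then keep the in-memory value in sync with dynamic updates applied via the API.
        clusterSettings.addSettingsUpdateConsumer(MAX_SHARDS_PER_NODE, value -> this.maxShardsPerNode = value);
    }

    public int getMaxShardsPerNode() {
        return maxShardsPerNode;
    }
}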

gwbrown added the >bug and :Core/Infra/Settings (Settings infrastructure and APIs) labels on Apr 3, 2019
elasticmachine (Collaborator) commented:

Pinging @elastic/es-core-infra

dmitry-ee commented:

I can confirm the bug.
Setting it via the Docker environment variable -e "cluster.max_shards_per_node=3000" does not work.
I wasted three hours tracking down where the 1000-shard limit was coming from before finding this issue.

Part of exception:
Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1366]/[1000] maximum shards open

Current workaround:
curl -X PUT localhost:9200/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "3000" } }'
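(To verify the new limit afterwards, against the same localhost:9200 endpoint as above; filter_path is just the standard response-filtering parameter, used here to trim the output:)

curl 'localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.max_shards_per_node'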

Pupkur commented Jul 2, 2019

.opendistro_security index does not exists, attempt to create it ... ERR: An unexpected IllegalArgumentException occured: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [2670]/[1000] maximum shards open;
Trace:
java.lang.IllegalArgumentException: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [2670]/[1000] maximum shards open;

And when I try:

curl -k -X PUT https://SERVER_IP:9200/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "3000" } }'

Got error:
Open Distro Security not initialized.

gwbrown (Contributor, Author) commented Jul 2, 2019

@Pupkur That looks like an issue with OpenDistro, which we don't maintain and can't provide support for. We do our best to answer questions about our default distribution and core Elasticsearch functionality on our forums and address bugs here on GitHub, but for third-party plugins you'll need to get support from either the creator of the plugin or a community with knowledge about that plugin. Let's keep the discussion on this issue focused on the bug described in the first post.


gperrego commented May 26, 2020

We are having the same issue on 7.7; we can work around it using the API, but that's still a problem. Is there a specific section of the YAML file this setting should go in? We weren't sure where to place it. I don't think placement should matter, but I wanted to check.
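(For reference: elasticsearch.yml has no required sections; cluster-level settings are conventionally written as flat dotted keys at the top level, so placement within the file should not matter. A minimal example is below, though per this issue the value set this way was still ignored until the fix in #57234.)

# elasticsearch.yml: placement within the file does not matter
cluster.max_shards_per_node: 3000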

gwbrown (Contributor, Author) commented Jun 9, 2020

The cluster.max_shards_per_node issue has been fixed in #57234. Because cluster.blocks.read_only is highly unlikely to be set via the YAML config, I'm going to close this issue. We can reopen it if we decide we want to address that in the future.

gwbrown closed this as completed on Jun 9, 2020
M9k commented Jun 22, 2020

If anyone is working in the Kibana Dev Tools console rather than curl:
PUT /_cluster/settings
{ "persistent": { "cluster.max_shards_per_node": "5000" } }

DaveCTurner (Contributor) commented:

> Because cluster.blocks.read_only is highly unlikely to be set via the YAML config

@gwbrown I don't think this is the right reasoning to use here. I think we shouldn't silently ignore settings like this, regardless of how likely they are to be used.

In this case, you might for instance expect to be able to use this setting to bring up a cluster in a purely read-only mode: perhaps you want to do some forensic analysis on a snapshot of the cluster and be sure that the data will remain unchanged while you're doing it. I think that silently ignoring this setting would be surprising to users trying to do that (see the example after this comment).

I'm reopening this issue for that reason.
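(For illustration, the startup configuration described above would look something like the line below in elasticsearch.yml; per this issue, it is currently ignored there and the block only takes effect when set through the Cluster Settings API.)

# elasticsearch.yml: intended read-only startup for forensic analysis
# (currently ignored here; must be applied via PUT _cluster/settings instead)
cluster.blocks.read_only: true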

DaveCTurner reopened this on Jun 23, 2020
rjernst added the needs:triage (Requires assignment of a team area label) label on Dec 3, 2020
gwbrown added the help wanted and adoptme labels and removed the needs:triage label on Dec 19, 2020
mouglou commented May 5, 2021

Hi!

I run the ELK stack on version 7.8.1 and hit the same problem.
As you say, setting this value via the API with the "persistent" option works fine and solved the problem for us.

Thanks!

gaby commented Jul 27, 2021

Are there any plans to fix this? The ticket has been open for over 2 years.
