Default max local storage nodes to one #19964
Conversation
This commit defaults the max local storage nodes to one. The motivation for this change is that a default value greater than one is dangerous: users sometimes end up unknowingly starting a second node and then believe they have encountered data loss.
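With this change, a node refuses to start when another node already owns the same data path, unless the limit is raised explicitly. A minimal sketch of the opt-in, in `elasticsearch.yml` (the setting name appears in this PR's diff; the value shown is illustrative):

```yaml
# Explicitly allow up to two nodes to share this data path.
# The default is now 1, so a second node on the same path fails to start.
node.max_local_storage_nodes: 2
```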
@@ -261,6 +261,7 @@ class ClusterFormationTasks {
        'node.attr.testattr' : 'test',
        'repositories.url.allowed_urls': 'http://snapshot.test*'
    ]
+   esConfig['node.max_local_storage_nodes'] = node.config.numNodes
Cute.
This commit adjusts the node max local storage nodes setting value for some tests:
- the provided value for ESIntegTestCase is derived from the value of the annotations
- the default value for the InternalTestCluster is more carefully calculated
- the value for the tribe unit tests is adjusted to reflect that there are two clusters in play
Also note that when locks are lost, the exception is masked and dropped on the floor, and ES keeps on trucking. Oh, and did I mention they are filesystem locks? Seems legitimately unsafe.
This commit simplifies the handling of max local storage nodes in integration tests by just setting the default max local storage nodes to be the maximum possible integer.
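To illustrate what "maximum possible integer" means for the test default, here is a hypothetical sketch (the class and method names are mine, not the actual ES test-framework code; only the setting key comes from this PR):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: integration tests lift the per-path node limit
// entirely by setting it to Integer.MAX_VALUE, so any number of test
// nodes can share one data path without tripping the new default of 1.
public final class TestClusterSettings {

    public static Map<String, String> defaults() {
        Map<String, String> settings = new HashMap<>();
        // Integer.MAX_VALUE == 2147483647: effectively "unlimited"
        settings.put("node.max_local_storage_nodes",
                String.valueOf(Integer.MAX_VALUE));
        return settings;
    }

    public static void main(String[] args) {
        System.out.println(defaults().get("node.max_local_storage_nodes"));
    }
}
```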
LGTM
Can we just remove this feature entirely, such that users must always be explicit on starting a node about precisely which? I think this is too much magic on ES's part, trying to have multiple nodes share.
@mikemccand Doesn't this get us there? Now you have to be explicit about wanting multiple nodes that share the same.
Thanks for reviewing @nik9000. I've merged this @mikemccand, but that shouldn't stop discussion on your proposal!
This is definitely an improvement (thank you! progress not perfection!), but what I'm saying is I don't think such dangerous magic should even be an option in ES.
Personally I feel the design is broken anyway, as I've said over and over again. It relies on filesystem locking, which is unreliable by definition. But worse, it's lenient. Start up ES, index some docs, and go nuke that
Why is such an important exception dropped on the floor and merely translated into a logger WARN? This feature is 100% unsafe.
I'm ok with removing the feature altogether. If we're already breaking backwards compatibility with the setting, maybe we can just kill it? @clintongormley, what do you think? I like that we did this now, though, because I have a feeling that even if we do decide to remove the feature it'll take some time, because lots of tests and the gradle build rely on it.
The gradle build does not depend on it. Integ tests have unique installations per node, and even fantasy land tests create a unique temp dir per node for path.home iirc. |
Given that the default is now 1, the comment in the config file was outdated. Also considering that the default value is production ready, we shouldn't list it among the values that need attention when going to production. Relates to elastic#19964
Closes #19679, supersedes #19748