Fix TransportMasterNodeAction not Retrying NodeClosedException #51325
Commits in this pull request: 6bee961, 323f0d5, 3378d3d, d3b05a1, 1d07a26, 646823d
@@ -178,7 +178,7 @@ protected void doStart(ClusterState clusterState) {
             @Override
             public void handleException(final TransportException exp) {
                 Throwable cause = exp.unwrapCause();
-                if (cause instanceof ConnectTransportException) {
+                if (cause instanceof ConnectTransportException || cause instanceof NodeClosedException) {
                     // we want to retry here a bit to see if a new master is elected
                     logger.debug("connection exception while trying to forward request with action name [{}] to " +
                         "master node [{}], scheduling a retry. Error: [{}]",

Inline review comments on the changed line:

I think we should separate out cases where the […]. Adding a check that the […]

Thanks Henning, I pushed 1d07a26 :)

AFAICS, TransportException is not an ElasticsearchWrapperException (in contrast to RemoteTransportException), and therefore […]. This means that the code as before would have been fine, I think.

I think we can run into […]

^^ seems to have happened here https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+pull-request-1/14196/consoleText

alright, the tests and you win
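As an aside on the unwrapCause discussion above, here is a minimal, self-contained sketch, using toy exception types rather than the real Elasticsearch classes, of why the instanceof checks run against the unwrapped cause: a NodeClosedException that arrives inside a wrapper such as RemoteTransportException would never match if the outer exception were checked directly.

```java
// Toy stand-ins: WrapperException plays the role of a transport-level wrapper
// (e.g. RemoteTransportException); NodeClosedException only mirrors the real exception's name.
class WrapperException extends RuntimeException {
    WrapperException(Throwable cause) { super(cause); }
}

class NodeClosedException extends RuntimeException {
}

public class UnwrapDemo {

    // Keep unwrapping while the exception is only a wrapper around the real failure.
    static Throwable unwrapCause(Throwable t) {
        while (t instanceof WrapperException && t.getCause() != null) {
            t = t.getCause();
        }
        return t;
    }

    // Mirrors the shape of the changed condition: retry when the *unwrapped* cause
    // signals that the master went away.
    static boolean shouldRetry(Throwable exp) {
        Throwable cause = unwrapCause(exp);
        return cause instanceof NodeClosedException;
    }

    public static void main(String[] args) {
        Throwable remoteFailure = new WrapperException(new NodeClosedException());
        System.out.println(shouldRetry(remoteFailure));          // true: schedule a retry
        System.out.println(shouldRetry(new RuntimeException())); // false: fail the request
    }
}
```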
Can you write an integration test that shows that this solves an issue? Perhaps a test that restarts the current master while concurrently sending some TransportMasterNodeReadAction (e.g. cluster health).

I'm surprised that we have not seen this issue in any of our integration tests, where we sometimes restart nodes (but perhaps not while concurrently issuing master-level requests).
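Purely for illustration (the actual test is the one pushed in 3378d3d, referenced below), here is a rough sketch of the kind of test being asked for. It assumes the 7.x ESIntegTestCase / InternalTestCluster test APIs (getMasterName, restartNode, prepareHealth); the class and method names are made up, and error propagation from the request thread is omitted for brevity.

```java
import java.util.concurrent.atomic.AtomicBoolean;

import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.InternalTestCluster;

public class MasterRestartRetryIT extends ESIntegTestCase {

    public void testMasterLevelRequestsSurviveMasterRestart() throws Exception {
        internalCluster().ensureAtLeastNumDataNodes(2);

        final AtomicBoolean stop = new AtomicBoolean();
        final Thread requester = new Thread(() -> {
            while (stop.get() == false) {
                // a TransportMasterNodeReadAction, e.g. cluster health: it should be
                // retried, not failed, when the master shuts down underneath it
                client().admin().cluster().prepareHealth().get();
            }
        });
        requester.start();

        // restart the current master while the health requests are in flight
        internalCluster().restartNode(internalCluster().getMasterName(),
            new InternalTestCluster.RestartCallback());

        stop.set(true);
        requester.join();
    }
}
```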
There we go: 3378d3d

Bit of a dirty test, but it reliably reproduces the issue without any other hacks :)

I've researched this a little and the problem seems to be isolated to 7.x+. When shutting down a master there is a short window during which:

- org.elasticsearch.cluster.coordination.Coordinator#clusterStateWithNoMasterBlock […]
- […] NotMasterException […] and add the cluster state observer for the retry, waiting for the new master. That new master never comes, and instead the wait is shut down with the NodeClosedException when the cluster service is shut down
- […] NodeClosedException back from the master

=> this fix still seems fine: if we retry on NodeClosedException because we can interpret it as the master node going away, we're good IMO.
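To make the retry path in that explanation easier to follow, here is a hedged, simplified sketch of the observer-based wait. It assumes the 7.x ClusterStateObserver, NodeClosedException and MasterNotDiscoveredException APIs and is not the actual TransportMasterNodeAction source; the helper method and its parameters are made up for illustration.

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateObserver;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.discovery.MasterNotDiscoveredException;
import org.elasticsearch.node.NodeClosedException;

class RetrySketch {

    /**
     * Wait for the next applicable cluster state (e.g. one with a new master) and then
     * re-dispatch the request. If the cluster service shuts down while we wait, the
     * observer calls onClusterServiceClose() and the request fails with
     * NodeClosedException -- the same exception the changed instanceof check now treats
     * as "master went away, retry" when it comes back from a remote master.
     */
    static void retry(ClusterStateObserver observer,
                      DiscoveryNode localNode,
                      Consumer<ClusterState> restart,
                      ActionListener<ActionResponse> listener,
                      Predicate<ClusterState> statePredicate) {
        observer.waitForNextChange(new ClusterStateObserver.Listener() {
            @Override
            public void onNewClusterState(ClusterState state) {
                restart.accept(state); // a new master was elected: try again
            }

            @Override
            public void onClusterServiceClose() {
                // the node is shutting down: surface this as NodeClosedException
                listener.onFailure(new NodeClosedException(localNode));
            }

            @Override
            public void onTimeout(TimeValue timeout) {
                listener.onFailure(new MasterNotDiscoveredException());
            }
        }, statePredicate);
    }
}
```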
Does that also include 7.0, 7.1, ...?
I would like to understand why this is not an issue in earlier versions.
For snapshots this only started showing up because I inadvertently disabled the snapshot shards service's own retry in #50788.

I can also reproduce this failure back to 7.1, though interestingly enough it seems to be a lot less likely in 7.1 than in 7.6 (maybe that's due to some locking we removed from the internal test cluster operations, but I can't tell right now).

I pushed 646823d to slow things down a little more and make the test reproduce the issue more reliably. Without this change it takes a pretty large number of iterations for it to fail on 7.1.