
Fix TransportMasterNodeAction not Retrying NodeClosedException #51325

Merged
@@ -178,7 +178,7 @@ protected void doStart(ClusterState clusterState) {
     @Override
     public void handleException(final TransportException exp) {
         Throwable cause = exp.unwrapCause();
-        if (cause instanceof ConnectTransportException) {
+        if (cause instanceof ConnectTransportException || cause instanceof NodeClosedException) {
Contributor:
Can you write an integration test that shows that this solves an issue?

Perhaps a test that restarts the current master while concurrently sending some TransportMasterNodeReadAction (e.g. cluster health).

I'm surprised that we have not seen this issue in any of our integration tests, where we sometimes restart nodes (but perhaps we don't do this while concurrently issuing master-level requests).
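A rough sketch of the kind of test being suggested (the class name, test name, node counts and assertion below are illustrative assumptions, not the code that ended up in this PR):

    import org.elasticsearch.action.ActionFuture;
    import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
    import org.elasticsearch.test.ESIntegTestCase;
    import org.elasticsearch.test.InternalTestCluster;

    public class MasterFailoverRetryIT extends ESIntegTestCase {

        public void testClusterHealthRetriesWhenMasterRestarts() throws Exception {
            internalCluster().startMasterOnlyNodes(3);
            internalCluster().startDataOnlyNode();
            final String masterName = internalCluster().getMasterName();

            // send a master-level read action while the current master is being restarted
            final ActionFuture<ClusterHealthResponse> health = client().admin().cluster().prepareHealth().execute();
            internalCluster().restartNode(masterName, new InternalTestCluster.RestartCallback());

            // with the fix the request is retried against the newly elected master instead of
            // failing with the NodeClosedException of the node that went away
            assertFalse(health.actionGet().isTimedOut());
        }
    }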

Member Author:

There we go: 3378d3d

Bit of a dirty test, but it reliably reproduces the issue without any other hacks :)

I've researched this a little and the problem seems to be isolated to 7.x+. When shutting down a master there is a short window during which:

  1. We will get a cluster state without a master node from org.elasticsearch.cluster.coordination.Coordinator#clusterStateWithNoMasterBlock
  2. That then causes the master action on the current master node to fail with NotMasterException and to add the cluster state observer for the retry, waiting for the new master. That new master never comes; instead the wait is shut down with a NodeClosedException when the cluster service is shut down
  3. Client receives NodeClosedException back from the master

=> this fix still seems fine: if we retry on NodeClosedException because we can interpret it as the master node going away, we're good IMO.
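For context, the retry in TransportMasterNodeAction waits on a cluster state observer, roughly like the following (a simplified sketch; observer, listener, clusterService, statePredicate and doStart stand in for the surrounding action code):

    observer.waitForNextChange(new ClusterStateObserver.Listener() {
        @Override
        public void onNewClusterState(ClusterState state) {
            doStart(state);   // a new master showed up, re-run the master action
        }

        @Override
        public void onClusterServiceClose() {
            // this is step 2 above: the cluster service shut down while we were waiting
            // for a new master, so the waiting client gets a NodeClosedException back
            listener.onFailure(new NodeClosedException(clusterService.localNode()));
        }

        @Override
        public void onTimeout(TimeValue timeout) {
            listener.onFailure(new MasterNotDiscoveredException());
        }
    }, statePredicate);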

Contributor:

> the problem seems to be isolated to 7.x+

Does that also include 7.0, 7.1, ...?
I would like to understand why this is not an issue in earlier versions.

Member Author:

For snapshots this only started showing up because I inadvertently disabled the snapshot shards service's own retry in #50788.

I can also reproduce this failure back to 7.1, though interestingly enough it seems to be a lot less likely in 7.1 than in 7.6 (maybe that's due to some locking we removed from the internal test cluster operations, but I can't tell right now).

I pushed 646823d to slow things down a little more and make the test reproduce the issue more reliably. Without this change it takes a pretty large number of iterations for it to fail on 7.1.

Contributor:

I think we should separate out the cases where the NodeClosedException originates on the local node (for instance when the transport stopped, which is also relayed as a NodeClosedException). There should be no need to retry those.

Adding a check that the NodeClosedException is wrapped in a RemoteTransportException would ensure we only retry in true client cases.
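Something along these lines, presumably (a sketch of the suggested condition only; retry(...) and masterChangePredicate stand in for the action's existing retry hook, and this is not necessarily what the follow-up commit does):

    @Override
    public void handleException(final TransportException exp) {
        Throwable cause = exp.unwrapCause();
        // only retry on NodeClosedException when it was reported by the remote (master) node,
        // i.e. it arrived wrapped in a RemoteTransportException, rather than being raised locally
        if (cause instanceof ConnectTransportException
                || (exp instanceof RemoteTransportException && cause instanceof NodeClosedException)) {
            retry(cause, masterChangePredicate);
        } else {
            listener.onFailure(exp);
        }
    }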

Member Author:

Thanks Henning, I pushed 1d07a26 :)

Contributor (@ywelsch, Jan 24, 2020):

AFAICS, TransportException is not an ElasticsearchWrapperException (in contrast to RemoteTransportException), and therefore unwrapCause will not yield the NodeClosedException in case of a local transport exception.

This means that the code as before would have been fine, I think.
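A quick illustration of that distinction (hand-written example; masterNode is just a placeholder DiscoveryNode):

    // a plain TransportException is not an ElasticsearchWrapperException, so unwrapCause
    // returns the exception itself and the wrapped NodeClosedException stays hidden
    TransportException local = new TransportException("send failed", new NodeClosedException(masterNode));
    assert false == (local.unwrapCause() instanceof NodeClosedException);

    // a RemoteTransportException is a wrapper, so unwrapCause digs down to the real cause
    TransportException remote = new RemoteTransportException("remote failure", new NodeClosedException(masterNode));
    assert remote.unwrapCause() instanceof NodeClosedException;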

Member Author:

I think we can run into a SendRequestTransportException wrapping a NodeClosedException, and SendRequestTransportException is an ElasticsearchWrapperException?

Member Author:

^^ seems to have happened here https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+pull-request-1/14196/consoleText

  1> [2020-01-21T11:51:16,833][WARN ][o.e.s.SnapshotShardsService] [node_td4] [test-repo:test-snap/SQinm62PQkmhonsriaGPUw] [ShardSnapshotStatus[state=FAILED, nodeId=ULGA-lGmRfiN3aM53N3qqg, reason=[test-idx/6AougmeaSJCvxG4mFvfRZA][[test-idx][11]] org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: Aborted
  1> 	at org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:1501)
  1> 	at org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:341)
  1> 	at org.elasticsearch.snapshots.SnapshotShardsService.lambda$startNewShards$1(SnapshotShardsService.java:287)
  1> 	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:629)
  1> 	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  1> 	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  1> 	at java.base/java.lang.Thread.run(Thread.java:834)
  1> , generation=null]] failed to update snapshot state
  1> org.elasticsearch.transport.SendRequestTransportException: [node_td4][127.0.0.1:32974][internal:cluster/snapshot/update_snapshot_status]
  1> 	at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:644) ~[main/:?]
  1> 	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:545) ~[main/:?]
  1> 	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:520) ~[main/:?]
  1> 	at org.elasticsearch.snapshots.SnapshotShardsService.lambda$sendSnapshotShardUpdate$2(SnapshotShardsService.java:492) ~[main/:?]
  1> 	at org.elasticsearch.transport.TransportRequestDeduplicator.executeOnce(TransportRequestDeduplicator.java:52) ~[main/:?]
  1> 	at org.elasticsearch.snapshots.SnapshotShardsService.sendSnapshotShardUpdate(SnapshotShardsService.java:478) ~[main/:?]
  1> 	at org.elasticsearch.snapshots.SnapshotShardsService.notifyFailedSnapshotShard(SnapshotShardsService.java:472) ~[main/:?]
  1> 	at org.elasticsearch.snapshots.SnapshotShardsService$1.onFailure(SnapshotShardsService.java:305) ~[main/:?]
  1> 	at org.elasticsearch.action.ActionListener$5.onFailure(ActionListener.java:256) ~[main/:?]
  1> 	at org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:1624) ~[main/:?]
  1> 	at org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:341) ~[main/:?]
  1> 	at org.elasticsearch.snapshots.SnapshotShardsService.lambda$startNewShards$1(SnapshotShardsService.java:287) ~[main/:?]
  1> 	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:629) ~[main/:?]
  1> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
  1> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
  1> 	at java.lang.Thread.run(Thread.java:834) [?:?]
  1> Caused by: org.elasticsearch.node.NodeClosedException: node closed {node_td4}{ULGA-lGmRfiN3aM53N3qqg}{MhSHeUC3SG2Qv9JTAFNk9Q}{127.0.0.1}{127.0.0.1:32974}{di}
  1> 	at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:626) ~[main/:?]
  1> 	... 15 more

Contributor:

alright, the tests and you win

             // we want to retry here a bit to see if a new master is elected
             logger.debug("connection exception while trying to forward request with action name [{}] to " +
                 "master node [{}], scheduling a retry. Error: [{}]",
@@ -45,6 +45,7 @@
 import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.discovery.MasterNotDiscoveredException;
+import org.elasticsearch.node.NodeClosedException;
 import org.elasticsearch.rest.RestStatus;
 import org.elasticsearch.tasks.Task;
 import org.elasticsearch.test.ESTestCase;
@@ -392,7 +393,8 @@ public void testDelegateToFailingMaster() throws ExecutionException, Interrupted
         assertThat(capturedRequest.action, equalTo("internal:testAction"));

         if (rejoinSameMaster) {
-            transport.handleRemoteError(capturedRequest.requestId, new ConnectTransportException(masterNode, "Fake error"));
+            transport.handleRemoteError(capturedRequest.requestId,
+                randomBoolean() ? new ConnectTransportException(masterNode, "Fake error") : new NodeClosedException(masterNode));
             assertFalse(listener.isDone());
             if (randomBoolean()) {
                 // simulate master node removal