
Improve the lifecycle management of the join control thread in zen discovery. #8327

Conversation

martijnvg
Member

This PR also includes:

  • Better exception handling in UnicastZenPing#ping
  • In the join thread that runs the innerJoinCluster loop, remember the last known exception and throw it when assertions are enabled. We loop until the inner join has successfully completed; if exceptions are thrown continuously, we should fail the test, because such exceptions shouldn't occur in production (at least not often).
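The assertion-gated rethrow described in the second bullet can be sketched roughly as follows. This is a hypothetical illustration, not the actual Elasticsearch code: the class name, the `Runnable` stand-in for `innerJoinCluster`, and the bounded retry count are all invented for the sketch (the real loop retries until the join succeeds).

```java
// Hypothetical sketch: loop until the join succeeds, remember the last
// failure, and surface it only when assertions are enabled (i.e. in tests).
public class JoinRetrySketch {

    // Wraps a checked throwable in a RuntimeException if needed and rethrows.
    // The boolean return type lets call sites write `assert rethrow(e);`,
    // so the rethrow is active only when assertions are enabled.
    public static boolean rethrow(Throwable t) {
        if (t instanceof RuntimeException) {
            throw (RuntimeException) t;
        }
        throw new RuntimeException(t);
    }

    // Stand-in for the innerJoinCluster loop: retry until success, keeping
    // the last known exception around for the assertion below.
    public static void joinUntilSuccess(Runnable innerJoinCluster, int maxAttempts) {
        Throwable lastException = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                innerJoinCluster.run();
                return; // joined successfully
            } catch (RuntimeException e) {
                lastException = e; // remember the failure and retry
            }
        }
        // In tests (assertions enabled) this fails loudly; in production
        // the assert compiles to a no-op and the node would keep retrying.
        assert lastException == null || rethrow(lastException);
    }
}
```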

}
});
} catch (Exception e) {
sendPingsHandler.close();
Contributor

we want to rethrow here, right?

@bleskes
Contributor

bleskes commented Nov 3, 2014

Looking good. Left two comments.

@martijnvg
Member Author

@bleskes I updated the PR.

/**
* Wraps the specified exception in a runtime exception if required and then rethrows it.
*
* Usable for assertions too because of the boolean return value.
Contributor

I think it's only usable in assertions :) Can you give a usage example? Maybe call it reThrowIfNotNull and allow passing null values into it? That might make it more useful.

@martijnvg
Member Author

@bleskes I applied the feedback and also added better error handling for the multicast ping.

*
* Also usable for assertions, because of the boolean return value.
*/
public static boolean reThrowIfNotNull(Throwable e) {
Contributor

Since this is renamed, we need to allow null as a value of e?
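A null-tolerant version of the helper, as the reviewer suggests, might look like this. This is a sketch of the suggestion, not the code that was actually committed; the class name is invented for the example.

```java
// Sketch of the suggested null-tolerant helper: rethrows a non-null
// throwable (wrapping checked ones in a RuntimeException) and is a no-op
// for null, so the "last known exception" can be passed unconditionally.
public final class Rethrow {
    private Rethrow() {}

    public static boolean reThrowIfNotNull(Throwable e) {
        if (e == null) {
            return true; // nothing to rethrow; keeps `assert` call sites happy
        }
        if (e instanceof RuntimeException) {
            throw (RuntimeException) e;
        }
        throw new RuntimeException(e);
    }
}
```

The boolean return value exists only so the helper can be used as `assert reThrowIfNotNull(lastException);` — active when assertions are enabled in tests, free in production.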

@bleskes
Contributor

bleskes commented Nov 3, 2014

@martijnvg Looking good. Left two last little comments.

@martijnvg
Member Author

@bleskes Thanks, I updated the PR to address your comments.

@@ -1281,7 +1281,7 @@ public ClusterState stopRunningThreadAndRejoin(ClusterState clusterState, String
return rejoin(clusterState, reason);
}

/** starts a new joining thread if there is no currently active one */
/** starts a new joining thread if there is no currently active one and join thread controlling is started */
Contributor

join thread controlling is started <-- love it :)

@bleskes
Contributor

bleskes commented Nov 4, 2014

Left one minor logging comment. LGTM!

…d in zen discovery.

Also added:
* Better exception handling in UnicastZenPing#ping and MulticastZenPing#ping
* In the join thread that runs the innerJoinCluster loop, remember the last known exception and throw it when assertions are enabled. We loop until the inner join has completed; if exceptions are thrown continuously, we should fail the test, because such exceptions shouldn't occur in production (at least not often).
Applied feedback 3

Closes elastic#8327
@martijnvg martijnvg force-pushed the improvements/join_thread_control_life_cycle branch from 8a27d38 to 4ddb057 Compare November 4, 2014 08:45
martijnvg added a commit that referenced this pull request Nov 4, 2014
…d in zen discovery.
@martijnvg martijnvg merged commit 4ddb057 into elastic:master Nov 4, 2014
@martijnvg
Member Author

Pushed, thanks @bleskes!

martijnvg added a commit that referenced this pull request Nov 4, 2014
…d in zen discovery.
@s1monw s1monw removed the review label Nov 4, 2014
bleskes added a commit to bleskes/elasticsearch that referenced this pull request Nov 6, 2014
When a node stops, we cancel any ongoing join process. With elastic#8327, we improved this logic and wait for it to complete before shutting down the node. In our tests we typically shut down an entire cluster at once, which makes it very likely for nodes to be joining while shutting down. This introduces a race condition where the joinThread.interrupt can happen before the thread starts waiting on pings, which makes the shutdown logic slow. This commit improves on that by repeatedly trying to stop the thread in smaller waits.

Another side effect of the change is that we are now more likely to ping ourselves while shutting down, which results in an ugly warn-level log. We now log all remote exceptions during pings at debug level.
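The "smaller waits" approach can be sketched as follows. This is an illustrative example, not the actual fix: the class name, the 100 ms slice, and the timeout parameter are made up for the sketch.

```java
// Illustrative sketch: instead of a single long join(), repeatedly interrupt
// the join thread and wait in short slices. If an interrupt lands before the
// thread starts blocking on pings, the next iteration delivers another one.
public final class JoinThreadStopper {
    private JoinThreadStopper() {}

    public static void stop(Thread joinThread, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (joinThread.isAlive() && System.currentTimeMillis() < deadline) {
            joinThread.interrupt(); // may arrive before the thread blocks
            joinThread.join(100);   // wait a small slice, then try again
        }
    }
}
```

Because `Thread.interrupt` sets the interrupt status even when the target is not yet blocked, a later `sleep`/`wait` in the join thread still throws immediately, so retrying in slices closes the race window.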
bleskes added a commit that referenced this pull request Nov 6, 2014
bleskes added a commit that referenced this pull request Nov 7, 2014
When a node stops, we cancel any ongoing join process. With #8327, we improved this logic and wait for it to complete before shutting down the node. However, the joining thread is part of a thread pool and will not stop until the thread pool is shut down.

Another issue raised by the unneeded wait is that when we shut down, we may ping ourselves, which results in an ugly warn-level log. We now log all remote exceptions during pings at debug level.

Closes #8359
bleskes added a commit that referenced this pull request Nov 7, 2014
bleskes added a commit that referenced this pull request Dec 11, 2014
@clintongormley clintongormley added the :Distributed/Discovery-Plugins Anything related to our integration plugins with EC2, GCP and Azure label Mar 19, 2015
@martijnvg martijnvg deleted the improvements/join_thread_control_life_cycle branch May 18, 2015 23:29
@clintongormley clintongormley changed the title Discovery: Improve the lifecycle management of the join control thread in zen discovery. Improve the lifecycle management of the join control thread in zen discovery. Jun 7, 2015
mute pushed a commit to mute/elasticsearch that referenced this pull request Jul 29, 2015
…d in zen discovery.
mute pushed a commit to mute/elasticsearch that referenced this pull request Jul 29, 2015
Labels
>bug :Distributed/Discovery-Plugins Anything related to our integration plugins with EC2, GCP and Azure v1.4.0 v1.5.0 v2.0.0-beta1