ShardActiveResponseHandler shouldn't hold to an entire cluster state #21470

Conversation

@imotov (Member) commented Nov 10, 2016

ShardActiveResponseHandler doesn't need to hold on to an entire cluster state, since it only needs to know the cluster state version. It seems that on overloaded systems, where nodes are unresponsive, holding on to many different cluster states can make the situation worse.

Closes #21394

Not sure how far back we should go in propagating this fix.
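
The idea behind the fix can be sketched in a few lines of Java: a long-lived response handler that only ever compares versions should capture the version number up front rather than holding a reference to the whole cluster state. The classes below are hypothetical, simplified stand-ins, not the actual IndicesStore/ShardActiveResponseHandler implementation.

```java
// Illustrative sketch only: class names and fields here are hypothetical and
// simplified, not the actual Elasticsearch code. It shows the pattern behind
// the fix: the long-lived response handler keeps just the cluster state
// version (a long) instead of a reference to the whole cluster state object,
// so the large state can be garbage-collected while responses from slow or
// unresponsive nodes are still outstanding.

final class ClusterStateSnapshot {              // hypothetical stand-in for ClusterState
    private final long version;
    // ...plus routing tables, metadata, etc., which can be many megabytes

    ClusterStateSnapshot(long version) {
        this.version = version;
    }

    long version() {
        return version;
    }
}

final class ShardActiveHandler {                // hypothetical stand-in for ShardActiveResponseHandler
    // Before the fix: the handler held the full snapshot.
    // private final ClusterStateSnapshot state;

    // After the fix: only the version number survives in the handler.
    private final long clusterStateVersion;

    ShardActiveHandler(ClusterStateSnapshot state) {
        // Copy the single long we need and drop the reference to the big object.
        this.clusterStateVersion = state.version();
    }

    boolean observedStateIsStale(long currentVersion) {
        // The handler only ever compares versions, so the full state was never needed.
        return currentVersion > clusterStateVersion;
    }
}
```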

@jasontedor (Member) left a comment

Nice, LGTM.

@bleskes (Member) commented Nov 10, 2016

LGTM2. Good catch.

@clintongormley (Member) commented Nov 10, 2016

@imotov Back to 2.4.2, I'd say.

@imotov imotov force-pushed the imotov:issue-21394-large-footprint-of-shard-active-response-handlers branch to 06a50fa Nov 11, 2016

@imotov imotov merged commit 06a50fa into elastic:master Nov 11, 2016

1 of 2 checks passed

elasticsearch-ci: Build started, sha1 is merged.
CLA: Commit author is a member of Elasticsearch

@imotov imotov added v5.0.1 and removed v5.0.2 labels Nov 11, 2016
