Adds resiliency to read-only filesystems #45286 (PR #52680)

Merged
50 commits merged on Jul 7, 2020
The changes shown below are from 3 of the 50 commits.
Commits
569e8cc
Merge pull request #2 from elastic/master
Bukhtawar Jul 4, 2019
64815f1
Merge remote-tracking branch 'upstream/master'
Bukhtawar Feb 22, 2020
b598944
[Initial DRAFT] Adds a FsHealthService that periodically tries to wr…
Bukhtawar Feb 23, 2020
d4fb892
Test case addition and PR comments
Bukhtawar Mar 25, 2020
38f1a4e
Merge remote-tracking branch 'upstream/master'
Bukhtawar Mar 25, 2020
f3ac906
Merge branch 'master' into ro-fs-handling
Bukhtawar Mar 25, 2020
79948f3
Changes for FsHealthService and tests
Bukhtawar Mar 25, 2020
20d9ba2
Review comments for simplication and better tests
Bukhtawar May 3, 2020
fa3ed38
Merge remote-tracking branch 'upstream/master'
Bukhtawar May 3, 2020
1646319
Merge branch 'master' into ro-fs-handling
Bukhtawar May 3, 2020
5305ebb
Fixing tests and check styles
Bukhtawar May 3, 2020
26fbce7
FsHealthService comments on slow IO
Bukhtawar May 5, 2020
8a86051
Restricting FS health checks to IOExceptions
Bukhtawar May 11, 2020
c9dd1a7
Addressing comments on logging and tests
Bukhtawar May 20, 2020
c99a68e
Minor edits
Bukhtawar May 20, 2020
545eaf5
Merge branch 'master' into ro-fs-handling
Bukhtawar May 27, 2020
86fa7c9
Updated the exception id
Bukhtawar May 27, 2020
8102c81
Merge branch 'master' into ro-fs-handling
Bukhtawar Jun 4, 2020
043db93
Fix merge conflict
DaveCTurner Jun 16, 2020
bbf5517
Fix spacing in StatusInfo#toString
DaveCTurner Jun 18, 2020
1459937
Tidy 'skip prevoting' log message
DaveCTurner Jun 18, 2020
8eb5e20
Tidy response messages in FollowersChecker
DaveCTurner Jun 18, 2020
2095d82
Tidy log message in JoinHelper
DaveCTurner Jun 18, 2020
39a0565
Tidy message in PreVoteCollector
DaveCTurner Jun 18, 2020
136bc44
Tidy info messages
DaveCTurner Jun 18, 2020
1ab13b2
Tidy tracing messages
DaveCTurner Jun 18, 2020
4143f8f
Tidy warn/error messages
DaveCTurner Jun 18, 2020
1d9a7ab
Fix up tests
DaveCTurner Jun 18, 2020
f222529
Fix too-short delay
DaveCTurner Jun 18, 2020
befd822
Minor fixes to Follower and FsHealthService
Bukhtawar Jun 18, 2020
061dd33
Fix assertions
Bukhtawar Jun 18, 2020
cda2179
Leader checks
Bukhtawar Jun 18, 2020
4d83de0
Leader check tests
Bukhtawar Jun 19, 2020
e41392f
cluster reduce stabilization time after unhealthy node
Bukhtawar Jun 19, 2020
67d49bb
Minor fix up
Bukhtawar Jun 19, 2020
fa3cc69
ClusterFormationFailureHelper changes and more tests
Bukhtawar Jun 19, 2020
89035fb
Minor changes to LeaderChecker
Bukhtawar Jun 21, 2020
adbe670
Pass StatusInfo to ClusterFormationState and simplify message
DaveCTurner Jun 24, 2020
fdcdf45
Whitespace
DaveCTurner Jun 24, 2020
deafeca
Imports
DaveCTurner Jun 24, 2020
1120428
Fixing Random
Bukhtawar Jun 24, 2020
23bc4e5
Merge remote-tracking branch 'upstream/master'
Bukhtawar Jun 24, 2020
06b14b8
Merge branch 'master' into ro-fs-handling
Bukhtawar Jun 24, 2020
56fb9b3
ForbiddenApis for charset
Bukhtawar Jun 24, 2020
0d7b72f
Fix logger
Bukhtawar Jun 24, 2020
f390ed8
Merge remote-tracking branch 'upstream/master' into ro-fs-handling
Bukhtawar Jun 24, 2020
f44cf0d
NPE handling
Bukhtawar Jun 29, 2020
97a4c02
Merge remote-tracking branch 'upstream/master' into ro-fs-handling
Bukhtawar Jun 29, 2020
54d7c98
Merge remote-tracking branch 'upstream/master' into ro-fs-handling
Bukhtawar Jul 2, 2020
aae5142
Merge remote-tracking branch 'upstream/master' into ro-fs-handling
Bukhtawar Jul 3, 2020
21 changes: 18 additions & 3 deletions server/src/main/java/org/elasticsearch/cluster/ClusterInfo.java
@@ -44,9 +44,10 @@ public class ClusterInfo implements ToXContentFragment, Writeable {
final ImmutableOpenMap<String, Long> shardSizes;
public static final ClusterInfo EMPTY = new ClusterInfo();
final ImmutableOpenMap<ShardRouting, String> routingToDataPath;
final ImmutableOpenMap<String, Boolean> nodeAllPathsWritable;

protected ClusterInfo() {
this(ImmutableOpenMap.of(), ImmutableOpenMap.of(), ImmutableOpenMap.of(), ImmutableOpenMap.of());
this(ImmutableOpenMap.of(), ImmutableOpenMap.of(), ImmutableOpenMap.of(), ImmutableOpenMap.of(), ImmutableOpenMap.of());
}

/**
@@ -60,24 +61,27 @@ protected ClusterInfo() {
*/
public ClusterInfo(ImmutableOpenMap<String, DiskUsage> leastAvailableSpaceUsage,
ImmutableOpenMap<String, DiskUsage> mostAvailableSpaceUsage, ImmutableOpenMap<String, Long> shardSizes,
ImmutableOpenMap<ShardRouting, String> routingToDataPath) {
ImmutableOpenMap<ShardRouting, String> routingToDataPath, ImmutableOpenMap<String, Boolean> nodeAllPathsWritable) {
this.leastAvailableSpaceUsage = leastAvailableSpaceUsage;
this.shardSizes = shardSizes;
this.mostAvailableSpaceUsage = mostAvailableSpaceUsage;
this.routingToDataPath = routingToDataPath;
this.nodeAllPathsWritable = nodeAllPathsWritable;
}

public ClusterInfo(StreamInput in) throws IOException {
Map<String, DiskUsage> leastMap = in.readMap(StreamInput::readString, DiskUsage::new);
Map<String, DiskUsage> mostMap = in.readMap(StreamInput::readString, DiskUsage::new);
Map<String, Boolean> allPathsWritable = in.readMap(StreamInput::readString, StreamInput::readBoolean);
Map<String, Long> sizeMap = in.readMap(StreamInput::readString, StreamInput::readLong);
Map<ShardRouting, String> routingMap = in.readMap(ShardRouting::new, StreamInput::readString);

ImmutableOpenMap.Builder<String, DiskUsage> leastBuilder = ImmutableOpenMap.builder();
this.leastAvailableSpaceUsage = leastBuilder.putAll(leastMap).build();
ImmutableOpenMap.Builder<String, DiskUsage> mostBuilder = ImmutableOpenMap.builder();
this.mostAvailableSpaceUsage = mostBuilder.putAll(mostMap).build();
ImmutableOpenMap.Builder<String, Long> sizeBuilder = ImmutableOpenMap.builder();
ImmutableOpenMap.Builder<String, Boolean> allPathsWritableBuilder = ImmutableOpenMap.builder();
this.nodeAllPathsWritable = allPathsWritableBuilder.putAll(allPathsWritable).build();
this.shardSizes = sizeBuilder.putAll(sizeMap).build();
ImmutableOpenMap.Builder<ShardRouting, String> routingBuilder = ImmutableOpenMap.builder();
this.routingToDataPath = routingBuilder.putAll(routingMap).build();
@@ -95,6 +99,11 @@ public void writeTo(StreamOutput out) throws IOException {
out.writeString(c.key);
c.value.writeTo(out);
}
out.writeVInt(this.nodeAllPathsWritable.size());
for (ObjectObjectCursor<String, Boolean> c : this.nodeAllPathsWritable) {
out.writeString(c.key);
out.writeBoolean(c.value);
}
out.writeVInt(this.shardSizes.size());
for (ObjectObjectCursor<String, Long> c : this.shardSizes) {
out.writeString(c.key);
@@ -127,6 +136,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws
}
}
builder.endObject(); // end "most_available"
builder.field("all_path_writable", this.nodeAllPathsWritable.get(c.key));
}
builder.endObject(); // end $nodename
}
@@ -161,6 +171,11 @@ public ImmutableOpenMap<String, DiskUsage> getNodeMostAvailableDiskUsages() {
return this.mostAvailableSpaceUsage;
}

/**
* Returns a map from node id to whether all of that node's data paths are writable.
*/
public ImmutableOpenMap<String, Boolean> getNodeAllPathsWritable() { return this.nodeAllPathsWritable; }

/**
* Returns the shard size for the given shard routing or <code>null</code> if that metric is not available.
*/
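For orientation, the new getNodeAllPathsWritable() map gives consumers of ClusterInfo a per-node view of path writability. A hypothetical consumer (not part of this diff; the surrounding clusterInfo and logger variables are assumed) might scan it for nodes reporting an unwritable path:

// Hypothetical fragment: iterate the new map and flag nodes whose data paths are
// not all writable. ObjectObjectCursor is the same cursor type used in the
// serialization code above.
for (ObjectObjectCursor<String, Boolean> cursor : clusterInfo.getNodeAllPathsWritable()) {
    if (Boolean.FALSE.equals(cursor.value)) {
        logger.warn("node [{}] reports an unwritable data path", cursor.key);
    }
}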
server/src/main/java/org/elasticsearch/cluster/InternalClusterInfoService.java
@@ -44,6 +44,7 @@
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;
import org.elasticsearch.monitor.fs.FsHealthService;
import org.elasticsearch.monitor.fs.FsInfo;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.ReceiveTimeoutTransportException;
@@ -80,6 +81,7 @@ public class InternalClusterInfoService implements ClusterInfoService, LocalNode

private volatile ImmutableOpenMap<String, DiskUsage> leastAvailableSpaceUsages;
private volatile ImmutableOpenMap<String, DiskUsage> mostAvailableSpaceUsages;
private volatile ImmutableOpenMap<String, Boolean> allPathsWritable;
private volatile ImmutableOpenMap<ShardRouting, String> shardRoutingToDataPath;
private volatile ImmutableOpenMap<String, Long> shardSizes;
private volatile boolean isMaster = false;
@@ -94,6 +96,7 @@ public InternalClusterInfoService(Settings settings, ClusterService clusterServi
this.leastAvailableSpaceUsages = ImmutableOpenMap.of();
this.mostAvailableSpaceUsages = ImmutableOpenMap.of();
this.shardRoutingToDataPath = ImmutableOpenMap.of();
this.allPathsWritable = ImmutableOpenMap.of();
this.shardSizes = ImmutableOpenMap.of();
this.clusterService = clusterService;
this.threadPool = threadPool;
@@ -105,16 +108,16 @@ public InternalClusterInfoService(Settings settings, ClusterService clusterServi
clusterSettings.addSettingsUpdateConsumer(INTERNAL_CLUSTER_INFO_TIMEOUT_SETTING, this::setFetchTimeout);
clusterSettings.addSettingsUpdateConsumer(INTERNAL_CLUSTER_INFO_UPDATE_INTERVAL_SETTING, this::setUpdateFrequency);
clusterSettings.addSettingsUpdateConsumer(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED_SETTING,
this::setEnabled);
FsHealthService.ENABLED_SETTING, this::setEnabled);

// Add InternalClusterInfoService to listen for Master changes
this.clusterService.addLocalNodeMasterListener(this);
// Add to listen for state changes (when nodes are added)
this.clusterService.addListener(this);
}

private void setEnabled(boolean enabled) {
this.enabled = enabled;
private void setEnabled(boolean diskThresholdEnabled, boolean fsHealthEnabled) {
this.enabled = diskThresholdEnabled || fsHealthEnabled;
}

private void setFetchTimeout(TimeValue fetchTimeout) {
@@ -200,7 +203,7 @@ public void clusterChanged(ClusterChangedEvent event) {

@Override
public ClusterInfo getClusterInfo() {
return new ClusterInfo(leastAvailableSpaceUsages, mostAvailableSpaceUsages, shardSizes, shardRoutingToDataPath);
return new ClusterInfo(leastAvailableSpaceUsages, mostAvailableSpaceUsages, shardSizes, shardRoutingToDataPath, allPathsWritable);
}

/**
@@ -242,7 +245,7 @@ public void run() {
*/
protected CountDownLatch updateNodeStats(final ActionListener<NodesStatsResponse> listener) {
final CountDownLatch latch = new CountDownLatch(1);
final NodesStatsRequest nodesStatsRequest = new NodesStatsRequest("data:true");
final NodesStatsRequest nodesStatsRequest = new NodesStatsRequest();
nodesStatsRequest.clear();
nodesStatsRequest.fs(true);
nodesStatsRequest.timeout(fetchTimeout);
@@ -293,10 +296,12 @@ public final ClusterInfo refresh() {
public void onResponse(NodesStatsResponse nodesStatsResponse) {
ImmutableOpenMap.Builder<String, DiskUsage> leastAvailableUsagesBuilder = ImmutableOpenMap.builder();
ImmutableOpenMap.Builder<String, DiskUsage> mostAvailableUsagesBuilder = ImmutableOpenMap.builder();
fillDiskUsagePerNode(logger, adjustNodesStats(nodesStatsResponse.getNodes()),
leastAvailableUsagesBuilder, mostAvailableUsagesBuilder);
ImmutableOpenMap.Builder<String, Boolean> allPathsWritableBuilder = ImmutableOpenMap.builder();
fillDiskStatsPerNode(logger, adjustNodesStats(nodesStatsResponse.getNodes()), clusterService,
leastAvailableUsagesBuilder, mostAvailableUsagesBuilder, allPathsWritableBuilder);
leastAvailableSpaceUsages = leastAvailableUsagesBuilder.build();
mostAvailableSpaceUsages = mostAvailableUsagesBuilder.build();
allPathsWritable = allPathsWritableBuilder.build();
}

@Override
@@ -396,51 +401,57 @@ static void buildShardLevelInfo(Logger logger, ShardStats[] stats, ImmutableOpen
}
}

static void fillDiskUsagePerNode(Logger logger, List<NodeStats> nodeStatsArray,
static void fillDiskStatsPerNode(Logger logger, List<NodeStats> nodeStatsArray, ClusterService clusterService,
ImmutableOpenMap.Builder<String, DiskUsage> newLeastAvaiableUsages,
ImmutableOpenMap.Builder<String, DiskUsage> newMostAvaiableUsages) {
ImmutableOpenMap.Builder<String, DiskUsage> newMostAvaiableUsages,
ImmutableOpenMap.Builder<String, Boolean> allPathsWritableBuilder) {
for (NodeStats nodeStats : nodeStatsArray) {
if (nodeStats.getFs() == null) {
logger.warn("Unable to retrieve node FS stats for {}", nodeStats.getNode().getName());
} else {
FsInfo.Path leastAvailablePath = null;
FsInfo.Path mostAvailablePath = null;
for (FsInfo.Path info : nodeStats.getFs()) {
if (leastAvailablePath == null) {
assert mostAvailablePath == null;
mostAvailablePath = leastAvailablePath = info;
} else if (leastAvailablePath.getAvailable().getBytes() > info.getAvailable().getBytes()) {
leastAvailablePath = info;
} else if (mostAvailablePath.getAvailable().getBytes() < info.getAvailable().getBytes()) {
mostAvailablePath = info;
}
}
String nodeId = nodeStats.getNode().getId();
String nodeName = nodeStats.getNode().getName();
if (logger.isTraceEnabled()) {
logger.trace("node: [{}], most available: total disk: {}," +
" available disk: {} / least available: total disk: {}, available disk: {}",
Boolean allPathsWritable = nodeStats.getFs().getTotal().isWritable();
if (clusterService.state().getNodes().getMasterNodes().containsKey(nodeStats.getNode().getId()) == false) {
for (FsInfo.Path info : nodeStats.getFs()) {
if (leastAvailablePath == null) {
assert mostAvailablePath == null;
mostAvailablePath = leastAvailablePath = info;
} else if (leastAvailablePath.getAvailable().getBytes() > info.getAvailable().getBytes()) {
leastAvailablePath = info;
} else if (mostAvailablePath.getAvailable().getBytes() < info.getAvailable().getBytes()) {
mostAvailablePath = info;
}
}
if (logger.isTraceEnabled()) {
logger.trace("node: [{}], most available: total disk: {}," +
" available disk: {} / least available: total disk: {}, available disk: {}",
nodeId, mostAvailablePath.getTotal(), leastAvailablePath.getAvailable(),
leastAvailablePath.getTotal(), leastAvailablePath.getAvailable());
}
if (leastAvailablePath.getTotal().getBytes() < 0) {
if (logger.isTraceEnabled()) {
logger.trace("node: [{}] least available path has less than 0 total bytes of disk [{}], skipping",
}
if (leastAvailablePath.getTotal().getBytes() < 0) {
if (logger.isTraceEnabled()) {
logger.trace("node: [{}] least available path has less than 0 total bytes of disk [{}], skipping",
nodeId, leastAvailablePath.getTotal().getBytes());
}
} else {
newLeastAvaiableUsages.put(nodeId, new DiskUsage(nodeId, nodeName, leastAvailablePath.getPath(),
leastAvailablePath.getTotal().getBytes(), leastAvailablePath.getAvailable().getBytes()));
}
} else {
newLeastAvaiableUsages.put(nodeId, new DiskUsage(nodeId, nodeName, leastAvailablePath.getPath(),
leastAvailablePath.getTotal().getBytes(), leastAvailablePath.getAvailable().getBytes()));
}
if (mostAvailablePath.getTotal().getBytes() < 0) {
if (logger.isTraceEnabled()) {
logger.trace("node: [{}] most available path has less than 0 total bytes of disk [{}], skipping",
if (mostAvailablePath.getTotal().getBytes() < 0) {
if (logger.isTraceEnabled()) {
logger.trace("node: [{}] most available path has less than 0 total bytes of disk [{}], skipping",
nodeId, mostAvailablePath.getTotal().getBytes());
}
} else {
newMostAvaiableUsages.put(nodeId, new DiskUsage(nodeId, nodeName, mostAvailablePath.getPath(),
mostAvailablePath.getTotal().getBytes(), mostAvailablePath.getAvailable().getBytes()));
}
} else {
newMostAvaiableUsages.put(nodeId, new DiskUsage(nodeId, nodeName, mostAvailablePath.getPath(),
mostAvailablePath.getTotal().getBytes(), mostAvailablePath.getAvailable().getBytes()));

}
allPathsWritableBuilder.put(nodeId, allPathsWritable);

}
}
server/src/main/java/org/elasticsearch/cluster/coordination/Coordinator.java
@@ -30,6 +30,7 @@
import org.elasticsearch.cluster.ClusterStateTaskConfig;
import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.cluster.LocalClusterUpdateTask;
import org.elasticsearch.cluster.ClusterInfoService;
import org.elasticsearch.cluster.block.ClusterBlocks;
import org.elasticsearch.cluster.coordination.ClusterFormationFailureHelper.ClusterFormationState;
import org.elasticsearch.cluster.coordination.CoordinationMetaData.VotingConfigExclusion;
@@ -67,6 +68,8 @@
import org.elasticsearch.discovery.PeerFinder;
import org.elasticsearch.discovery.SeedHostsProvider;
import org.elasticsearch.discovery.SeedHostsResolver;
import org.elasticsearch.monitor.fs.FsReadOnlyMonitor;
import org.elasticsearch.monitor.fs.FsService;
import org.elasticsearch.threadpool.Scheduler;
import org.elasticsearch.threadpool.ThreadPool.Names;
import org.elasticsearch.transport.TransportResponse.Empty;
@@ -149,6 +152,8 @@ public class Coordinator extends AbstractLifecycleComponent implements Discovery
private Optional<Join> lastJoin;
private JoinHelper.JoinAccumulator joinAccumulator;
private Optional<CoordinatorPublication> currentPublication = Optional.empty();
private final FsService fsService;
private final FsReadOnlyMonitor fsReadOnlyMonitor;

/**
* @param nodeName The name of the node, used to name the {@link java.util.concurrent.ExecutorService} of the {@link SeedHostsResolver}.
@@ -158,7 +163,8 @@ public Coordinator(String nodeName, Settings settings, ClusterSettings clusterSe
NamedWriteableRegistry namedWriteableRegistry, AllocationService allocationService, MasterService masterService,
Supplier<CoordinationState.PersistedState> persistedStateSupplier, SeedHostsProvider seedHostsProvider,
ClusterApplier clusterApplier, Collection<BiConsumer<DiscoveryNode, ClusterState>> onJoinValidators, Random random,
RerouteService rerouteService, ElectionStrategy electionStrategy) {
RerouteService rerouteService, ElectionStrategy electionStrategy, FsService fsService,
ClusterInfoService clusterInfoService) {
this.settings = settings;
this.transportService = transportService;
this.masterService = masterService;
@@ -168,7 +174,7 @@ public Coordinator(String nodeName, Settings settings, ClusterSettings clusterSe
this.electionStrategy = electionStrategy;
this.joinHelper = new JoinHelper(settings, allocationService, masterService, transportService,
this::getCurrentTerm, this::getStateForMasterService, this::handleJoinRequest, this::joinLeaderInTerm, this.onJoinValidators,
rerouteService);
rerouteService, fsService);
this.persistedStateSupplier = persistedStateSupplier;
this.noMasterBlockService = new NoMasterBlockService(settings, clusterSettings);
this.lastKnownLeader = Optional.empty();
@@ -178,7 +184,7 @@ public Coordinator(String nodeName, Settings settings, ClusterSettings clusterSe
this.publishInfoTimeout = PUBLISH_INFO_TIMEOUT_SETTING.get(settings);
this.random = random;
this.electionSchedulerFactory = new ElectionSchedulerFactory(settings, random, transportService.getThreadPool());
this.preVoteCollector = new PreVoteCollector(transportService, this::startElection, this::updateMaxTermSeen, electionStrategy);
this.preVoteCollector = new PreVoteCollector(transportService, this::startElection, this::updateMaxTermSeen, electionStrategy, fsService);
configuredHostsResolver = new SeedHostsResolver(nodeName, settings, transportService, seedHostsProvider);
this.peerFinder = new CoordinatorPeerFinder(settings, transportService,
new HandshakingTransportAddressConnector(settings, transportService), configuredHostsResolver);
@@ -196,6 +202,10 @@ public Coordinator(String nodeName, Settings settings, ClusterSettings clusterSe
transportService::getLocalNode);
this.clusterFormationFailureHelper = new ClusterFormationFailureHelper(settings, this::getClusterFormationState,
transportService.getThreadPool(), joinHelper::logLastFailedJoinAttempt);
//TODO check if FsReadOnlyMonitor and LagDetector can be implemented as a part of a common interface
this.fsReadOnlyMonitor = new FsReadOnlyMonitor(settings, clusterSettings, this::getStateForMasterService, transportService::getLocalNode,
this::removeNode, clusterInfoService);
this.fsService = fsService;
}

private ClusterFormationState getClusterFormationState() {
@@ -1171,6 +1181,12 @@ public void run() {
return;
}

if(fsService.stats().getTotal().isWritable() == Boolean.FALSE){
Comment from Bukhtawar (PR author): I have left out spaces assuming checkStyles would catch. But unfortunate. I'll fix white spacing

logger.warn("skip prevoting as local node is not writable: {}",
Comment from a reviewer: A warning here isn't helpful, we should be logging the failure elsewhere so this will simply result in confusion.

Suggested change: replace
    logger.warn("skip prevoting as local node is not writable: {}",
with
    logger.debug("skip prevoting as local node is not writable: {}",

Also, we have this generic NodeHealthService but the log message is very specific: local node is not writeable. Maybe the NodeHealthService should describe the problem rather than returning a simple boolean.

lastAcceptedState.coordinationMetaData());
return;
}

if (prevotingRound != null) {
prevotingRound.close();
}
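
Picking up the reviewer's last point, a NodeHealthService that reports why a node is unhealthy, rather than a bare boolean, could be shaped roughly as below. The commit list does include a StatusInfo class (see the "Fix spacing in StatusInfo#toString" commit above), but the fields and methods in this sketch are assumptions for illustration, not the merged API.

/**
 * Sketch of a health status carrying both a healthy/unhealthy flag and a
 * human-readable reason, along the lines the reviewer suggests. The exact
 * shape of the class merged in this PR may differ.
 */
public final class StatusInfo {

    public enum Status { HEALTHY, UNHEALTHY }

    private final Status status;
    private final String info;

    public StatusInfo(Status status, String info) {
        this.status = status;
        this.info = info;
    }

    public Status getStatus() {
        return status;
    }

    /** Describes why the node is unhealthy, e.g. "health check failed on [/data/nodes/0]". */
    public String getInfo() {
        return info;
    }

    @Override
    public String toString() {
        return "status[" + status + "], info[" + info + "]";
    }
}

With something like this, the pre-vote path could log the StatusInfo at debug level and let the health service own the wording of the failure, addressing both halves of the review comment.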