
com.hazelcast.client.splitbrainprotection.set.ClientTransactionalSetSplitBrainProtectionWriteTest #18765

Closed
olukas opened this issue May 24, 2021 · 2 comments · Fixed by #19180

olukas (Contributor) commented May 24, 2021

master (commit c3c641f)

Failed on Zing JDK 8: http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-ZingJDK8/300/testReport/com.hazelcast.client.splitbrainprotection.set/ClientTransactionalSetSplitBrainProtectionWriteTest/com_hazelcast_client_splitbrainprotection_set_ClientTransactionalSetSplitBrainProtectionWriteTest/
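
For context, the rule named in the exception below, threeNodeSplitBrainProtectionRuleREAD_WRITE, is a minimum-cluster-size split-brain protection rule that guards both reads and writes. A minimal sketch of how such a rule is typically wired up in Hazelcast 4.x/5.x is shown here; the rule and set names are illustrative assumptions, not the test's actual configuration:

import com.hazelcast.config.Config;
import com.hazelcast.config.SetConfig;
import com.hazelcast.config.SplitBrainProtectionConfig;
import com.hazelcast.splitbrainprotection.SplitBrainProtectionOn;

public class SplitBrainProtectionSketch {
    public static Config configure() {
        Config config = new Config();

        // Protected operations fail with SplitBrainProtectionException
        // unless at least 3 members are observed in the cluster.
        SplitBrainProtectionConfig protection = new SplitBrainProtectionConfig(
                "threeNodeSplitBrainProtectionRule", true, 3);
        protection.setProtectOn(SplitBrainProtectionOn.READ_WRITE);
        config.addSplitBrainProtectionConfig(protection);

        // Attach the rule to a set; "testSet" is a hypothetical name.
        SetConfig setConfig = new SetConfig("testSet");
        setConfig.setSplitBrainProtectionName("threeNodeSplitBrainProtectionRule");
        config.addSetConfig(setConfig);

        return config;
    }
}

With such a rule in place, the stack trace below is consistent with the protection check firing while the test harness was still initializing the cluster, i.e. before the required member count was satisfied.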

Stacktrace:

com.hazelcast.splitbrainprotection.SplitBrainProtectionException: Split brain protection exception: threeNodeSplitBrainProtectionRuleREAD_WRITE has failed!
	at com.hazelcast.splitbrainprotection.impl.SplitBrainProtectionImpl.newSplitBrainProtectionException(SplitBrainProtectionImpl.java:279)
	at com.hazelcast.splitbrainprotection.impl.SplitBrainProtectionImpl.ensureNoSplitBrain(SplitBrainProtectionImpl.java:274)
	at com.hazelcast.splitbrainprotection.impl.SplitBrainProtectionImpl.ensureNoSplitBrain(SplitBrainProtectionImpl.java:269)
	at com.hazelcast.splitbrainprotection.impl.SplitBrainProtectionServiceImpl.ensureNoSplitBrain(SplitBrainProtectionServiceImpl.java:232)
	at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.ensureNoSplitBrain(OperationRunnerImpl.java:338)
	at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:243)
	at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:469)
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:197)
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:137)
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
	at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
	at ------ submitted from ------.()
	at com.hazelcast.internal.util.ExceptionUtil.cloneExceptionWithFixedAsyncStackTrace(ExceptionUtil.java:279)
	at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.returnOrThrowWithGetConventions(InvocationFuture.java:112)
	at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveAndThrowIfException(InvocationFuture.java:100)
	at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:617)
	at com.hazelcast.collection.impl.queue.QueueProxySupport.invokeAndGet(QueueProxySupport.java:179)
	at com.hazelcast.collection.impl.queue.QueueProxySupport.invokeAndGet(QueueProxySupport.java:172)
	at com.hazelcast.collection.impl.queue.QueueProxySupport.size(QueueProxySupport.java:110)
	at com.hazelcast.collection.impl.queue.QueueProxyImpl.size(QueueProxyImpl.java:43)
	at com.hazelcast.splitbrainprotection.AbstractSplitBrainProtectionTest.initCluster(AbstractSplitBrainProtectionTest.java:247)
	at com.hazelcast.splitbrainprotection.AbstractSplitBrainProtectionTest.initTestEnvironment(AbstractSplitBrainProtectionTest.java:105)
	at com.hazelcast.client.splitbrainprotection.set.ClientTransactionalSetSplitBrainProtectionWriteTest.setUp(ClientTransactionalSetSplitBrainProtectionWriteTest.java:43)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
	at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

Standard output:

08:01:38,439  INFO || - [MetricsConfigHelper] main - [LOCAL] [bjwweemopj] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
08:01:38,443  INFO || - [system] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] 
	o  o   O   o---o o--o o      o-o   O    o-o  o-O-o     o--o  o       O  o-O-o o--o  o-o  o--o  o   o 
	|  |  / \     /  |    |     /     / \  |       |       |   | |      / \   |   |    o   o |   | |\ /| 
	O--O o---o  -O-  O-o  |    O     o---o  o-o    |       O--o  |     o---o  |   O-o  |   | O-Oo  | O | 
	|  | |   |  /    |    |     \    |   |     |   |       |     |     |   |  |   |    o   o |  \  |   | 
	o  o o   o o---o o--o O---o  o-o o   o o--o    o       o     O---o o   o  o   o     o-o  o   o o   o
08:01:38,443  INFO || - [system] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
08:01:38,443  INFO || - [system] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210522 - c3c641f) starting at [127.0.0.1]:5701
08:01:38,443  INFO || - [system] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Cluster name: bjwweemopj
08:01:38,451  INFO || - [MetricsConfigHelper] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
08:01:38,462  WARN || - [CPSubsystem] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
08:01:38,468  INFO || - [JetService] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 2
08:01:38,470  INFO || - [Diagnostics] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
08:01:38,470  INFO || - [LifecycleService] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5701 is STARTING
08:01:38,471  INFO || - [JetExtension] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
08:01:38,471  INFO || - [ClusterService] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:1, ver:1} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978 this
]

08:01:38,471  INFO || - [JetExtension] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Jet extension is enabled
08:01:38,471  INFO || - [LifecycleService] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5701 is STARTED
08:01:38,471  INFO || - [MetricsConfigHelper] main - [LOCAL] [bjwweemopj] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
08:01:38,474  INFO || - [HealthMonitor] hz.heuristic_ptolemy.HealthMonitor - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] processors=4, physical.memory.total=29.8G, physical.memory.free=17.0G, swap.space.total=0, swap.space.free=0, heap.memory.used=1.4G, heap.memory.free=1.9G, heap.memory.total=3.3G, heap.memory.max=3.3G, heap.memory.used/total=42.52%, heap.memory.used/max=42.52%, minor.gc.count=0, minor.gc.time=0ms, major.gc.count=0, major.gc.time=0ms, load.process=100.00%, load.system=100.00%, load.systemAverage=7.13, thread.count=67, thread.peakCount=983, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=0, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
08:01:38,475  INFO || - [system] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] 
	o  o   O   o---o o--o o      o-o   O    o-o  o-O-o     o--o  o       O  o-O-o o--o  o-o  o--o  o   o 
	|  |  / \     /  |    |     /     / \  |       |       |   | |      / \   |   |    o   o |   | |\ /| 
	O--O o---o  -O-  O-o  |    O     o---o  o-o    |       O--o  |     o---o  |   O-o  |   | O-Oo  | O | 
	|  | |   |  /    |    |     \    |   |     |   |       |     |     |   |  |   |    o   o |  \  |   | 
	o  o o   o o---o o--o O---o  o-o o   o o--o    o       o     O---o o   o  o   o     o-o  o   o o   o
08:01:38,475  INFO || - [system] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
08:01:38,475  INFO || - [system] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210522 - c3c641f) starting at [127.0.0.1]:5702
08:01:38,475  INFO || - [system] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Cluster name: bjwweemopj
08:01:38,476 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-6 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,476 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-6 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,484  INFO || - [MetricsConfigHelper] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
08:01:38,499  WARN || - [CPSubsystem] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
08:01:38,505  INFO || - [JetService] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 2
08:01:38,509  INFO || - [Diagnostics] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
08:01:38,509  INFO || - [LifecycleService] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5702 is STARTING
08:01:38,510  INFO || - [MockServer] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5701, alive=true}
08:01:38,510  INFO || - [MockServer] hz.heuristic_ptolemy.generic-operation.thread-1 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5702, alive=true}
08:01:38,511  INFO || - [ClusterService] hz.heuristic_ptolemy.generic-operation.thread-1 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:2, ver:2} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978 this
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
]

08:01:38,576 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-4 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,576 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-5 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,611  INFO || - [JetExtension] hz.priceless_ptolemy.generic-operation.thread-1 - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
08:01:38,611  INFO || - [ClusterService] hz.priceless_ptolemy.generic-operation.thread-1 - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:2, ver:2} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb this
]

08:01:38,612  INFO || - [JetExtension] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Jet extension is enabled
08:01:38,612  INFO || - [LifecycleService] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5702 is STARTED
08:01:38,613  INFO || - [MetricsConfigHelper] main - [LOCAL] [bjwweemopj] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
08:01:38,617  INFO || - [system] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] 
	o  o   O   o---o o--o o      o-o   O    o-o  o-O-o     o--o  o       O  o-O-o o--o  o-o  o--o  o   o 
	|  |  / \     /  |    |     /     / \  |       |       |   | |      / \   |   |    o   o |   | |\ /| 
	O--O o---o  -O-  O-o  |    O     o---o  o-o    |       O--o  |     o---o  |   O-o  |   | O-Oo  | O | 
	|  | |   |  /    |    |     \    |   |     |   |       |     |     |   |  |   |    o   o |  \  |   | 
	o  o o   o o---o o--o O---o  o-o o   o o--o    o       o     O---o o   o  o   o     o-o  o   o o   o
08:01:38,617  INFO || - [system] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
08:01:38,617  INFO || - [system] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210522 - c3c641f) starting at [127.0.0.1]:5703
08:01:38,617  INFO || - [system] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Cluster name: bjwweemopj
08:01:38,626  INFO || - [MetricsConfigHelper] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
08:01:38,636  WARN || - [CPSubsystem] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
08:01:38,643  INFO || - [JetService] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 2
08:01:38,646  INFO || - [Diagnostics] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
08:01:38,646  INFO || - [LifecycleService] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5703 is STARTING
08:01:38,647  INFO || - [MockServer] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5701, alive=true}
08:01:38,647  INFO || - [MockServer] hz.heuristic_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5703, alive=true}
08:01:38,648  INFO || - [ClusterService] hz.heuristic_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:3, ver:3} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978 this
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
]

08:01:38,649  INFO || - [MockServer] hz.priceless_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5703, alive=true}
08:01:38,649  INFO || - [ClusterService] hz.priceless_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:3, ver:3} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb this
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
]

08:01:38,677 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-3 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,677 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-5 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,748  INFO || - [JetExtension] hz.jovial_ptolemy.generic-operation.thread-1 - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
08:01:38,749  INFO || - [ClusterService] hz.jovial_ptolemy.generic-operation.thread-1 - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:3, ver:3} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a this
]

08:01:38,749  INFO || - [MockServer] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5702, alive=true}
08:01:38,749  INFO || - [JetExtension] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Jet extension is enabled
08:01:38,749  INFO || - [LifecycleService] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5703 is STARTED
08:01:38,749  INFO || - [MetricsConfigHelper] main - [LOCAL] [bjwweemopj] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
08:01:38,754  INFO || - [system] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] 
	o  o   O   o---o o--o o      o-o   O    o-o  o-O-o     o--o  o       O  o-O-o o--o  o-o  o--o  o   o 
	|  |  / \     /  |    |     /     / \  |       |       |   | |      / \   |   |    o   o |   | |\ /| 
	O--O o---o  -O-  O-o  |    O     o---o  o-o    |       O--o  |     o---o  |   O-o  |   | O-Oo  | O | 
	|  | |   |  /    |    |     \    |   |     |   |       |     |     |   |  |   |    o   o |  \  |   | 
	o  o o   o o---o o--o O---o  o-o o   o o--o    o       o     O---o o   o  o   o     o-o  o   o o   o
08:01:38,754  INFO || - [system] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
08:01:38,754  INFO || - [system] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210522 - c3c641f) starting at [127.0.0.1]:5704
08:01:38,754  INFO || - [system] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Cluster name: bjwweemopj
08:01:38,763  INFO || - [MetricsConfigHelper] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
08:01:38,773  WARN || - [CPSubsystem] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
08:01:38,777 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-5 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,777 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-5 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,780  INFO || - [JetService] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 2
08:01:38,783  INFO || - [Diagnostics] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
08:01:38,783  INFO || - [LifecycleService] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5704 is STARTING
08:01:38,783  INFO || - [MockServer] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5701, alive=true}
08:01:38,784  INFO || - [MockServer] hz.heuristic_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5704, alive=true}
08:01:38,785  INFO || - [ClusterService] hz.heuristic_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:4, ver:4} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978 this
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
]

08:01:38,785  INFO || - [MockServer] hz.jovial_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5704, alive=true}
08:01:38,786  INFO || - [MockServer] hz.priceless_ptolemy.generic-operation.thread-0 - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5704, alive=true}
08:01:38,786  INFO || - [ClusterService] hz.jovial_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:4, ver:4} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a this
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
]

08:01:38,786  INFO || - [ClusterService] hz.priceless_ptolemy.generic-operation.thread-0 - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:4, ver:4} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb this
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
]

08:01:38,877 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-3 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,877 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-3 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,884  INFO || - [JetExtension] hz.frosty_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
08:01:38,885  INFO || - [ClusterService] hz.frosty_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:4, ver:4} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776 this
]

08:01:38,886  INFO || - [MockServer] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5702, alive=true}
08:01:38,886  INFO || - [MockServer] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5703, alive=true}
08:01:38,886  INFO || - [JetExtension] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Jet extension is enabled
08:01:38,886  INFO || - [LifecycleService] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5704 is STARTED
08:01:38,886  INFO || - [MetricsConfigHelper] main - [LOCAL] [bjwweemopj] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
08:01:38,892  INFO || - [system] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] 
	o  o   O   o---o o--o o      o-o   O    o-o  o-O-o     o--o  o       O  o-O-o o--o  o-o  o--o  o   o 
	|  |  / \     /  |    |     /     / \  |       |       |   | |      / \   |   |    o   o |   | |\ /| 
	O--O o---o  -O-  O-o  |    O     o---o  o-o    |       O--o  |     o---o  |   O-o  |   | O-Oo  | O | 
	|  | |   |  /    |    |     \    |   |     |   |       |     |     |   |  |   |    o   o |  \  |   | 
	o  o o   o o---o o--o O---o  o-o o   o o--o    o       o     O---o o   o  o   o     o-o  o   o o   o
08:01:38,892  INFO || - [system] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
08:01:38,892  INFO || - [system] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210522 - c3c641f) starting at [127.0.0.1]:5705
08:01:38,892  INFO || - [system] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Cluster name: bjwweemopj
08:01:38,901  INFO || - [MetricsConfigHelper] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
08:01:38,911  WARN || - [CPSubsystem] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
08:01:38,918  INFO || - [JetService] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 2
08:01:38,922  INFO || - [Diagnostics] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
08:01:38,922  INFO || - [LifecycleService] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5705 is STARTING
08:01:38,922  INFO || - [MockServer] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5701, alive=true}
08:01:38,923  INFO || - [MockServer] hz.heuristic_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5705, alive=true}
08:01:38,924  INFO || - [ClusterService] hz.heuristic_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:5, ver:5} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978 this
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01
]

08:01:38,924  INFO || - [MockServer] hz.jovial_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5705, alive=true}
08:01:38,924  INFO || - [ClusterService] hz.jovial_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:5, ver:5} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a this
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01
]

08:01:38,925  INFO || - [MockServer] hz.priceless_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5705, alive=true}
08:01:38,925  INFO || - [ClusterService] hz.priceless_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:5, ver:5} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb this
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01
]

08:01:38,926  INFO || - [MockServer] hz.frosty_ptolemy.generic-operation.thread-0 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5705, alive=true}
08:01:38,926  INFO || - [ClusterService] hz.frosty_ptolemy.generic-operation.thread-0 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:5, ver:5} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776 this
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01
]

08:01:38,977 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-3 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:38,977 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-3 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
08:01:39,023  INFO || - [JetExtension] hz.sharp_ptolemy.generic-operation.thread-0 - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
08:01:39,025  INFO || - [ClusterService] hz.sharp_ptolemy.generic-operation.thread-0 - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:5, ver:5} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01 this
]

08:01:39,025  INFO || - [MockServer] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5702, alive=true}
08:01:39,025  INFO || - [MockServer] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5703, alive=true}
08:01:39,025  INFO || - [MockServer] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5704, alive=true}
08:01:39,025  INFO || - [JetExtension] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Jet extension is enabled
08:01:39,025  INFO || - [LifecycleService] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5705 is STARTED
08:01:39,026  INFO || - [PartitionStateManager] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Initializing cluster partition table arrangement...
08:01:39,029  INFO || - [LifecycleService] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5703 is SHUTTING_DOWN
08:01:39,029  WARN || - [Node] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Terminating forcefully...
08:01:39,029  INFO || - [Node] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Shutting down connection manager...
08:01:39,029  INFO || - [MockServer] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5703, alive=false}
08:01:39,029  INFO || - [MockServer] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5702, alive=false}
08:01:39,029  INFO || - [MockServer] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5703, alive=false}
08:01:39,029  INFO || - [MockServer] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5705, alive=false}
08:01:39,029  INFO || - [MockServer] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5703, alive=false}
08:01:39,030  INFO || - [MockServer] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5704, alive=false}
08:01:39,030  INFO || - [MockServer] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5703, alive=false}
08:01:39,030  INFO || - [MockServer] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5701, alive=false}
08:01:39,030  WARN || - [MembershipManager] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a is suspected to be dead for reason: Connection manager is stopped on Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a this
08:01:39,030  WARN || - [MembershipManager] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a is suspected to be dead for reason: Connection manager is stopped on Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a this
08:01:39,030  WARN || - [MembershipManager] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a is suspected to be dead for reason: Connection manager is stopped on Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a this
08:01:39,030  INFO || - [MembershipManager] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Removing Member [127.0.0.1]:5703 - c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
08:01:39,030  INFO || - [ClusterService] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:4, ver:6} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978 this
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01
]

08:01:39,030  INFO || - [Node] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Shutting down node engine...
08:01:39,030  INFO || - [ClusterService] hz.priceless_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:4, ver:6} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb this
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01
]

08:01:39,031  INFO || - [TransactionManagerService] hz.priceless_ptolemy.cached.thread-3 - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5703, UUID: c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
08:01:39,032  INFO || - [TransactionManagerService] hz.heuristic_ptolemy.cached.thread-3 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5703, UUID: c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
08:01:39,032  INFO || - [ClusterService] hz.frosty_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:4, ver:6} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776 this
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01
]

08:01:39,032  INFO || - [TransactionManagerService] hz.frosty_ptolemy.cached.thread-2 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5703, UUID: c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
08:01:39,032  INFO || - [ClusterService] hz.sharp_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:4, ver:6} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01 this
]

08:01:39,034  INFO || - [TransactionManagerService] hz.sharp_ptolemy.cached.thread-2 - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5703, UUID: c75e1e0c-1e10-4c4b-9fbb-814bb03ade4a
08:01:39,039  INFO || - [MigrationManager] hz.heuristic_ptolemy.migration - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Repartitioning cluster data. Migration tasks count: 9
08:01:39,045  INFO || - [NodeExtension] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Destroying node NodeExtension.
08:01:39,045  INFO || - [Node] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 16 ms.
08:01:39,045  INFO || - [LifecycleService] main - [127.0.0.1]:5703 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5703 is SHUTDOWN
08:01:39,045  INFO || - [LifecycleService] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5702 is SHUTTING_DOWN
08:01:39,045  WARN || - [Node] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Terminating forcefully...
08:01:39,045  INFO || - [Node] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Shutting down connection manager...
08:01:39,045  INFO || - [MockServer] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5702, alive=false}
08:01:39,045  INFO || - [MockServer] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5705, alive=false}
08:01:39,045  INFO || - [MockServer] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5702, alive=false}
08:01:39,046  INFO || - [MockServer] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5704, alive=false}
08:01:39,046  INFO || - [MockServer] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5702, alive=false}
08:01:39,046  INFO || - [MockServer] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5701, alive=false}
08:01:39,046  WARN || - [MembershipManager] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb is suspected to be dead for reason: Connection manager is stopped on Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb this
08:01:39,046  WARN || - [MembershipManager] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb is suspected to be dead for reason: Connection manager is stopped on Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb this
08:01:39,046  INFO || - [MembershipManager] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Removing Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
08:01:39,046  INFO || - [ClusterService] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:3, ver:7} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978 this
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01
]

08:01:39,046  INFO || - [Node] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Shutting down node engine...
08:01:39,047  INFO || - [TransactionManagerService] hz.sharp_ptolemy.cached.thread-5 - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5702, UUID: 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
08:01:39,051  WARN || - [MigrationRequestOperation] hz.frosty_ptolemy.partition-operation.thread-1 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Failure while executing MigrationInfo{uuid=4933a0ad-15a2-4faf-987e-bdab48a40837, partitionId=7, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=2, master=[127.0.0.1]:5701, initialPartitionVersion=6, partitionVersionIncrement=2, status=ACTIVE}
com.hazelcast.spi.exception.TargetNotMemberException: Destination of migration could not be found! => com.hazelcast.internal.partition.operation.MigrationRequestOperation{serviceName='hz:core:partitionService', identityHash=547452341, partitionId=7, replicaIndex=0, callId=41, invocationTime=1621670499039 (2021-05-22 08:01:39.039), waitTimeout=-1, callTimeout=300000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, migration=MigrationInfo{uuid=4933a0ad-15a2-4faf-987e-bdab48a40837, partitionId=7, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=2, master=[127.0.0.1]:5701, initialPartitionVersion=6, partitionVersionIncrement=2, status=ACTIVE}}
	at com.hazelcast.internal.partition.operation.BaseMigrationOperation.verifyExistingDestination(BaseMigrationOperation.java:205) ~[classes/:?]
	at com.hazelcast.internal.partition.operation.MigrationRequestOperation.trySendNewFragment(MigrationRequestOperation.java:154) [classes/:?]
	at com.hazelcast.internal.partition.operation.MigrationRequestOperation.access$1000(MigrationRequestOperation.java:71) [classes/:?]
	at com.hazelcast.internal.partition.operation.MigrationRequestOperation$SendNewMigrationFragmentRunnable.run(MigrationRequestOperation.java:331) [classes/:?]
	at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) [classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) [classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
	at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
08:01:39,051  WARN || - [MigrationManager] ForkJoinPool.commonPool-worker-452 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Failed migration from Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb for MigrationInfo{uuid=a98bcae6-7467-484e-93c5-817d40be99e1, partitionId=6, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=2, master=[127.0.0.1]:5701, initialPartitionVersion=6, partitionVersionIncrement=2, status=ACTIVE}
com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb has left cluster!
	at com.hazelcast.spi.impl.operationservice.impl.InvocationMonitor$OnMemberLeftTask.onTargetLoss(InvocationMonitor.java:429) ~[classes/:?]
	at com.hazelcast.spi.impl.operationservice.impl.InvocationMonitor$OnMemberLeftTask.run0(InvocationMonitor.java:396) ~[classes/:?]
	at com.hazelcast.spi.impl.operationservice.impl.InvocationMonitor$MonitorTask.run(InvocationMonitor.java:255) ~[classes/:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0-zing_18.06.0.0]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0-zing_18.06.0.0]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0-zing_18.06.0.0]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0-zing_18.06.0.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0-zing_18.06.0.0]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0-zing_18.06.0.0]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0-zing_18.06.0.0]
08:01:39,066  WARN || - [MigrationManager] ForkJoinPool.commonPool-worker-452 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Migration failed: MigrationInfo{uuid=a98bcae6-7467-484e-93c5-817d40be99e1, partitionId=6, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=2, master=[127.0.0.1]:5701, initialPartitionVersion=6, partitionVersionIncrement=2, status=ACTIVE}
08:01:39,066  WARN || - [MigrationManager] ForkJoinPool.commonPool-worker-452 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Migration failed: MigrationInfo{uuid=4933a0ad-15a2-4faf-987e-bdab48a40837, partitionId=7, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=2, master=[127.0.0.1]:5701, initialPartitionVersion=6, partitionVersionIncrement=2, status=ACTIVE}
08:01:39,051  WARN || - [MigrationRequestOperation] hz.sharp_ptolemy.partition-operation.thread-0 - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Failure while executing MigrationInfo{uuid=5c0f0d1e-659d-490d-969d-a539c96c4e4e, partitionId=8, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=3, master=[127.0.0.1]:5701, initialPartitionVersion=6, partitionVersionIncrement=2, status=ACTIVE}
com.hazelcast.spi.exception.TargetNotMemberException: Destination of migration could not be found! => com.hazelcast.internal.partition.operation.MigrationRequestOperation{serviceName='hz:core:partitionService', identityHash=1064251549, partitionId=8, replicaIndex=0, callId=42, invocationTime=1621670499039 (2021-05-22 08:01:39.039), waitTimeout=-1, callTimeout=300000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, migration=MigrationInfo{uuid=5c0f0d1e-659d-490d-969d-a539c96c4e4e, partitionId=8, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=3, master=[127.0.0.1]:5701, initialPartitionVersion=6, partitionVersionIncrement=2, status=ACTIVE}}
	at com.hazelcast.internal.partition.operation.BaseMigrationOperation.verifyExistingDestination(BaseMigrationOperation.java:205) ~[classes/:?]
	at com.hazelcast.internal.partition.operation.MigrationRequestOperation.trySendNewFragment(MigrationRequestOperation.java:154) [classes/:?]
	at com.hazelcast.internal.partition.operation.MigrationRequestOperation.access$1000(MigrationRequestOperation.java:71) [classes/:?]
	at com.hazelcast.internal.partition.operation.MigrationRequestOperation$SendNewMigrationFragmentRunnable.run(MigrationRequestOperation.java:331) [classes/:?]
	at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) [classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) [classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
	at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
08:01:39,067  WARN || - [InternalPartitionService] hz.frosty_ptolemy.generic-operation.thread-1 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Following unknown addresses are found in partition table sent from master[[127.0.0.1]:5701]. (Probably they have recently joined or left the cluster.) {
	[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
}
08:01:39,067  WARN || - [MigrationRequestOperation] hz.heuristic_ptolemy.partition-operation.thread-0 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Failure while executing MigrationInfo{uuid=b4635135-c43e-439f-9c03-4835f000cd51, partitionId=10, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=1, master=[127.0.0.1]:5701, initialPartitionVersion=8, partitionVersionIncrement=2, status=ACTIVE}
com.hazelcast.spi.exception.TargetNotMemberException: Destination of migration could not be found! => com.hazelcast.internal.partition.operation.MigrationRequestOperation{serviceName='hz:core:partitionService', identityHash=1468070503, partitionId=10, replicaIndex=0, callId=44, invocationTime=1621670499039 (2021-05-22 08:01:39.039), waitTimeout=-1, callTimeout=300000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, migration=MigrationInfo{uuid=b4635135-c43e-439f-9c03-4835f000cd51, partitionId=10, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=1, master=[127.0.0.1]:5701, initialPartitionVersion=8, partitionVersionIncrement=2, status=ACTIVE}}
	at com.hazelcast.internal.partition.operation.BaseMigrationOperation.verifyExistingDestination(BaseMigrationOperation.java:205) ~[classes/:?]
	at com.hazelcast.internal.partition.operation.MigrationRequestOperation.trySendNewFragment(MigrationRequestOperation.java:154) [classes/:?]
	at com.hazelcast.internal.partition.operation.MigrationRequestOperation.access$1000(MigrationRequestOperation.java:71) [classes/:?]
	at com.hazelcast.internal.partition.operation.MigrationRequestOperation$SendNewMigrationFragmentRunnable.run(MigrationRequestOperation.java:331) [classes/:?]
	at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) [classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) [classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
	at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
08:01:39,051  WARN || - [MigrationManager] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Failed migration from Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb for MigrationInfo{uuid=080e0d35-2c8c-4ff2-8f6e-12a5a619d975, partitionId=5, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=1, master=[127.0.0.1]:5701, initialPartitionVersion=6, partitionVersionIncrement=2, status=ACTIVE}
com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb has left cluster!
	at com.hazelcast.spi.impl.operationservice.impl.InvocationMonitor$OnMemberLeftTask.onTargetLoss(InvocationMonitor.java:429) ~[classes/:?]
	at com.hazelcast.spi.impl.operationservice.impl.InvocationMonitor$OnMemberLeftTask.run0(InvocationMonitor.java:396) ~[classes/:?]
	at com.hazelcast.spi.impl.operationservice.impl.InvocationMonitor$MonitorTask.run(InvocationMonitor.java:255) ~[classes/:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0-zing_18.06.0.0]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0-zing_18.06.0.0]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0-zing_18.06.0.0]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0-zing_18.06.0.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0-zing_18.06.0.0]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0-zing_18.06.0.0]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0-zing_18.06.0.0]
08:01:39,050  WARN || - [MigrationRequestOperation] ForkJoinPool.commonPool-worker-310 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Failure while executing MigrationInfo{uuid=ae021969-463e-4c72-8180-3406715d79b7, partitionId=1, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=1, master=[127.0.0.1]:5701, initialPartitionVersion=8, partitionVersionIncrement=2, status=ACTIVE}
com.hazelcast.internal.partition.PartitionStateVersionMismatchException: Local partition stamp is not equal to master's stamp! Local: 6, Master: 8
	at com.hazelcast.internal.partition.operation.BaseMigrationOperation.verifyPartitionVersion(BaseMigrationOperation.java:138) ~[classes/:?]
	at com.hazelcast.internal.partition.operation.BaseMigrationOperation.beforeRun(BaseMigrationOperation.java:91) ~[classes/:?]
	at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:247) ~[classes/:?]
	at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:469) ~[classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:197) ~[classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:137) ~[classes/:?]
	at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) ~[classes/:?]
	at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?]
08:01:39,071  WARN || - [MigrationManager] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Migration failed: MigrationInfo{uuid=080e0d35-2c8c-4ff2-8f6e-12a5a619d975, partitionId=5, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=1, master=[127.0.0.1]:5701, initialPartitionVersion=6, partitionVersionIncrement=2, status=ACTIVE}
08:01:39,050  INFO || - [NodeExtension] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Destroying node NodeExtension.
08:01:39,071  WARN || - [MigrationManager] ForkJoinPool.commonPool-worker-310 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Migration failed: MigrationInfo{uuid=b4635135-c43e-439f-9c03-4835f000cd51, partitionId=10, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=1, master=[127.0.0.1]:5701, initialPartitionVersion=8, partitionVersionIncrement=2, status=ACTIVE}
08:01:39,048  INFO || - [ClusterService] hz.frosty_ptolemy.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:3, ver:7} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776 this
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01
]

08:01:39,048  INFO || - [TransactionManagerService] hz.frosty_ptolemy.cached.thread-4 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5702, UUID: 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
08:01:39,047  INFO || - [TransactionManagerService] hz.heuristic_ptolemy.cached.thread-3 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5702, UUID: 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
08:01:39,047  INFO || - [ClusterService] hz.sharp_ptolemy.generic-operation.thread-1 - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:3, ver:7} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
	Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01 this
]

08:01:39,072  WARN || - [InternalPartitionService] hz.sharp_ptolemy.generic-operation.thread-1 - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Following unknown addresses are found in partition table sent from master[[127.0.0.1]:5701]. (Probably they have recently joined or left the cluster.) {
	[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
}
08:01:39,072  WARN || - [MigrationManager] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Migration failed: MigrationInfo{uuid=ae021969-463e-4c72-8180-3406715d79b7, partitionId=1, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=1, master=[127.0.0.1]:5701, initialPartitionVersion=8, partitionVersionIncrement=2, status=ACTIVE}
08:01:39,074  WARN || - [InternalPartitionService] hz.frosty_ptolemy.generic-operation.thread-0 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Following unknown addresses are found in partition table sent from master[[127.0.0.1]:5701]. (Probably they have recently joined or left the cluster.) {
	[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
}
08:01:39,068  WARN || - [MigrationManager] ForkJoinPool.commonPool-worker-452 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Migration failed: MigrationInfo{uuid=5c0f0d1e-659d-490d-969d-a539c96c4e4e, partitionId=8, source=null, sourceCurrentReplicaIndex=-1, sourceNewReplicaIndex=-1, destination=[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb, destinationCurrentReplicaIndex=4, destinationNewReplicaIndex=3, master=[127.0.0.1]:5701, initialPartitionVersion=6, partitionVersionIncrement=2, status=ACTIVE}
08:01:39,074  WARN || - [InternalPartitionService] hz.sharp_ptolemy.generic-operation.thread-0 - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Following unknown addresses are found in partition table sent from master[[127.0.0.1]:5701]. (Probably they have recently joined or left the cluster.) {
	[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
}
08:01:39,074  INFO || - [MigrationManager] hz.heuristic_ptolemy.migration - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Rebalance process was  failed. Ignoring remaining migrations. Will recalculate the new migration plan. (repartitionTime=Sat May 22 08:01:39 UTC 2021, plannedMigrations=9, completedMigrations=9, remainingMigrations=0, totalCompletedMigrations=9, elapsedMigrationOperationTime=230ms, totalElapsedMigrationOperationTime=230ms, elapsedDestinationCommitTime=1ms, totalElapsedDestinationCommitTime=1ms, elapsedMigrationTime=281ms, totalElapsedMigrationTime=281ms)
08:01:39,074  WARN || - [InternalPartitionService] hz.sharp_ptolemy.generic-operation.thread-0 - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Following unknown addresses are found in partition table sent from master[[127.0.0.1]:5701]. (Probably they have recently joined or left the cluster.) {
	[127.0.0.1]:5702 - 90e11e6c-1e43-4586-9e7b-fcd39cf91bcb
}
08:01:39,075  INFO || - [Node] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 30 ms.
08:01:39,075  INFO || - [LifecycleService] main - [127.0.0.1]:5702 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5702 is SHUTDOWN
08:01:39,075  INFO || - [LifecycleService] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5705 is SHUTTING_DOWN
08:01:39,075  WARN || - [Node] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Terminating forcefully...
08:01:39,075  INFO || - [Node] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Shutting down connection manager...
08:01:39,075  INFO || - [MockServer] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5705, alive=false}
08:01:39,075  INFO || - [MockServer] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5704, alive=false}
08:01:39,075  INFO || - [MockServer] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5705, alive=false}
08:01:39,075  INFO || - [MockServer] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5701, alive=false}
08:01:39,075  WARN || - [MembershipManager] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01 is suspected to be dead for reason: Connection manager is stopped on Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01 this
08:01:39,075  INFO || - [MembershipManager] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Removing Member [127.0.0.1]:5705 - 9fdf4179-d0ec-4c54-a403-94cd493c5a01
08:01:39,076  INFO || - [ClusterService] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:2, ver:8} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978 this
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
]

08:01:39,076  INFO || - [Node] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Shutting down node engine...
08:01:39,076  INFO || - [ClusterService] hz.frosty_ptolemy.generic-operation.thread-0 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:2, ver:8} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978
	Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776 this
]

08:01:39,076  INFO || - [TransactionManagerService] hz.frosty_ptolemy.cached.thread-4 - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5705, UUID: 9fdf4179-d0ec-4c54-a403-94cd493c5a01
08:01:39,077  INFO || - [TransactionManagerService] hz.heuristic_ptolemy.cached.thread-3 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5705, UUID: 9fdf4179-d0ec-4c54-a403-94cd493c5a01
08:01:39,078 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-6 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
08:01:39,078 DEBUG || - [JobCoordinationService] hz.heuristic_ptolemy.cached.thread-6 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
08:01:39,081  INFO || - [NodeExtension] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Destroying node NodeExtension.
08:01:39,082  INFO || - [Node] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 7 ms.
08:01:39,082  INFO || - [LifecycleService] main - [127.0.0.1]:5705 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5705 is SHUTDOWN
08:01:39,082  INFO || - [LifecycleService] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5704 is SHUTTING_DOWN
08:01:39,082  WARN || - [Node] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Terminating forcefully...
08:01:39,082  INFO || - [Node] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Shutting down connection manager...
08:01:39,082  INFO || - [MockServer] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5704, alive=false}
08:01:39,083  INFO || - [MockServer] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5701, alive=false}
08:01:39,083  INFO || - [MembershipManager] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Removing Member [127.0.0.1]:5704 - 56686e59-799d-4ace-bad9-2908860ff776
08:01:39,083  INFO || - [ClusterService] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] 

Members {size:1, ver:9} [
	Member [127.0.0.1]:5701 - 96d857b1-64ec-465c-9459-ba00ddce8978 this
]

08:01:39,083  INFO || - [TransactionManagerService] hz.heuristic_ptolemy.cached.thread-6 - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5704, UUID: 56686e59-799d-4ace-bad9-2908860ff776
08:01:39,083  INFO || - [Node] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Shutting down node engine...
08:01:39,087  INFO || - [NodeExtension] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Destroying node NodeExtension.
08:01:39,088  INFO || - [Node] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 6 ms.
08:01:39,088  INFO || - [LifecycleService] main - [127.0.0.1]:5704 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5704 is SHUTDOWN
08:01:39,088  INFO || - [LifecycleService] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5701 is SHUTTING_DOWN
08:01:39,088  WARN || - [Node] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Terminating forcefully...
08:01:39,088  INFO || - [Node] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Shutting down connection manager...
08:01:39,088  INFO || - [Node] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Shutting down node engine...
08:01:39,092  INFO || - [NodeExtension] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Destroying node NodeExtension.
08:01:39,092  INFO || - [Node] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 4 ms.
08:01:39,092  INFO || - [LifecycleService] main - [127.0.0.1]:5701 [bjwweemopj] [5.0-SNAPSHOT] [127.0.0.1]:5701 is SHUTDOWN
@mmedenjak
Contributor

All of these seem related:
#18950
#18777
#18766
#18765
#18764

@mmedenjak
Contributor

Also this one: #18930

sancar pushed a commit to sancar/hazelcast that referenced this issue Jul 26, 2021
SplitBrainStatus is set by a single thread, asynchronously, at
Hazelcast instance start. Therefore, we cannot be sure that the
status is already set when the test starts.

Adding an eventually check at the start to make sure we have the
min-size cluster for all split-brain protections before we
actually split the cluster.

fixes hazelcast#18950
fixes hazelcast#18777
fixes hazelcast#18766
fixes hazelcast#18765
fixes hazelcast#18764
fixes hazelcast#18930
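
For reference, here is a minimal sketch of the kind of "eventually" check the commit message describes, written against Hazelcast's public APIs. It is not the actual diff of the fixing PR; the class and method names are hypothetical, and it assumes the test-support utilities HazelcastTestSupport.assertTrueEventually and Accessors.getNodeEngineImpl plus the SplitBrainProtection#hasMinimumSize() API:

```java
// Hypothetical helper (not the actual fix): block until every member agrees
// that the named split-brain protection has reached its minimum cluster size,
// so the test only splits the cluster once the async status has settled.
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.splitbrainprotection.SplitBrainProtection;
import com.hazelcast.test.HazelcastTestSupport;

import static com.hazelcast.test.Accessors.getNodeEngineImpl;
import static org.junit.Assert.assertTrue;

public abstract class SplitBrainProtectionTestSupport extends HazelcastTestSupport {

    protected static void assertMinClusterSizeEventually(HazelcastInstance[] instances,
                                                         String protectionName) {
        assertTrueEventually(() -> {
            for (HazelcastInstance instance : instances) {
                SplitBrainProtection protection = getNodeEngineImpl(instance)
                        .getSplitBrainProtectionService()
                        .getSplitBrainProtection(protectionName);
                // hasMinimumSize() flips to true asynchronously after startup;
                // waiting here removes the race the commit message describes.
                assertTrue(protection.hasMinimumSize());
            }
        });
    }
}
```

Invoked from the test's setUp before the cluster is split, such a wait would presumably prevent the spurious SplitBrainProtectionException seen in the stack trace above, since no operation runs before the protection reports its minimum size on every member.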
@sancar sancar self-assigned this Jul 26, 2021
mmedenjak pushed a commit that referenced this issue Jul 26, 2021
sancar pushed a commit to sancar/hazelcast that referenced this issue Jul 27, 2021
(cherry picked from commit a33a1a7)
mmedenjak pushed a commit that referenced this issue Jul 28, 2021
(cherry picked from commit a33a1a7)