
Cluster leave and repartition exception #8736

Closed
cstamas opened this issue Aug 17, 2016 · 4 comments

@cstamas

commented Aug 17, 2016

Similar setup as in #8733

Two nodes: nodeA and nodeB.

Steps:

  • start nodeA, forms cluster alone as 72f4c8c4-0ac4-4fba-a5ae-a875b44cd7d5
  • start nodeB, it joins as 849e65d2-0292-4899-adbe-19bba2019f29
  • stop nodeA

After nodeA started, as in the referenced issue, instances of IAtomicLong and ISemaphore were created and destroyed. While nodeA was stopping, nodeB reported some exceptions. The relevant logs are pasted here:
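The create/destroy traffic described above might look roughly like the following sketch (a minimal illustration against the Hazelcast 3.x API; the object names and the standalone class are hypothetical, not taken from the actual reproducer, and running it requires the hazelcast jar on the classpath):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;
import com.hazelcast.core.ISemaphore;

public class ReproSketch {
    public static void main(String[] args) {
        // Starts a member that joins (or forms) the cluster, as nodeA/nodeB do
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Create and destroy distributed objects, as in the referenced issue;
        // each destroy() triggers a DistributedObjectDestroyOperation cluster-wide
        IAtomicLong counter = hz.getAtomicLong("example-counter"); // name is hypothetical
        counter.incrementAndGet();
        counter.destroy();

        ISemaphore semaphore = hz.getSemaphore("example-semaphore"); // name is hypothetical
        semaphore.init(1);
        semaphore.destroy();

        hz.shutdown();
    }
}
```

If destroy() races with another member's shutdown, the destroy operation can land on a member that is already passive, which is the situation logged below.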

nodeA:

jvm 1    | 2016-08-17 13:30:01,269+0200 INFO  [hz._hzInstance_1_hzgroup.cached.thread-2] *SYSTEM com.hazelcast.nio.tcp.InitConnectionTask - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Connecting to /192.168.5.82:5702, timeout: 0, bind-any: true
jvm 1    | 2016-08-17 13:30:01,270+0200 INFO  [hz._hzInstance_1_hzgroup.cached.thread-4] *SYSTEM com.hazelcast.transaction.TransactionManagerService - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Committing/rolling-back alive transactions of Member [192.168.5.82]:5702 - 12d5c714-6a0b-4556-820d-ebf4d4e361cf, UUID: 12d5c714-6a0b-4556-820d-ebf4d4e361cf
jvm 1    | 2016-08-17 13:30:01,273+0200 INFO  [hz._hzInstance_1_hzgroup.migration] *SYSTEM com.hazelcast.internal.partition.impl.MigrationManager - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Partition balance is ok, no need to re-partition cluster data... 
jvm 1    | 2016-08-17 13:30:02,279+0200 INFO  [hz._hzInstance_1_hzgroup.migration] *SYSTEM com.hazelcast.internal.partition.impl.MigrationThread - [192.168.5.82]:5701 [hzgroup] [3.7-EA] All migration tasks have been completed, queues are empty.
jvm 1    | 2016-08-17 13:30:02,513+0200 INFO  [hz._hzInstance_1_hzgroup.cached.thread-2] *SYSTEM com.hazelcast.nio.tcp.InitConnectionTask - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Could not connect to: /192.168.5.82:5702. Reason: SocketException[Connection refused to address /192.168.5.82:5702]
jvm 1    | 2016-08-17 13:30:06,093+0200 INFO  [hz._hzInstance_1_hzgroup.InvocationMonitorThread] *SYSTEM com.hazelcast.spi.OperationService - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Invocations:1 timeouts:0 backup-timeouts:1
jvm 1    | 2016-08-17 13:30:30,563+0200 INFO  [hz._hzInstance_1_hzgroup.IO.thread-Acceptor] *SYSTEM com.hazelcast.nio.tcp.SocketAcceptorThread - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Accepting socket connection from /192.168.5.82:56298
jvm 1    | 2016-08-17 13:30:30,564+0200 INFO  [hz._hzInstance_1_hzgroup.cached.thread-4] *SYSTEM com.hazelcast.nio.tcp.TcpIpConnectionManager - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Established socket connection between /192.168.5.82:5701 and /192.168.5.82:56298
jvm 1    | 2016-08-17 13:30:36,576+0200 INFO  [hz._hzInstance_1_hzgroup.priority-generic-operation.thread-0] *SYSTEM com.hazelcast.internal.cluster.ClusterService - [192.168.5.82]:5701 [hzgroup] [3.7-EA] 
jvm 1    | 
jvm 1    | Members [2] {
jvm 1    |  Member [192.168.5.82]:5701 - 72f4c8c4-0ac4-4fba-a5ae-a875b44cd7d5 this
jvm 1    |  Member [192.168.5.82]:5702 - 849e65d2-0292-4899-adbe-19bba2019f29
jvm 1    | }
jvm 1    | 
jvm 1    | 2016-08-17 13:30:36,832+0200 INFO  [hz._hzInstance_1_hzgroup.migration] *SYSTEM com.hazelcast.internal.partition.impl.MigrationManager - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Re-partitioning cluster data... Migration queue size: 271
jvm 1    | 2016-08-17 13:30:37,083+0200 INFO  [hz._hzInstance_1_hzgroup.cached.thread-3] *SYSTEM com.hazelcast.internal.partition.InternalPartitionService - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Remaining migration tasks in queue => 173
jvm 1    | 2016-08-17 13:30:38,405+0200 INFO  [hz._hzInstance_1_hzgroup.migration] *SYSTEM com.hazelcast.internal.partition.impl.MigrationThread - [192.168.5.82]:5701 [hzgroup] [3.7-EA] All migration tasks have been completed, queues are empty.

>>> STOPPED HERE, Shutdown starts

jvm 1    | 2016-08-17 13:31:31,144+0200 INFO  [WrapperListener_stop_runner] *SYSTEM com.hazelcast.core.LifecycleService - [192.168.5.82]:5701 [hzgroup] [3.7-EA] [192.168.5.82]:5701 is SHUTTING_DOWN
jvm 1    | 2016-08-17 13:31:31,145+0200 INFO  [WrapperListener_stop_runner] *SYSTEM com.hazelcast.internal.partition.impl.MigrationManager - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Shutdown request of [192.168.5.82]:5701 is handled
jvm 1    | 2016-08-17 13:31:31,344+0200 INFO  [hz._hzInstance_1_hzgroup.migration] *SYSTEM com.hazelcast.internal.partition.impl.MigrationManager - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Re-partitioning cluster data... Migration queue size: 136
jvm 1    | 2016-08-17 13:31:31,649+0200 INFO  [WrapperListener_stop_runner] *SYSTEM com.hazelcast.instance.Node - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Shutting down multicast service...
jvm 1    | 2016-08-17 13:31:31,650+0200 INFO  [WrapperListener_stop_runner] *SYSTEM com.hazelcast.instance.Node - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Shutting down connection manager...
jvm 1    | 2016-08-17 13:31:31,652+0200 INFO  [hz._hzInstance_1_hzgroup.IO.thread-in-1] *SYSTEM com.hazelcast.nio.tcp.TcpIpConnection - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Closing connection due to exception in NonBlockingSocketReader
jvm 1    | 2016-08-17 13:31:31,652+0200 INFO  [WrapperListener_stop_runner] *SYSTEM com.hazelcast.instance.Node - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Shutting down node engine...
jvm 1    | 2016-08-17 13:31:31,653+0200 INFO  [hz._hzInstance_1_hzgroup.IO.thread-in-1] *SYSTEM com.hazelcast.nio.tcp.nonblocking.NonBlockingSocketReader - [192.168.5.82]:5701 [hzgroup] [3.7-EA] hz._hzInstance_1_hzgroup.IO.thread-in-1 Closing socket to endpoint [192.168.5.82]:5702, Cause:java.io.EOFException: Remote socket closed!
jvm 1    | 2016-08-17 13:31:31,677+0200 INFO  [WrapperListener_stop_runner] *SYSTEM com.hazelcast.instance.NodeExtension - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Destroying node NodeExtension.
jvm 1    | 2016-08-17 13:31:31,678+0200 INFO  [WrapperListener_stop_runner] *SYSTEM com.hazelcast.instance.Node - [192.168.5.82]:5701 [hzgroup] [3.7-EA] Hazelcast Shutdown is completed in 533 ms.
jvm 1    | 2016-08-17 13:31:31,678+0200 INFO  [WrapperListener_stop_runner] *SYSTEM com.hazelcast.core.LifecycleService - [192.168.5.82]:5701 [hzgroup] [3.7-EA] [192.168.5.82]:5701 is SHUTDOWN

nodeB:

jvm 1    | 2016-08-17 13:30:29,683+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.config.XmlConfigLocator - Loading 'hazelcast.xml' from classpath.
jvm 1    | 2016-08-17 13:30:29,879+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.instance.DefaultAddressPicker - [LOCAL] [hzgroup] [3.7-EA] Prefer IPv4 stack is true.
jvm 1    | 2016-08-17 13:30:29,894+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.instance.DefaultAddressPicker - [LOCAL] [hzgroup] [3.7-EA] Picked [192.168.5.82]:5702, using socket ServerSocket[addr=/0.0.0.0,localport=5702], bind any local is true
jvm 1    | 2016-08-17 13:30:29,906+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.system - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Hazelcast 3.7-EA (20160608 - cf27e20) starting at [192.168.5.82]:5702
jvm 1    | 2016-08-17 13:30:29,906+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.system - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Copyright (c) 2008-2016, Hazelcast, Inc. All Rights Reserved.
jvm 1    | 2016-08-17 13:30:29,906+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.system - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Configured Hazelcast Serialization version : 1
jvm 1    | 2016-08-17 13:30:30,011+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.spi.OperationService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Backpressure is disabled
jvm 1    | 2016-08-17 13:30:30,297+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.instance.Node - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Creating MulticastJoiner
jvm 1    | 2016-08-17 13:30:30,301+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.core.LifecycleService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] [192.168.5.82]:5702 is STARTING
jvm 1    | 2016-08-17 13:30:30,374+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Starting 8 partition threads
jvm 1    | 2016-08-17 13:30:30,375+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Starting 5 generic threads (1 dedicated for priority tasks)
jvm 1    | 2016-08-17 13:30:30,378+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThreadingModel - [192.168.5.82]:5702 [hzgroup] [3.7-EA] TcpIpConnectionManager configured with Non Blocking IO-threading model: 3 input threads and 3 output threads
jvm 1    | 2016-08-17 13:30:30,554+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.internal.cluster.impl.MulticastJoiner - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Trying to join to discovered node: [192.168.5.82]:5701
jvm 1    | 2016-08-17 13:30:30,561+0200 INFO  [hz._hzInstance_1_hzgroup.cached.thread-2] *SYSTEM com.hazelcast.nio.tcp.InitConnectionTask - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Connecting to /192.168.5.82:5701, timeout: 0, bind-any: true
jvm 1    | 2016-08-17 13:30:30,568+0200 INFO  [hz._hzInstance_1_hzgroup.cached.thread-2] *SYSTEM com.hazelcast.nio.tcp.TcpIpConnectionManager - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Established socket connection between /192.168.5.82:56298 and /192.168.5.82:5701
jvm 1    | 2016-08-17 13:30:36,582+0200 INFO  [hz._hzInstance_1_hzgroup.generic-operation.thread-2] *SYSTEM com.hazelcast.internal.cluster.ClusterService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] 
jvm 1    | 
jvm 1    | Members [2] {
jvm 1    |  Member [192.168.5.82]:5701 - 72f4c8c4-0ac4-4fba-a5ae-a875b44cd7d5
jvm 1    |  Member [192.168.5.82]:5702 - 849e65d2-0292-4899-adbe-19bba2019f29 this
jvm 1    | }
jvm 1    | 
jvm 1    | 2016-08-17 13:30:37,083+0200 WARN  [hz._hzInstance_1_hzgroup.priority-generic-operation.thread-0] *SYSTEM com.hazelcast.internal.partition.InternalPartitionService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Master version should be greater than ours! Local version: 1283, Master version: 1282 Master: [192.168.5.82]:5701
jvm 1    | 2016-08-17 13:30:38,601+0200 INFO  [jetty-main-1] *SYSTEM com.hazelcast.core.LifecycleService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] [192.168.5.82]:5702 is STARTED

OTHER NODE STOPPED HERE

jvm 1    | 2016-08-17 13:31:31,165+0200 WARN  [HZ-semaphore-remover] *SYSTEM com.hazelcast.util.FutureUtil - Exception occurred
jvm 1    | java.util.concurrent.ExecutionException: com.hazelcast.core.HazelcastInstanceNotActiveException: This node is currently passive! Operation: com.hazelcast.spi.impl.proxyservice.impl.operations.DistributedObjectDestroyOperation{serviceName='hz:core:proxyService', identityHash=2056833059, partitionId=-1, replicaIndex=0, callId=210325, invocationTime=1471433491145 (2016-08-17 13:31:31.145), waitTimeout=-1, callTimeout=60000}
jvm 1    |  at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveAndThrow(InvocationFuture.java:85) ~[hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:186) ~[hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.util.FutureUtil.executeWithDeadline(FutureUtil.java:294) [hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.util.FutureUtil.waitWithDeadline(FutureUtil.java:278) [hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.util.FutureUtil.waitWithDeadline(FutureUtil.java:252) [hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.spi.impl.proxyservice.impl.ProxyServiceImpl.destroyDistributedObject(ProxyServiceImpl.java:162) [hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.spi.AbstractDistributedObject.destroy(AbstractDistributedObject.java:61) [hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at org.sonatype.sisu.locks.HazelcastResourceLockFactory$1.run(HazelcastResourceLockFactory.java:97) [takari-nexus-locks-1.2.9-SNAPSHOT.jar:1.2.9-SNAPSHOT]
jvm 1    |  at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
jvm 1    | Caused by: com.hazelcast.core.HazelcastInstanceNotActiveException: This node is currently passive! Operation: com.hazelcast.spi.impl.proxyservice.impl.operations.DistributedObjectDestroyOperation{serviceName='hz:core:proxyService', identityHash=2056833059, partitionId=-1, replicaIndex=0, callId=210325, invocationTime=1471433491145 (2016-08-17 13:31:31.145), waitTimeout=-1, callTimeout=60000}
jvm 1    |  at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.checkNodeState(OperationRunnerImpl.java:217) ~[hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:167) ~[hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:397) ~[hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:117) ~[hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:102) ~[hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at ------ submitted from ------.(Unknown Source) ~[na:na]
jvm 1    |  at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolve(InvocationFuture.java:111) ~[hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveAndThrow(InvocationFuture.java:74) ~[hazelcast-3.7-EA.jar:3.7-EA]
jvm 1    |  ... 8 common frames omitted
jvm 1    | 2016-08-17 13:31:31,649+0200 INFO  [hz._hzInstance_1_hzgroup.priority-generic-operation.thread-0] *SYSTEM com.hazelcast.internal.cluster.ClusterService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Old master [192.168.5.82]:5701 left the cluster, assigning new master Member [192.168.5.82]:5702 - 849e65d2-0292-4899-adbe-19bba2019f29 this
jvm 1    | 2016-08-17 13:31:31,653+0200 INFO  [hz._hzInstance_1_hzgroup.priority-generic-operation.thread-0] *SYSTEM com.hazelcast.nio.tcp.TcpIpConnection - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Removing member [192.168.5.82]:5701, uuid: 72f4c8c4-0ac4-4fba-a5ae-a875b44cd7d5, requested by: [192.168.5.82]:5701
jvm 1    | 2016-08-17 13:31:31,654+0200 INFO  [hz._hzInstance_1_hzgroup.priority-generic-operation.thread-0] *SYSTEM com.hazelcast.internal.cluster.ClusterService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Removing Member [192.168.5.82]:5701 - 72f4c8c4-0ac4-4fba-a5ae-a875b44cd7d5
jvm 1    | 2016-08-17 13:31:31,654+0200 INFO  [hz._hzInstance_1_hzgroup.cached.thread-4] *SYSTEM com.hazelcast.nio.tcp.InitConnectionTask - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Connecting to /192.168.5.82:5701, timeout: 0, bind-any: true
jvm 1    | 2016-08-17 13:31:31,656+0200 INFO  [hz._hzInstance_1_hzgroup.priority-generic-operation.thread-0] *SYSTEM com.hazelcast.internal.cluster.ClusterService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] 
jvm 1    | 
jvm 1    | Members [1] {
jvm 1    |  Member [192.168.5.82]:5702 - 849e65d2-0292-4899-adbe-19bba2019f29 this
jvm 1    | }
jvm 1    | 
jvm 1    | 2016-08-17 13:31:31,656+0200 INFO  [hz._hzInstance_1_hzgroup.cached.thread-2] *SYSTEM com.hazelcast.transaction.TransactionManagerService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Committing/rolling-back alive transactions of Member [192.168.5.82]:5701 - 72f4c8c4-0ac4-4fba-a5ae-a875b44cd7d5, UUID: 72f4c8c4-0ac4-4fba-a5ae-a875b44cd7d5
jvm 1    | 2016-08-17 13:31:31,818+0200 INFO  [hz._hzInstance_1_hzgroup.migration] *SYSTEM com.hazelcast.internal.partition.InternalPartitionService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Fetching most recent partition table! my version: 1897
jvm 1    | 2016-08-17 13:31:31,818+0200 INFO  [hz._hzInstance_1_hzgroup.migration] *SYSTEM com.hazelcast.internal.partition.InternalPartitionService - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Most recent partition table version: 1897
jvm 1    | 2016-08-17 13:31:31,831+0200 INFO  [hz._hzInstance_1_hzgroup.migration] *SYSTEM com.hazelcast.internal.partition.impl.MigrationManager - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Partition balance is ok, no need to re-partition cluster data... 
jvm 1    | 2016-08-17 13:31:32,469+0200 INFO  [hz._hzInstance_1_hzgroup.cached.thread-4] *SYSTEM com.hazelcast.nio.tcp.InitConnectionTask - [192.168.5.82]:5702 [hzgroup] [3.7-EA] Could not connect to: /192.168.5.82:5701. Reason: SocketException[Connection refused to address /192.168.5.82:5701]

What are these exceptions about? Why was nodeB passive?

Hazelcast version: 3.7-EA
Java: Oracle Java8
OS: dev OSX, prod Linux

@jerrinot

Contributor

commented Aug 17, 2016

@cstamas: good catch! It's actually a misleading message and we should fix it. nodeA is passive because it is in the process of shutting down.

However, the exception is sent across the network to nodeB, which then prints it. It's safe to ignore. Let's see what we can do to make it less scary.
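Since the exception is benign in this scenario, an application that destroys proxies during shutdown could swallow it explicitly. A minimal sketch (the helper name is hypothetical, and it assumes — per this thread — that HazelcastInstanceNotActiveException during a remote member's shutdown is safe to ignore):

```java
import com.hazelcast.core.DistributedObject;
import com.hazelcast.core.HazelcastInstanceNotActiveException;

public class SafeDestroy {
    /**
     * Destroys a distributed-object proxy, tolerating the exception that
     * can occur when a member involved in the destroy is already passive
     * (i.e. shutting down).
     */
    static void destroyQuietly(DistributedObject object) {
        try {
            object.destroy();
        } catch (HazelcastInstanceNotActiveException ignored) {
            // A member was shutting down; the proxy cleanup is best-effort here.
        }
    }
}
```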

@cstamas

Author

commented Aug 17, 2016

@jerrinot ack, good to know, thanks

@jerrinot jerrinot added this to the 3.7.1 milestone Aug 17, 2016

jerrinot added a commit to jerrinot/hazelcast that referenced this issue Aug 17, 2016

Fix a potentially misleading exception message
The exception can be sent over the network, hence using "this node"
is not great.

See hazelcast#8736

@jerrinot jerrinot self-assigned this Aug 17, 2016


jerrinot added a commit to jerrinot/hazelcast that referenced this issue Aug 19, 2016

Do not log a warning when a remote node is shut down while destroying a proxy

Reasoning:
When a member is shut down, it's not a big deal if a proxy was
not destroyed. It's just noise and it looks scary.

See hazelcast#8736


@jerrinot jerrinot modified the milestones: 3.7, 3.7.1 Aug 20, 2016

@jerrinot

Contributor

commented Aug 20, 2016

It has the same root cause as #8733. It is fixed in 3.7.
edit: my bad, this one should remain open until #8750 is merged.

@jerrinot jerrinot closed this Aug 20, 2016

@jerrinot jerrinot reopened this Aug 20, 2016

@jerrinot jerrinot modified the milestones: 3.7.1, 3.7 Aug 22, 2016

@jerrinot

Contributor

commented Aug 23, 2016

Fixed by #8750 and #8738

@jerrinot jerrinot closed this Aug 23, 2016

jerrinot added a commit to jerrinot/hazelcast that referenced this issue Aug 27, 2016

Fix a potentially misleading exception message
The exception can be sent over the network, hence using "this node"
is not great.

See hazelcast#8736
(cherry picked from commit 4de69e4)

jerrinot added a commit to jerrinot/hazelcast that referenced this issue Aug 27, 2016

Do not log a warning when a remote node is shut down while destroying a proxy

Reasoning:
When a member is shut down, it's not a big deal if a proxy was
not destroyed. It's just noise and it looks scary.

See hazelcast#8736
(cherry picked from commit 39486d8)