Nodes Auto-downing themselves #2903

Closed

EduardsBrown opened this issue Jul 25, 2017 · 22 comments

@EduardsBrown

  • Current Stable Release (v1.2.3)
  • Windows 10, Server 2012, Server Core
  • I can't reliably reproduce this

I am experiencing a self-shutdown issue when I leave my ActorSystem running for a while (about 12 hours).

I can't reproduce the self-shutdown on demand. I've gone through the Akka.NET code and found the code path that produces some of these errors, but I can only reproduce it after leaving the system running overnight.

We have:

  • 2 Seed Nodes
    -- These nodes have the lighthouse role
  • 3 Processing Nodes
    -- 2 nodes have the role1 role and 1 node has both the role1 and role2 roles

role1 - uses persistence and sharding
role2 - runs a singleton that inserts data into MSSQL

Error Log of a node:

  • 17 07 20 21:02:45 WARN ClusterCoreDaemon - Cluster Node [akka.tcp://ActorSystem@node1:5050] - Marking node(s) as UNREACHABLE [Member(address = akka.tcp://ActorSystem@node2:5050, Uid=1443627858 status = Up, role=[role2,role1], upNumber=6), Member(address = akka.tcp://ActorSystem@lighthouse2:4053, Uid=1727715640 status = Up, role=[lighthouse], upNumber=2)]. Node roles [role1]
  • 17 07 20 21:02:45 INFO ClusterMonitor - Member detected as unreachable: Member(address = akka.tcp://ActorSystem@node2:5050, Uid=1443627858 status = Up, role=[role2,role1], upNumber=6)
  • 17 07 20 21:02:45 INFO ClusterMonitor - Member detected as unreachable: Member(address = akka.tcp://ActorSystem@lighthouse2:4053, Uid=1727715640 status = Up, role=[lighthouse], upNumber=2)
  • 17 07 20 21:02:45 INFO ClusterCoreDaemon - Ignoring received gossip status from unreachable [UniqueAddress: (akka.tcp://ActorSystem@node2:5050, 1443627858)]
  • 17 07 20 21:02:46 INFO ClusterCoreDaemon - Marking node(s) as REACHABLE [Member(address = akka.tcp://ActorSystem@node2:5050, Uid=1443627858 status = Up, role=[role2,role1], upNumber=6)]. Node roles [role1]
  • 17 07 20 21:02:47 INFO ClusterCoreDaemon - Ignoring received gossip status from unreachable [UniqueAddress: (akka.tcp://ActorSystem@lighthouse2:4053, 1727715640)]
  • 17 07 20 21:02:48 INFO ClusterCoreDaemon - Marking node(s) as REACHABLE [Member(address = akka.tcp://ActorSystem@lighthouse2:4053, Uid=1727715640 status = Up, role=[lighthouse], upNumber=2)]. Node roles [role1]
  • 17 07 20 21:02:53 WARN ClusterCoreDaemon - Cluster Node [akka.tcp://ActorSystem@node1:5050] - Marking node(s) as UNREACHABLE [Member(address = akka.tcp://ActorSystem@node2:5050, Uid=1443627858 status = Up, role=[role2,role1], upNumber=6), Member(address = akka.tcp://ActorSystem@lighthouse2:4053, Uid=1727715640 status = Up, role=[lighthouse], upNumber=2)]. Node roles [role1]
  • 17 07 20 21:02:53 INFO ClusterMonitor - Member detected as unreachable: Member(address = akka.tcp://ActorSystem@node2:5050, Uid=1443627858 status = Up, role=[role2,role1], upNumber=6)
  • 17 07 20 21:02:53 INFO ClusterMonitor - Member detected as unreachable: Member(address = akka.tcp://ActorSystem@lighthouse2:4053, Uid=1727715640 status = Up, role=[lighthouse], upNumber=2)
  • 17 07 20 21:02:53 INFO ClusterCoreDaemon - Ignoring received gossip from unreachable [UniqueAddress: (akka.tcp://ActorSystem@lighthouse2:4053, 1727715640)]
  • 17 07 20 21:02:53 INFO ClusterCoreDaemon - Ignoring received gossip from unreachable [UniqueAddress: (akka.tcp://ActorSystem@lighthouse2:4053, 1727715640)]
  • 17 07 20 21:02:53 INFO ClusterCoreDaemon - Ignoring received gossip from unreachable [UniqueAddress: (akka.tcp://ActorSystem@lighthouse2:4053, 1727715640)]
  • 17 07 20 21:02:53 INFO ClusterCoreDaemon - Ignoring received gossip from unreachable [UniqueAddress: (akka.tcp://ActorSystem@lighthouse2:4053, 1727715640)]
  • 17 07 20 21:02:54 INFO ClusterCoreDaemon - Marking node(s) as REACHABLE [Member(address = akka.tcp://ActorSystem@lighthouse2:4053, Uid=1727715640 status = Up, role=[lighthouse], upNumber=2)]. Node roles [role1]
  • 17 07 20 21:02:59 ERROR TcpServerHandler - Error caught channel [::ffff:xxx.xx.xxx.xx]:5050->[::ffff:xxx.xx.xxx.xx]:51653
    System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host
    at DotNetty.Transport.Channels.Sockets.AbstractSocketByteChannel.SocketByteChannelUnsafe.FinishRead(SocketChannelAsyncOperation operation)
  • 17 07 20 21:02:59 WARN ReliableDeliverySupervisor - Association with remote system akka.tcp://ActorSystem@node2:5050 has failed; address is now gated for 5000 ms. Reason is: [Akka.Remote.EndpointDisassociatedException: Disassociated
    at Akka.Remote.EndpointWriter.PublishAndThrow(Exception reason, LogLevel level, Boolean needToThrow)
    at Akka.Remote.EndpointWriter.Unhandled(Object message)
    at Akka.Actor.UntypedActor.Receive(Object message)
    at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
    at Akka.Actor.ActorCell.ReceiveMessage(Object message)
    at Akka.Actor.ActorCell.AutoReceiveMessage(Envelope envelope)
    at Akka.Actor.ActorCell.Invoke(Envelope envelope)]
  • 17 07 20 21:02:59 ERROR OneForOneStrategy - Disassociated
    Akka.Remote.EndpointDisassociatedException: Disassociated
    at Akka.Remote.EndpointWriter.PublishAndThrow(Exception reason, LogLevel level, Boolean needToThrow)
    at Akka.Remote.EndpointWriter.Unhandled(Object message)
    at Akka.Actor.UntypedActor.Receive(Object message)
    at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
    at Akka.Actor.ActorCell.ReceiveMessage(Object message)
    at Akka.Actor.ActorCell.AutoReceiveMessage(Envelope envelope)
    at Akka.Actor.ActorCell.Invoke(Envelope envelope)
  • 17 07 20 21:03:03 INFO DummyClassForStringSources - Cluster Node [akka.tcp://ActorSystem@node1:5050] Leader is auto-downing unreachable node [akka.tcp://ActorSystem@node1:5050]
  • 17 07 20 21:03:03 INFO ClusterCoreDaemon - Marking unreachable node [akka.tcp://ActorSystem@node2:5050] as [Down]
  • 17 07 20 21:03:04 INFO ClusterCoreDaemon - Leader can currently not perform its duties, reachability status: [Reachability([akka.tcp://ActorSystem@node1:5050 -> UniqueAddress: (akka.tcp://ActorSystem@node2:5050, 1443627858): Unreachable [Unreachable] (6)], [akka.tcp://ActorSystem@node1:5050 -> UniqueAddress: (akka.tcp://ActorSystem@lighthouse2:4053, 1727715640): Reachable [Reachable] (8)])], member status: [$akka.tcp://ActorSystem@node1:5050 $Up seen=$True, $akka.tcp://ActorSystem@node2:5050 $Down seen=$False, $akka.tcp://ActorSystem@lighthouse1:4053 $Up seen=$False, $akka.tcp://ActorSystem@lighthouse2:4053 $Up seen=$False, $akka.tcp://ActorSystem@node3:5050 $Up seen=$True]
  • 17 07 20 21:03:04 WARN ReliableDeliverySupervisor - Association with remote system akka.tcp://ActorSystem@node2:5050 has failed; address is now gated for 5000 ms. Reason is: [Akka.Remote.EndpointDisassociatedException: Disassociated
    at Akka.Remote.EndpointWriter.PublishAndThrow(Exception reason, LogLevel level, Boolean needToThrow)
    at Akka.Remote.EndpointWriter.Unhandled(Object message)
    at Akka.Actor.ActorCell.<>c__DisplayClass112_0.<Akka.Actor.IUntypedActorContext.Become>b__0(Object m)
    at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
    at Akka.Actor.ActorCell.ReceiveMessage(Object message)
    at Akka.Actor.ActorCell.AutoReceiveMessage(Envelope envelope)
    at Akka.Actor.ActorCell.Invoke(Envelope envelope)]
  • 17 07 20 21:03:04 ERROR OneForOneStrategy - Disassociated
    Akka.Remote.EndpointDisassociatedException: Disassociated
    at Akka.Remote.EndpointWriter.PublishAndThrow(Exception reason, LogLevel level, Boolean needToThrow)
    at Akka.Remote.EndpointWriter.Unhandled(Object message)
    at Akka.Actor.ActorCell.<>c__DisplayClass112_0.<Akka.Actor.IUntypedActorContext.Become>b__0(Object m)
    at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
    at Akka.Actor.ActorCell.ReceiveMessage(Object message)
    at Akka.Actor.ActorCell.AutoReceiveMessage(Envelope envelope)
    at Akka.Actor.ActorCell.Invoke(Envelope envelope)
  • 17 07 20 21:03:04 ERROR EndpointWriter - AssociationError [akka.tcp://ActorSystem@node1:5050] <- akka.tcp://ActorSystem@node2:5050: Error [The remote system has quarantined this system. No further associations to the remote system are possible until this system is restarted.] [ at Akka.Remote.EndpointReader.HandleDisassociated(DisassociateInfo info)
    at lambda_method(Closure , Object , Action`1 , Action`1 , Action`1 )
    at Akka.Tools.MatchHandler.PartialHandlerArgumentsCapture`4.Handle(T value)
    at Akka.Actor.ReceiveActor.ExecutePartialMessageHandler(Object message, PartialAction`1 partialAction)
    at Akka.Actor.UntypedActor.Receive(Object message)
    at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
    at Akka.Actor.ActorCell.ReceiveMessage(Object message)
    at Akka.Actor.ActorCell.Invoke(Envelope envelope)]
  • 17 07 20 21:03:04 INFO DummyClassForStringSources - Quarantined address [akka.tcp://ActorSystem@node2:5050] is still unreachable or has not been restarted. Keeping it quarantined.
  • 17 07 20 21:03:05 INFO ClusterCoreDaemon - Shutting down myself
  • 17 07 20 21:03:05 INFO DummyClassForStringSources - Cluster Node [akka.tcp://ActorSystem@node1:5050] - Shutting down...
  • 17 07 20 21:03:05 INFO DummyClassForStringSources - Cluster Node [akka.tcp://ActorSystem@node1:5050] - Successfully shut down
  • 17 07 20 21:03:05 INFO ClusterMonitor - Member is Removed: Member(address = akka.tcp://ActorSystem@node1:5050, Uid=67638642 status = Removed, role=[role1], upNumber=3)
  • 17 07 20 21:03:05 INFO ClusterMonitor - Member is Removed: Member(address = akka.tcp://ActorSystem@node2:5050, Uid=1443627858 status = Removed, role=[role2,role1], upNumber=6)
  • 17 07 20 21:03:05 INFO ClusterMonitor - Member is Removed: Member(address = akka.tcp://ActorSystem@lighthouse1:4053, Uid=1600650347 status = Removed, role=[lighthouse], upNumber=1)
  • 17 07 20 21:03:05 INFO ClusterMonitor - Member is Removed: Member(address = akka.tcp://ActorSystem@lighthouse2:4053, Uid=1727715640 status = Removed, role=[lighthouse], upNumber=2)
  • 17 07 20 21:03:05 INFO ClusterMonitor - Member is Removed: Member(address = akka.tcp://ActorSystem@node3:5050, Uid=5360973 status = Removed, role=[role1], upNumber=5)
  • 17 07 20 21:03:05 INFO ClusterSingletonManager - Self removed, stopping ClusterSingletonManager
  • 17 07 20 21:03:05 INFO RemoteActorRefProvider+RemotingTerminator - Shutting down remote daemon.
  • 17 07 20 21:03:05 INFO RemoteActorRefProvider+RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
  • 17 07 20 21:03:05 INFO DummyClassForStringSources - Remoting shut down
  • 17 07 20 21:03:05 INFO RemoteActorRefProvider+RemotingTerminator - Remoting shut down.
@Aaronontheweb
Member

Do you have cluster.auto-down-unreachable-after turned on?

@Aaronontheweb
Member

TL;DR: nodes can't Down themselves without being explicitly given a Down command. Akka.NET never does this by default; it can only happen if you explicitly turn this setting on.
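
For reference, this is the setting in question — a minimal HOCON sketch (the 10s value is illustrative, not taken from the reporter's config):

    akka.cluster {
      # When set to a duration, the cluster leader automatically marks nodes that
      # have been unreachable for that long as Down — which is what produces the
      # "Leader is auto-downing unreachable node" lines in the log above.
      auto-down-unreachable-after = 10s

      # The default is off: unreachable nodes are never downed automatically and
      # must be downed manually (or by a downing provider).
      # auto-down-unreachable-after = off
    }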

@EduardsBrown
Author

Right, yes, I do have cluster.auto-down-unreachable-after turned on. So if I get rid of this, the error won't happen?

@Danthar
Member

Danthar commented Jul 26, 2017

Yes, your nodes will not be Downed. However, they will still Disassociate. Some of your nodes are becoming Unreachable, and that happens for a reason.

@EduardsBrown
Author

Ok, thank you.

@jalchr

jalchr commented Jul 26, 2017

@Danthar When they "Disassociate", does this mean they cannot talk to each other, i.e. that they are "logically" unreachable? Is there any way to associate them again, or does this happen automatically within Akka?
I'm experiencing the same issues as @EduardsBrown. My question is: do we have "clear" scenarios as to why this "happens"?

@Danthar
Member

Danthar commented Jul 26, 2017

As you can see in the logs that were posted, it starts with one node marking another node as unreachable. This means the first node has determined it has lost its connection to the other node.
This then gets communicated to the leader.
Once the entire cluster agrees that this particular node is unreachable, and auto-downing is on, the leader will eventually mark the node as Down, and disassociation starts to happen.

If you don't down the node, either manually or automatically, it will remain in the unreachable state until it becomes reachable again.

So when can this happen? Generally network issues are the cause, but those are usually intermittent.
Another cause is that the node simply crashed. Restarting the node on the same endpoint address will also resolve the problem.
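
For anyone hitting this on a current Akka.NET version: instead of auto-down-unreachable-after, the usual alternative is the built-in split brain resolver (the same block that shows up in the config dump later in this thread). A minimal sketch, assuming Akka.NET 1.4.16 or later — check the docs for the exact provider class on your version:

    akka.cluster {
      # Let the split brain resolver, rather than blind auto-downing, decide which
      # side of a partition gets downed.
      downing-provider-class = "Akka.Cluster.SBR.SplitBrainResolverProvider, Akka.Cluster"
      split-brain-resolver {
        # Keep the partition holding the majority of nodes; down the rest.
        active-strategy = keep-majority
        # How long a node must stay unreachable before the strategy acts.
        stable-after = 20s
      }
    }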

@jalchr

jalchr commented Jul 27, 2017

I'm seeing lots of messages like this:

2017-07-27 04:15:33,608 [4] WARN  MyClusterStatus - Unreachable Member; Role: lighthouse, Status: Up, Address: akka.tcp://ArchiveSystem@140.125.4.5:4053, 
2017-07-27 04:15:38,624 [73] WARN  MyClusterStatus - RoleLeader: lighthouse, No leader found!
2017-07-27 04:15:38,624 [73] WARN  MyClusterStatus - Unreachable Member; Role: lighthouse, Status: Up, Address: akka.tcp://ArchiveSystem@140.125.4.5:4053, 
2017-07-27 04:15:43,639 [65] WARN  MyClusterStatus - RoleLeader: lighthouse, No leader found!
2017-07-27 04:15:43,639 [65] WARN  MyClusterStatus - Unreachable Member; Role: lighthouse, Status: Up, Address: akka.tcp://ArchiveSystem@140.125.4.5:4053, 
2017-07-27 04:15:48,655 [91] WARN  MyClusterStatus - RoleLeader: lighthouse, No leader found!
2017-07-27 04:15:48,655 [91] WARN  MyClusterStatus - Unreachable Member; Role: lighthouse, Status: Up, Address: akka.tcp://ArchiveSystem@140.125.4.5:4053, 
2017-07-27 04:15:53,671 [91] WARN  MyClusterStatus - RoleLeader: lighthouse, No leader found!
2017-07-27 04:15:53,671 [91] WARN  MyClusterStatus - Unreachable Member; Role: lighthouse, Status: Up, Address: akka.tcp://ArchiveSystem@140.125.4.5:4053, 
2017-07-27 04:15:58,686 [73] WARN  MyClusterStatus - RoleLeader: lighthouse, No leader found!
2017-07-27 04:15:58,686 [73] WARN  MyClusterStatus - Unreachable Member; Role: lighthouse, Status: Up, Address: akka.tcp://ArchiveSystem@140.125.4.5:4053, 
2017-07-27 04:16:03,702 [127] WARN  MyClusterStatus - RoleLeader: lighthouse, No leader found!
2017-07-27 04:16:03,702 [127] WARN  MyClusterStatus - Unreachable Member; Role: lighthouse, Status: Up, Address: akka.tcp://ArchiveSystem@140.125.4.5:4053, 
2017-07-27 04:16:04,124 [73] INFO  MyClusterStatus - ReachableMember: Member(address = akka.tcp://ArchiveSystem@140.125.4.5:4053, Uid=277630045 status = Up, role=[lighthouse], upNumber=1), Role(s): lighthouse
2017-07-27 04:17:04,032 [127] INFO  MyClusterStatus - UnreachableMember: Member(address = akka.tcp://ArchiveSystem@140.125.4.5:4053, Uid=277630045 status = Up, role=[lighthouse], upNumber=1), Role(s): lighthouse
2017-07-27 04:17:08,907 [127] WARN  MyClusterStatus - RoleLeader: lighthouse, No leader found!
2017-07-27 04:17:08,907 [127] WARN  MyClusterStatus - Unreachable Member; Role: lighthouse, Status: Up, Address: akka.tcp://ArchiveSystem@140.125.4.5:4053, 
2017-07-27 04:17:13,922 [90] WARN  MyClusterStatus - RoleLeader: lighthouse, No leader found!
2017-07-27 04:17:13,922 [90] WARN  MyClusterStatus - Unreachable Member; Role: lighthouse, Status: Up, Address: akka.tcp://ArchiveSystem@140.125.4.5:4053, 
2017-07-27 04:17:18,938 [73] WARN  MyClusterStatus - RoleLeader: lighthouse, No leader found!
2017-07-27 04:17:18,938 [73] WARN  MyClusterStatus - Unreachable Member; Role: lighthouse, Status: Up, Address: akka.tcp://ArchiveSystem@140.125.4.5:4053, 
2017-07-27 04:17:19,126 [65] INFO  MyClusterStatus - ReachableMember: Member(address = akka.tcp://ArchiveSystem@140.125.4.5:4053, Uid=277630045 status = Up, role=[lighthouse], upNumber=1), Role(s): lighthouse

Is this normal? Or is there a problem in my network or lighthouse?

At the lighthouse, the event viewer shows these errors occurring ...

Akka.Remote.Transport.DotNetty.TcpServerHandler: Error caught channel [[::ffff:140.125.4.5]:4053->[::ffff:140.125.4.4]:52309](Id=59f29fbe) System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host
   at DotNetty.Transport.Channels.Sockets.SocketChannelAsyncOperation.Validate()
   at DotNetty.Transport.Channels.Sockets.AbstractSocketByteChannel.SocketByteChannelUnsafe.FinishRead(SocketChannelAsyncOperation operation)

@Danthar
Member

Danthar commented Jul 27, 2017

The problems are in your network; it seems like your connection is flaky, which is causing these logs.
Also make sure your Lighthouse instance is using the same Akka.NET version as your own nodes.

@ingted

ingted commented Jul 27, 2023

I bumped into this auto-downing issue recently when the PubSub plugin received a message of around 2 MB. (Akka 1.5.0)

[image attached]

@Aaronontheweb
Member

@ingted what does your cluster config look like?

@Aaronontheweb
Member

Because a single big message can't cause that all on its own.
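
For context, the transport settings that govern how large a single remoted message can be live in the dot-netty block. The values below are the ones from the config dump that follows (the Akka.NET defaults are far smaller, e.g. maximum-frame-size = 128000b); they are shown here only to make the large-message discussion concrete:

    akka.remote.dot-netty.tcp {
      # Raised well above the defaults so multi-MB messages (such as the ~2 MB
      # pub-sub payload mentioned above) can be framed and buffered at all.
      send-buffer-size = 100000000b
      receive-buffer-size = 100000000b
      maximum-frame-size = 100000000b
    }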

@ingted

ingted commented Jul 27, 2023

Hi @Aaronontheweb ,

My config is this one:


val it: Settings =
    akka : {
    version : "0.0.1 Akka"
    home : 
    loggers : ["Akka.Logger.NLog.NLogLogger, Akka.Logger.NLog"]
    loggers-dispatcher : akka.actor.default-dispatcher
    logger-startup-timeout : 5s
    logger-async-start : false
    logger-formatter : "Akka.Event.DefaultLogMessageFormatter, Akka"
    loglevel : DEBUG
    suppress-json-serializer-warning : on
    stdout-loglevel : INFO
    stdout-logger-class : Akka.Event.StandardOutLogger
    log-config-on-start : off
    log-serializer-override-on-start : on
    log-dead-letters : 10
    log-dead-letters-during-shutdown : off
    log-dead-letters-suspend-duration : "5 minutes"
    extensions : [Akka.Cluster.Tools.PublishSubscribe.DistributedPubSubExtensionProvider,Akka.Cluster.Tools]
    daemonic : off
    actor : {
      provider : "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
      guardian-supervisor-strategy : Akka.Actor.DefaultSupervisorStrategy
      creation-timeout : 20s
      reaper-interval : 5
      serialize-messages : off
      serialize-creators : off
      unstarted-push-timeout : 10s
      ask-timeout : infinite

      telemetry : {
        enabled : false
      }
      typed : {
        timeout : 5
      }
      inbox : {
        inbox-size : 10000000
        default-timeout : 5s
      }
      router : {
        type-mapping : {
          from-code : Akka.Routing.NoRouter
          round-robin-pool : Akka.Routing.RoundRobinPool
          round-robin-group : Akka.Routing.RoundRobinGroup
          random-pool : Akka.Routing.RandomPool
          random-group : Akka.Routing.RandomGroup
          smallest-mailbox-pool : Akka.Routing.SmallestMailboxPool
          broadcast-pool : Akka.Routing.BroadcastPool
          broadcast-group : Akka.Routing.BroadcastGroup
          scatter-gather-pool : Akka.Routing.ScatterGatherFirstCompletedPool
          scatter-gather-group : Akka.Routing.ScatterGatherFirstCompletedGroup
          consistent-hashing-pool : Akka.Routing.ConsistentHashingPool
          consistent-hashing-group : Akka.Routing.ConsistentHashingGroup
          tail-chopping-pool : Akka.Routing.TailChoppingPool
          tail-chopping-group : Akka.Routing.TailChoppingGroup
          cluster-metrics-adaptive-pool : "Akka.Cluster.Metrics.AdaptiveLoadBalancingPool, Akka.Cluster.Metrics"
          cluster-metrics-adaptive-group : "Akka.Cluster.Metrics.AdaptiveLoadBalancingGroup, Akka.Cluster.Metrics"
        }
      }
      deployment : {
        default : {
          dispatcher : 
          mailbox : 
          router : from-code
          nr-of-instances : 1
          within : "5 s"
          virtual-nodes-factor : 10
          routees : {
            paths : <<unknown value>>
          }
          resizer : {
            enabled : off
            lower-bound : 1
            upper-bound : 10
            pressure-threshold : 1
            rampup-rate : 0.2
            backoff-threshold : 0.3
            backoff-rate : 0.1
            messages-per-resize : 10
          }
          remote : 
          target : {
            nodes : <<unknown value>>
          }
          metrics-selector : mix
          cluster : {
            enabled : off
            max-nr-of-instances-per-node : 1
            max-total-nr-of-instances : 10000
            allow-local-routees : on
            use-role : 
          }
        }
      }
      synchronized-dispatcher : {
        type : SynchronizedDispatcher
        executor : current-context-executor
        throughput : 10
      }
      task-dispatcher : {
        type : TaskDispatcher
        executor : task-executor
        throughput : 30
      }
      default-fork-join-dispatcher : {
        type : ForkJoinDispatcher
        executor : fork-join-executor
        throughput : 30
        dedicated-thread-pool : {
          thread-count : 3
          threadtype : background
        }
      }
      default-dispatcher : {
        type : Dispatcher
        executor : default-executor
        default-executor : {
        }
        thread-pool-executor : {
        }
        fork-join-executor : {
          parallelism-min : 8
          parallelism-factor : 1.0
          parallelism-max : 64
          task-peeking-mode : FIFO
        }
        current-context-executor : {
        }
        shutdown-timeout : 1s
        throughput : 30
        throughput-deadline-time : 0ms
        attempt-teamwork : on
        mailbox-requirement : 
      }
      internal-dispatcher : {
        type : Dispatcher
        executor : fork-join-executor
        throughput : 5
        fork-join-executor : {
          parallelism-min : 4
          parallelism-factor : 1.0
          parallelism-max : 64
        }
        channel-executor : {
          priority : high
        }
      }
      default-blocking-io-dispatcher : {
        type : Dispatcher
        executor : thread-pool-executor
        throughput : 1
      }
      default-mailbox : {
        mailbox-type : Akka.Dispatch.UnboundedMailbox
        mailbox-capacity : 1000
        mailbox-push-timeout-time : 10s
        stash-capacity : -1
      }
      mailbox : {
        requirements : {
          Akka.Dispatch.IUnboundedMessageQueueSemantics : akka.actor.mailbox.unbounded-queue-based
          Akka.Dispatch.IBoundedMessageQueueSemantics : akka.actor.mailbox.bounded-queue-based
          Akka.Dispatch.IDequeBasedMessageQueueSemantics : akka.actor.mailbox.unbounded-deque-based
          Akka.Dispatch.IUnboundedDequeBasedMessageQueueSemantics : akka.actor.mailbox.unbounded-deque-based
          Akka.Dispatch.IBoundedDequeBasedMessageQueueSemantics : akka.actor.mailbox.bounded-deque-based
          Akka.Dispatch.IMultipleConsumerSemantics : akka.actor.mailbox.unbounded-queue-based
          Akka.Event.ILoggerMessageQueueSemantics : akka.actor.mailbox.logger-queue
        }
        unbounded-queue-based : {
          mailbox-type : Akka.Dispatch.UnboundedMailbox
        }
        bounded-queue-based : {
          mailbox-type : Akka.Dispatch.BoundedMailbox
        }
        unbounded-deque-based : {
          mailbox-type : Akka.Dispatch.UnboundedDequeBasedMailbox
        }
        bounded-deque-based : {
          mailbox-type : Akka.Dispatch.BoundedDequeBasedMailbox
        }
        logger-queue : {
          mailbox-type : Akka.Event.LoggerMailboxType
        }
      }
      debug : {
        receive : off
        autoreceive : off
        lifecycle : off
        fsm : off
        event-stream : off
        unhandled : off
        router-misconfiguration : off
      }
      serializers : {
        json : "Akka.Serialization.NewtonSoftJsonSerializer, Akka"
        bytes : "Akka.Serialization.ByteArraySerializer, Akka"
        akka-containers : "Akka.Remote.Serialization.MessageContainerSerializer, Akka.Remote"
        akka-misc : "Akka.Remote.Serialization.MiscMessageSerializer, Akka.Remote"
        primitive : "Akka.Remote.Serialization.PrimitiveSerializers, Akka.Remote"
        proto : "Akka.Remote.Serialization.ProtobufSerializer, Akka.Remote"
        daemon-create : "Akka.Remote.Serialization.DaemonMsgCreateSerializer, Akka.Remote"
        akka-system-msg : "Akka.Remote.Serialization.SystemMessageSerializer, Akka.Remote"
        akka-cluster : "Akka.Cluster.Serialization.ClusterMessageSerializer, Akka.Cluster"
        akka-pubsub : "Akka.Cluster.Tools.PublishSubscribe.Serialization.DistributedPubSubMessageSerializer, Akka.Cluster.Tools"
        hyperion : "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
        akka-data-replication : "Akka.DistributedData.Serialization.ReplicatorMessageSerializer, Akka.DistributedData"
        akka-replicated-data : "Akka.DistributedData.Serialization.ReplicatedDataSerializer, Akka.DistributedData"
        akka-singleton : "Akka.Cluster.Tools.Singleton.Serialization.ClusterSingletonMessageSerializer, Akka.Cluster.Tools"
        akka-sharding : "Akka.Cluster.Sharding.Serialization.ClusterShardingMessageSerializer, Akka.Cluster.Sharding"
        akka-cluster-client : "Akka.Cluster.Tools.Client.Serialization.ClusterClientMessageSerializer, Akka.Cluster.Tools"
        FSharpExpr : "FAkka.Shared.FActorSystemConfig+ExprSerializer, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
      }
      serialization-bindings : {
        System.Byte[] : bytes
        System.Object : hyperion

        "Akka.Actor.ActorSelectionMessage, Akka" : akka-containers
        "Akka.Remote.DaemonMsgCreate, Akka.Remote" : daemon-create
        "Google.Protobuf.IMessage, Google.Protobuf" : proto
        "Akka.Actor.Identify, Akka" : akka-misc
        "Akka.Actor.ActorIdentity, Akka" : akka-misc
        "Akka.Actor.IActorRef, Akka" : akka-misc
        "Akka.Actor.PoisonPill, Akka" : akka-misc
        "Akka.Actor.Kill, Akka" : akka-misc
        "Akka.Actor.Status+Failure, Akka" : akka-misc
        "Akka.Actor.Status+Success, Akka" : akka-misc
        "Akka.Actor.RemoteScope, Akka" : akka-misc
        "Akka.Routing.FromConfig, Akka" : akka-misc
        "Akka.Routing.DefaultResizer, Akka" : akka-misc
        "Akka.Routing.RoundRobinPool, Akka" : akka-misc
        "Akka.Routing.BroadcastPool, Akka" : akka-misc
        "Akka.Routing.RandomPool, Akka" : akka-misc
        "Akka.Routing.ScatterGatherFirstCompletedPool, Akka" : akka-misc
        "Akka.Routing.TailChoppingPool, Akka" : akka-misc
        "Akka.Routing.ConsistentHashingPool, Akka" : akka-misc
        "Akka.Configuration.Config, Akka" : akka-misc
        "Akka.Remote.RemoteWatcher+Heartbeat, Akka.Remote" : akka-misc
        "Akka.Remote.RemoteWatcher+HeartbeatRsp, Akka.Remote" : akka-misc
        "Akka.Remote.Routing.RemoteRouterConfig, Akka.Remote" : akka-misc
        "Akka.Dispatch.SysMsg.SystemMessage, Akka" : akka-system-msg
        System.String : primitive
        System.Int32 : primitive
        System.Int64 : primitive
        "Akka.Cluster.IClusterMessage, Akka.Cluster" : akka-cluster
        "Akka.Cluster.Routing.ClusterRouterPool, Akka.Cluster" : akka-cluster
        "Akka.Cluster.Tools.PublishSubscribe.IDistributedPubSubMessage, Akka.Cluster.Tools" : akka-pubsub
        "Akka.Cluster.Tools.PublishSubscribe.Internal.SendToOneSubscriber, Akka.Cluster.Tools" : akka-pubsub
        "Akka.DistributedData.IReplicatorMessage, Akka.DistributedData" : akka-data-replication
        "Akka.DistributedData.IReplicatedDataSerialization, Akka.DistributedData" : akka-replicated-data
        "Akka.Cluster.Tools.Singleton.IClusterSingletonMessage, Akka.Cluster.Tools" : akka-singleton
        "Akka.Cluster.Sharding.IClusterShardingSerializable, Akka.Cluster.Sharding" : akka-sharding
        "Akka.Cluster.Tools.Client.IClusterClientMessage, Akka.Cluster.Tools" : akka-cluster-client
        "Microsoft.FSharp.Quotations.FSharpExpr, FSharp.Core" : FSharpExpr
        "MdcSession+AkkaMdcEvent+HistoryResponse, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" : FSharpExpr
        "MdcSession+AkkaMdcEvent2+HistoryResponse, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" : FSharpExpr
        "Akkling.Cluster.Sharding.ShardEnvelope, Akkling.Cluster.Sharding, Version=0.12.0.0, Culture=neutral, PublicKeyToken=null" : FSharpExpr
        "MdcCSApi.Mdcs_Tick, MdcCSApi70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" : FSharpExpr
        "System.Tuple`2[[FAkka.CQRS.EventWrapper`1[[MdcSession+AkkaMdcEvent, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], FAkka.Shared, Version=1.0.5.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" : FSharpExpr
        "System.Tuple`2[[FAkka.CQRS.EventWrapper`1[[MdcSession+AkkaMdcEvent2, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], FAkka.Shared, Version=1.0.5.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" : FSharpExpr
        "System.Tuple`2[[FAkka.CQRS.EventWrapper`1[[MdcSession+AkkaMdcEvent3, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], FAkka.Shared, Version=1.0.5.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" : FSharpExpr
        "System.Tuple`2[[Akka.Persistence.AtLeastOnceDeliverySnapshot, Akka.Persistence, Version=1.5.0.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[MdcSession+AkkaMdcEvent2, MdcFsClient70, Version=1.2.6.0, Culture=neutral, PublicKeyToken=null]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]]" : hyperion
        "System.Tuple`2[[MdcSession+AkkaMdcEvent2, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]]" : FSharpExpr
        "Akka.Persistence.Journal.Tagged, Akka.Persistence, Version=1.5.0.0, Culture=neutral, PublicKeyToken=null" : FSharpExpr
      }
      serialization-identifiers : {
        "Akka.Serialization.ByteArraySerializer, Akka" : 4
        "Akka.Serialization.NewtonSoftJsonSerializer, Akka" : 1
        "Akka.Remote.Serialization.ProtobufSerializer, Akka.Remote" : 2
        "Akka.Remote.Serialization.DaemonMsgCreateSerializer, Akka.Remote" : 3
        "Akka.Remote.Serialization.MessageContainerSerializer, Akka.Remote" : 6
        "Akka.Remote.Serialization.MiscMessageSerializer, Akka.Remote" : 16
        "Akka.Remote.Serialization.PrimitiveSerializers, Akka.Remote" : 17
        "Akka.Remote.Serialization.SystemMessageSerializer, Akka.Remote" : 22
        "Akka.Cluster.Serialization.ClusterMessageSerializer, Akka.Cluster" : 5
        "Akka.Cluster.Tools.PublishSubscribe.Serialization.DistributedPubSubMessageSerializer, Akka.Cluster.Tools" : 9
        "Akka.DistributedData.Serialization.ReplicatedDataSerializer, Akka.DistributedData" : 11
        "Akka.DistributedData.Serialization.ReplicatorMessageSerializer, Akka.DistributedData" : 12
        "Akka.Cluster.Tools.Singleton.Serialization.ClusterSingletonMessageSerializer, Akka.Cluster.Tools" : 14
        "Akka.Cluster.Sharding.Serialization.ClusterShardingMessageSerializer, Akka.Cluster.Sharding" : 13
        "Akka.Cluster.Tools.Client.Serialization.ClusterClientMessageSerializer, Akka.Cluster.Tools" : 15
        "FAkka.Shared.FActorSystemConfig+ExprSerializer, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null" : 99
      }
      serialization-settings : {
        json : {
          use-pooled-string-builder : true
          pooled-string-builder-minsize : 2048
          pooled-string-builder-maxsize : 32768
        }
        primitive : {
          use-legacy-behavior : on
        }
      }
    }
    channel-scheduler : {
      parallelism-min : 4
      parallelism-factor : 1
      parallelism-max : 64
      work-max : 10
      work-interval : 500
      work-step : 2
    }
    scheduler : {
      tick-duration : 10ms
      ticks-per-wheel : 512
      implementation : Akka.Actor.HashedWheelTimerScheduler
      shutdown-timeout : 5s
    }
    io : {
      pinned-dispatcher : {
        type : PinnedDispatcher
        executor : fork-join-executor
      }
      tcp : {
        direct-buffer-pool : {
          class : "Akka.IO.Buffers.DirectBufferPool, Akka"
          buffer-size : 512
          buffers-per-segment : 500
          initial-segments : 1
          buffer-pool-limit : 1024
        }
        disabled-buffer-pool : {
          class : "Akka.IO.Buffers.DisabledBufferPool, Akka"
          buffer-size : 512
        }
        buffer-pool : akka.io.tcp.disabled-buffer-pool
        max-channels : 256000
        selector-association-retries : 10
        batch-accept-limit : 10
        register-timeout : 5s
        max-received-message-size : unlimited
        trace-logging : off
        selector-dispatcher : akka.io.pinned-dispatcher
        worker-dispatcher : akka.actor.internal-dispatcher
        management-dispatcher : akka.actor.internal-dispatcher
        file-io-dispatcher : akka.actor.default-blocking-io-dispatcher
        file-io-transferTo-limit : 524288
        finish-connect-retries : 5
        windows-connection-abort-workaround-enabled : off
        outgoing-socket-force-ipv4 : false
      }
      udp : {
        direct-buffer-pool : {
          class : "Akka.IO.Buffers.DirectBufferPool, Akka"
          buffer-size : 512
          buffers-per-segment : 500
          initial-segments : 1
          buffer-pool-limit : 1024
        }
        disabled-buffer-pool : {
          class : "Akka.IO.Buffers.DisabledBufferPool, Akka"
          buffer-size : 512
        }
        buffer-pool : akka.io.udp.disabled-buffer-pool
        nr-of-socket-async-event-args : 32
        max-channels : 4096
        select-timeout : infinite
        selector-association-retries : 10
        receive-throughput : 3
        received-message-size-limit : unlimited
        trace-logging : off
        selector-dispatcher : akka.io.pinned-dispatcher
        worker-dispatcher : akka.actor.internal-dispatcher
        management-dispatcher : akka.actor.internal-dispatcher
      }
      udp-connected : {
        direct-buffer-pool : {
          class : "Akka.IO.Buffers.DirectBufferPool, Akka"
          buffer-size : 512
          buffers-per-segment : 500
          initial-segments : 1
          buffer-pool-limit : 1024
        }
        disabled-buffer-pool : {
          class : "Akka.IO.Buffers.DisabledBufferPool, Akka"
          buffer-size : 512
        }
        buffer-pool : akka.io.udp-connected.disabled-buffer-pool
        nr-of-socket-async-event-args : 32
        max-channels : 4096
        select-timeout : infinite
        selector-association-retries : 10
        receive-throughput : 3
        received-message-size-limit : unlimited
        trace-logging : off
        selector-dispatcher : akka.io.pinned-dispatcher
        worker-dispatcher : akka.actor.internal-dispatcher
        management-dispatcher : akka.actor.internal-dispatcher
      }
      dns : {
        dispatcher : akka.actor.internal-dispatcher
        resolver : inet-address
        inet-address : {
          provider-object : Akka.IO.InetAddressDnsProvider
          positive-ttl : 30s
          negative-ttl : 10s
          cache-cleanup-interval : 120s
          use-ipv6 : true
        }
      }
    }
    coordinated-shutdown : {
      default-phase-timeout : "5 s"
      terminate-actor-system : on
      exit-clr : off
      run-by-clr-shutdown-hook : on
      run-by-actor-system-terminate : on
      phases : {
        before-service-unbind : {
        }
        service-unbind : {
          depends-on : [before-service-unbind]
        }
        service-requests-done : {
          depends-on : [service-unbind]
        }
        service-stop : {
          depends-on : [service-requests-done]
        }
        before-cluster-shutdown : {
          depends-on : [service-stop]
        }
        cluster-sharding-shutdown-region : {
          timeout : "10 s"
          depends-on : [before-cluster-shutdown]
        }
        cluster-leave : {
          depends-on : [cluster-sharding-shutdown-region]
        }
        cluster-exiting : {
          timeout : "10 s"
          depends-on : [cluster-leave]
        }
        cluster-exiting-done : {
          depends-on : [cluster-exiting]
        }
        cluster-shutdown : {
          depends-on : [cluster-exiting-done]
        }
        before-actor-system-terminate : {
          depends-on : [cluster-shutdown]
        }
        actor-system-terminate : {
          timeout : "10 s"
          depends-on : [before-actor-system-terminate]
        }
      }
    }
    remote : {
      startup-timeout : "10 s"
      shutdown-timeout : "10 s"
      flush-wait-on-shutdown : "2 s"
      use-passive-connections : on
      backoff-interval : "0.05 s"
      command-ack-timeout : "30 s"
      handshake-timeout : "15 s"
      use-dispatcher : akka.remote.default-remote-dispatcher
      untrusted-mode : off
      trusted-selection-paths : <<unknown value>>
      require-cookie : off
      secure-cookie : 
      log-received-messages : off
      log-sent-messages : off
      log-remote-lifecycle-events : on
      log-frame-size-exceeding : off
      log-buffer-size-exceeding : 50000
      transport-failure-detector : {
        implementation-class : Akka.Remote.DeadlineFailureDetector,Akka.Remote
        heartbeat-interval : "4 s"
        acceptable-heartbeat-pause : "120 s"
      }
      watch-failure-detector : {
        implementation-class : Akka.Remote.PhiAccrualFailureDetector,Akka.Remote
        heartbeat-interval : "1 s"
        threshold : 10.0
        max-sample-size : 200
        min-std-deviation : "100 ms"
        acceptable-heartbeat-pause : "10 s"
        unreachable-nodes-reaper-interval : 1s
        expected-response-after : "1 s"
      }
      retry-gate-closed-for : "5 s"
      prune-quarantine-marker-after : "5 d"
      quarantine-after-silence : "2 d"
      system-message-buffer-size : 20000
      system-message-ack-piggyback-timeout : "0.3 s"
      resend-interval : "2 s"
      resend-limit : 200
      initial-system-message-delivery-timeout : "3 m"
      enabled-transports : [akka.remote.dot-netty.tcp]
      adapters : {
        gremlin : Akka.Remote.Transport.FailureInjectorProvider,Akka.Remote
        trttl : Akka.Remote.Transport.ThrottlerProvider,Akka.Remote
      }
      helios : {
        tcp : {
          transport-class : "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote.Transport.Helios"
        }
      }
      dot-netty : {
        tcp : {
          transport-class : Akka.Remote.Transport.DotNetty.TcpTransport,Akka.Remote
          applied-adapters : <<unknown value>>
          transport-protocol : tcp
          byte-order : little-endian
          port : 53316
          public-port : 53316
          hostname : 10.38.112.143
          public-hostname : 10.38.112.143
          dns-use-ipv6 : false
          enforce-ip-family : false
          enable-ssl : false
          enable-backwards-compatibility : false
          connection-timeout : "15 s"
          batching : {
            enabled : true
            max-pending-writes : 30
          }
          use-dispatcher-for-io : 
          write-buffer-high-water-mark : 0b
          write-buffer-low-water-mark : 0b
          send-buffer-size : 100000000b
          receive-buffer-size : 100000000b
          maximum-frame-size : 100000000b
          backlog : 4096
          tcp-nodelay : on
          tcp-keepalive : on
          tcp-reuse-addr : off-for-windows
          server-socket-worker-pool : {
            pool-size-min : 2
            pool-size-factor : 1.0
            pool-size-max : 2
          }
          client-socket-worker-pool : {
            pool-size-min : 2
            pool-size-factor : 1.0
            pool-size-max : 2
          }
          message-frame-size : 100000000b
        }
        udp : {
          transport-protocol : udp
        }
        ssl : {
          certificate : {
            path : 
            password : 
            use-thumbprint-over-file : false
            thumbprint : 
            store-name : 
            store-location : current-user
          }
          suppress-validation : false
        }
      }
      gremlin : {
        debug : off
      }
      default-remote-dispatcher : {
        executor : fork-join-executor
        fork-join-executor : {
          parallelism-min : 2
          parallelism-factor : 0.5
          parallelism-max : 16
        }
        channel-executor : {
          priority : high
        }
      }
      backoff-remote-dispatcher : {
        executor : fork-join-executor
        fork-join-executor : {
          parallelism-min : 2
          parallelism-max : 2
        }
        channel-executor : {
          priority : low
        }
      }
      maximum-payload-bytes : 1000000000b
    }
    cluster : {
      seed-nodes : <<unknown value>>
      seed-node-timeout : 5s
      retry-unsuccessful-join-after : 10s
      shutdown-after-unsuccessful-join-seed-nodes : off
      auto-down-unreachable-after : off
      down-removal-margin : off
      downing-provider-class : 
      allow-weakly-up-members : 7s
      roles : [petabridge.cmd,10.38.112.143,ShardReadServiceNode,ShardNode]
      app-version : assembly-version
      run-coordinated-shutdown-when-down : on
      role : {
      }
      min-nr-of-members : 1
      log-info : on
      log-info-verbose : off
      periodic-tasks-initial-delay : 1s
      gossip-interval : 1s
      gossip-time-to-live : 2s
      leader-actions-interval : 1s
      unreachable-nodes-reaper-interval : 1s
      publish-stats-interval : off
      use-dispatcher : 
      gossip-different-view-probability : 0.8
      reduce-gossip-different-view-probability : 400
      use-legacy-heartbeat-message : false
      failure-detector : {
        implementation-class : "Akka.Remote.PhiAccrualFailureDetector, Akka.Remote"
        heartbeat-interval : "1 s"
        threshold : 8.0
        max-sample-size : 1000
        min-std-deviation : "100 ms"
        acceptable-heartbeat-pause : "3 s"

        monitored-by-nr-of-members : 9
        expected-response-after : "1 s"
      }
      scheduler : {
        tick-duration : 33ms
        ticks-per-wheel : 512
      }
      debug : {
        verbose-heartbeat-logging : off
        verbose-receive-gossip-logging : off
      }
      split-brain-resolver : {
        active-strategy : keep-majority
        stable-after : 20s
        down-all-when-unstable : on
        static-quorum : {
          quorum-size : undefined
          role : 
        }
        keep-majority : {
          role : 
        }
        keep-oldest : {
          down-if-alone : on
          role : 
        }
        lease-majority : {
          lease-implementation : 
          lease-name : 
          acquire-lease-delay-for-minority : 2s
          release-after : 40s
          role : 
        }
        keep-referee : {
          address : 
          down-all-if-less-than-nodes : 1
        }
      }
      pub-sub : {
        name : distributedPubSubMediator
        role : ShardNode
        routing-logic : random
        gossip-interval : 1s
        removed-time-to-live : 120s
        max-delta-elements : 3000
        send-to-dead-letters-when-no-subscribers : false
        use-dispatcher : 
      }
      distributed-data : {
        name : ddataReplicator
        role : dd
        gossip-interval : "2 s"
        notify-subscribers-interval : "500 ms"
        max-delta-elements : 500
        use-dispatcher : 
        pruning-interval : "120 s"
        max-pruning-dissemination : "300 s"
        pruning-marker-time-to-live : "6 h"
        serializer-cache-time-to-live : 10s
        recreate-on-failure : off
        prefer-oldest : off
        verbose-debug-logging : off
        delta-crdt : {
          enabled : on
          max-delta-size : 50
        }
        durable : {
          keys : <<unknown value>>
          pruning-marker-time-to-live : "10 d"
          store-actor-class : "Akka.DistributedData.LightningDB.LmdbDurableStore, Akka.DistributedData.LightningDB"
          use-dispatcher : akka.cluster.distributed-data.durable.pinned-store
          pinned-store : {
            executor : thread-pool-executor
            type : PinnedDispatcher
          }
          lmdb : {
            dir : ddata
            map-size : "100 MiB"
            write-behind-interval : off
          }
        }
      }
      singleton : {
        singleton-name : singleton
        role : 
        hand-over-retry-interval : 1s
        min-number-of-hand-over-retries : 15
        use-lease : 
        lease-retry-interval : 5s
        consider-app-version : false
      }
      singleton-proxy : {
        singleton-name : singleton
        role : 
        singleton-identification-interval : 1s
        buffer-size : 1000
      }
      sharding : {
        guardian-name : sharding
        role : ShardNode
        remember-entities : off
        remember-entities-store : ddata
        passivate-idle-entity-after : 120s
        coordinator-failure-backoff : "5 s"
        retry-interval : 2s
        buffer-size : 1000000
        handoff-timeout : 60s
        shard-start-timeout : 10s
        shard-failure-backoff : 10s
        entity-restart-backoff : 10s
        rebalance-interval : 10s
        journal-plugin-id : akka.persistence.journal.sharding
        snapshot-plugin-id : akka.persistence.snapshot-store.sharding
        state-store-mode : persistence
        snapshot-after : 1000
        keep-nr-of-batches : 2
        least-shard-allocation-strategy : {
          rebalance-absolute-limit : 0
          rebalance-relative-limit : 0.1
          rebalance-threshold : 1
          max-simultaneous-rebalance : 3
        }
        waiting-for-state-timeout : "2 s"
        updating-state-timeout : "5 s"
        shard-region-query-timeout : "3 s"
        entity-recovery-strategy : all
        entity-recovery-constant-rate-strategy : {
          frequency : "100 ms"
          number-of-entities : 5
        }
        event-sourced-remember-entities-store : {
          max-updates-per-write : 100
        }
        coordinator-singleton : akka.cluster.singleton
        coordinator-state : {
          write-majority-plus : 3
          read-majority-plus : 5
        }
        distributed-data : {
          majority-min-cap : 5
          durable : {
            keys : [shard-*]
          }
          max-delta-elements : 5
        }
        use-dispatcher : 
        use-lease : 
        lease-retry-interval : 5s
        verbose-debug-logging : off
        fail-on-invalid-entity-state-transition : off
      }
      sharded-daemon-process : {
        sharding : {
          guardian-name : sharding
          role : 
          remember-entities : off
          remember-entities-store : ddata
          passivate-idle-entity-after : 120s
          coordinator-failure-backoff : "5 s"
          retry-interval : 2s
          buffer-size : 100000
          handoff-timeout : 60s
          shard-start-timeout : 10s
          shard-failure-backoff : 10s
          entity-restart-backoff : 10s
          rebalance-interval : 10s
          journal-plugin-id : 
          snapshot-plugin-id : 
          state-store-mode : persistence
          snapshot-after : 1000
          keep-nr-of-batches : 2
          least-shard-allocation-strategy : {
            rebalance-absolute-limit : 0
            rebalance-relative-limit : 0.1
            rebalance-threshold : 1
            max-simultaneous-rebalance : 3
          }
          waiting-for-state-timeout : "2 s"
          updating-state-timeout : "5 s"
          shard-region-query-timeout : "3 s"
          entity-recovery-strategy : all
          entity-recovery-constant-rate-strategy : {
            frequency : "100 ms"
            number-of-entities : 5
          }
          event-sourced-remember-entities-store : {
            max-updates-per-write : 100
          }
          coordinator-singleton : akka.cluster.singleton
          coordinator-state : {
            write-majority-plus : 3
            read-majority-plus : 5
          }
          distributed-data : {
            majority-min-cap : 5
            durable : {
              keys : [shard-*]
            }
            max-delta-elements : 5
          }
          use-dispatcher : 
          use-lease : 
          lease-retry-interval : 5s
          verbose-debug-logging : off
          fail-on-invalid-entity-state-transition : off
        }
        keep-alive-interval : 10s
      }
    }
    test : {
      timefactor : 1
    }
    debug : {
      receive : on
      autoreceive : on
      lifecycle : on
      event-stream : on
      unhandled : on
    }
    persistence : {
      view : {
        auto-update-interval : 100
      }
      journal-plugin-fallback : {
        recovery-event-timeout : 300s
        circuit-breaker : {
          max-failures : 10
          call-timeout : 150s
          reset-timeout : 150s
        }
      }
      snapshot-store-plugin-fallback : {
        recovery-event-timeout : 300s
        circuit-breaker : {
          max-failures : 10
          call-timeout : 150s
          reset-timeout : 150s
        }
      }
      query : {
        journal : {
          sql : {
            class : "Akka.Persistence.Query.Sql.SqlReadJournalProvider, Akka.Persistence.Query.Sql"
            write-plugin : 
            refresh-interval : 100ms
            max-buffer-size : 10000
            max-concurrent-queries : 100
          }
        }
      }
      journal : {
        plugin : akka.persistence.journal.sql-server
        sql-server : {
          class : "Akka.Persistence.SqlServer.Journal.BatchingSqlServerJournal, Akka.Persistence.SqlServer"
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          connection-timeout : 120s
          schema-name : dbo
          table-name : journal
          metadata-table-name : metadata
          auto-initialize : off
          serializer : FSharpExpr
          max-batch-size : 10000
          max-concurrent-operations : 1000
          replay-filter : {
            mode : repair-by-discard-old
          }
          event-adapters : {
            f-adapter : "FAkka.Shared.Type.AkkaPersistenceType+FEventAdapter, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
            event-tagger : "FAkka.CQRS.EventTagger`1[[MdcSession+AkkaMdcEvent3, MdcFsClient70, Version=1.2.18.0, Culture=neutral, PublicKeyToken=null]], FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
          }
          event-adapter-bindings : {
            "Microsoft.FSharp.Quotations.FSharpExpr, FSharp.Core" : [f-adapter]
            "System.Tuple`2[[MdcSession+AkkaMdcEvent3, MdcFsClient70, Version=1.2.18.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" : event-tagger
          }
        }
        postgresql : {
          class : "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          schema-name : public
          table-name : journal
          auto-initialize : off
          timestamp-provider : "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
          metadata-table-name : metadata
          stored-as : BYTEA
          sequential-access : off
          use-bigint-identity-for-ordering-column : on
          replay-filter : {
            mode : repair-by-discard-old
          }
          tags-column-size : 2000
          serializer : FSharpExpr
          event-adapters : {
            f-adapter : "FAkka.Shared.Type.AkkaPersistenceType+FEventAdapter, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
            event-tagger : "FAkka.CQRS.EventTagger`1[[MdcSession+AkkaMdcEvent3, MdcFsClient70, Version=1.2.18.0, Culture=neutral, PublicKeyToken=null]], FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
          }
          event-adapter-bindings : {
            "Microsoft.FSharp.Quotations.FSharpExpr, FSharp.Core" : [f-adapter]
            "System.Tuple`2[[MdcSession+AkkaMdcEvent3, MdcFsClient70, Version=1.2.18.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" : event-tagger
          }
        }
        sharding : {
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          auto-initialize : on
          plugin-dispatcher : akka.actor.default-dispatcher
          class : "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
          connection-timeout : 120s
          schema-name : dbo
          table-name : shardingjournal
          serializer : FSharpExpr
          timestamp-provider : "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
          metadata-table-name : metadata
          max-batch-size : 10000
          max-concurrent-operations : 1000
        }
      }
      snapshot-store : {
        plugin : akka.persistence.snapshot-store.sql-server
        sql-server : {
          class : "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer"
          serializer : hyperion
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          connection-timeout : 120s
          schema-name : dbo
          table-name : snapshot
          auto-initialize : on
          replay-filter : {
            mode : repair-by-discard-old
          }
        }
        postgresql : {
          class : "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          connection-timeout : 30s
          schema-name : public
          table-name : snapshot
          auto-initialize : on
          timestamp-provider : "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
          metadata-table-name : metadata
          stored-as : BYTEA
          sequential-access : off
          use-bigint-identity-for-ordering-column : on
          replay-filter : {
            mode : repair-by-discard-old
          }
          tags-column-size : 2000
          serializer : hyperion
        }
        sharding : {
          class : "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer"
          plugin-dispatcher : akka.actor.default-dispatcher
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          connection-timeout : 120s
          serializer : hyperion
          schema-name : dbo
          table-name : shardingsnapshotstore
          auto-initialize : on
        }
      }
    }
  }
  fquery-priority-mailbox : {
    mailbox-type : "FAkka.Shared.Data.FQueryPriorityMailbox, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
  }
  generic-priority-mailbox : {
    mailbox-type : "FAkka.Shared.FActorSystemConfig+GenericPriorityMailbox, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
  }
  petabridge : {
    cmd : {
      host : 0.0.0.0
      port : 53316
      log-palettes-on-startup : on
    }
  }

    {AddLoggingReceive = false;
     AskTimeout = -00:00:00.0010000;
     Config =   fquery-priority-mailbox : {
    mailbox-type : "FAkka.Shared.Data.FQueryPriorityMailbox, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
  }
  generic-priority-mailbox : {
    mailbox-type : "FAkka.Shared.FActorSystemConfig+GenericPriorityMailbox, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
  }
  petabridge : {
    cmd : {
      host : 0.0.0.0
      port : 53316
      log-palettes-on-startup : on
    }
  }
  akka : {
    test : {
      timefactor : 1
    }
    stdout-loglevel : INFO
    loglevel : DEBUG
    loggers : ["Akka.Logger.NLog.NLogLogger, Akka.Logger.NLog"]
    debug : {
      receive : on
      autoreceive : on
      lifecycle : on
      event-stream : on
      unhandled : on
    }
    actor : {
      provider : "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
      serializers : {
        hyperion : "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
        akka-cluster-client : "Akka.Cluster.Tools.Client.Serialization.ClusterClientMessageSerializer, Akka.Cluster.Tools"
        FSharpExpr : "FAkka.Shared.FActorSystemConfig+ExprSerializer, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
      }
      serialization-bindings : {
        "Akka.Cluster.Tools.Client.IClusterClientMessage, Akka.Cluster.Tools" : akka-cluster-client
        "Microsoft.FSharp.Quotations.FSharpExpr, FSharp.Core" : FSharpExpr
        "MdcSession+AkkaMdcEvent+HistoryResponse, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" : FSharpExpr
        "MdcSession+AkkaMdcEvent2+HistoryResponse, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" : FSharpExpr
        "Akkling.Cluster.Sharding.ShardEnvelope, Akkling.Cluster.Sharding, Version=0.12.0.0, Culture=neutral, PublicKeyToken=null" : FSharpExpr
        "MdcCSApi.Mdcs_Tick, MdcCSApi70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" : FSharpExpr
        "System.Tuple`2[[FAkka.CQRS.EventWrapper`1[[MdcSession+AkkaMdcEvent, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], FAkka.Shared, Version=1.0.5.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" : FSharpExpr
        "System.Tuple`2[[FAkka.CQRS.EventWrapper`1[[MdcSession+AkkaMdcEvent2, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], FAkka.Shared, Version=1.0.5.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" : FSharpExpr
        "System.Tuple`2[[FAkka.CQRS.EventWrapper`1[[MdcSession+AkkaMdcEvent3, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], FAkka.Shared, Version=1.0.5.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" : FSharpExpr
        "System.Tuple`2[[Akka.Persistence.AtLeastOnceDeliverySnapshot, Akka.Persistence, Version=1.5.0.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[MdcSession+AkkaMdcEvent2, MdcFsClient70, Version=1.2.6.0, Culture=neutral, PublicKeyToken=null]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]]" : hyperion
        "System.Tuple`2[[MdcSession+AkkaMdcEvent2, MdcFsClient70, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]]" : FSharpExpr
        "Akka.Persistence.Journal.Tagged, Akka.Persistence, Version=1.5.0.0, Culture=neutral, PublicKeyToken=null" : FSharpExpr
      }
      serialization-identifiers : {
        "Akka.Cluster.Tools.Client.Serialization.ClusterClientMessageSerializer, Akka.Cluster.Tools" : 15
        "FAkka.Shared.FActorSystemConfig+ExprSerializer, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null" : 99
      }
      inbox : {
        inbox-size : 10000000
      }
    }
    remote : {
      maximum-payload-bytes : 1000000000b
      dot-netty : {
        tcp : {
          public-hostname : 10.38.112.143
          hostname : 10.38.112.143
          public-port : 53316
          port : 53316
          message-frame-size : 100000000b
          send-buffer-size : 100000000b
          receive-buffer-size : 100000000b
          maximum-frame-size : 100000000b
        }
      }
    }
    cluster : {
      distributed-data : {
        role : dd
      }
      auto-down-unreachable-after : off
      pub-sub : {
        send-to-dead-letters-when-no-subscribers : false
        role : ShardNode
      }
      roles : [petabridge.cmd,10.38.112.143,ShardReadServiceNode,ShardNode]
      sharding : {
        buffer-size : 1000000
        role : ShardNode
        journal-plugin-id : akka.persistence.journal.sharding
        snapshot-plugin-id : akka.persistence.snapshot-store.sharding
      }
    }
    persistence : {
      view : {
        auto-update-interval : 100
      }
      journal-plugin-fallback : {
        recovery-event-timeout : 300s
        circuit-breaker : {
          max-failures : 10
          call-timeout : 150s
          reset-timeout : 150s
        }
      }
      snapshot-store-plugin-fallback : {
        recovery-event-timeout : 300s
        circuit-breaker : {
          max-failures : 10
          call-timeout : 150s
          reset-timeout : 150s
        }
      }
      query : {
        journal : {
          sql : {
            class : "Akka.Persistence.Query.Sql.SqlReadJournalProvider, Akka.Persistence.Query.Sql"
            write-plugin : 
            refresh-interval : 100ms
            max-buffer-size : 10000
            max-concurrent-queries : 100
          }
        }
      }
      journal : {
        plugin : akka.persistence.journal.sql-server
        sql-server : {
          class : "Akka.Persistence.SqlServer.Journal.BatchingSqlServerJournal, Akka.Persistence.SqlServer"
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          connection-timeout : 120s
          schema-name : dbo
          table-name : journal
          metadata-table-name : metadata
          auto-initialize : off
          serializer : FSharpExpr
          max-batch-size : 10000
          max-concurrent-operations : 1000
          replay-filter : {
            mode : repair-by-discard-old
          }
          event-adapters : {
            f-adapter : "FAkka.Shared.Type.AkkaPersistenceType+FEventAdapter, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
            event-tagger : "FAkka.CQRS.EventTagger`1[[MdcSession+AkkaMdcEvent3, MdcFsClient70, Version=1.2.18.0, Culture=neutral, PublicKeyToken=null]], FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
          }
          event-adapter-bindings : {
            "Microsoft.FSharp.Quotations.FSharpExpr, FSharp.Core" : [f-adapter]
            "System.Tuple`2[[MdcSession+AkkaMdcEvent3, MdcFsClient70, Version=1.2.18.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" : event-tagger
          }
        }
        postgresql : {
          class : "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          schema-name : public
          table-name : journal
          auto-initialize : off
          timestamp-provider : "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
          metadata-table-name : metadata
          stored-as : BYTEA
          sequential-access : off
          use-bigint-identity-for-ordering-column : on
          replay-filter : {
            mode : repair-by-discard-old
          }
          tags-column-size : 2000
          serializer : FSharpExpr
          event-adapters : {
            f-adapter : "FAkka.Shared.Type.AkkaPersistenceType+FEventAdapter, FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
            event-tagger : "FAkka.CQRS.EventTagger`1[[MdcSession+AkkaMdcEvent3, MdcFsClient70, Version=1.2.18.0, Culture=neutral, PublicKeyToken=null]], FAkka.Shared, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null"
          }
          event-adapter-bindings : {
            "Microsoft.FSharp.Quotations.FSharpExpr, FSharp.Core" : [f-adapter]
            "System.Tuple`2[[MdcSession+AkkaMdcEvent3, MdcFsClient70, Version=1.2.18.0, Culture=neutral, PublicKeyToken=null],[Microsoft.FSharp.Core.FSharpOption`1[[System.Collections.Immutable.ImmutableHashSet`1[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], System.Collections.Immutable, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], FSharp.Core, Version=7.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]], System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" : event-tagger
          }
        }
        sharding : {
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          auto-initialize : on
          plugin-dispatcher : akka.actor.default-dispatcher
          class : "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
          connection-timeout : 120s
          schema-name : dbo
          table-name : shardingjournal
          serializer : FSharpExpr
          timestamp-provider : "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
          metadata-table-name : metadata
          max-batch-size : 10000
          max-concurrent-operations : 1000
        }

      }
      snapshot-store : {
        plugin : akka.persistence.snapshot-store.sql-server
        sql-server : {
          class : "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer"
          serializer : hyperion
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          connection-timeout : 120s
          schema-name : dbo
          table-name : snapshot
          auto-initialize : on
          replay-filter : {
            mode : repair-by-discard-old
          }
        }
        postgresql : {
          class : "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          connection-timeout : 30s
          schema-name : public
          table-name : snapshot
          auto-initialize : on
          timestamp-provider : "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
          metadata-table-name : metadata
          stored-as : BYTEA
          sequential-access : off
          use-bigint-identity-for-ordering-column : on
          replay-filter : {
            mode : repair-by-discard-old
          }
          tags-column-size : 2000
          serializer : hyperion
        }
        sharding : {
          class : "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer"
          plugin-dispatcher : akka.actor.default-dispatcher
          connection-string : "Persist Security Info=False;User ID=sa;Password=;Initial Catalog=AkkaPersistence_fakka;Server=10.38.112.93;Trust Server Certificate=True;Max Pool Size=2000;MultiSubnetFailover=True; Encrypt=false"
          connection-timeout : 120s
          serializer : hyperion
          schema-name : dbo
          table-name : shardingsnapshotstore
          auto-initialize : on
        }
      }
    }
    extensions : [Akka.Cluster.Tools.PublishSubscribe.DistributedPubSubExtensionProvider,Akka.Cluster.Tools]
  }
;
     ConfigVersion = "0.0.1 Akka";
     CoordinatedShutdownRunByActorSystemTerminate = true;
     CoordinatedShutdownTerminateActorSystem = true;
     CreationTimeout = 00:00:20;
     DebugAutoReceive = false;
     DebugEventStream = false;
     DebugLifecycle = false;
     DebugRouterMisconfiguration = false;
     DebugUnhandledMessage = false;
     DefaultVirtualNodesFactor = 10;
     EmitActorTelemetry = false;
     FsmDebugEvent = false;
     HasCluster = true;
     Home = "";
     LogConfigOnStart = false;
     LogDeadLetters = 10;
     LogDeadLettersDuringShutdown = false;
     LogDeadLettersSuspendDuration = 00:05:00;
     LogFormatter = Akka.Event.DefaultLogMessageFormatter;
     LogLevel = "DEBUG";
     LogSerializerOverrideOnStart = true;
     LoggerAsyncStart = false;
     LoggerStartTimeout = 00:00:05;
     Loggers = seq ["Akka.Logger.NLog.NLogLogger, Akka.Logger.NLog"];
     LoggersDispatcher = "akka.actor.default-dispatcher";
     ProviderClass = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster";
     ProviderSelectionType = Akka.Actor.ProviderSelection+Cluster;
     SchedulerClass = "Akka.Actor.HashedWheelTimerScheduler";
     SchedulerShutdownTimeout = 00:00:05;
     SerializeAllCreators = false;
     SerializeAllMessages = false;
     Setup = ActorSystemSetup();
     StdoutLogLevel = "INFO";
     StdoutLogger = [akka://all-systems/];
     SupervisorStrategyClass = "Akka.Actor.DefaultSupervisorStrategy";
     System = akka://cluster-system;
     UnstartedPushTimeout = 00:00:10;}
 

@Aaronontheweb
Member

Aaronontheweb commented Jul 27, 2023

maximum-payload-bytes : 1000000000b

so the max payload size is set to 1 GB? Ditto for the send and receive buffer sizes?

edit:

dot-netty : {
        tcp : {
          public-hostname : 10.38.112.143
          hostname : 10.38.112.143
          public-port : 53316
          port : 53316
          message-frame-size : 100000000b
          send-buffer-size : 100000000b
          receive-buffer-size : 100000000b
          maximum-frame-size : 100000000b
        }
      }
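
For comparison, a sketch of more conservative remoting sizes (these are roughly the stock reference.conf defaults; verify the exact values against the reference config for the Akka.NET version in use):

akka.remote {
  maximum-payload-bytes = 30000000b   # ~30 MB instead of 1 GB
  dot-netty.tcp {
    maximum-frame-size = 128000b      # ~128 KB frames
    send-buffer-size = 256000b
    receive-buffer-size = 256000b
  }
}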

@ingted

ingted commented Jul 27, 2023

It seems like

"System.Object" = FSharpExpr

could resolve the issue, but it is impossible to add a customized FsPickler extended pickler for every type that lacks the [Serializable] attribute...
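
In other words, a catch-all binding along these lines (a sketch only; it routes every CLR object, and therefore every pub/sub payload, through the custom F# serializer, which is why the missing FsPickler picklers become a problem):

akka.actor.serialization-bindings {
  # catch-all: every object falls back to the custom F# expression serializer
  "System.Object" = FSharpExpr
}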

@Aaronontheweb
Member

Is the issue in this case just a dropped message over the wire, or a full-blown disassociation? What's the error message exactly?

@ingted

ingted commented Jul 27, 2023

There is no error message on either the publisher side or the subscriber side, just a full-blown disassociation. The seed node then quarantines the subscriber node...

@ingted

ingted commented Jul 27, 2023

I am trying to replace

"Akka.Cluster.Tools.PublishSubscribe.IDistributedPubSubMessage, Akka.Cluster.Tools" : akka-pubsub
"Akka.Cluster.Tools.PublishSubscribe.Internal.SendToOneSubscriber, Akka.Cluster.Tools" : akka-pubsub

with

"Akka.Cluster.Tools.PublishSubscribe.IDistributedPubSubMessage, Akka.Cluster.Tools" : FSharpExpr 
"Akka.Cluster.Tools.PublishSubscribe.Internal.SendToOneSubscriber, Akka.Cluster.Tools" : akka-pubsub

to see if it helps or not...

@ingted

ingted commented Jul 27, 2023

Both ["Akka.Cluster.Tools.PublishSubscribe.IDistributedPubSubMessage, Akka.Cluster.Tools" : FSharpExpr] and ["System.Object" : hyperion] lead to

 [DEBUG][07/27/2023 09:55:05.985Z][Thread 0045][CoordinatedShutdown (akka://cluster-system)] Performing phase [cluster-shutdown] with [1] tasks: [wait-shutdown]
[DEBUG][07/27/2023 09:55:05.985Z][Thread 0045][CoordinatedShutdown (akka://cluster-system)] Performing phase [before-actor-system-terminate] with [0] tasks.
[DEBUG][07/27/2023 09:55:05.985Z][Thread 0045][CoordinatedShutdown (akka://cluster-system)] Performing phase [actor-system-terminate] with [1] tasks: [terminate-system]
[DEBUG][07/27/2023 09:55:05.985Z][Thread 0045][ActorSystem(cluster-system)] System shutdown initiated
[INFO][07/27/2023 09:55:05.988Z][Thread 0024][remoting-terminator] Shutting down remote daemon.
[INFO][07/27/2023 09:55:05.989Z][Thread 0024][remoting-terminator] Remote daemon shut down; proceeding with flushing remote transports.
[DEBUG][07/27/2023 09:55:06.000Z][Thread 0024][akka.tcp://cluster-system@10.38.112.143:53316/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fcluster-system%4010.38.112.143%3A9050-3/endpointWriter] Disassociated [akka.tcp://cluster-system@10.38.112.143:53316] <- akka.tcp://cluster-system@10.38.112.143:9050
[DEBUG][07/27/2023 09:55:06.000Z][Thread 0023][akka.tcp://cluster-system@10.38.112.143:53316/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fcluster-system%4010.38.112.143%3A9000-1/endpointWriter] Disassociated [akka.tcp://cluster-system@10.38.112.143:53316] -> akka.tcp://cluster-system@10.38.112.143:9000
[DEBUG][07/27/2023 09:55:06.000Z][Thread 0025][akka.tcp://cluster-system@10.38.112.143:53316/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fcluster-system%4010.38.112.143%3A9051-2/endpointWriter] Disassociated [akka.tcp://cluster-system@10.38.112.143:53316] <- akka.tcp://cluster-system@10.38.112.143:9051
[DEBUG][07/27/2023 09:55:06.003Z][Thread 0025][akka.tcp://cluster-system@10.38.112.143:53316/system/transports/akkaprotocolmanager.tcp.0/akkaProtocol-tcp%3A%2F%2Fcluster-system%40%5B%3A%3Affff%3A10.38.112.143%5D%3A56757-3] Association between local [tcp://cluster-system@10.38.112.143:53316] and remote [tcp://cluster-system@[::ffff:10.38.112.143]:56757] was disassociated because the ProtocolStateActor was stopped normally
[DEBUG][07/27/2023 09:55:06.003Z][Thread 0023][akka.tcp://cluster-system@10.38.112.143:53316/system/transports/akkaprotocolmanager.tcp.0/akkaProtocol-tcp%3A%2F%2Fcluster-system%40%5B%3A%3Affff%3A10.38.112.143%5D%3A56756-2] Association between local [tcp://cluster-system@10.38.112.143:53316] and remote [tcp://cluster-system@[::ffff:10.38.112.143]:56756] was disassociated because the ProtocolStateActor was stopped normally
[DEBUG][07/27/2023 09:55:06.005Z][Thread 0024][akka.tcp://cluster-system@10.38.112.143:53316/system/transports/akkaprotocolmanager.tcp.0/akkaProtocol-tcp%3A%2F%2Fcluster-system%4010.38.112.143%3A9000-1] Association between local [tcp://cluster-system@10.38.112.143:56755] and remote [tcp://cluster-system@10.38.112.143:9000] was disassociated because the ProtocolStateActor was stopped normally
[INFO][07/27/2023 09:55:06.005Z][Thread 0025][akka.tcp://cluster-system@10.38.112.143:53316/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fcluster-system%4010.38.112.143%3A9050-3] Removing receive buffers for [akka.tcp://cluster-system@10.38.112.143:53316]->[akka.tcp://cluster-system@10.38.112.143:9050]
[INFO][07/27/2023 09:55:06.005Z][Thread 0024][akka.tcp://cluster-system@10.38.112.143:53316/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fcluster-system%4010.38.112.143%3A9000-1] Removing receive buffers for [akka.tcp://cluster-system@10.38.112.143:53316]->[akka.tcp://cluster-system@10.38.112.143:9000]
[INFO][07/27/2023 09:55:06.005Z][Thread 0024][akka.tcp://cluster-system@10.38.112.143:53316/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fcluster-system%4010.38.112.143%3A9051-2] Removing receive buffers for [akka.tcp://cluster-system@10.38.112.143:53316]->[akka.tcp://cluster-system@10.38.112.143:9051]
[INFO][07/27/2023 09:55:06.036Z][Thread 0043][remoting (akka://cluster-system)] Remoting shut down
[INFO][07/27/2023 09:55:06.036Z][Thread 0022][remoting-terminator] Remoting shut down.
[DEBUG][07/27/2023 09:55:06.037Z][Thread 0019][EventStream] Shutting down: StandardOutLogger started

@ingted

ingted commented Jul 28, 2023

The resolution is to specify a serializer for each message type that will be passed over pub/sub, in the serialization-bindings section.
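
For example (a sketch; the type names below are placeholders for whatever message types actually travel over pub/sub in your system):

akka.actor.serialization-bindings {
  # bind each concrete pub/sub payload type explicitly instead of relying on a
  # System.Object fallback (type names here are placeholders)
  "MyApp.Messages.PriceTick, MyApp.Messages" = hyperion
  "MyApp.Messages.HistoryResponse, MyApp.Messages" = FSharpExpr
}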

@ingted

ingted commented Oct 12, 2023

BTW,

" nodes can't Down themselves without being explicitly given a Down command. Akka.NET never does this by default. Can only happen if you explicitly turn this setting on."
=> saved my day...

(I have downing logic in my code, and it caused unexpected batches of nodes to be downed... After reading this, I confirmed that this logic downed all of my nodes during peak CPU time.) (This time it was not the fault of F# object serialization...) Oops!!!

@Aaronontheweb
Member

" nodes can't Down themselves without being explicitly given a Down command. Akka.NET never does this by default. Can only happen if you explicitly turn this setting on."

I should note that as of Akka.NET v1.5, the Split Brain Resolver (SBR) will now actually down nodes automatically and is enabled by default - but it's highly configurable and can be turned off.
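
A rough sketch of the relevant settings (check the v1.5 reference config for the exact keys and defaults; disabling automatic downing is done by pointing downing-provider-class at a different provider, per the docs):

akka.cluster {
  # v1.5 default downing provider: the Split Brain Resolver
  downing-provider-class = "Akka.Cluster.SBR.SplitBrainResolverProvider, Akka.Cluster"
  split-brain-resolver {
    active-strategy = keep-majority   # default strategy
    stable-after = 20s                # how long the cluster must be stable before a downing decision
  }
}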
