I'm building a service that accepts requests at an endpoint and pushes each payload to RabbitMQ, where it is read by a Broadway pipeline. The pipeline routes each message to a GenServer based on the id in the payload: one GenServer is started per id, it processes the state for each message, and it times out if it doesn't receive any additional messages within a specific timespan.
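The per-id worker described above can be sketched roughly like this — a plain GenServer whose replies carry an idle timeout, so the process stops itself when no message arrives in time. Module name, state shape, and the 30-second timeout are all assumptions for illustration, not the actual code:

```elixir
defmodule IdWorker do
  # Hypothetical sketch: one process per payload id.
  # Each callback reply includes @idle_timeout, so the process
  # receives :timeout (and stops) if no message arrives in time.
  use GenServer

  @idle_timeout 30_000

  def start_link(id), do: GenServer.start_link(__MODULE__, id)

  def init(id), do: {:ok, %{id: id, count: 0}, @idle_timeout}

  def handle_cast({:process, _payload}, state) do
    # ...per-message processing of the state would go here...
    {:noreply, %{state | count: state.count + 1}, @idle_timeout}
  end

  # No message within @idle_timeout: shut down normally.
  def handle_info(:timeout, state), do: {:stop, :normal, state}
end
```

The `{:noreply, state, timeout}` form resets the idle timer on every message, which matches the "timeout if the GenServer doesn't receive any additional messages" behavior described above.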
The problem is that once I incorporate Horde, I run into multiple issues around timeouts and crashes, and eventually messages stop processing entirely. I've read issues #233 and #227, and both seem related, but the issue appears to stem from adding or removing nodes in the cluster.
Some example code (some of the logic is omitted):
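Since the example code itself was omitted, here is a minimal sketch of the kind of setup being described: `Horde.Registry` plus `Horde.DynamicSupervisor` in the application supervision tree, with one worker per id registered through the registry. The supervisor name `MyApp.HordeSupervisor` comes from the error logs below; everything else (registry name, worker module, options) is an assumption. This is a supervision-tree configuration fragment, not runnable on its own (it needs the `horde` dependency):

```elixir
# Hypothetical supervision tree; only MyApp.HordeSupervisor is taken
# from the logs in this issue, the rest is assumed for illustration.
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    children = [
      {Horde.Registry, name: MyApp.HordeRegistry, keys: :unique, members: :auto},
      {Horde.DynamicSupervisor,
       name: MyApp.HordeSupervisor,
       strategy: :one_for_one,
       members: :auto}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end

# Starting one worker per id, registered cluster-wide (MyApp.Worker is assumed):
# Horde.DynamicSupervisor.start_child(
#   MyApp.HordeSupervisor,
#   {MyApp.Worker, name: {:via, Horde.Registry, {MyApp.HordeRegistry, id}}}
# )
```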
This setup will usually process messages just fine as long as I don't change any nodes. If I add or remove a node, sometimes processing continues, but other times I encounter deadlocks or crashes. Some of the error messages I get:
This one usually happens after I've added or removed a node, and messages stop processing:
Exit while fetching metrics from MyApp.HordeSupervisor.
Skip poll action. Reason: {:timeout, {GenServer, :call, [MyApp.HordeSupervisor, :get_telemetry, 5000]}}.
Adding or removing nodes sometimes produces this, usually when recycling Kubernetes pods or scaling up while processing a lot of messages at once (the error message comes from a custom node observer, the one provided in Horde's docs). I've omitted the node names on purpose.
I also see these messages, and the Supervisor usually crashes right after. This may or may not be related:
GenServer MyApp.HordeSupervisor terminating
** (MatchError) no match of right hand side value: {nil, #Reference<0.369051839.3555065857.132324>}
(horde 0.8.7) lib/horde/dynamic_supervisor_impl.ex:233: Horde.DynamicSupervisorImpl.handle_cast/2
(stdlib 3.17.2.1) gen_server.erl:695: :gen_server.try_dispatch/4
(stdlib 3.17.2.1) gen_server.erl:771: :gen_server.handle_msg/6
(stdlib 3.17.2.1) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
Last message: {:"$gen_cast", {:disown_child_process, 114388217379722495904587093633141899824}}
The associated crash:
GenServer #PID<0.576.0> terminating
** (stop) exited in: GenServer.call(MyApp.HordeSupervisor, :horde_shutting_down, 5000)
** (EXIT) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started
(elixir 1.12.3) lib/gen_server.ex:1014: GenServer.call/3
(horde 0.8.7) lib/horde/signal_shutdown.ex:21: anonymous fn/1 in Horde.SignalShutdown.terminate/2
(elixir 1.12.3) lib/enum.ex:930: Enum."-each/2-lists^foreach/1-0-"/2
(stdlib 3.17.2.1) gen_server.erl:733: :gen_server.try_terminate/3
(stdlib 3.17.2.1) gen_server.erl:918: :gen_server.terminate/10
(stdlib 3.17.2.1) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
Last message: {:EXIT, #PID<0.572.0>, :shutdown}