Add Swarm.Supervisor - a dynamic supervisor for distributed processes #6

Open
bitwalker opened this issue Sep 27, 2016 · 31 comments

@bitwalker
Owner

Implement a new supervisor module which can ensure registered processes are properly restarted according to normal supervisor semantics, but are also restarted/shifted based on cluster topology changes.

This supervisor will be for dynamic children only.
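(A purely hypothetical sketch of what using such a module could look like; neither Swarm.Supervisor nor any of these options exist today, the shape is only meant to make the request concrete:)

# Hypothetical API, for illustration only: a dynamic child that is both
# supervised (restarted on crashes) and tracked by Swarm (moved on topology changes).
{:ok, _pid} =
  Swarm.Supervisor.start_child(MyApp.DistributedSup,
    {MyApp.Worker, :start_link, ["worker:1"]},
    name: "worker:1",
    restart: :transient)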

@thdxr

thdxr commented Nov 1, 2016

Does this address the problem that, in the current example, if the process dies it is restarted by the supervisor but doesn't re-register with Swarm?

@bitwalker
Owner Author

@dax0 yes, though in the current example it shouldn't be restarted since the restart type is :temporary (meaning that if it fails, it is not restarted)

@thdxr

thdxr commented Nov 1, 2016

So currently what is the best way to create a process that is managed by Swarm that should be restarted?

@thdxr

thdxr commented Nov 1, 2016

The following seems to fail to restart when Worker crashes. As soon as I take out the name: option it works normally:

defmodule Worker do
  use GenServer

  def start_link(key) do
    GenServer.start_link(__MODULE__, [key], name: tuple(key))
  end

  def tuple(key) do
    {:via, :swarm, key}
  end
end

defmodule Worker.Supervisor do
  use Supervisor

  def start_link do
    Supervisor.start_link(__MODULE__, [], name: __MODULE__)
  end

  def init(_) do
    import Supervisor.Spec
    children = [
      worker(Worker, [], restart: :transient)
    ]
    supervise(children, strategy: :simple_one_for_one)
  end

  def start_child(key) do
    {:ok, _pid} = Supervisor.start_child(__MODULE__, [key])
  end
end

@bitwalker
Owner Author

With simple_one_for_one, if the process exits with :normal, :shutdown, or {:shutdown, term}, it won't be restarted. How are you simulating the crash?
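(For reference: with restart: :transient, only abnormal exits trigger a restart; this is standard OTP behavior, not Swarm-specific. Assuming the Worker above is running under that supervisor, and "some_key" stands for whatever key it was started with:)

# Exit reasons and :transient restarts (pick one of the two exits):
pid = Swarm.whereis_name("some_key")
Process.exit(pid, :shutdown) # exits with :shutdown, treated as graceful, no restart
Process.exit(pid, :kill)     # exits with :killed, abnormal, so the supervisor restarts it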

@thdxr

thdxr commented Nov 1, 2016

I'm sending an abnormal exit message with Process.exit


@scrogson scrogson self-assigned this Nov 2, 2016
@kwrooijen

I have the same problem when using the :transient restart strategy and spawning children with Swarm.register_name. The child gets restarted but not registered. If I can help test anything, let me know.

@bschaeffer

So, we're just starting a project and were looking to use swarm with this exact use case in mind. Is this currently still an issue?

@bitwalker
Owner Author

@bschaeffer Yes. If you need to supervise the processes being dynamically registered, you currently have to link/monitor those children and handle restarting them yourself. I would like to add a supervisor module that does this out of the box, but since I haven't had the need for it myself yet, and unfortunately lack the time to develop it at the moment, it may be a while before I can get to it. That said, handling the restart yourself as mentioned above is fairly easy to do, so it's not a huge obstacle, just something to be aware of.

@kwrooijen

> That said, handling the restart yourself as mentioned above is fairly easy to do, so it's not a huge obstacle, just something to be aware of.

I'm not sure what you're referring to? Currently transient children aren't restarted automatically and there's no solution provided above. Unless I'm missing something?

@bitwalker
Owner Author

@kwrooijen I was referring to the first part of my comment: supervising children directly by linking/monitoring them and reacting to down/exit signals appropriately. This is what Supervisor does under the covers (it also handles things like too many restarts within a given interval, etc., but its core functionality is simply linking to and monitoring the process and reacting to those signals).
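(A minimal sketch of that DIY approach, assuming the Worker module from earlier in the thread; Worker.Keeper is an illustrative name, and this is not a full supervisor, just a process that traps exits and restarts/re-registers the child on abnormal exits:)

defmodule Worker.Keeper do
  # Sketch only: no restart-intensity limits, no child specs.
  use GenServer

  def start_link(key), do: GenServer.start_link(__MODULE__, key)

  def init(key) do
    Process.flag(:trap_exit, true)
    {:ok, start_worker(key)}
  end

  # Graceful exits: stop the keeper as well.
  def handle_info({:EXIT, _pid, reason}, {key, _}) when reason in [:normal, :shutdown] do
    {:stop, :normal, {key, nil}}
  end

  # Abnormal exit: start a fresh worker, which re-registers itself via its
  # {:via, :swarm, key} name.
  def handle_info({:EXIT, _pid, _reason}, {key, _}) do
    {:noreply, start_worker(key)}
  end

  defp start_worker(key) do
    {:ok, pid} = Worker.start_link(key)
    {key, pid}
  end
end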

@narrowtux

narrowtux commented Aug 24, 2017

I made a makeshift dynamic supervisor that re-registers the name when the child process terminates with a reason other than :normal or :shutdown (important for hand-off). I tested this with 2 nodes, and now Swarm.registered/0 always returns the actual process. When I call Process.exit(Swarm.whereis_name("process_name"), :a), the process is restarted and re-registered.

Here's the code in a gist: https://gist.github.com/narrowtux/3b21967599e1f42787b24667633f68f0

It's not a full Supervisor in the sense that it doesn't give you a behaviour, doesn't accept Supervisor.Spec maps, and won't crash if the child process restarts too often.

These are all things that I imagine @bitwalker would want fixed before merging it into this library, but it's a start for anyone who needs this feature right now.

@beardedeagle
Collaborator

beardedeagle commented Jan 17, 2018

Could DynamicSupervisor help with this?
https://hexdocs.pm/elixir/1.6/DynamicSupervisor.html

I mean, at minimum this appears to work as a drop-in replacement for the example code in the README:

defmodule MyApp.Supervisor do
  @moduledoc """
  This is the supervisor for the worker processes you wish to distribute
  across the cluster. Swarm is primarily designed around the use case
  where you are dynamically creating many workers in response to events. It
  works with other use cases as well, but that's the ideal use case.
  """
  use DynamicSupervisor

  def start_link(_arg), do: DynamicSupervisor.start_link(__MODULE__, [], name: __MODULE__)

  def init(_arg), do: DynamicSupervisor.init(strategy: :one_for_one)

  def register(worker_name) do
    child_spec = %{
      id: MyApp.Worker,
      start: {MyApp.Worker, :start_link, [worker_name]},
      restart: :temporary
    }

    {:ok, _pid} = DynamicSupervisor.start_child(__MODULE__, child_spec)
  end
end

output:

iex(b@localhost)7> {_, pid} = Swarm.register_name("foo", MyApp.Supervisor, :register, ["foo"])
14:04:34.367 [debug] [swarm on b@localhost] [tracker:handle_call] registering "foo" as process started by Elixir.MyApp.Supervisor.register/1 with args ["foo"]
"foo" says hi!
14:04:34.367 [debug] [swarm on b@localhost] [tracker:do_track] starting "foo" on b@localhost
{:ok, #PID<0.580.0>}
14:04:34.367 [debug] [swarm on b@localhost] [tracker:do_track] started "foo" on b@localhost
iex(b@localhost)8> "foo" says hi!
"foo" says hi!
"foo" says hi!
iex(b@localhost)11> Swarm.registered
[{"foo", #PID<0.585.0>}]
iex(b@localhost)13> Swarm.whereis_name("foo")
#PID<0.585.0>
iex(b@localhost)14> Swarm.unregister_name("foo")
14:08:14.959 [debug] [swarm on b@localhost] [tracker:handle_call] untrack #PID<0.585.0>
:ok
iex(b@localhost)18> DynamicSupervisor.terminate_child(MyApp.Supervisor, pid)
:ok

@beardedeagle
Collaborator

So I was able to accomplish something similar to what @narrowtux did above, but with an actual DynamicSupervisor. I'll post a gist as soon as I can sanitize it.

@narrowtux

I'm interested!

@jerel

jerel commented Feb 8, 2018

@beardedeagle, I believe the flaw with that setup is that if the process crashes and is restarted by the supervisor, it will no longer be tracked by Swarm. After a restart, Swarm.whereis_name/1 returns :undefined. At least that's what my testing here indicates.

I took the route of creating a :simple_one_for_one supervisor (back before DynamicSupervisor existed) that uses register_name/4 inside its start_child. This method works well in the following scenarios:

  • it will restart the process if it crashes, re-registering with swarm
  • if the topology changes and the process is started on a new node it will be supervised again

The thing I'm not pleased with is that a supervisor could end up supervising a process on a remote node, since register_name/4 is at liberty to start the MFA on any node. I have a feeling this setup may not end well if the network is unstable. It could likely be solved by having the supervisor call the other supervisor and hand over control of the process.

It seems like the best of both worlds might be something like register_supervised_name/5, which would take name, supervisor, m, f, a (or name, supervisor, spec), where supervisor would be expected to exist as a local name on all nodes. Then Swarm could let the process be supervised by a local supervisor as usual, and it could still migrate the process off if the topology changed.
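(Expressed as a call shape, purely hypothetically, since no such function exists in Swarm; MyApp.WorkerSup here stands for a supervisor registered under a local name on every node:)

# Hypothetical: Swarm picks a node, then asks that node's local MyApp.WorkerSup
# to start and supervise the child, while Swarm keeps tracking the name.
Swarm.register_supervised_name("worker:42", MyApp.WorkerSup, MyApp.Worker, :start_link, ["worker:42"])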

@kelostrada
Contributor

kelostrada commented Feb 8, 2018

Hey guys, I'm working on a supervisor designed to do exactly what @jerel mentions. It's based on DynamicSupervisor and is still a work in progress (I have to fix the API for naming processes), but it generally seems to work; for now you have to pass the name of the process as the first parameter of the args passed to the worker. My repo: https://github.com/kelostrada/swarm_dynamic_supervisor

@bitwalker
Owner Author

Just an update here:

To properly handle supervision, support needs to exist within Swarm itself, at least for things which use register_name/5 to handle auto-distribution and the like. This is because those registrations are handled via a primitive form of supervision within Swarm, and in order to mesh nicely with a real supervisor, Swarm needs to delegate those supervision tasks to that supervisor. If you try to supervise things registered with register_name/5, you'll find yourself fighting Swarm's own supervision.

To that end, I've been building out support for using :one_for_one supervisors with Swarm. It expects that the supervisor is a locally registered process (i.e. not via Swarm), and that each node in the cluster is running an instance of that supervisor. This setup allows Swarm to move processes around the cluster properly, while leaning on the supervisor for restarts. Unfortunately, DynamicSupervisor is out, because it eliminated the one facility we had with supervisors to track specific children (which is necessary in order to identify which pid belongs to a given registration); :simple_one_for_one supervisors are out for the same reason. These restrictions aren't really a problem, but they are still restrictions. I'm hoping to have this support available soon, but I need to do considerable testing, since it touches a lot of internals in the tracker.
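(In practical terms that shape is just an ordinary module-based Supervisor, registered under a local name and started in every node's application tree; a sketch with illustrative names, not the work-in-progress API itself:)

defmodule MyApp.WorkerSup do
  # Ordinary :one_for_one supervisor, locally registered (not via Swarm),
  # started on every node from the application supervision tree.
  use Supervisor

  def start_link(_arg), do: Supervisor.start_link(__MODULE__, [], name: __MODULE__)

  def init(_arg), do: Supervisor.init([], strategy: :one_for_one)

  # Children are added with distinct ids, so a specific child can still be
  # identified later (the reason DynamicSupervisor/:simple_one_for_one are ruled out above).
  def start_child(name) do
    spec = %{
      id: name,
      start: {MyApp.Worker, :start_link, [name]},
      restart: :transient
    }

    Supervisor.start_child(__MODULE__, spec)
  end
end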

@kelostrada
Contributor

@bitwalker I'm not sure why it should touch all the internals; I created a simple DynamicSupervisor which re-registers names in Swarm on start and after restart (by calling Swarm.register_name, which in turn makes a GenServer call to start the process in the real supervisor).
You can check the project I linked before. After some testing I think it works nicely, and to be honest the real logic doesn't differ much from the :simple_one_for_one example provided in the README here.

My knowledge of the subject is probably limited, so I might be doing something wrong. For now, however, it seems to work as expected (process groups don't work this way, though).

@bitwalker
Owner Author

@kelostrada How are you handling the case where the cluster topology changes? Swarm assigns processes started via register_name/5 based on how the name is sharded across a hash ring containing all of the nodes in the cluster. If nodes come or go, some percentage of processes will be reassigned to other nodes, and Swarm performs a handoff to make that happen. The way this works today: the new process is started, the old process is asked how to perform the handoff, the old process is killed, the state (if being handed off) is passed to the new process, and then the registry is updated.

In your case, at the point the old process is killed, you would attempt to restart the process via Swarm, and would always get back {:error, {:already_registered, pid}}, since the new process is already running and registered. If you've specifically handled that case by taking that pid and supervising it under the correct supervisor, then yeah, it probably works more or less as intended - but if you are raising an error, or not supervising the pid in that case, then you've just orphaned the process from supervision.

It's also the case that Swarm is already effectively supervising for you here (albeit in a very simple form), and due to the extra time required to do things via the registry, a lot of supervisor configuration (e.g. restarts/shutdowns) is either no longer useful or just plain ignored by Swarm (i.e. Swarm will just keep trying to restart a process, even if the shutdown strategy indicates it should not do so). The restart intensity will also be inaccurate: the time it takes to go through Swarm and contact the tracker on another node is not predictable, so saying "only restart 3 times in 5 seconds" may never actually trigger the overload case if it takes much more than a second to perform startup/registration.

However, if Swarm understands external supervision and delegates it to a true supervisor, then restarts/shutdowns are honored and don't require going through the tracker, so the process behaves like any other supervised process; Swarm just needs to consult the supervisor to get the new pid of the restarted process (and in the case where the supervisor dies or the child is removed due to the restart strategy, we don't bother re-registering it). We can also coordinate handoff in such a way that the supervisor doesn't think it needs to restart the old process, by using terminate_child and delete_child at the appropriate points in the handoff process.

I guess all I'm trying to say is that yes, you can get something that works pretty well without explicit support from Swarm, but there are hidden edge cases that may bite you if you aren't aware of them, and it's my opinion that Swarm should support external supervision. I've got most of this done locally, but there are still a few bits left, plus testing to perform. If there are specific supervision requirements you think need to be addressed, now is definitely the time to bring those up.
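(For reference, the handoff exchange described above is driven by the :swarm messages documented in the README; a registered GenServer opts into state handoff with roughly these callbacks:)

# Old process: asked how to hand off; reply {:resume, state}, :restart, or :ignore.
def handle_call({:swarm, :begin_handoff}, _from, state) do
  {:reply, {:resume, state}, state}
end

# New process on the new node: receives the handed-off state.
def handle_cast({:swarm, :end_handoff, handoff_state}, _state) do
  {:noreply, handoff_state}
end

# Old process: told to shut down once the handoff is complete.
def handle_info({:swarm, :die}, state) do
  {:stop, :shutdown, state}
end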

@kelostrada
Contributor

Thanks for your in-depth analysis. Yes, you're right, I didn't handle this case. However, with :ignore handoffs this will still work (although obviously I'm missing a lot of Swarm's functionality this way).

I never said this doesn't need to be supported internally by Swarm out of the box. I'm very happy to hear you're already working on a solution. Do you have your work in progress hosted somewhere on GitHub, perhaps? I'd be happy to help test it.

@hickscorp

hickscorp commented Mar 27, 2018

I think there are a few more issues to address. I wasn't sure where to post them; I guess this thread is a good place to start.

In my case, there is a dynamic supervisor (Regent) that is used to spawn things using Swarm.register_name/5. What gets started is also a supervisor (BallotBox), which in turn starts a BallotBox.Logic process and its associated BallotBox.State process (we don't want to lose state when the logic crashes). The important bit is the dynamic nature of it: there could be 100 different BallotBox supervisors, each of them supervising those two processes.

The first issue is that even though the top-level supervisor is started and distributed properly, starting the child processes becomes quite hard with Swarm: adding name: {:via, :swarm, ...} in the child spec and simply calling Swarm.register_name/2 both result in a deadlock, so we're using the latter in a Task.async. This means there is a window (short, but it exists) during which the process exists but isn't named/registered. That's a minor problem, however.
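(A sketch of that workaround with illustrative names, using Task.start/1 since nothing awaits the result; the registration is kicked off from a task so the child's init/1 returns without blocking on the Swarm tracker:)

def init(key) do
  parent = self()
  # Register asynchronously to avoid the deadlock described above; until the
  # task runs, the process is alive but not yet named in Swarm.
  Task.start(fn -> Swarm.register_name({:ballot_box_logic, key}, parent) end)
  {:ok, key}
end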

The second problem is probably the most important: it's impossible for anything distributed this way to accept state handoff, because Swarm has no idea how to move the small processes (started by the supervisor that was itself started by Swarm) to another node, and therefore cannot handle the transition.

I asked around on the Elixir Slack, and pretty much no one attempted to answer - except for one person, who recommended staying away from Swarm if we also wanted a more complex supervision tree.

I hope this feedback helps shape the supervisor @bitwalker mentioned, or at least points to where some documentation is lacking (I'll readily admit that some of these problems may come from me using Swarm the wrong way).

@Qqwy

Qqwy commented Apr 18, 2018

@hickscorp Maybe an idea is to let the state handoff/restoration logic live in the BallotBox since that is the level of abstraction that is registered and moved around by Swarm. So you make process-subtree relocation a concern of the subtree's root.

@hickscorp

hickscorp commented Apr 30, 2018

@Qqwy Yeah, but that's the thing - a BallotBox is a supervisor (dynamically started), so it cannot handle the handoff message (where would I implement it? :D)

@thdxr

thdxr commented Jul 24, 2019

Has there been any update on this?

@scrogson scrogson removed their assignment Jul 24, 2019
@DmitryKK

What about this issue?

@beardedeagle
Collaborator

I believe this is still planned, but time has been focused elsewhere. There are ways you can handle this yourself in the interim, though.

@cadamis

cadamis commented Sep 13, 2019

Are there any examples of how this can be handled in the interim? From reading this thread, it seems that @narrowtux's pseudo-Supervisor might be closest to working with restarts and topology changes. But @beardedeagle, if you have any examples of what you had in mind, that would be great.

@hubertlepicki

So I'm not sure what @beardedeagle has in mind either regarding the ways you can handle this yourself, @cadamis, but I figured that instead of a worker (i.e. a GenServer), I start a Supervisor (with the one_for_one strategy) which starts the worker and also acts as a proxy.

The trick I'm using here is that my Supervisor is registered in Swarm, and whenever I send a message, say a GenServer.call, it's sent to the supervisor.

The supervisor, however, immediately forwards the message to its started worker and returns {:noreply, ...}, leaving the responsibility of replying to the caller with the child process.

Unfortunately this also means that things like hand-off on topology changes have to be done at the Supervisor level. Basically it wraps the GenServer worker and provides all of the infrastructure needed for Swarm plus normal supervision functionality (i.e. restarting), and also acts as a proxy whenever it needs to be called. Not ideal, but it works.
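(A rough sketch of the forwarding part of that idea, with illustrative names; here the Swarm-registered proxy is written as a GenServer that owns the real worker, and the worker answers the original caller itself via GenServer.reply/2:)

defmodule WorkerProxy do
  use GenServer

  def start_link(key), do: GenServer.start_link(__MODULE__, key, name: {:via, :swarm, key})

  def init(key) do
    # RealWorker is a plain, locally-unnamed GenServer; illustrative only.
    {:ok, worker} = RealWorker.start_link(key)
    {:ok, %{key: key, worker: worker}}
  end

  def handle_call(request, from, %{worker: worker} = state) do
    send(worker, {:proxied_call, from, request})
    # No reply here; the worker calls GenServer.reply(from, result) when done.
    {:noreply, state}
  end
end

# In RealWorker:
def handle_info({:proxied_call, from, request}, state) do
  GenServer.reply(from, handle_request(request, state))  # handle_request is illustrative
  {:noreply, state}
end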

@cadamis

cadamis commented Sep 17, 2019

I ended up working around this by using a standard DynamicSupervisor/GenServer worker setup, and then checking in handle_continue whether the process starting up is registered with Swarm. If not, spawn a task to start it under Swarm and kill the starting-up process. It's ugly and could potentially lead to dropped messages, but it will work for my particular use case. Basically this:

  def handle_continue(:load, name) do
    if Swarm.Tracker.whereis(name) == :undefined do
      # Not registered with Swarm: hand startup over to Swarm and stop this copy.
      spawn(fn ->
        Process.sleep(500)
        Swarm.register_name(name, MyApp.MySupervisor, :start_child, [name])
      end)

      {:stop, :normal, nil}
    else
      state = do_normal_start_stuff(name)  # insert normal logic here
      {:noreply, state}
    end
  end

Assuming do_normal_start_stuff handles state restoration (which it should, under a normal DynamicSupervisor/GenServer worker setup), this works well enough to ensure Swarm workers are restarted by the DynamicSupervisor and still tracked by Swarm. I'll see if I run into any major pitfalls with this.

@bernardd

> I'm hoping to have this support available soon, but I need to do considerable testing, since it touches a lot of internals in the tracker.

@bitwalker Is this in-progress work available on a branch somewhere? I'm also very keen to get this functionality going (having just been bitten by its absence this week) and I'm happy to work on tests and anything else that still needs doing.
