
remote restart ipengine #94

Closed
jacksonloper opened this issue Jan 26, 2016 · 10 comments · Fixed by #463

@jacksonloper

I think this is actually a pretty old idea, but I was wondering if there has been any movement on it.

In the same way you can restart a kernel in a notebook, it would be awesome if you could restart an ipengine. My understanding is that this would require a nontrivial rewrite of the engine, involving an entire extra monitoring process that just isn't there right now.

Does this seem like something likely to be implemented? If I took a crack at it, would that be helpful? Or is there another plan...

@minrk
Member

minrk commented Jan 26, 2016

The plan is to put a nanny process next to each Engine, which would enable remote signalling, restarting, etc. This is a general plan for Jupyter kernels that will be extended to IPython parallel.
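
A minimal sketch of what such a nanny could look like, assuming it simply owns the engine as a child process; the class and method names here are made up for illustration and are not an existing ipyparallel or Jupyter API:

# Illustrative only: a nanny that launches the engine as a child process so it
# can signal or restart it on request. All names are hypothetical.
import signal
import subprocess

class EngineNanny:
    def __init__(self, engine_cmd=("ipengine",)):
        self.engine_cmd = list(engine_cmd)
        self.proc = None

    def start(self):
        # Launch the engine as a child so we hold a handle we can signal later.
        self.proc = subprocess.Popen(self.engine_cmd)

    def interrupt(self):
        # The notebook-style "interrupt kernel": deliver SIGINT to the engine.
        self.proc.send_signal(signal.SIGINT)

    def restart(self):
        # Hard restart: stop the old engine, then launch a fresh one.
        self.proc.terminate()
        self.proc.wait()
        self.start()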

@minrk
Member

minrk commented Jan 26, 2016

The tricky bit for IPython parallel is to not ruin cases like MPI, where the engine itself must be the main process and cannot be started by the nanny. This means that either the engine starts the nanny the first time, or we special-case MPI somehow.

@jacksonloper
Author

Right. MPI.

Well, at the moment, I can't see how to reliably handle restarts for MPI with fewer than three processes. Kinda dumb, but here's the picture I have in mind...

We have to allow for the possibility that at least three computers are involved in an MPI setup.

  • LauncherComputer -- may shut down at any time; once the command is run, this machine should be irrelevant
  • ClusterNodeA -- a compute node, ipengine could live here
  • ClusterNodeB -- a compute node, ipengine could live here

We want to be able to send keyboard-interrupt signals to the engine; ergo, the nanny needs to be on the same node as the engine (correct me if I'm wrong). So at the very least, we would need a setup like this:

  • engine (on ClusterNodeA, launched by mpiexec run on LauncherComputer)
  • nanny (on ClusterNodeA, launched by engine)

Now let's say we get a restart signal. We need to kill the engine and launch a new engine that is also part of the same MPI universe. We can do this with, e.g., MPI_Comm_spawn. The trouble is this: MPI may be subject to an arbitrary and cruel resource manager, which may decide to put the new engine on ClusterNodeB. In that case the nanny needs to live on ClusterNodeB. But it doesn't. Failbot.

To deal with this situation, we actually need a third process. The setup now looks like this:

  • megananny (on ClusterNodeA, launched by mpiexec)
  • engine (on ClusterNodeA, launched by MPI_Comm_spawn)
  • micronanny (on ClusterNodeA, launched by engine)

Now if the megananny is told to keyboard-interrupt, it talks to the micronanny, which actually sends the SIGINT. If the megananny is told to restart, it creates an entirely new (micronanny, engine) pair, which might end up on ClusterNodeA or on ClusterNodeB.

Remarks

  1. One downside of this approach is that the engines will have to make a new intracommunicator (i.e., users can't depend on COMM_WORLD). However, I cannot see any way of avoiding this; if you want to be able to start new processes, you need to use some kind of spawn or join. That will create intercommunicators, which need to get merged into intracommunicators. So we'll want to insert some variable COMM_IPWORLD into the namespace, so you can replace MPI.COMM_WORLD.Allreduce with COMM_IPWORLD.Allreduce (see the sketch after these remarks).
  2. There are certainly situations in which one can guarantee an MPI process will start on the same host. In this case you wouldn't need the megananny. This may even be the common case; I'm not terribly well acquainted with "standard practice." I could do a little survey of the supercomputers I have access to and check. But there are definitely situations in which I don't know how you could make such a guarantee...
  3. I've never actually tried to kill a single node of an intracommunicator forged by repeatedly spawning and merging. It's possible something will explode.
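
To make remark 1 concrete, here is a minimal sketch of the spawn-and-merge step, assuming mpi4py and an MPI implementation with working dynamic process management; COMM_IPWORLD is just an assumed variable name, not something ipyparallel provides today:

# Illustrative only: spawn a replacement engine process and merge the
# resulting intercommunicator into a new intracommunicator that the surviving
# engines would use in place of MPI.COMM_WORLD.
from mpi4py import MPI

def respawn_and_merge(engine_cmd="ipengine", n_new=1):
    # Spawn is collective over the existing communicator; the MPI resource
    # manager decides which node the new process lands on.
    intercomm = MPI.COMM_WORLD.Spawn(engine_cmd, args=[], maxprocs=n_new)
    # Merge the old and new groups into a single intracommunicator. This is
    # the object that would be injected into the namespace as COMM_IPWORLD.
    comm_ipworld = intercomm.Merge(high=False)
    return comm_ipworld

Users would then call, e.g., comm_ipworld.Allreduce(...) where they previously used MPI.COMM_WORLD.Allreduce(...).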

@jacksonloper
Author

...in conclusion, I hope MPI doesn't block progress on this. When they introduced "dynamic process management" in MPI-2.0, I don't believe they were thinking of a scenario where a single worker could restart.

At the bare minimum, every time an MPI process restarts, we will need to destroy the old intracommunicator and inject a new one into the ipengine namespace. If users have data structures referencing the old intracommunicator, those references will become invalid, which could be a bit confusing for users :). But perhaps somebody with more MPI-fu can come along and prove me wrong!

In other news, let me know if there's a useful way I could contribute to the kernel-nanny architecture for Jupyter.

@minrk
Member

minrk commented Jan 27, 2016

I'm 100% okay with MPI engines not being allowed to be restarted; that's not a problem. It's just the parent/child relationship that's an issue, because the mpiexec'd process must be the actual kernel, not the nanny.

@jacksonloper
Author

Cool. That makes sense.

@neuralyzer

The engine restart feature would be really useful. E.g., I use Theano on a cluster. Once I import Theano, a GPU is assigned to the importing process. The only way (that I know of) to "free" the GPU again is to terminate/restart the process.

@OliverEvans96

Any news here in the last two years?

@tavy14t

tavy14t commented Apr 10, 2020

Any news in the last 4 years?

import ipyparallel as ipp
client = ipp.Client()
client.shutdown(targets=range(10, 24), restart=True)

NotImplementedError: Engine restart is not yet implemented

minrk mentioned this issue Jun 4, 2021
@minrk
Member

minrk commented Jun 4, 2021

#463 lays the groundwork for this to be possible
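
As a rough usage sketch only, assuming the Cluster-based API that this work moves toward; treat the method names below as assumptions rather than a confirmed interface:

# Hypothetical sketch; the exact API is defined by #463 and follow-up work.
import asyncio
import ipyparallel as ipp

async def main():
    cluster = ipp.Cluster(n=4)              # small local cluster
    rc = await cluster.start_and_connect()  # start engines, get a Client
    await cluster.restart_engines()         # the restart this issue asks for
    rc.close()
    await cluster.stop_cluster()

asyncio.run(main())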
