Allow launcher to start after workers are ready #386

Open
alculquicondor opened this issue Jul 27, 2021 · 18 comments

Comments

@alculquicondor
Collaborator

alculquicondor commented Jul 27, 2021

Part of #373. This was initially discussed in #360, but that discussion didn't reach a final conclusion.

In the v1 controller, an init container was responsible for holding back the start of the launcher until all the workers were ready.
In the v2 controller, we removed this init container for scalability and performance reasons (each such init container effectively acts as a controller that has to build a full cache of all the pods in the cluster).

Most of the time, this just works: the workers and the launcher start at roughly the same time, and ssh itself does some retries before giving up. However, there are scenarios where the launcher fails because it can't find the workers.
This problem can occur consistently in the following scenario: the launcher lands on a Node where the image is already present, but the workers land on Nodes where the image is missing.

There are two high-level solutions to this problem:

  1. Delay the creation of the launcher Pod. This is possible by watching the Running pods in the controller. The problem is that we might delay the start-up of a Job significantly (in some scenarios, the user would have to wait for 2 image pulls to happen sequentially, instead of all the pulls happening in parallel).
  2. Retry the launcher. We get this "for free" if we use a Job for the launcher: the Job controller has exponential backoff for failures, and the number of retries is configurable.

The two solutions are not mutually exclusive (we can implement both).
Both come with additional questions:

  1. Should we make the creation delay an API option?
  2. Should we expose the number of retries to the MPIJob user?

I propose we do 2 first, exposing the number of retries in the API, and then re-evaluate whether we also need 1, based on user feedback after the release of v2.0.
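For illustration only, here is a rough sketch of what option 2 could look like from the user's side if the number of retries were exposed in the API. The runPolicy/backoffLimit field name and placement are assumptions about a possible API shape, not a final design, and the images are placeholders:

  apiVersion: kubeflow.org/v2beta1
  kind: MPIJob
  metadata:
    name: example
  spec:
    runPolicy:
      backoffLimit: 3              # hypothetical knob: how many times the launcher is retried
    mpiReplicaSpecs:
      Launcher:                    # in option 2, backed by a Job so failures are retried with backoff
        replicas: 1
        template:
          spec:
            containers:
            - name: launcher
              image: <your-mpi-image>
      Worker:
        replicas: 2
        template:
          spec:
            containers:
            - name: worker
              image: <your-mpi-image>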

@gaocegege
Member

SGTM, I also prefer the 2nd option.

@terrytangyuan
Member

Agreed. The second option looks better and could leverage a Job.

@ahg-g

ahg-g commented Jul 28, 2021

SGTM. +1 to using the Job API wherever possible.

@xhejtman
Contributor

The second option is functional, but it looks a bit nasty to me:
[screenshot of the mpi-launcher pod]

@xhejtman
Contributor

But maybe it's solvable just by using another type of deployment for the launcher job, so that it restarts in place?

@alculquicondor
Collaborator Author

You can definitely use restartPolicy: OnFailure
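As a minimal sketch, assuming the restartPolicy field sits at the replica-spec level of the MPIJob (as in the common training-operator API) and with a placeholder image, that would look like:

  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      restartPolicy: OnFailure     # the launcher container is restarted in place on failure
      template:
        spec:
          containers:
          - name: launcher
            image: <your-mpi-image>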

@xhejtman
Contributor

Could it be the default then?

@alculquicondor
Collaborator Author

The default in k8s is Always, which is not allowed for Jobs. I guess having OnFailure for MPIJob is fair. Do you mind sending a PR for that?

@xhejtman
Contributor

wilco

@xhejtman
Contributor

Which version should I use as the base for the PR? Should I change the v2 code only?

@alculquicondor
Collaborator Author

Yes. The change is not backwards compatible, so we can't do it for older versions.

@alculquicondor
Collaborator Author

@xhejtman are you still working on that PR? I think that would be the only change pending before we can release v2.

@xhejtman
Contributor

I already sent it. Maybe to the wrong place?

alculquicondor#1

@alculquicondor
Collaborator Author

Yes, that's my fork. Please open it against this repository.

@kpouget

kpouget commented Jan 12, 2022

FYI, here is how I waited for the worker Pods to be ready before launching the main container of the Launcher Pod. This avoids running into errors in the main container:

  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          initContainers:
          - name: wait-hostfilename
            image: <any image>
            command:
            - bash
            - -cx
            # Exit non-zero until discover_hosts.sh lists the workers (so the init
            # container is retried), then wait until every worker answers over ssh.
            - |
              [[ $(cat /etc/mpi/discover_hosts.sh | wc -l) != 1 ]] \
                && (date; echo Ready; cat /etc/mpi/discover_hosts.sh) \
                || (date; echo 'not ready ...'; sleep 10; exit 1) \
                && while read host; do
                     while ! ssh $host echo $host; do
                       date; echo "Pod $host is not up ..."; sleep 10
                     done
                     date; echo "Pod $host is ready"
                   done <<< "$(/etc/mpi/discover_hosts.sh)"
            volumeMounts:
            - mountPath: /etc/mpi
              name: mpi-job-config
            - mountPath: /root/.ssh
              name: ssh-auth

(see also my original post)

@alculquicondor
Collaborator Author

Alternatively, you can adapt the entry-point that we have for Intel (Intel doesn't do retries, so it's absolutely necessary to wait).

https://github.com/kubeflow/mpi-operator/blob/master/examples/base/intel-entrypoint.sh
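In the same spirit, a minimal sketch of such a wait loop is shown below; it is based on the discussion in this thread, not on the actual contents of intel-entrypoint.sh:

  #!/bin/bash
  # Sketch: block until every worker listed by discover_hosts.sh answers over ssh,
  # then hand control to the real launcher command (e.g. "mpirun ...").
  # Assumes discover_hosts.sh already lists every expected worker; see the
  # init-container example above for a check on the number of hosts.
  while read -r host; do
    until ssh -o ConnectTimeout=2 "$host" true; do
      echo "worker $host is not ready yet, retrying..."
      sleep 5
    done
  done < <(/etc/mpi/discover_hosts.sh)
  exec "$@"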

@salanki

salanki commented Feb 28, 2022

+1 to this. Depending on the launcher failing is pretty hackish.
