Propose a new architecture with focus on scalability and robustness #360
Conversation
- With the above changes, **the following objects can be removed**:
  - The ServiceAccount+Role+RoleBinding for the launcher.
  - The `kubectl-delivery` init container in the launcher, as there is no need to obtain IPs, speeding up startup time.
Do you mean the init container kubectl-delivery
will be removed? If so, how can we ensure the launcher starts after the workers?
There are 2 options:
- Do nothing and leave that handling to the gang scheduler. For this reason it is important that the launcher can do retries.
- Do not create the launcher until all the worker pods are running. Then everything is handled by the controller, with no need for extra cache syncs.
I prefer option 1.
I don't think so. You would like to speed up startup time, but you don't get that if you rely on the launcher retrying on failures. And the scheduler should not know about the job's startup strategy.
At the same time, how do you determine the real reason for a launcher failure with option 1? For example:
- the workers do not start, so the launcher fails and restarts
- the training job fails, so the launcher fails.
What do you mean by "job startup strategy"?
As for differentiating failures, I don't think we can or should do that. There is another type of failure which is kind of a mix of the two you described: a worker pod gets evicted by kubelet after the job already started. We cannot easily differentiate this one from a case where the user's code had a crash, for the purpose of retrying.
But perhaps option 2 is reasonable. WDYT?
If we want to support retries, then as a by-product we want option 1, correct?
We can still use option 2. It might make startups less flaky, but it's not a guarantee anyway: a worker can simply fail between the time it reported running and the time the launcher started running.
and has to be paid for every job. The API calls also cause additional stress on the apiserver as the number of launchers increases.
- The launcher pod has execution permissions on any other Pod in the namespace, which can be a security concern.
The launcher only has execution permission on the worker pods belonging to its job; you can find it in the code.
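For reference, here is a rough sketch of what such a per-job Role can look like (the job and pod names are hypothetical, and the exact rules generated by the controller may differ):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myjob-launcher          # hypothetical per-job Role created by the controller
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
  resourceNames: ["myjob-worker-0", "myjob-worker-1"]   # one entry per worker pod
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
  resourceNames: ["myjob-worker-0", "myjob-worker-1"]
```

Because `resourceNames` must list every worker pod, the Role grows linearly with the number of workers, which is the size concern raised in the next comment.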
Ah, I missed that. However, that doesn't scale as the number of workers increases (k8s objects have a size limit).
Updated.
If we replace kubectl exec with ssh, we can also support the elastic feature, so I do not think it is a problem for us, if I understand it correctly.
stress on the apiserver as the number of launchers increases.
- The launcher pod has execution permissions on any other Pod in the namespace, which can be a security concern.
- The v1 controller doesn't implement launcher pod retries, although there are
What are the use cases for launcher pod retries?
Running jobs on spot/preemptible VMs.
From my perspective, I don't think the launcher pod should be scheduled to spot/preemptible VMs. For the worker pods, yes, because worker pods are stateless. However, the launcher pod seems to be a stateful instance, whose restart means a new job.
Still, when running the workers on preemptible VMs, if any of them fails, the entire job fails, including the launcher. So we need launcher retries.
@alculquicondor What MPI implementation are you expecting users to use after the launcher gets restarted, to make sure the workers are aware and can reconnect to the new launcher?
The process goes like this:
- a worker terminates unexpectedly
- the launcher notices the worker failure, closes the remaining SSH connections, and terminates with a failure
- the launcher restarts and establishes new SSH connections to the workers
This is independent of the MPI implementation.
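A minimal sketch of how such retries could be expressed, assuming the launcher runs as a plain Kubernetes Job (the name, image, and command below are hypothetical, not the operator's actual output):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob-launcher            # hypothetical launcher Job
spec:
  backoffLimit: 3                 # number of launcher retries before the job is marked failed
  template:
    spec:
      restartPolicy: OnFailure    # restart the launcher pod when mpirun exits non-zero
      containers:
      - name: launcher
        image: example.com/my-mpi-image:latest   # assumption: user-provided image
        command: ["mpirun", "-n", "2", "/opt/app/train"]
```

Each retry simply re-runs `mpirun`, which opens fresh SSH connections to whatever worker pods exist at that point.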
> Still, when running the workers on preemptible VMs, if any of them fails, the entire job fails, including the launcher. So we need launcher retries.

From what I have observed in other HPC systems, like Slurm, for jobs that are not fault-tolerant, when some of the workers fail, the system does not retry via a launcher restart. Instead, the entire job is marked as failed and its resources are released. The system re-queues the failed job (creating a new job) if the user requests retries. This process is compatible with the current design of mpi-operator.
OTOH, it kind of goes against the declarative and fault-tolerant approach of k8s APIs, including the Job API. Retries are what Kubernetes users would expect, and if they don't need them, they can always disable them.
I think it is useful if we make automatic retries an option that defaults to 0.
Thanks for your comments so far. Please let me know if I missed anything else in the background; I got all the details by reading the code.
and has to be paid for every job. The API calls also cause additional stress on the apiserver as the number of launchers increases.
- The launcher pod has execution permissions on any other Pod in the namespace, which can be a security concern.
Updated.
(force-pushed from cb33c59 to e96473a)
```yaml
apiVersion: apps/v1
kind: StatefulSet
```
I am wondering how to support elastic mode with a StatefulSet. We may add/remove workers on the fly.
You just change `.spec.replicas`.
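For illustration, scaling the worker StatefulSet only requires updating the replica count; a minimal sketch (all names and the image are hypothetical) could look like this:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myjob-worker       # hypothetical worker StatefulSet managed by the controller
spec:
  replicas: 5              # add/remove workers on the fly by changing this value
  serviceName: myjob-worker
  selector:
    matchLabels:
      app: myjob-worker
  template:
    metadata:
      labels:
        app: myjob-worker
    spec:
      containers:
      - name: worker
        image: example.com/my-mpi-image:latest   # assumption: user-provided image
```

The same change can be applied with `kubectl scale statefulset myjob-worker --replicas=5`, so an elastic controller only has to patch one field to add or remove workers.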
There are 2 main open discussions pending:
Please leave your thoughts.
I asked for 10 min in the next AutoML and Training WG meeting. Would you be able to make it @terrytangyuan @gaocegege?
I cannot make it due to scheduling conflicts. In the meantime, I'd encourage all existing reviewers to check if your comments can be resolved or discuss if there are further concerns on the proposal (which might be more efficient as we are located in more than 4 timezones).
+1. Can we also assign an "approver"?
We should try to reach consensus from people listed in OWNERS. People from @kubeflow/wg-training-leads can also approve after that.
@rongou any thoughts?
One topic that was raised at today's meeting was where to place the code. This question is important because there are breaking changes and the new controller wouldn't be able to properly process an existing job. I was thinking that we could just modify the v1 code, as it hasn't been released, AFAIK. Is this not the case? Should we create a new folder, like ...?
It's true that v1 is not officially released yet, but I believe there are several companies that are already running it in production, which might be concerning.
I see. Then maybe we can create the new controller in an entirely new module, so that we can start with fresh dependencies and not be blocked by kube-batch's dependencies (see #364).
If they don't respond, can we assume lazy consensus?
I believe the API version … Also, as attendees suggested at the Kubeflow Training meeting, it would be better to present metrics showing that the new design does improve scalability and launching time before this proposal is adopted.
The problem is not necessarily the API. Let's say someone has running jobs with the existing v1 controller. If they upgrade to the new proposed controller, the existing jobs would have orphaned resources that the new controller won't manage. And the existing workers wouldn't work, because they don't have stable hostnames.
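For context on the stable-hostname point: the proposed design gets predictable worker DNS names from a StatefulSet combined with a headless Service, roughly like the sketch below (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myjob-worker       # hypothetical headless Service for the workers
spec:
  clusterIP: None          # headless: gives each StatefulSet pod a stable DNS record
  selector:
    app: myjob-worker
```

With `serviceName` pointing at this Service, workers become reachable at stable names like `myjob-worker-0.myjob-worker.<namespace>.svc`, so the launcher never needs to resolve pod IPs through the apiserver.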
In any case, I think the decision of where to place the code comes later. Can we have an agreement on the direction of the new architecture? See #360 (comment)
I think @zw0610 raised a good point here. The proposed changes are very significant. I think in this case it makes sense to present evidence of the improvements (perhaps benchmarking on a fork) before we merge it back to the upstream repo. This way we are all confident that the changes are beneficial to the community. @alculquicondor I think you've put together a good starting point in the "analysis" section of the proposal. It would be great to see some real benchmarks/metrics so it's more convincing in practice rather than just in theory.
Thanks, I'll work on that throughout the week.
I ran some experiments with these characteristics:
Then I ran two jobs, one with 3 workers and one with 20 workers. Each task does the following:
So really, most of the time is spent on setup, but some communication is being tested too. For each run, I recorded the Start time and Completion time as reported by the mpi-operator controller. For 3 workers, I got an average of 9s for the current operator and 3s for the SSH proposal. All runs here:

You can see an important improvement. I used a reasonably small cluster and somewhat small jobs. Let me know if you would like more details about my quick experiments.

However, let me reiterate the theoretical analysis already presented, which I think is important on its own. The apiserver is a critical component in the cluster. We should avoid stressing it with requests whose only purpose is pod-to-pod communication. The apiserver has limits on open connections and it throttles API requests. With the current architecture, just a few dozen jobs could starve API requests from critical components such as the scheduler or the controller manager.
@terrytangyuan @zw0610 anything to add?
Thanks for putting this together! I think both the theoretical analysis and the measured improvements look great to me. As it's already close to the weekend, please leave additional comments by June 22nd and I will merge this if there are no major concerns by then.
I feel that this proposal is a strong case for a revamp based on #360 (comment).
/lgtm
/lgtm
Thanks for the proposal.
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: terrytangyuan
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Fixes #315