Move to kubeadm as default deployment #3301
Comments
Why?
@woopstar Thanks for starting this! A question I think we need to answer: how much variance do we actually need between ansible-templated and kubeadm-generated static pod manifests? If it turns out we can use kubeadm to generate a manifest that will work in place for either provisioning path, could we retire the ansible templates and always use kubeadm? See: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-controlplane

That's how I'd like to approach this transformation effort across the project, phase by phase. In cases where an ansible-native approach has no advantage over an equivalent kubeadm phase, I think we should always favor kubeadm and remove redundant options.
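To make the kubeadm-phase idea concrete: kubeadm can generate the control-plane static pod manifests from a single config file. A minimal sketch follows; the `v1beta1` API version, the Kubernetes version, and the endpoint address are assumptions and will differ depending on the kubeadm release in use.

```yaml
# kubeadm-config.yaml -- hypothetical minimal config for generating
# control-plane manifests; field names follow the kubeadm.k8s.io/v1beta1
# ClusterConfiguration API and may vary between kubeadm versions.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
controlPlaneEndpoint: "10.0.0.10:6443"   # assumed load balancer / VIP address
```

With a config like this, `kubeadm init phase control-plane all --config kubeadm-config.yaml` (or `kubeadm alpha phase controlplane all` on older releases) writes the static pod manifests into `/etc/kubernetes/manifests`, which is what would replace the ansible-templated manifests.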
This was something @mattymo wanted before we switched.
My opinion is that we do not want variation. There should be no need to maintain two versions of the manifest files. If we want to alter something in them, we should step back and ask "why do we want to do so?" and then perhaps submit it as a PR to kubeadm itself, so we only maintain templates in one place.
Updated the checklist. Since we all seem to agree that kubeadm-only phases are the path forward:
I guess if we rename our manifests and files to match the scheme provided by kubeadm, the transition between non-kubeadm and kubeadm becomes fairly easy, as the files will simply be overwritten by either config?
I still don't have a clear answer to this.

I agree. But I guess @mattymo should answer this, as it was his initial request.

Just let us know how we can help.
We need to test and verify that the cloud providers work as expected with kubeadm. I think that is the last part.

Can we check that scaling up the cluster (adding a new master and/or node) with kubeadm works?

Is this already handled? How and where?
* Switch to kubeadm deployment mode (discussed in #3301)
* Add non-kubeadm upgrade to kubeadm cluster
Does anyone know how to convert a non-kubeadm installation to a kubeadm installation? I can't find any info. We currently use Kubernetes 1.16.8 with a Kubespray fork that supports non-kubeadm deployments: https://github.com/southbridgeio/kubespray
This issue is for building a checklist of what we need to do before we can move to kubeadm as the default deployment.
* `kube_basic_auth` works. It requires volume mounts to the apiserver. Mount basic auth or token auth dirs to support it on kubeadm deployments #3351
* `kube_token_auth` works. It requires volume mounts to the apiserver. Mount basic auth or token auth dirs to support it on kubeadm deployments #3351

The following settings are currently missing:

* `profiling`
* `enable-aggregator-routing`
* `repair-malformed-updates`
* `anonymous-auth`
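The apiserver volume mounts that `kube_basic_auth` and `kube_token_auth` need can be expressed directly in the kubeadm config instead of patching manifests. A hedged sketch, assuming the `kubeadm.k8s.io/v1beta1` API and a hypothetical `/etc/kubernetes/users` directory for the credentials file:

```yaml
# Hypothetical fragment of a kubeadm ClusterConfiguration; the auth file
# path and volume name are illustrative, not Kubespray's actual layout.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    basic-auth-file: /etc/kubernetes/users/known_users.csv   # assumed path
  extraVolumes:
    - name: basic-auth-dir          # illustrative volume name
      hostPath: /etc/kubernetes/users
      mountPath: /etc/kubernetes/users
      readOnly: true
      pathType: DirectoryOrCreate
```

`extraVolumes` entries become host-path mounts on the kube-apiserver static pod, so the auth files survive kubeadm regenerating the manifest; a `token-auth-file` entry would follow the same pattern for `kube_token_auth`.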
Please feel free to add comments and I'll update the list here.