Joining nodes to local development cluster #115319
/sig node
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
assign this issue to me.
@MathurUtkarsh I'd be glad to fix this (based on your instructions) if you think it makes sense for the community.
local-up-cluster is meant to be a minimum viable tool for a single-node cluster on the current host, as opposed to being kubeadm compatible (for which you should use …).

I don't think you need to change the pod CIDR to use kubeadm; you can configure kubeadm to match instead?

cc @dims
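As an illustration of that approach (mine, not from the thread): a kubeadm `ClusterConfiguration` can simply declare the same CIDRs that `local-up-cluster.sh` was started with; the values below are the ones that appear later in this thread.

```yaml
# Illustrative: make kubeadm match local-up-cluster.sh's networking
# instead of patching the script.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.88.0.0/16      # matches POD_CIDR used below
  serviceSubnet: 10.0.0.0/24   # matches SERVICE_CLUSTER_IP_RANGE used below
```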
/remove-sig node
@BenTheElder I can understand that the scope of this local cluster is limited. But on the other hand, these two lines … The end result is that I'm able to test any Kubernetes code change on a multi-node (virtualized) Kubernetes cluster in 5-6 minutes. My flow is the following: …
I think this is the fastest way to develop Kubernetes. In summary: small change, huge benefit :) I agree that I don't have to change the pod CIDR, except that I would like to test some network features for which I do have to change it. It isn't a game changer, but it makes things easier.
@mhmxs please go ahead and file a PR. If you can add a small markdown doc as well about this flow, let's see how that looks and decide? (yes, +1 to really small changes to local-up-cluster)
@dims Let's first describe my env here, to give a better understanding of how I solved all the issues around the multi-node setup.

I have a bunch of environment variables on all the nodes:

```sh
export NET_PLUGIN=cni
export ALLOW_PRIVILEGED=1
export ETCD_HOST=${MASTER_IP}
export API_HOST=${MASTER_IP}
export ADVERTISE_ADDRESS=${MASTER_IP}
export API_CORS_ALLOWED_ORIGINS=".*"
export KUBE_CONTROLLERS="*,bootstrapsigner,tokencleaner"
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
export POD_CIDR="10.88.0.0/16"
export SERVICE_CLUSTER_IP_RANGE="10.0.0.0/24"
```

I start a single-node instance as usual:

```sh
KUBELET_HOST=0.0.0.0 HOSTNAME_OVERRIDE=${MASTER_NAME} ./hack/local-up-cluster.sh -O
```

In the next step I install a CNI driver; I prefer Calico:

```sh
curl -Ls https://docs.projectcalico.org/manifests/calico.yaml | kubectl apply -f -
```

Time to generate the join token:

```sh
kubeadm token create --print-join-command > /var/run/kubernetes/join.sh
```
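For reference, the file produced by `--print-join-command` contains a single `kubeadm join` line; the address, token, and hash below are made up:

```sh
# Illustrative contents of /var/run/kubernetes/join.sh (values are fake).
kubeadm join 192.168.56.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash-of-the-server-ca>
```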
Allow anonymous access to the `cluster-info` ConfigMap in `kube-public`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeadm:bootstrap-signer-clusterinfo
  namespace: kube-public
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:bootstrap-signer-clusterinfo
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:anonymous
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeadm:bootstrap-signer-clusterinfo
  namespace: kube-public
rules:
- apiGroups:
  - ''
  resources:
  - configmaps
  verbs:
  - get
```
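A quick way to verify the binding works (my addition, not part of the original flow): an unauthenticated request should now be able to read the `cluster-info` ConfigMap.

```sh
# Should return the ConfigMap JSON without presenting any credentials.
curl -sk https://${MASTER_IP}:6443/api/v1/namespaces/kube-public/configmaps/cluster-info
```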
Then generate the configs:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: $(base64 -iw0 /var/run/kubernetes/server-ca.crt)
    server: https://${MASTER_IP}:6443/
  name: ''
contexts: []
current-context: ''
kind: Config
preferences: {}
users: []
---
apiServer:
  timeoutForControlPlane: 2m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: local-up-cluster
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: ${KUBE_VERSION}
networking:
  dnsDomain: cluster.local
  serviceSubnet: ${SERVICE_CLUSTER_IP_RANGE}
```
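The comment doesn't show where these two documents end up; presumably they are published as the ConfigMaps that `kubeadm join` reads, roughly like this (the file names here are my assumption):

```sh
# Assumed glue step: the kubeconfig becomes the cluster-info ConfigMap
# (used for join discovery); the ClusterConfiguration becomes kubeadm-config.
kubectl -n kube-public create cm cluster-info \
  --from-file=kubeconfig=/var/run/kubernetes/cluster-info.yaml
kubectl -n kube-system create cm kubeadm-config \
  --from-file=ClusterConfiguration=/var/run/kubernetes/kubeadm.yaml
```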
Kubelet also needs some permissions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet:operate
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet:operate
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:anonymous
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet:operate
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeadm:bootstrap-signer-kubeadm-config
  namespace: kube-system
rules:
- apiGroups:
  - ''
  resourceNames:
  - kubeadm-config
  - kube-proxy
  - kubelet-config
  resources:
  - configmaps
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeadm:bootstrap-signer-kubeadm-config
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:bootstrap-signer-kubeadm-config
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:bootstrap:${token_id}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeadm:bootstrap-signer-kubeadm-config
rules:
- apiGroups:
  - ''
  resources:
  - nodes
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:bootstrap-signer-kubeadm-config
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubeadm:bootstrap-signer-kubeadm-config
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:bootstrap:${token_id}
```
Create the ConfigMaps based on the existing config:

```sh
sed "s/master-node/''/" /var/run/kubernetes/kube-proxy.yaml > /var/run/kubernetes/config.conf
kubectl delete cm -n kube-system kube-proxy || :
kubectl create cm -n kube-system --from-file=/var/run/kubernetes/config.conf kube-proxy
cp -f /var/run/kubernetes/kubelet.yaml /var/run/kubernetes/kubelet
kubectl delete cm -n kube-system kubelet-config || :
kubectl create cm -n kube-system --from-file=/var/run/kubernetes/kubelet kubelet-config
```

On the worker nodes we need two systemd service files:
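The unit files themselves weren't included in the comment; here is a minimal sketch of what the kube-proxy one might look like, assuming the binaries were synced to `/usr/local/bin` and the config generated above is reused (all paths and flags are assumptions; the kubelet unit would follow the same pattern):

```sh
# Illustrative only: the original unit files were not posted, so the
# binary path and flags below are assumptions.
cat > /etc/systemd/system/kube-proxy.service <<'EOF'
[Unit]
Description=kube-proxy for a local-up-cluster worker
After=network-online.target

[Service]
ExecStart=/usr/local/bin/kube-proxy --config=/var/run/kubernetes/config.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
```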
Finally, start the components on the worker node:

```sh
systemctl restart kube-proxy
sh /var/run/kubernetes/join.sh
```

The end result: a multi-node cluster built from local binaries.
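The resulting node list wasn't preserved in the thread; the usual way to confirm that the workers joined:

```sh
# Expect the master plus each joined worker to show up as Ready
# once the CNI is up.
kubectl get nodes -o wide
```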
I know it is a bit complicated, but automation is the key here. I have created a Vagrant-based setup with a few simple commands: … My environment does a few other tricks: it creates an NFS share on the master, configures networking and DNS, and downloads dependencies to decrease startup time. I spent a few days figuring these out :) Does it make sense to write this handbook? Where should the docs be located?
FWIW: https://kind.sigs.k8s.io/ can run a local multi-node cluster and runs a faster subset of the build.

Why would we use …?

I'm not objecting to the cluster-up script changes; I leave that to @dims. But I do think cluster-up serves an important role as a minimum viable bootstrap without kubeadm etc., whereas I'd expect fully-kubeadm when using kubeadm ... and I think we have docs for this.

cc @neolit123
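For context, multi-node kind clusters are driven by a small config file; this is standard kind usage, not something from this issue:

```yaml
# kind-multinode.yaml: one control-plane node and two workers.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

Such a cluster is created with `kind create cluster --config kind-multinode.yaml`.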
On the other hand, I'm a storage engineer and I need a separate kernel on each node. I can agree that my use-case doesn't fit everybody. The only significant change here is the enable-bootstrap part.

That's why I'm not sure the documentation makes any sense for the community. (And this part is located in my private repo.) But I'm sure my change request should help other kernel-space devs create multi-node clusters. Correct me if I'm wrong @BenTheElder, but kubeadm uses images, not the raw binaries, so I have to distribute locally built images to the nodes or build images on the nodes.
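For what it's worth, if kind were used instead, the standard way to run locally built Kubernetes is to bake a node image from the source checkout rather than distributing raw binaries:

```sh
# Build a kind node image from a local kubernetes/kubernetes checkout
# (tagged kindest/node:latest by default), then boot a cluster from it.
kind build node-image
kind create cluster --image kindest/node:latest
```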
That's a good use-case for other solutions indeed. We used to have a vagrant local-devel option in-tree before minikube, but at this point they're all out of tree. I'm not sure where we'd put this.

From your prior comments I thought you were spinning up all nodes from source and joining the additional nodes to local-up-cluster with …
I spin up the single-node cluster with `./hack/local-up-cluster.sh` …
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this: …

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I created a new cluster by executing `local-up-cluster.sh`, and I tried to join additional nodes via `kubeadm`. My goal with this is to be able to test anything on my local box on a distributed system at light speed. I had to change the script to solve all the joining problems, and it makes sense to extend the local cluster's capabilities for every developer.

Here is the diff: …

What do you think?