- Versions of all the components used
- KVM Setup
- Setting up the VMs before installing the components
- Creating Certificates & Authentication to enable secure communication across all the Kubernetes components
- Downloading all the components RPM files
- Setting Up Etcd
- Setting up Flannel
- Setting up Kubernetes binaries on the Nodes
- Modifying the configuration files of the components
- Testing the cluster
- Troubleshooting
- References
Note: This document describes running a two-node (1 Master & 1 Worker) Kubernetes cluster on the LinuxONE Community Cloud, using the LinuxONE-supported Linux distribution SUSE Linux Enterprise Server 12 SP3.
The cluster uses Docker as the container runtime, the components are installed as systemd services, and Flannel provides the overlay network. The versions of the components used are as follows:
Component/Infrastructure | Version
---|---
LinuxONE instance flavour | LinuxONE-Medium
LinuxONE instance OS | SLES12 SP3
SLES VMs | SLES12 SP3
Go | 1.10.2
Kubernetes | 1.9.6
Docker | 17.09.1-ce
Etcd | 3.3.6
Flannel | 0.9.1
Refer to this gist to set up virtual machines using KVM on the LinuxONE Community Cloud.
- Create two virtual machines after setting up KVM as explained above and install SLES12 SP3 on both nodes.
- While installing, make sure to register the VMs with the SLES server and install the following system extensions or add-ons:
  - Containers Module 12 s390x (on both Master & Worker)
  - SUSE Linux Enterprise High Availability Extension 12 SP3 s390x (on Worker)

  You might need to register on the SLES website to get the registration code for the installation; a free 60-day evaluation version is available here. Installing the High Availability Extension requires a separate registration on the SLES website, apart from the previous one; a 60-day evaluation version is available here as well.
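If you skipped the add-ons during installation, they can usually be registered afterwards from the command line with SUSEConnect. The product identifiers below are assumptions based on the standard SLES 12 SP3 naming for s390x, so verify them against the output of `SUSEConnect --list-extensions` before running the commands:

```
# Register the base system first (substitute your own registration code and e-mail)
sudo SUSEConnect -r <regcode> -e <email>

# List the modules/extensions available for this system and their exact identifiers
sudo SUSEConnect --list-extensions

# Assumed product identifiers -- check them against the list above
sudo SUSEConnect -p sle-module-containers/12/s390x        # Containers Module (Master & Worker)
sudo SUSEConnect -p sle-ha/12.3/s390x -r <ha-regcode>     # High Availability Extension (Worker)
```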
- SSH into the nodes and run each of the following steps on the node(s) indicated for that step.
- Note down the IP addresses of both nodes. In the context of this document, the IP addresses of the nodes are as follows:

  | Node | IP |
  |---|---|
  | Master | 192.168.100.230 |
  | Worker | 192.168.100.169 |
- Make sure the firewall was disabled while installing the virtual machines; if it was not, disable it by running:

  ```
  SuSEfirewall2 off
  ```
- Make sure to set the hostname of both machines so that one can be told apart from the other; preferably name them k8s-master (Master node) and k8s-worker (Worker node). The hostname can be set with:

  ```
  hostnamectl set-hostname <hostname>
  ```
- Install Docker on the Worker node(s). To install Docker on SLES, the machine must be registered with the SLES server and the add-on module Containers Module 12 s390x must be installed, as described in the earlier step. Use the following commands to install Docker:

  ```
  sudo zypper install -y docker
  sudo systemctl enable docker.service
  sudo systemctl start docker.service
  ```
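Optionally, confirm that the Docker daemon came up correctly before continuing:

```
sudo systemctl status docker --no-pager
sudo docker info
```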
- Create the following directories, which will later hold certificates and configuration files.

  On the Master node:

  ```
  sudo mkdir -p /srv/kubernetes
  sudo mkdir -p /var/lib/{kube-controller-manager,kube-scheduler}
  sudo mkdir /var/lib/flanneld/
  ```

  On the Worker node(s):

  ```
  sudo mkdir -p /srv/kubernetes
  sudo mkdir -p /var/lib/{kube-proxy,kubelet}
  sudo mkdir /var/lib/flanneld/
  ```
- It is recommended to run Kubernetes with swap disabled. Disable swap on all the nodes by running:

  ```
  swapoff -a
  sudo sed -i 's/.*swap/#&/' /etc/fstab
  ```
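To confirm that swap is fully disabled, check that no swap devices remain active:

```
sudo swapon --show   # should produce no output
free -h              # the Swap line should show 0 total
```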
Creating Certificates & Authentication to enable secure communication across all the Kubernetes components
Run all of the following steps on the Master node to generate the files there, then copy the specific certificates and config files mentioned below to the worker node(s).
```
cd /srv/kubernetes
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
```
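Optionally, inspect the generated CA certificate to confirm its subject and validity period before signing the other certificates with it:

```
openssl x509 -in ca.pem -noout -subject -dates
```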
```
cat > openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 127.0.0.1
IP.2 = 192.168.100.230 # Master IP
IP.3 = 100.65.0.1 # Service IP
EOF
```
```
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out apiserver.pem -days 7200 -extensions v3_req -extfile openssl.cnf
cp apiserver.pem server.crt
cp apiserver-key.pem server.key
```
```
openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=admin"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 7200
```
```
openssl genrsa -out kube-controller-manager-key.pem 2048
openssl req -new -key kube-controller-manager-key.pem -out kube-controller-manager.csr -subj "/CN=kube-controller-manager"
openssl x509 -req -in kube-controller-manager.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out kube-controller-manager.pem -days 7200
```
```
openssl genrsa -out kube-scheduler-key.pem 2048
openssl req -new -key kube-scheduler-key.pem -out kube-scheduler.csr -subj "/CN=kube-scheduler"
openssl x509 -req -in kube-scheduler.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out kube-scheduler.pem -days 7200
```
```
cat > worker-openssl.cnf << EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.100.169
EOF
```
```
openssl genrsa -out kube-proxy-key.pem 2048
openssl req -new -key kube-proxy-key.pem -out kube-proxy.csr -subj "/CN=kube-proxy"
openssl x509 -req -in kube-proxy.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out kube-proxy.pem -days 7200
```
Note: 'linux-xpuq' here refers to the hostname of the worker node.
```
openssl genrsa -out kubelet-key.pem 2048
openssl req -new -key kubelet-key.pem -out kubelet.csr -subj "/CN=system:node:linux-xpuq"
openssl x509 -req -in kubelet.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out kubelet.pem -days 7200 -extensions v3_req -extfile worker-openssl.cnf
```
```
cat > etcd-openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth,serverAuth
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.100.230
EOF
```
```
openssl genrsa -out etcd.key 2048
openssl req -new -key etcd.key -out etcd.csr -subj "/CN=etcd" -extensions v3_req -config etcd-openssl.cnf -sha256
openssl x509 -req -sha256 -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -in etcd.csr -out etcd.crt -extensions v3_req -extfile etcd-openssl.cnf -days 7200
```
```
openssl genrsa -out service-account-key.pem 2048
openssl req -new -key service-account-key.pem -out service-account.csr -subj "/CN=service-account"
openssl x509 -req -in service-account.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out service-account.pem -days 7200
```
```
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
kubectl config set-cluster linux1.k8s --certificate-authority=/srv/kubernetes/ca.pem --embed-certs=true --server=https://192.168.100.230:6443
kubectl config set-credentials admin --client-certificate=/srv/kubernetes/admin.pem --client-key=/srv/kubernetes/admin-key.pem --embed-certs=true --token=$TOKEN
kubectl config set-context linux1.k8s --cluster=linux1.k8s --user=admin
kubectl config use-context linux1.k8s
cat ~/.kube/config # Creates the config file
```
```
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
kubectl config set-cluster linux1.k8s --certificate-authority=/srv/kubernetes/ca.pem --embed-certs=true --server=https://192.168.100.230:6443 --kubeconfig=/var/lib/kube-controller-manager/kubeconfig
kubectl config set-credentials kube-controller-manager --client-certificate=/srv/kubernetes/kube-controller-manager.pem --client-key=/srv/kubernetes/kube-controller-manager-key.pem --embed-certs=true --token=$TOKEN --kubeconfig=/var/lib/kube-controller-manager/kubeconfig
kubectl config set-context linux1.k8s --cluster=linux1.k8s --user=kube-controller-manager --kubeconfig=/var/lib/kube-controller-manager/kubeconfig
kubectl config use-context linux1.k8s --kubeconfig=/var/lib/kube-controller-manager/kubeconfig
```
```
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
kubectl config set-cluster linux1.k8s --certificate-authority=/srv/kubernetes/ca.pem --embed-certs=true --server=https://192.168.100.230:6443 --kubeconfig=/var/lib/kube-scheduler/kubeconfig
kubectl config set-credentials kube-scheduler --client-certificate=/srv/kubernetes/kube-scheduler.pem --client-key=/srv/kubernetes/kube-scheduler-key.pem --embed-certs=true --token=$TOKEN --kubeconfig=/var/lib/kube-scheduler/kubeconfig
kubectl config set-context linux1.k8s --cluster=linux1.k8s --user=kube-scheduler --kubeconfig=/var/lib/kube-scheduler/kubeconfig
kubectl config use-context linux1.k8s --kubeconfig=/var/lib/kube-scheduler/kubeconfig
```
```
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
kubectl config set-cluster linux1.k8s --certificate-authority=/srv/kubernetes/ca.pem --embed-certs=true --server=https://192.168.100.230:6443 --kubeconfig=kubelet.kubeconfig
kubectl config set-credentials kubelet --client-certificate=/srv/kubernetes/kubelet.pem --client-key=/srv/kubernetes/kubelet-key.pem --embed-certs=true --token=$TOKEN --kubeconfig=kubelet.kubeconfig
kubectl config set-context linux1.k8s --cluster=linux1.k8s --user=kubelet --kubeconfig=kubelet.kubeconfig
kubectl config use-context linux1.k8s --kubeconfig=kubelet.kubeconfig
scp kubelet.kubeconfig root@192.168.100.169:/var/lib/kubelet/kubeconfig
```
```
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
kubectl config set-cluster linux1.k8s --certificate-authority=/srv/kubernetes/ca.pem --embed-certs=true --server=https://192.168.100.230:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=/srv/kubernetes/kube-proxy.pem --client-key=/srv/kubernetes/kube-proxy-key.pem --embed-certs=true --token=$TOKEN --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context linux1.k8s --cluster=linux1.k8s --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context linux1.k8s --kubeconfig=kube-proxy.kubeconfig
scp kube-proxy.kubeconfig root@192.168.100.169:/var/lib/kube-proxy/kubeconfig
```
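The kubeconfig files above are copied to the worker by the `scp` commands, but the worker-side services also reference certificates under /srv/kubernetes/ (see the flanneld and kubelet configuration later in this document). A minimal sketch of copying those files, assuming the list below matches the paths referenced by your worker configs:

```
cd /srv/kubernetes
# Files referenced by flanneld (ca.pem, etcd.crt, etcd.key) and by the kubelet (server.crt, server.key)
scp ca.pem etcd.crt etcd.key server.crt server.key root@192.168.100.169:/srv/kubernetes/
```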
In this setup, we will install all the components using RPM files built for SLES with the SUSE Open Build Service, available here. The following rpm commands are useful for inspecting and installing the packages:
- `rpm -qpi file.rpm` → gives detailed information about the package
- `rpm -qpl file.rpm` → shows all the files installed by the package
- `rpm -qp --requires file.rpm` → lists all dependencies required by the package
- `rpm -U file.rpm` → installs the package
```
cd ~/
wget https://download.opensuse.org/repositories/home:/mfriesenegger:/branches:/devel:/CaaSP:/Head:/ControllerNode/SLE_12_SP3/s390x/etcd-3.3.1-3.1.s390x.rpm
wget https://download.opensuse.org/repositories/home:/mfriesenegger:/branches:/devel:/CaaSP:/Head:/ControllerNode/SLE_12_SP3/s390x/etcdctl-3.3.1-3.1.s390x.rpm
rpm -U etcd-3.3.1-3.1.s390x.rpm
rpm -U etcdctl-3.3.1-3.1.s390x.rpm
```
Modify the file /usr/lib/systemd/system/etcd.service so that it reads as shown below:
```
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
Environment="ETCD_UNSUPPORTED_ARCH=s390x"
EnvironmentFile=-/etc/sysconfig/etcd
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/sbin/etcd --name=\"${ETCD_NAME}\" \
  --data-dir=\"${ETCD_DATA_DIR}\" \
  --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" \
  --cert-file=\"${ETCD_CERT_FILE}\" \
  --key-file=\"${ETCD_KEY_FILE}\" \
  --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" \
  --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" \
  --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" \
  --peer-trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" \
  --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" \
  --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" \
  --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" \
  --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" \
  --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536
Nice=-10
IOSchedulingClass=best-effort
IOSchedulingPriority=2

[Install]
WantedBy=multi-user.target
```
Also initialize the variables in the configuration file /etc/sysconfig/etcd as shown below:
```
# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.100.230:2380"
# The value "http://127.0.0.1:2379" can also be used for ETCD_LISTEN_CLIENT_URLS, but it is not secure
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.230:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.230:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="master=https://192.168.100.230:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-0"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.230:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
ETCD_CERT_FILE="/srv/kubernetes/etcd.crt"
ETCD_KEY_FILE="/srv/kubernetes/etcd.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/srv/kubernetes/ca.pem"
ETCD_PEER_CERT_FILE="/srv/kubernetes/etcd.crt"
ETCD_PEER_KEY_FILE="/srv/kubernetes/etcd.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
ETCD_DEBUG="true"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
ETCD_LOG_PACKAGE_LEVELS="DEBUG"
```
Now, run the following commands to start etcd:

```
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
```
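Once etcd is up, you can verify that it is healthy and reachable over TLS; this check assumes the etcdctl v2 command set shipped with this RPM and uses the same client certificate flags as the network configuration step further below:

```
etcdctl --endpoints https://192.168.100.230:2379 \
  --cert-file /srv/kubernetes/etcd.crt \
  --key-file /srv/kubernetes/etcd.key \
  --ca-file /srv/kubernetes/ca.pem \
  cluster-health
```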
Flannel should be installed on all the nodes.

```
cd ~/Downloads
wget https://download.opensuse.org/repositories/home:/mfriesenegger:/branches:/devel:/CaaSP:/Head:/ControllerNode/SLE_12_SP3/s390x/flannel-0.9.1-5.2.s390x.rpm
rpm -U flannel-0.9.1-5.2.s390x.rpm
```
This should be run only once, and only on the Master node:

```
etcdctl --endpoints https://192.168.100.230:2379 \
  --cert-file /srv/kubernetes/etcd.crt \
  --key-file /srv/kubernetes/etcd.key \
  --ca-file /srv/kubernetes/ca.pem \
  set /coreos.com/network/config '{ "Network": "100.64.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"} }'
```
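To confirm the network configuration was stored, you can read the key back with the same flags:

```
etcdctl --endpoints https://192.168.100.230:2379 \
  --cert-file /srv/kubernetes/etcd.crt \
  --key-file /srv/kubernetes/etcd.key \
  --ca-file /srv/kubernetes/ca.pem \
  get /coreos.com/network/config
```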
Initialize the variables required for flanneld in the configuration file /etc/sysconfig/flanneld as shown below:
```
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.100.230:2379"

# ETCD Prefix for the -etcd-prefix argument
FLANNEL_ETCD_KEY="/coreos.com/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-subnet-file=/var/lib/flanneld/subnet.env \
  -etcd-cafile=/srv/kubernetes/ca.pem \
  -etcd-certfile=/srv/kubernetes/etcd.crt \
  -etcd-keyfile=/srv/kubernetes/etcd.key -ip-masq=true"
```
Modify the Docker configuration file /etc/sysconfig/docker to add extra arguments for the Docker daemon as follows:
```
## Path           : System/Management
## Description    : Extra cli switches for docker daemon
## Type           : string
## Default        : ""
## ServiceRestart : docker
#
DOCKER_OPTS="--bip=100.64.98.1/24 --mtu=1450 --iptables=false --ip-masq=false --ip-forward=true"
```
Then run the following commands:

```
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable flanneld
sudo systemctl start flanneld
```
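The `--bip` value above (100.64.98.1/24) must lie inside the subnet that flanneld allocates to the node; flanneld writes that subnet to /var/lib/flanneld/subnet.env, the file configured via `-subnet-file` above. After flanneld has started, you can compare the two (assuming the vxlan backend names its interface flannel.1) and, if they do not match, adjust DOCKER_OPTS and restart Docker:

```
cat /var/lib/flanneld/subnet.env   # FLANNEL_SUBNET should match the --bip network configured for Docker
ip addr show flannel.1             # the vxlan interface created by flanneld
ip addr show docker0               # the Docker bridge; its address should fall within FLANNEL_SUBNET
```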
On the Master node:

```
wget https://download.opensuse.org/repositories/home:/mfriesenegger:/branches:/devel:/CaaSP:/Head:/ControllerNode/SLE_12_SP3/s390x/kubernetes-common-1.9.6-6.1.s390x.rpm
wget https://download.opensuse.org/repositories/home:/mfriesenegger:/branches:/devel:/CaaSP:/Head:/ControllerNode/SLE_12_SP3/s390x/kubernetes-master-1.9.6-6.1.s390x.rpm
wget https://download.opensuse.org/repositories/home:/mfriesenegger:/branches:/devel:/CaaSP:/Head:/ControllerNode/SLE_12_SP3/s390x/kubernetes-client-1.9.6-6.1.s390x.rpm
rpm -U kubernetes-common-1.9.6-6.1.s390x.rpm
rpm -U kubernetes-master-1.9.6-6.1.s390x.rpm
rpm -U kubernetes-client-1.9.6-6.1.s390x.rpm
```
On the Worker node(s):

```
wget https://download.opensuse.org/repositories/home:/mfriesenegger:/branches:/devel:/CaaSP:/Head:/ControllerNode/SLE_12_SP3/s390x/kubernetes-common-1.9.6-6.1.s390x.rpm
wget https://download.opensuse.org/repositories/home:/mfriesenegger:/branches:/devel:/CaaSP:/Head:/ControllerNode/SLE_12_SP3/s390x/kubernetes-kubelet-1.9.6-6.1.s390x.rpm
wget https://download.opensuse.org/repositories/home:/mfriesenegger:/branches:/devel:/CaaSP:/Head:/ControllerNode/SLE_12_SP3/s390x/kubernetes-node-1.9.6-6.1.s390x.rpm
rpm -U kubernetes-common-1.9.6-6.1.s390x.rpm
rpm -U kubernetes-kubelet-1.9.6-6.1.s390x.rpm
rpm -U kubernetes-node-1.9.6-6.1.s390x.rpm
```
Modify the following configuration files in the directory /etc/kubernetes/ as shown below. With these RPM packages the relevant files are typically named config, apiserver, scheduler and controller-manager on the Master node, and config, kubelet and proxy on the Worker node (the names below follow the standard RPM layout; adjust them if your packages differ).
/etc/kubernetes/config (Master node):

```
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, and proxy find the apiserver
KUBE_MASTER="--master=https://192.168.100.230:6443"
```
/etc/kubernetes/apiserver (Master node):

```
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.100.230:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=100.65.0.0/24"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--bind-address=0.0.0.0 \
  --advertise-address=192.168.100.230 \
  --anonymous-auth=false \
  --apiserver-count=1 \
  --authorization-mode=RBAC,AlwaysAllow \
  --etcd-cafile=/srv/kubernetes/ca.pem \
  --etcd-certfile=/srv/kubernetes/etcd.crt \
  --etcd-keyfile=/srv/kubernetes/etcd.key \
  --enable-swagger-ui=false \
  --event-ttl=1h \
  --kubelet-certificate-authority=/srv/kubernetes/ca.pem \
  --kubelet-client-certificate=/srv/kubernetes/kubelet.pem \
  --kubelet-client-key=/srv/kubernetes/kubelet-key.pem \
  --kubelet-https=true \
  --client-ca-file=/srv/kubernetes/ca.pem \
  --runtime-config=api/all=true,batch/v2alpha1=true,rbac.authorization.k8s.io/v1alpha1=true \
  --secure-port=6443 \
  --storage-backend=etcd3 \
  --tls-cert-file=/srv/kubernetes/apiserver.pem \
  --tls-private-key-file=/srv/kubernetes/apiserver-key.pem \
  --tls-ca-file=/srv/kubernetes/ca.pem"
```
/etc/kubernetes/scheduler (Master node):

```
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true \
  --kubeconfig=/var/lib/kube-scheduler/kubeconfig"
```
/etc/kubernetes/controller-manager (Master node):

```
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--allocate-node-cidrs=true \
  --attach-detach-reconcile-sync-period=1m0s \
  --cluster-cidr=100.64.0.0/16 \
  --cluster-name=k8s.virtual.local \
  --leader-elect=true \
  --root-ca-file=/srv/kubernetes/ca.pem \
  --service-account-private-key-file=/srv/kubernetes/service-account-key.pem \
  --use-service-account-credentials=true \
  --kubeconfig=/var/lib/kube-controller-manager/kubeconfig \
  --cluster-signing-cert-file=/srv/kubernetes/ca.pem \
  --cluster-signing-key-file=/srv/kubernetes/ca-key.pem \
  --service-cluster-ip-range=100.65.0.0/24 \
  --configure-cloud-routes=false"
```
/etc/kubernetes/config (Worker node):

```
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, and proxy find the apiserver
KUBE_MASTER="--master=https://192.168.100.230:6443"
```
/etc/kubernetes/kubelet (Worker node):

```
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.100.169"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.100.169"

# Add your own!
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key \
  --cert-dir=/var/lib/kubelet \
  --container-runtime=docker \
  --serialize-image-pulls=false \
  --register-node=true \
  --cluster-dns=100.64.0.10 \
  --cluster-domain=cluster.local"
```
/etc/kubernetes/proxy (Worker node):

```
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--cluster-cidr=100.64.0.0/16 \
  --masquerade-all=true \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \
  --proxy-mode=iptables"
```
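The RPM packages install the components as systemd services; the unit names below are assumed to match the service names referenced in the comments of the `config` file above. After the configuration files are in place, enable and start the services, for example:

```
# On the Master node
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler

# On the Worker node(s)
sudo systemctl daemon-reload
sudo systemctl enable kubelet kube-proxy
sudo systemctl start kubelet kube-proxy
```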
Now that we have deployed the cluster, let's test it.

Running `kubectl version` should return the version of both kubectl and the kube-apiserver:
```
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.8", GitCommit:"c138b85178156011dc934c2c9f4837476876fb07", GitTreeState:"clean", BuildDate:"2018-05-21T19:01:12Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/s390x"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.8", GitCommit:"c138b85178156011dc934c2c9f4837476876fb07", GitTreeState:"clean", BuildDate:"2018-05-21T18:53:18Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/s390x"}
```
Running `kubectl get componentstatus` should return the status of all the components:
```
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
Running `kubectl get nodes` should return the nodes successfully registered with the API server and the status of each node:
```
NAME         STATUS    ROLES     AGE       VERSION
k8s-worker   Ready     <none>    6d        v1.9.8
```
Let's run an Nginx app on the cluster.
```
kubectl run nginx --image=nginx --port=80 --replicas=3
kubectl get pods -o wide
kubectl expose deployment nginx --type NodePort
NODE_PORT=$(kubectl get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
curl http://192.168.100.169:${NODE_PORT} # The IP is that of the Worker node
```
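Once the curl request returns the Nginx welcome page, the test deployment can be cleaned up:

```
kubectl delete service nginx
kubectl delete deployment nginx
```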
- If any of the Kubernetes components throws an error, check the reason by inspecting the logs of the corresponding service:

  ```
  journalctl -fu <service name>
  ```

- To debug a kubectl command, increase its verbosity with the flag `-v=<log level>`, as shown below.
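For example, a verbosity level of 8 prints the HTTP requests that kubectl sends to the API server, which helps to pin down authentication or connectivity issues:

```
kubectl get nodes -v=8
```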