Kubeadm Ansible Playbook

Build a Kubernetes cluster using Ansible with kubeadm. The goal is to easily install a Kubernetes cluster on machines running:

  • Ubuntu 16.04
  • CentOS 7
  • Debian 9

System requirements:

  • Deployment environment must have Ansible 2.4.0+
  • Master and nodes must have passwordless SSH access
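Passwordless SSH is usually set up with a key pair. The sketch below generates a dedicated key and prints the ssh-copy-id command to run for each host; the key path, the k8s user, and the addresses are assumptions (they match the inventory example further down), so adjust them to your environment:

```sh
# Generate a dedicated, passphrase-less key pair for Ansible (path is arbitrary).
ssh-keygen -t ed25519 -f ./ansible_key -N "" -q

# The "k8s" user and the addresses below are examples -- use your own hosts.
for host in 192.16.35.10 192.16.35.11 192.16.35.12; do
  # Drop the leading "echo" to actually install the key on each host.
  echo ssh-copy-id -i ./ansible_key.pub "k8s@$host"
done
```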

Usage

Add the information about your hosts to a file called hosts.ini. For example:

[master]
192.16.35.12

[node]
192.16.35.[10:11]

[kube-cluster:children]
master
node
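Since the playbook is later invoked as ansible-playbook site.yaml with no -i flag, Ansible needs to know where this inventory lives. If the repository does not already ship an ansible.cfg, a minimal one could look like this (these settings are an illustrative assumption, not the project's actual config):

```ini
[defaults]
inventory = hosts.ini        ; use the inventory file created above
host_key_checking = False    ; skip interactive host-key prompts on first connect
```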

Before continuing, edit group_vars/all.yml to your specified configuration.

For example, I chose to run flannel instead of calico, and thus:

# Network implementation('flannel', 'calico')
network: flannel

Note: Depending on your setup, you may need to modify cni_opts to an available network interface. By default, kubeadm-ansible uses eth1. Your default interface may be eth0.
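For instance, with flannel on nodes whose primary interface is eth0, the relevant lines in group_vars/all.yml might look like the excerpt below (the exact cni_opts value is an assumption; check the comments in the file for the format your version expects):

```yaml
# group_vars/all.yml (excerpt)
network: flannel
cni_opts: "--iface=eth0"   # default is eth1; set to the interface your nodes actually use
```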

After going through the setup, run the site.yaml playbook:

$ ansible-playbook site.yaml
...
==> master1: TASK [addon : Create Kubernetes dashboard deployment] **************************
==> master1: changed: [192.16.35.12 -> 192.16.35.12]
==> master1:
==> master1: PLAY RECAP *********************************************************************
==> master1: 192.16.35.10               : ok=18   changed=14   unreachable=0    failed=0
==> master1: 192.16.35.11               : ok=18   changed=14   unreachable=0    failed=0
==> master1: 192.16.35.12               : ok=34   changed=29   unreachable=0    failed=0

Download the admin.conf from the master node:

$ scp k8s@k8s-master:/etc/kubernetes/admin.conf .

Verify that the cluster is fully running using kubectl:

$ export KUBECONFIG=~/admin.conf
$ kubectl get node
NAME      STATUS    AGE       VERSION
master1   Ready     22m       v1.6.3
node1     Ready     20m       v1.6.3
node2     Ready     20m       v1.6.3

$ kubectl get po -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-master1                            1/1       Running   0          23m
...

Resetting the environment

Finally, reset all kubeadm-installed state using the reset-site.yaml playbook:

$ ansible-playbook reset-site.yaml