Deploy Kubernetes on-premise. K:8ball:s!
Deploy on-premises Kubernetes clusters with Kubespray, on bare metal or virtual machines.
~~~~~~~~~~~~~~~~~~~~~~~
( K8s v1.21.6 )
~~~~~~~~~~~~~~~~~~~~~~~
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Deploy on-premises Kubernetes clusters on virtual machines or bare metal (i.e. physical servers) using Kubespray and Ansible. Whether you're in your datacenter or on your laptop, you can build Kubernetes clusters for evaluation, development or production. All you need to bring to the table are a few machines to run the cluster.
Kubernetes v1.21.6
Kubespray v2.14.2
Kubernetes Node Operating Systems Supported:
- Ubuntu 16.04 Xenial
- Ubuntu 18.04 Bionic
- CentOS 7
- CentOS 8
This project provides a very simple deployment process for Kubernetes in the datacenter, on-premises, on local VMs, etc. System setup, disk preparation, easy RBAC and default storage are all provided. Kubespray, which is built on Ansible, automates cluster deployments and provides flexibility in configuration.
Estimated time to complete: 1 hr
General requirements:
- Control Node: Where the Kubespray commands are run (i.e. a laptop or jump host). MacOS High Sierra, RedHat/CentOS 7 or 8 and Ubuntu Xenial have all been tested. Python is a requirement.
- Cluster VM Provisioning: Minimum of one, but at least three are recommended. Physical or virtual. A minimum of 2 GB RAM per node is recommended for evaluation clusters. For a ready-to-use Vagrant environment, clone https://github.com/rcompos/zero and run:
vagrant up yolo-1 yolo-2 yolo-3
- Cluster Operating Systems: Ubuntu 16.04, 18.04 and RedHat/CentOS 7, 8
- Container Storage Volume: Mandatory additional physical or virtual disk volume, e.g. /dev/sdc. This is the Docker volume.
- Persistent Storage Volume: Optional additional physical or virtual disk volume, e.g. /dev/sdd. This additional storage may be used for distributed filesystems running in-cluster, such as OpenEBS or Gluster.
- Hostname Resolution: Ensure that cluster machine hostnames are resolvable in DNS or are listed in the local hosts file. The control node and all cluster VMs must have DNS resolution or /etc/hosts entries. IP addresses may be used. See the example hosts entries after this list.
- Helm 3: Helm v3.0.0 Tiller install in the cluster. Tillerless Helm is not supported. See the file helm/install-helm3-tiller.sh.
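If DNS resolution is not available, hosts entries can be added instead. A minimal sketch of /etc/hosts entries on the control node and each cluster machine, using the illustrative hostnames and addresses from the inventory example later in this document:
192.168.1.50 node1
192.168.1.51 node2
192.168.1.52 node3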
Prepare the control node by installing requirements. A laptop or desktop computer is sufficient. A jump host is fine too.
-
Install Packages
a. Install Python (v3) as a requirement of Ansible.
MacOS:
$ brew install python
RedHat 7 or CentOS 7: Python 2.7.5 is installed by default
Ubuntu:
$ sudo apt install python python-pip
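Before continuing, it may be worth confirming that the interpreter and pip are available on the control node (a quick sanity check, not a required step; depending on the platform the commands may be python3 and pip3):
$ python --version
$ pip --version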
b. Use the Python package manager pip to install the required packages, including Ansible, on the control node.
$ sudo -H pip install --upgrade pip
$ sudo -H pip install -r requirements.txt
$ sudo -H pip install kubespray
c. Debian or Ubuntu control nodes also need the following in addition to the previous steps:
$ sudo apt-get install sshpass
-
Clone Repo
Clone the kubespray-and-pray repository into your home directory. Substitute the actual repository URL for <RepositoryURL>.
$ cd; git clone <RepositoryURL>
-
SSH key
An SSH key is required at ~/.ssh/id_rsa.pub.
If you don't have an SSH key, one can be generated as follows:
$ ssh-keygen -t rsa
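If key-based access is used, the public key must also be present on each cluster machine for the deployment user. One common way to copy it, assuming password authentication is enabled (the user and host below are the illustrative examples used elsewhere in this document):
$ ssh-copy-id solidfire@node1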
A Kubernetes cluster can be rapidly deployed with the following steps. See further sections for details of each step.
-
Deploy K8s cluster on virtual or physical machines
Prepare a directory (inventory/cluster-name) containing inventory.cfg, then deploy the cluster. Substitute the actual cluster name for cluster-name. When prompted for the SSH password, enter the SSH password for the operating system user.
$ ./kap.sh -i cluster-name -o username
-
Kubernetes Access Controls
Insecure permissions for development only! Use RBAC for production environments.
$ ansible-playbook kubespray-08-dashboard-permissive.yml
The Kubernetes cluster topology is defined as masters, nodes and etcds.
- Masters are cluster masters running the Kubernetes API service.
- Nodes are worker nodes where pods will run.
- Etcds are etcd cluster members, which serve as the state database for the Kubernetes cluster.
Custom Ansible groups may be included, such as gluster, openebs or trident.
The top lines with ansible_ssh_host and ip values are required, since machines may have multiple network addresses. Change the ansible_ssh_host and ip values in the file to the actual IP addresses. Lines or partial lines may be commented out with the pound sign (#).
Save your configuration under the inventory directory, in a dedicated directory named for the cluster.
The following is an example inventory.cfg defining a Kubernetes cluster. There are three members (all) including one master (kube-master), three etcd members (etcd) and three worker nodes (kube-node). This file is from the upstream Kubespray repository kubespray/inventory/sample/hosts.ini.
node1 ansible_ssh_host=192.168.1.50 ip=192.168.1.50
node2 ansible_ssh_host=192.168.1.51 ip=192.168.1.51
node3 ansible_ssh_host=192.168.1.52 ip=192.168.1.52
[all]
node1
node2
node3
[kube-master]
node1
[etcd]
node1
node2
node3
[kube-node]
node1
node2
node3
[kube-ingress]
node1
[gluster] # Custom group or OpenEBS
node1
node2
node3
[k8s-cluster:children]
kube-node
kube-master
Perform the following steps on the control node, where the ansible command will be run. This might be your laptop or a jump host. The cluster machines must already exist and be responsive to SSH.
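One way to confirm SSH reachability from the control node, once the inventory directory described in the next section is in place (Ansible's -u and -k options set the remote user and prompt for the SSH password):
$ ansible -i inventory/my-cluster/inventory.cfg all -m ping -u solidfire -k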
-
Kubernetes Cluster Topology
Define your desired cluster topology in the Ansible inventory and variables files. Create a new directory under inventory by copying one of the example directories. Update inventory.cfg and the other files. Then specify this directory in the deployment step.
Kubespray cluster configuration: Edit the Kubespray group variables in all.yml and k8s-cluster.yml to configure the cluster to your needs. Substitute your cluster name for my-cluster.
inventory/my-cluster/all.yml
inventory/my-cluster/k8s-cluster.yml
Modify the inventory file with an editor such as vi or nano.
$ cd ~/kubespray-and-pray
$ vi inventory/my-cluster/inventory.cfg
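The settings below are a rough illustration of the kind of group variables found in k8s-cluster.yml; the variable names are standard Kubespray variables, and the values shown are common defaults rather than recommendations for your environment:
kube_version: v1.21.6
cluster_name: cluster.local
kube_network_plugin: calico
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18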
Multiple network adapters: If multiple network adapters are present on any node(s), Ansible will use the value provided as ansible_ssh_host and/or ip for each node. For example: k8s0 ansible_ssh_host=10.117.31.20 ip=10.117.31.20.
Optional hyper-converged storage: For development clusters only. Define Kubernetes cluster node members to be part of Heketi GlusterFS hyper-converged storage in inventory group gluster.
-
Deploy Kubernetes Cluster
Run the script to deploy a Kubernetes cluster to the machines specified in inventory/default/inventory.cfg by default, or optionally in another inventory directory such as inventory/my-cluster. If necessary, specify the user name for SSH connections to all cluster machines, a raw block device for container storage, and the cluster inventory directory.
Deployment User: solidfire is used in this example. A user account must already exist on the cluster nodes, must have sudo privileges, and must be accessible with a password or key. Supply the user's SSH password when prompted, then at the second prompt press Enter to use the SSH password as the sudo password. Note: If you specify a different remote user, then you must manually update the ansible.cfg file.
Optional Container Volume: To create a dedicated Docker container logical volume on an available raw disk volume, specify the optional argument -b for block_device, such as /dev/sdd. Otherwise the default device is /dev/sdc. If the default block device is not found, the /var/lib/docker directory will, by default, reside under the local root filesystem.
Inventory Directory: The location of the cluster inventory is specified with option -i. The following example looks in kubespray-and-pray/inventory/my-cluster for the inventory.cfg file.
Example: ./kap.sh -o myuser -b /dev/sdb -i my-cluster
Optional arguments for kap.sh are as follows. If no option is specified, the default values will be used.
Flag  Description                            Default
-o    SSH username                           solidfire
-b    Block device for containers            /dev/sdc
-i    Inventory directory under inventory    default
-s    Silence Ansible SSH password prompt
Run the script to deploy a Kubernetes cluster to all nodes with default values. Specify the actual inventory directory in place of my-cluster. This directory is located in the inventory directory (i.e. kubespray-and-pray/inventory/my-cluster).
$ ./kap.sh -i my-cluster
Congratulations! Your cluster should be running. Log onto a master node and run kubectl get nodes to validate.
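For a slightly broader sanity check, listing the nodes and system pods should show every node Ready and the kube-system pods Running (run on a master node, or anywhere with a valid ~/.kube/config):
# kubectl get nodes
# kubectl get pods --all-namespaces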
WARNING... Insecure permissions, for development only!
MORE WARNING: The following policy allows ALL service accounts to act as cluster administrators. Any application running in a container receives service account credentials automatically and could perform any action against the API, including viewing secrets and modifying permissions. This is not a recommended policy... On the other hand, it works like a charm for dev!
References:
https://kubernetes.io/docs/admin/authorization/rbac
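The playbook's manifest is not reproduced here, but the behavior described above corresponds roughly to a ClusterRoleBinding along these lines (an illustrative sketch only, not necessarily the playbook's actual contents):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: permissive-binding
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io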
-
Kubernetes Cluster Permissions
From the control node, run the script to configure open permissions. Make note of the dashboard port. Run the command from the kubespray-and-pray directory.
$ ansible-playbook dashboard-permissive.yml
-
Access Kubernetes Dashboard
From a web browser, access the dashboard with the following URL. Use the dashboard_port from the previous command. When prompted to log in, choose Skip.
https://master-ip:dashboard-port
Validate cluster functionality by deploying an application. Run on a master node or with an appropriate ~/.kube/config.
-
Deploy Helm Package
Install the Helm package for Minio with a 20Gi volume. Modify the volume size as needed. Run from a master node or with an appropriate ~/.kube/config.
# helm install stable/minio -n minio --namespace minio --set service.type=NodePort --set persistence.size=20Gi
-
Get Port
Get the port under PORT(S). Make note of the second port value (the NodePort).
# kubectl get svc minio -n minio
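Alternatively, the node port can be extracted directly with a jsonpath query (standard kubectl syntax; the service and namespace names match the install above):
# kubectl get svc minio -n minio -o jsonpath='{.spec.ports[0].nodePort}'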
-
View Service
Use any node IP address and the node port from the previous step.
URL: http://<node_ip>:<node_port>
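As a quick reachability check from any machine that can reach the cluster nodes, substituting real values for the placeholders:
$ curl -I http://<node_ip>:<node_port>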
https://github.com/kubernetes/kubernetes/
https://github.com/kubernetes-incubator/kubespray/
https://hub.docker.com/r/heketi/heketi/tags/
https://docs.gluster.org/en/v3/Install-Guide/Install/
https://github.com/gluster/gluster-containers/
https://github.com/heketi/heketi/releases/
https://download.gluster.org/pub/gluster/glusterfs/4.0/
https://heptio.github.io/ark/