Development on Raspberry cluster (#6) (#7)
* Update master (#8)

* fix(contrib/metallb): adds missing become: true in role (kubernetes-sigs#4356)

On CoreOS, without this, it fails to kubectl apply MetalLB due to lack of privileges.

* Fix kubernetes-sigs#4237: update kube cert path (kubernetes-sigs#4354)

* Use sample inventory file in doc (kubernetes-sigs#4052)

* Revert "Fix kubernetes-sigs#4237: update kube cert path (kubernetes-sigs#4354)" (kubernetes-sigs#4369)

This reverts commit ea7a6f1.

This change modified the certs dir for Kubernetes, but did not move the directories for existing clusters.

* Fix support for ansible 2.7.9 (kubernetes-sigs#4375)

* Use wide for netchecker debug output (kubernetes-sigs#4383)

* Added support of bastion host for reset.yaml (kubernetes-sigs#4359)

* Added support of bastion host for reset.yaml

* Empty commit to trigger CI

* Use proxy_env with kubeadm phase commands (kubernetes-sigs#4325)

* clarify that kubespray now supports kubeadm (fixes kubernetes-sigs#4089) (kubernetes-sigs#4366)

* Reduce jinja2 filters in coredns templates (kubernetes-sigs#4390)

* Upgrade to k8s 1.13.5

* Increase CPU flavor for CI (kubernetes-sigs#4389)

* Fix CA cert environment variable for etcd v3 (kubernetes-sigs#4381)

* Added livenessProbe for local nginx apiserver proxy liveness probe (kubernetes-sigs#4222)

* Added configurable local apiserver proxy liveness probe

* Enable API LB healthcheck by default

* Fix template spacing and moved healthz location to nginx http section

* Fix healthcheck listen address to allow kubelet request healthcheck

* Default values for the variables dns_servers and dns_domain were set in two files (kubernetes-sigs#3999):

values from the inventory in roles/kubespray-defaults/defaults/main.yml
hardcoded values in roles/container-engine/defaults/main.yml

dns_servers is now set empty in roles/container-engine/defaults/main.yml and skydns_server is no longer set in the docker_dns_servers variables;
also set a default value for manual_dns_server

The other variables in roles/container-engine/defaults do not need to be set

* Fix bootstrap-os role, failing to create remote_tmp (kubernetes-sigs#4384)

* Fix bootstrap-os role, failing to create remote_tmp

* use ansible_remote_tmp hostvar

* Use static files in KubeDNS templating task (kubernetes-sigs#4379)

This commit adapts the "Lay Down KubeDNS Template" task to use the static
files moved by pull request [1]

[1] kubernetes-sigs#4341

* Fix supplementary_addresses rendering error (kubernetes-sigs#4403)

* Corrected cloud name (kubernetes-sigs#4316)

The correct name is Packet, not Packet Host.

* adapt inventory script to python 2.7 version (kubernetes-sigs#4407)

* Calico felix - Fix jinja2 boolean condition (kubernetes-sigs#4348)

* Fix jinja2 boolean condition

* Convert all felix variables to booleans instead.

* Development (#10)

* Raspberry configuration: on ARM, roll back Kubernetes to v1.12.5

Note that flannel works on amd64, arm, arm64 and ppc64le.

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

Create setup_playbook.sh for ansible-architecture armv7l (RasPi)

* Trusted Ansible repository

* etcdctl must be installed manually on each node from github.com/etcd-io/etcd/tree/release-3.1

* Update README.md

* checksums

* Bastion PI Readme FAQ

* armv7l -> arm64 compatibility mode with Pi3

* Search Git releases for architecture-specific binaries

* declare PI=pi # replace 'pi' with 'ubuntu' or any other user

* SSH: permit root login
Development convenience script: `curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh`

* Update README.md

* Classic server configuration
kubernetes-sigs/kubespray/issues/4293

* Bastion sudoers

* Update README.md

* Package preinstall tasks: sudo -> become: yes | no; Python 3: sudo pip3 install -r requirements.txt

* Ignore APT cache update errors [concurrency lock issue](ansible/ansible#47322)

* kubernetes-sigs#2767

* Update setup_playbook.sh

* Bionic python3-dev

* Pip3

* Update master (#8) (#9)


* Set up k8s-cluster DNS configuration

* kube-proxy=iptables
initial dns setup=coredns

* Update to v1.13.5 checksums

* create user priv escalate

* weave network
ansible * --ask-become-pass

* fix up item.item dict object error

* Use the unversioned python command

* Update 0060-resolvconf.yml

* Update install_host.yml

* My cluster configuration uses:
- docker-ce (supported by scale.yml)
- cri-o (lightweight, suited to a Raspberry)

* Update hosts.ini

Raspberry Pi 3 B+ and A+
b23prodtm committed Apr 3, 2019
1 parent c652145 commit 5efe3c6
Showing 50 changed files with 285 additions and 248 deletions.
6 changes: 3 additions & 3 deletions Dockerfile
@@ -4,16 +4,16 @@ RUN mkdir /kubespray
WORKDIR /kubespray
RUN apt update -y && \
apt install -y \
libssl-dev python3-dev sshpass apt-transport-https jq \
ca-certificates curl gnupg2 software-properties-common python3-pip
libssl-dev python-dev sshpass apt-transport-https jq \
ca-certificates curl gnupg2 software-properties-common python-pip
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install docker-ce -y
COPY . .
RUN /usr/bin/python3 -m pip3 install pip3 -U && /usr/bin/python3 -m pip3 install -r tests/requirements.txt && python3 -m pip3 install -r requirements.txt
RUN /usr/bin/python -m pip install pip -U && /usr/bin/python -m pip install -r tests/requirements.txt && python -m pip install -r requirements.txt
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.3/bin/linux/amd64/kubectl \
&& chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl

114 changes: 71 additions & 43 deletions README.md
@@ -28,29 +28,28 @@ Ansible v2.7.0's failing and/or produce unexpected results due to [ansible/ansib

#### Usage

# Install pip3 [from python](https://pip.readthedocs.io/en/stable/installing/)
sudo python3 get-pip.py
# Install pip [from python](https://pip.readthedocs.io/en/stable/installing/)
sudo python get-pip.py

# Install dependencies from ``requirements.txt``
sudo pip3 install -r requirements.txt
sudo pip install -r requirements.txt

# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster

# Update Ansible inventory file with inventory builder. A single master IP is possible, see nodes with bastion
declare -a IPS=(192.168.0.16 192.168.0.17)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
CONFIG_FILE=inventory/mycluster/hosts.ini python contrib/inventory_builder/inventory.py ${IPS[@]}
cat inventory/mycluster/hosts.ini
# a bastion single-master entry looks like: raspberrypi ansible_ssh_host=192.168.0.16 ip=192.168.0.16 ansible_host=192.168.0.16 ansible_user=pi  # replace 'pi' with 'ubuntu' or any other user
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

# You can ssh-copy-id to Ansible inventory hosts permanently for the pi user
declare PI=pi # replace 'pi' with 'ubuntu' or any other user
for ip in ${IPS[@]}; do ssh-copy-id $PI@$ip; done
# Enable SSH interface and PermitRootLogin over ssh in Raspberry
for ip in ${IPS[@]}; do
# You can ssh-copy-id to Ansible inventory hosts permanently for the pi user
ssh-copy-id $PI@$ip;
ssh $PI@$ip sudo bash -c "echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config";
ssh $PI@$ip cat /etc/ssh/sshd_config | grep PermitRootLogin;
# To install etcd on nodes, Go lang is needed
@@ -59,44 +58,40 @@ Ansible v2.7.0's failing and/or produce unexpected results due to [ansible/ansib
ssh $PI@$ip sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367;
# deb http://ppa.launchpad.net/ansible/ansible/ubuntu trusty main

# Get docker-ce (Read Ubuntu LTS https://docs.docker.com/install/linux/docker-ce/ubuntu/)
ssh $PI@$ip sudo apt-get remove docker docker-engine docker.io containerd runc -y;
# Install packages to allow apt to use a repository over HTTPS
ssh $PI@$ip sudo apt-get update && sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y;
# Add Docker’s official GPG key
ssh $PI@$ip curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -;
# Use the following command to set up the stable repository.
ssh $PI@$ip sudo add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable";

# Install Docker Community Edition
ssh $PI@$ip sudo apt-get update && sudo apt-get install docker-ce -y;
# Install the latest version of Docker CE and containerd
ssh $PI@$ip sudo apt-get install docker-ce-cli containerd.io -y;

# The kube user which owns k8s daemons must be added to Ubuntu group.
ssh $PI@$ip sudo usermod -a -G ubuntu kube;

# disable firewall for the setup
ssh $PI@$ip sudo ufw disable;
done

# Adjust the ansible_memtotal_mb to your Raspberry specs
cat roles/kubernetes/preinstall/tasks/0020-verify-settings.yml | grep -b2 'that: ansible_memtotal_mb'
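If the memory check fails on a low-memory board, the thresholds can be overridden per run. A minimal sketch, assuming the preinstall defaults expose `minimal_node_memory_mb` and `minimal_master_memory_mb` (confirm the names against roles/kubernetes/preinstall/defaults/main.yml on your checkout):

```
# Hypothetical override of the memory assertions for small Raspberry boards
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b \
  -e minimal_node_memory_mb=512 -e minimal_master_memory_mb=768
```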

# Shortcut to actually set up the playbook on hosts:
scripts/setup_playbook.sh cluster.yml
# Displays help scripts/setup_playbook.sh --help
scripts/my_playbook.sh cluster.yml

# Displays help scripts/my_playbook.sh --help
# or you can use the extended version as well
# scripts/setup_playbook.sh -i inventory/mycluster/hosts.ini cluster.yml
# scripts/my_playbook.sh -i inventory/mycluster/hosts.ini cluster.yml

for ip in ${IPS[@]}; do
# --setup-firewall opens default kubernetes ports in firewalld
scripts/my_playbook.sh --setup-firewall $PI@$ip
ssh $PI@$ip sudo ufw enable;
done

See the [Ansible](docs/ansible.md) documentation. Ansible uses tags to manage groups of tasks.
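A quick sketch of tag-scoped runs; the tag names below (`etcd`, `network`) are assumptions, so list the available tags first:

```
# Show which tags the playbook defines
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml --list-tags
# Re-run only the tasks tagged etcd and network
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b --tags=etcd,network
```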

>Note: When Ansible's already installed via system packages on the control machine, other python packages installed via `sudo pip3 install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
>Note: When Ansible's already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
As a consequence, the `ansible-playbook` command will fail with:
```
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```
probably pointing at a task that depends on a module installed from requirements.txt (e.g. "unseal vault").

One way of solving this would be to uninstall the Ansible package and then, to install it via pip3 but it is not always possible.
A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables respectively to the `ansible/modules` and `ansible/module_utils` subdirectories of pip3 packages installation location, which can be found in the Location field of the output of `pip3 show [package]` before executing `ansible-playbook`.
One way of solving this would be to uninstall the Ansible package and then, to install it via pip but it is not always possible.
A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables respectively to the `ansible/modules` and `ansible/module_utils` subdirectories of pip packages installation location, which can be found in the Location field of the output of `pip show [package]` before executing `ansible-playbook`.
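A minimal sketch of that workaround, assuming the pip-installed ansible package reports its install path via `pip show`:

```
# Point Ansible at the pip-installed modules instead of the system package
PKG_LOC=$(pip show ansible | awk '/^Location:/ {print $2}')
export ANSIBLE_LIBRARY="$PKG_LOC/ansible/modules"
export ANSIBLE_MODULE_UTILS="$PKG_LOC/ansible/module_utils"
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b
```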

#### Known issues:
See [docs](./docs/ansible.md)
@@ -107,7 +102,7 @@
from ruamel.yaml import YAML
```
Please install inventory builder python libraries.
> sudo pip3 install -r contrib/inventory_builder/requirements.txt
> sudo pip install -r contrib/inventory_builder/requirements.txt
- CGROUPS_MEMORY is missing when trying to use ```kubeadm init```

@@ -122,13 +117,7 @@ E.g. : Raspberry Ubuntu Preinstalled server uses u-boot, then in ssh session run
sed "$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/" /boot/firmware/cmdline.txt | sudo tee /boot/firmware/cmdline.txt
reboot
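To confirm the change took effect after the reboot, the kernel's cgroup table can be inspected (a quick check, not part of the original steps):

```
# The "enabled" column (last field) should read 1 for both controllers
grep -E 'cpuset|memory' /proc/cgroups
```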

I see the msg: "Timed out (12s) waiting for privileges escalation"

The ansible_user or --become-user must be able to gain root privileges without a password prompt. Simply edit the sudoers file and add NOPASSWD: ALL to the %admin and %sudo user groups. E.g. from the Ansible host shell:

ssh <ansible_user>@<bastion-ip> 'sudo visudo; sudo reboot'
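A hedged sketch of that sudoers change, using a drop-in file instead of editing the main sudoers directly (the user name `pi` is an example):

```
# Grant passwordless sudo to the ansible user and validate the syntax
echo 'pi ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/010-pi-nopasswd
sudo visudo -c
```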

- I may not be able to build a playbook on Arm/armv7l architectures. Issues arise with systems such as Raspbian 9 and the first and second generation Raspberries. There's [an open issue](http://github.com/kubernetes-sigs/kubespray/issues/4261) about obtaining 32-bit binary compatibility on those systems. Please post a comment if you find a way to enable 32-bit support for the k8s stack.

- Kubeadm 1.10.1 is known to provide an arm64 binary on googlestorage.io

@@ -138,10 +127,8 @@ The ansible_user or --become_user must gain root privileges without password pro

ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b -v --become-user=root --private-key=~/.ssh/id_rsa

- ```scripts/setup_playbook.sh```
command will fail with:

TASK [kubernetes/preinstall : Stop if ip var does not match local ips]
- ```scripts/my_playbook.sh```
+TASK [kubernetes/preinstall : Stop if ip var does not match local ips]

fatal: [raspberrypi]: FAILED! => {
"assertion": "ip in ansible_all_ipv4_addresses",
@@ -152,6 +139,31 @@

The host *ip* set in ```inventory/<mycluster>/hosts.ini``` isn't the docker network interface (iface) address. From an ssh terminal, run ```ifconfig docker0``` to find the IPv4 address attributed to the docker0 iface, e.g. _172.17.0.1_.
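A quick way to see which addresses Ansible actually discovers on the host, so the `ip=` value in hosts.ini can be set to one of them (the host alias `raspberrypi` is the one from the error output above):

```
# List the IPv4 addresses Ansible gathers for the host; docker0 usually shows up as 172.17.0.1
ansible -i inventory/mycluster/hosts.ini raspberrypi -m setup -a 'filter=ansible_all_ipv4_addresses' -b
```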

+fatal: [raspberrypi]: FAILED! => {"changed": true, "cmd": ["timeout", "-k", "600s", "600s", "/usr/local/bin/kubeadm", "init", "--config=/etc/kubernetes/kubeadm-config.yaml"

That can happen if you have specified only a single machine IP in hosts.ini.

+TASK [kubernetes/preinstall : Stop if either kube-master, kube-node or etcd is empty] **************************************************************************
Wednesday 03 April 2019 16:07:14 +0200 (0:00:00.203) 0:00:40.395 *******
ok: [raspberrypi] => (item=kube-master) => {
"changed": false,
"item": "kube-master",
"msg": "All assertions passed"
}
failed: [raspberrypi] (item=kube-node) => {
"assertion": "groups.get('kube-node')",
"changed": false,
"evaluated_to": false,
"item": "kube-node",
"msg": "Assertion failed"
}
ok: [raspberrypi] => (item=etcd) => {
"changed": false,
"item": "etcd",
"msg": "All assertions passed"
}
The [kube-node] or [kube-master] group in the inventory/<mycluster>/hosts.ini file was empty. They cannot be the same. That assertion means that a Kubernetes cluster is made of at least one kube-master and one kube-node.
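A minimal sketch of an inventory where every group has at least one member (host names and addresses are placeholders):

```
cat > inventory/mycluster/hosts.ini <<'EOF'
raspberrypi1 ansible_host=192.168.0.16 ip=192.168.0.16 ansible_user=pi
raspberrypi2 ansible_host=192.168.0.17 ip=192.168.0.17 ansible_user=pi

[kube-master]
raspberrypi1

[kube-node]
raspberrypi2

[etcd]
raspberrypi1

[k8s-cluster:children]
kube-master
kube-node
EOF
```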

- Error: open /etc/ssl/etcd/ssl/admin-<hostname>.pem: permission denied

The files located under /etc/ssl/etcd are owned by another user than ubuntu and cannot be accessed by Ansible. Please change the file owner:group to ```ubuntu:ubuntu```, the *ansible_user*, or another user of your choice.
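For example, assuming the connecting user is `ubuntu` and the node is reachable over ssh:

```
# Hand the generated etcd certificates back to the connecting user on the affected node
ssh $PI@<etcd-node-ip> sudo chown -R ubuntu:ubuntu /etc/ssl/etcd
```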
@@ -161,19 +173,35 @@ The file located at /etc/ssl/etcd's owned by another user than Ubuntu and cannot
- E: Unable to locate package unzip
- ERROR: Service 'app' failed to build
> The command ```bin/sh -c apt-get update -yqq && apt-get install -yqq --no-install-recommends git zip unzip && rm -rf /var/lib/apt/lists' returned a non-zero code: 100```
The Kubernetes container manager failed to resolve package repository hostnames. That's related to cluster DNS misconfiguration. Read the [DNS Stack](docs/dns-stack.md) documentation. You may opt in for the dnsmasq_kubedns dns mode; your master host must have access to the internet. Default Google DNS IPs are 8.8.8.8 and 8.8.4.4. A DNS service must be running, see below.
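A quick sanity check from the master, assuming `kubectl` is configured there (the busybox test pod is an example, not part of the playbooks):

```
# Is the cluster DNS deployment up?
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
# Can a pod resolve an in-cluster name?
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
```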

- How much memory is left free on my master host?
If you don't know how much memory is available for the master host's kubernetes-apps, run the following command to display live memory usage:

ssh $PI@<master-node-ip> top
# Ctrl-C to stop monitoring

- Timeout (12s) waiting for privilege escalation prompt
Try increasing the timeout settings: you may want to run ansible with
``--timeout=45`` and add ``--ask-become-pass`` (which asks for the sudo password).

If the error still happens, privilege escalation should be set up in the specific TASK configuration under the ansible roles/. Please contact the system administrator and [file an issue](https://github.com/kubernetes-sigs/kubespray/issues) about the TASK that must be fixed.
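Spelled out, the suggested invocation looks like this (paths as in the Usage section above):

```
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b --timeout=45 --ask-become-pass
```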

- How to open firewall ports for <master-node-ip> ?

./scripts/my_playbook.sh --firewall-setup $PI@<master-node-ip>

### Vagrant

For Vagrant we need to install python dependencies for provisioning tasks.
Check if Python3 and pip3 are installed:
Check if python and pip are installed:

python3 -V && pip3 -V
python -V && pip -V

If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements

sudo pip3 install -r requirements.txt
sudo pip install -r requirements.txt
vagrant up

Documents
8 changes: 4 additions & 4 deletions contrib/dind/roles/dind-host/tasks/main.yaml
@@ -52,7 +52,7 @@
{{ distro_raw_setup_done }} && echo SKIPPED && exit 0
until [ "$(readlink /proc/1/exe)" = "{{ distro_pid1_exe }}" ] ; do sleep 1; done
{{ distro_raw_setup }}
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
delegate_to: "{{ item._ansible_item_label|default(item) }}"
with_items: "{{ containers.results }}"
register: result
changed_when: result.stdout.find("SKIPPED") < 0
@@ -62,7 +62,7 @@
until test -S /var/run/dbus/system_bus_socket; do sleep 1; done
systemctl disable {{ distro_agetty_svc }}
systemctl stop {{ distro_agetty_svc }}
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
delegate_to: "{{ item._ansible_item_label|default(item) }}"
with_items: "{{ containers.results }}"
changed_when: false

@@ -74,13 +74,13 @@
mv -b /etc/machine-id.new /etc/machine-id
cmp /etc/machine-id /etc/machine-id~ || true
systemctl daemon-reload
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
delegate_to: "{{ item._ansible_item_label|default(item) }}"
with_items: "{{ containers.results }}"

- name: Early hack image install to adapt for DIND
raw: |
rm -fv /usr/bin/udevadm /usr/sbin/udevadm
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
delegate_to: "{{ item._ansible_item_label|default(item) }}"
with_items: "{{ containers.results }}"
register: result
changed_when: result.stdout.find("removed") >= 0
@@ -1,3 +1,3 @@
#!/bin/bash
# NOTE: if you change HOST_PREFIX, you also need to edit ./hosts [containers] section
HOST_PREFIX=kube-node python3 contrib/inventory_builder/inventory.py {% for ip in addresses %} {{ ip }} {% endfor %}
HOST_PREFIX=kube-node python contrib/inventory_builder/inventory.py {% for ip in addresses %} {{ ip }} {% endfor %}
2 changes: 1 addition & 1 deletion contrib/inventory_builder/inventory.py
@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
2 changes: 1 addition & 1 deletion contrib/metallb/roles/provision/tasks/main.yml
@@ -10,7 +10,7 @@
kube:
name: "MetalLB"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/{{ item.item }}"
filename: "{{ kube_config_dir }}/{{ item }}"
state: "{{ item.changed | ternary('latest','present') }}"
become: true
with_items: "{{ rendering.results }}"
@@ -13,8 +13,8 @@
name: glusterfs
namespace: default
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.dest}}"
resource: "{{item.type}}"
filename: "{{kube_config_dir}}/{{item.dest}}"
state: "{{item.changed | ternary('latest','present') }}"
with_items: "{{ gluster_pv.results }}"
when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined
24 changes: 24 additions & 0 deletions docs/ansible.md
@@ -181,3 +181,27 @@ bastion ansible_ssh_host=x.x.x.x

For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](http://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)

Docker-CE
-----------
Let's install the Community Edition as the container manager on each of your cluster machines. Here's how to do it with a for loop in a bash script.
```
# You can ssh-copy-id to Ansible inventory hosts permanently for the pi user
declare PI=pi # replace 'pi' with 'ubuntu' or any other user
declare -a IPS=(192.168.0.16 192.168.0.17) # the node addresses from your inventory
# Enable SSH interface and PermitRootLogin over ssh in Raspberry
for ip in ${IPS[@]}; do
# Get docker-ce (Read Ubuntu LTS https://docs.docker.com/install/linux/docker-ce/ubuntu/)
ssh $PI@$ip sudo apt-get remove docker docker-engine docker.io containerd runc -y;
# Install packages to allow apt to use a repository over HTTPS
ssh $PI@$ip "sudo apt-get update && sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y";
# Add Docker’s official GPG key
ssh $PI@$ip "curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -";
# Use the following command to set up the stable repository.
ssh $PI@$ip 'sudo add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"';
# Install Docker Community Edition
ssh $PI@$ip "sudo apt-get update && sudo apt-get install docker-ce -y";
# Install the latest version of Docker CE and containerd
ssh $PI@$ip sudo apt-get install docker-ce-cli containerd.io -y;
done
```
4 changes: 2 additions & 2 deletions docs/getting-started.md
@@ -14,13 +14,13 @@ to create or modify an Ansible inventory. Currently, it is limited in
functionality and is only used for configuring a basic Kubespray cluster inventory, but it does
support creating inventory file for large clusters as well. It now supports
separated ETCD and Kubernetes master roles from node role if the size exceeds a
certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` for more information.
certain threshold. Run `python contrib/inventory_builder/inventory.py help` for more information.

Example inventory generator usage:

cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
CONFIG_FILE=inventory/mycluster/hosts.ini python contrib/inventory_builder/inventory.py ${IPS[@]}

Starting custom deployment
--------------------------
