Host Access Point on Raspberry Pi 3 B+ (#17)
* Development (#10)

* Raspberry Pi configuration: ARM requires rolling back to Kubernetes v1.12.5

Note that flannel works on amd64, arm, arm64 and ppc64le.

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

Create setup_playbook.sh for ansible-architecture armv7l (RasPi)

* Trusted Ansible repository

* etcdctl must be manually installed on node from github.com/etcd-io/etcd/tree/release-3.1

* Update README.md

* checksums

* Bastion PI Readme FAQ

* armv7l -> arm64 compatibility mode with Pi3

* Git releases search for architecture binaries

* declare PI=pi # replace 'pi' with 'ubuntu' or any other user

* SSH permit root login
Development convenience script:
  $ curl -fsSL https://get.docker.com -o get-docker.sh
  $ sudo sh get-docker.sh

* Classic server configuration
kubernetes-sigs/kubespray/issues/4293

* Bastion sudoers

* Update README.md

* Package preinstall tasks: sudo -> become: yes | no
  Python 3: sudo pip3 install -r requirements.txt

* Ignore APT cache update errors [concurrency lock issue](ansible/ansible#47322)

* kubernetes-sigs#2767

* Update setup_playbook.sh

* Bionic python3-dev
Pip3

* Update master (#8) (#9)

* fix(contrib/metallb): adds missing become: true in role (kubernetes-sigs#4356)

On CoreOS, without this, it fails to kubectl apply MetalLB due to lack of privileges.

* Fix kubernetes-sigs#4237: update kube cert path (kubernetes-sigs#4354)

* Use sample inventory file in doc (kubernetes-sigs#4052)

* Revert "Fix kubernetes-sigs#4237: update kube cert path (kubernetes-sigs#4354)" (kubernetes-sigs#4369)

This reverts commit ea7a6f1.

This change modified the certs dir for Kubernetes, but did not move the directories for existing clusters.

* Fix support for ansible 2.7.9 (kubernetes-sigs#4375)

* Use wide for netchecker debug output (kubernetes-sigs#4383)

* Added support of bastion host for reset.yaml (kubernetes-sigs#4359)

* Empty commit to trigger CI

* Use proxy_env with kubeadm phase commands (kubernetes-sigs#4325)

* clarify that kubespray now supports kubeadm (fixes kubernetes-sigs#4089) (kubernetes-sigs#4366)

* Reduce jinja2 filters in coredns templates (kubernetes-sigs#4390)

* Upgrade to k8s 1.13.5

* Increase CPU flavor for CI (kubernetes-sigs#4389)

* Fix CA cert environment variable for etcd v3 (kubernetes-sigs#4381)

* Added livenessProbe for local nginx apiserver proxy liveness probe (kubernetes-sigs#4222)

* Added configurable local apiserver proxy liveness probe

* Enable API LB healthcheck by default

* Fix template spacing and moved healthz location to nginx http section

* Fix healthcheck listen address to allow kubelet request healthcheck

* Default values for the variables dns_servers and dns_domain are set in two files: (kubernetes-sigs#3999)

values from inventory in roles/kubespray-defaults/defaults/main.yml
hardcoded values in roles/container-engine/defaults/main.yml

dns_servers is set empty in roles/container-engine/defaults/main.yml, and skydns_server is not included in the docker_dns_servers variables;
also set a default value for manual_dns_server

other variables in roles/container-engine/defaults do not need to be set

* Fix bootstrap-os role, failing to create remote_tmp (kubernetes-sigs#4384)

* use ansible_remote_tmp hostvar

* Use static files in KubeDNS templating task (kubernetes-sigs#4379)

This commit adapts the "Lay Down KubeDNS Template" task to use the static
files moved by pull request [1]

[1] kubernetes-sigs#4341

* Fix supplementary_addresses rendering error (kubernetes-sigs#4403)

* Corrected cloud name (kubernetes-sigs#4316)

The correct name is Packet, not Packet Host.

* adapt inventory script to python 2.7 version (kubernetes-sigs#4407)

* Calico felix - Fix jinja2 boolean condition (kubernetes-sigs#4348)

* Fix jinja2 boolean condition

* Convert all felix variable to booleans instead.

* Set up k8s-cluster DNS configuration

* kube-proxy=iptables
initial dns setup=coredns

* Update to v1.13.5 checksums

* create user priv escalate

* weave network
ansible * --ask-become-pass

* fix up item.item dict object error

* Let python unversioned cmd

* Update 0060-resolvconf.yml

* Update install_host.yml

* Add PPA repos https://github.com/kubernetes-sigs/cri-o (crio) https://github.com/kubernetes-sigs/cri-tools (crictl)

* checksums
Raspberry Pi 3 B+ and A+

* Raspberry Pi A: mem config

* Help files and scripts

* Safe Calico Network
  Get the current calico cluster version; async time increased
* Quick start scripts Guidelines

* WIP Dashboard 
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

* Host AP: HOSTAPD service, ISC DHCP service, IP MASQUERADE ufw rules, [Gatewayed] hosts (bastion-ssh-config),
internet sharing / bridge (a hostapd sketch follows this list)
* Ubuntu before 18.04: bridge connection
Country code selection
* Netplan.io manager
* Strong encryption keys  https://www.ibm.com/developerworks/library/l-wifiencrypthostapd/index.html
* Timeouts
* Stateful DHCPv6
Don't mix interface dhcpd subnet leases. Define a subnet for the eth0 segment to retrieve the expected server addresses.
Python3 script for the bastion host access point
* Set up DHCP wi-fi clients, and lease an IP sub-network from the wired internet uplink (dhclient)
Script environment variables and rc.local
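
A minimal sketch of the hostapd side of such an access point. The interface name, SSID, country code and passphrase below are assumed placeholders, not values taken from this repository:

```
# Illustrative only: write a minimal WPA2 access-point configuration for hostapd.
sudo tee /etc/hostapd/hostapd.conf > /dev/null <<'EOF'
interface=wlan0
driver=nl80211
ssid=bastion-ap
country_code=FR
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=ChangeMeToAStrongKey
EOF
sudo systemctl enable --now hostapd   # assumes the hostapd package is already installed
```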
b23prodtm committed May 11, 2019
1 parent 1aabddc commit 6444e24
Showing 82 changed files with 1,377 additions and 344 deletions.
195 changes: 103 additions & 92 deletions README.md
@@ -1,6 +1,8 @@
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)

Deploy a Production Ready Kubernetes Cluster
============================================

If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
@@ -33,49 +35,18 @@ To deploy the cluster you can use :
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster

# Update Ansible inventory file with inventory builder. A single master IP is possible; see nodes with bastion
declare -a IPS=(192.168.0.16 192.168.0.17)
CONFIG_FILE=inventory/mycluster/hosts.ini python contrib/inventory_builder/inventory.py ${IPS[@]}
cat inventory/mycluster/hosts.ini
# a bastion single master looks like `raspberrypi ansible_ssh_host=192.168.0.16 ip=192.168.0.16 ansible_host=192.168.0.16 ansible_user=pi` # replace 'pi' with 'ubuntu' or any other user
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

declare PI=pi # replace 'pi' with 'ubuntu' or any other user
for ip in ${IPS[@]}; do
# You can ssh-copy-id to Ansible inventory hosts permanently for the pi user
ssh-copy-id $PI@$ip;
ssh $PI@$ip sudo bash -c "echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config";
ssh $PI@$ip cat /etc/ssh/sshd_config | grep PermitRootLogin;
# To install etcd on nodes, Go lang is needed
ssh $PI@$ip sudo apt-get install golang -y;
# Ansible is reported as a trusted repository
ssh $PI@$ip sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367;
# deb http://ppa.launchpad.net/ansible/ansible/ubuntu trusty main

# The kube user which owns k8s daemons must be added to Ubuntu group.
ssh $PI@$ip sudo usermod -a -G ubuntu kube;

# disable firewall for the setup
ssh $PI@$ip sudo ufw disable;
done

# Adjust the ansible_memtotal_mb to your Raspberry specs
cat roles/kubernetes/preinstall/tasks/0020-verify-settings.yml | grep -B2 'that: ansible_memtotal_mb'

# Shortcut to actually set up the playbook on hosts:
scripts/my_playbook.sh cluster.yml

# Display help: scripts/my_playbook.sh --help
# or you can use the extended version as well
# scripts/my_playbook.sh -i inventory/mycluster/hosts.ini cluster.yml

for ip in ${IPS[@]}; do
# --setup-firewall opens default kubernetes ports in firewalld
scripts/my_playbook.sh --setup-firewall $PI@$ip
ssh $PI@$ip sudo ufw enable;
done
# Setup the cluster inventory file with inventory builder. A single-master cluster is possible.
scripts/my_cluster.sh

# Setup cluster playbook (two phases prevent long tasks from "stalling" when out of resources)
scripts/my_playbook.sh --timeout=120 cluster.yml --skip-tags=apps,resolvconf
scripts/my_playbook.sh --timeout=120 cluster.yml --tags=apps,resolvconf

# Start Dashboard and kubernetes controllers
scripts/start_dashboard.sh

### Accessing the dashboard
Available from the master host (e.g. raspberrypib), through the proxy at localhost:8001: [https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login](https://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login).
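
If the browser runs on a workstation rather than on the master itself, one sketch (assuming the `pi` user and `raspberrypib` host used elsewhere in this README) is to forward the proxy port over SSH:

```
# Forward the master's kubectl proxy port 8001 to the local workstation.
ssh -N -L 8001:localhost:8001 pi@raspberrypib &
# Then open:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
```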

See the [Ansible](docs/ansible.md) documentation. Ansible uses tags to manage TASK groups.

@@ -92,18 +63,23 @@ A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` en
#### Known issues:
See [docs](./docs/ansible.md)

> *PROBLEM*
- ModuleNotFoundError: No module named 'ruamel'
```Traceback (most recent call last):
File "contrib/inventory_builder/inventory.py", line 36, in <module>
from ruamel.yaml import YAML
```
> *SOLUTION*
Please install the inventory builder python libraries:

sudo pip install -r contrib/inventory_builder/requirements.txt

> *PROBLEM*
- CGROUPS_MEMORY missing to use ```kubeadm init```

[ERROR SystemVerification]: missing cgroups: memory

> *SOLUTION*
The Linux kernel must be loaded with special cgroups enabled. Add the following to the kernel parameters:

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
@@ -113,58 +89,93 @@ E.g. : Raspberry Ubuntu Preinstalled server uses u-boot, then in ssh session run
sed "$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/" /boot/firmware/cmdline.txt | sudo tee /boot/firmware/cmdline.txt
reboot
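
After rebooting, a quick sanity check (standard Linux interfaces, nothing kubespray-specific) confirms the memory cgroup is actually enabled:

```
# The kernel command line should now carry the cgroup parameters...
cat /proc/cmdline
# ...and the memory cgroup should be listed as enabled (last column = 1).
grep memory /proc/cgroups
```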

- I may not be able to build a playbook on ARM / armv7l architectures. Issues occur with systems such as Raspbian 9 and first- and second-generation Raspberry Pis. There's [an open issue](http://github.com/kubernetes-sigs/kubespray/issues/4261) about 32-bit binary compatibility on those systems. Please post a comment if you find a way to enable 32-bit support for the k8s stack.

- Kubeadm 1.10.1 is known to provide an arm64 binary on googlestorage.io
> *PROBLEM*
- I may not be able to build a playbook on ARM / armv7l architectures. Issues occur with systems such as Raspbian 9 and first- and second-generation Raspberry Pis.
> *POSSIBLE ANSWER*
There's [an open issue](http://github.com/kubernetes-sigs/kubespray/issues/4261) about 32-bit binary compatibility on those systems. Please post a comment if you find a way to enable 32-bit support for the k8s stack.

> *PROBLEM*
- When you see the error: no PUBKEY ... could be received from GPG, look at https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#latest-releases-via-apt-debian
> *ANSWER*
Deploy Kubespray with the Ansible playbook to raspberrypi. The option -b is required, for example for writing SSL keys in /etc/, installing packages and interacting with various systemd daemons. Without the -b argument the playbook would fail to start!

ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b -v --become-user=root --private-key=~/.ssh/id_rsa

- ```scripts/my_playbook.sh cluster.yml```
> *PROBLEM*
+ TASK [kubernetes/preinstall : Stop if ip var does not match local ips]

fatal: [raspberrypi]: FAILED! => {
"assertion": "ip in ansible_all_ipv4_addresses",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}

> *ANSWER*
The host *ip* set in ```inventory/<mycluster>/hosts.ini``` doesn't match any of the host's local addresses (it must not be the private docker interface). In an ssh session, run ```ifconfig``` to find the IPv4 address assigned to the eth0/wlan0 iface, e.g. _10.3.0.1_ (public network). See the sketch below.
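
A short sketch of that check; the corrected inventory line uses example values only:

```
# List the IPv4 addresses Ansible will actually see on the host:
ip -4 addr show | grep inet
# A matching inventory entry could then look like (example values):
# raspberrypi ansible_host=192.168.0.16 ip=192.168.0.16 ansible_user=pi
```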
> *PROBLEM*
+ fatal: "cmd": ["timeout", "-k", "600s", "600s", "/usr/local/bin/kubeadm", "init", "--config=/etc/kubernetes/kubeadm-config.yaml"
+ TASK [kubernetes/preinstall : Stop if either kube-master, kube-node or etcd is empty]

**************************************************************************
Wednesday 03 April 2019 16:07:14 +0200 (0:00:00.203) 0:00:40.395 *******
ok: [raspberrypi] => (item=kube-master) => {
"changed": false,
"item": "kube-master",
"msg": "All assertions passed"
}
failed: [raspberrypi] (item=kube-node) => {
"assertion": "groups.get('kube-node')",
"changed": false,
"evaluated_to": false,
"item": "kube-node",
"msg": "Assertion failed"
}
ok: [raspberrypi] => (item=etcd) => {
"changed": false,
"item": "etcd",
"msg": "All assertions passed"
}

> *ANSWER*
The [kube-node] or [kube-master] group in ```inventory/<mycluster>/hosts.ini``` was empty; neither may be left empty. That assertion means that a kubernetes cluster is made of at least one kube-master and one kube-node (a minimal layout is sketched below).
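
A minimal sketch of the expected group layout, using the host names from the SCHEME diagram as placeholders; adapt before merging into your own inventory:

```
# Example only: one master/etcd host and one worker node (overwrites the target file).
cat > inventory/mycluster/hosts.ini <<'EOF'
[kube-master]
raspberrypib

[etcd]
raspberrypib

[kube-node]
raspberrypia

[k8s-cluster:children]
kube-master
kube-node
EOF
```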
> *PROBLEM*
+ Error: open /etc/ssl/etcd/ssl/admin-<hostname>.pem: permission denied
> *ANSWER*
The files under /etc/ssl/etcd are owned by a user other than ubuntu and cannot be accessed by Ansible. Please change the owner:group to ```ubuntu:ubuntu```, or to the *ansible_user* of your choice.

ssh <ansible_user>@<bastion-ip> 'sudo chown kube:ubuntu -R /etc/ssl/etcd/'
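
A quick, non-authoritative check afterwards:

```
# Confirm the certificates are now readable by the ansible user.
ssh <ansible_user>@<bastion-ip> 'ls -l /etc/ssl/etcd/ssl'
```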

> *PROBLEM*
+ E: Unable to locate package unzip
+ ERROR: Service 'app' failed to build
> *ANSWER*
The command ```bin/sh -c apt-get update -yqq && apt-get install -yqq --no-install-recommends git zip unzip && rm -rf /var/lib/apt/lists' returned a non-zero code: 100```
The Kubernetes container manager failed to resolve the package repository hostnames. That's related to a cluster DNS misconfiguration. Read the [DNS Stack](docs/dns-stack.md) documentation. You may opt for a Google nameserver; your master host must have access to the internet. Default Google DNS IPs are 8.8.8.8 and 8.8.4.4. A CoreDNS service must be running; see below about the ```top``` command.
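
A hedged way to confirm the diagnosis from the master host, using the standard upstream DNS-debugging pod (the busybox image tag and the k8s-app=kube-dns label are assumptions about this setup):

```
# Resolve an in-cluster name from a throwaway pod; failure here points at cluster DNS.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default
# Check that the CoreDNS pods are actually running.
kubectl -n kube-system get pods -l k8s-app=kube-dns
```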

> *PROBLEM*
+ Timeout (12s) waiting for privilege escalation prompt
Try increasing the timeout settings; you may want to run ansible with
``--timeout=45`` and add ``--ask-become-pass`` (which asks for the sudo password).
> *POSSIBLE SOLUTION*
If the error still happens, the ansible roles/ specific TASK configuration should set up the privileges escalation. Please contact the system administrator and [fill in an issue](https://github.com/kubernetes-sigs/kubespray/issues) about the TASK that must be fixed up.
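
For reference, the two flags combined on the playbook invocation used earlier in this README (a sketch, not the only valid form):

```
# Longer SSH timeout plus an interactive sudo password prompt.
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b \
  --timeout=45 --ask-become-pass
```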

> *ISSUE*
- How much memory is left free on my master host ?
> *ANSWER*
If you don't know how much memory is available for the master host's kubernetes-apps, run the following command to display live memory usage:

- Deploy Kubespray with the Ansible playbook to raspberrypi. The option -b is required, for example for writing SSL keys in /etc/, installing packages and interacting with various systemd daemons. Without the -b argument the playbook would fail to start!

ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b -v --become-user=root --private-key=~/.ssh/id_rsa

- ```scripts/my_playbook.sh```
+TASK [kubernetes/preinstall : Stop if ip var does not match local ips]

fatal: [raspberrypi]: FAILED! => {
"assertion": "ip in ansible_all_ipv4_addresses",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}

The host *ip* set in ```inventory/<mycluster>/hosts.ini``` isn't the docker network interface (iface). In an ssh terminal, run ```ifconfig docker0``` to find the IPv4 address assigned to the docker0 iface, e.g. _172.17.0.1_

+fatal: [raspberrypi]: FAILED! => {"changed": true, "cmd": ["timeout", "-k", "600s", "600s", "/usr/local/bin/kubeadm", "init", "--config=/etc/kubernetes/kubeadm-config.yaml"

That happens if you have specified only a single machine IP in hosts.ini.

+TASK [kubernetes/preinstall : Stop if either kube-master, kube-node or etcd is empty] **************************************************************************
Wednesday 03 April 2019 16:07:14 +0200 (0:00:00.203) 0:00:40.395 *******
ok: [raspberrypi] => (item=kube-master) => {
"changed": false,
"item": "kube-master",
"msg": "All assertions passed"
}
failed: [raspberrypi] (item=kube-node) => {
"assertion": "groups.get('kube-node')",
"changed": false,
"evaluated_to": false,
"item": "kube-node",
"msg": "Assertion failed"
}
ok: [raspberrypi] => (item=etcd) => {
"changed": false,
"item": "etcd",
"msg": "All assertions passed"
}
The [kube-node] or [kube-master] group in inventory/<mycluster>/hosts.ini was empty; neither may be left empty. That assertion means that a kubernetes cluster is made of at least one kube-master and one kube-node.

- Error: open /etc/ssl/etcd/ssl/admin-<hostname>.pem: permission denied
ssh <ansible_user>@<bastion-ip> top
# Ctrl-C to stop monitoring

The files under /etc/ssl/etcd are owned by a user other than ubuntu and cannot be accessed by Ansible. Please change the owner:group to ```ubuntu:ubuntu```, or to the *ansible_user* of your choice.
> *ISSUE*
- How to open firewall ports for <master-node-ip> ?
> *ANSWER*
ssh <ansible_user>@<bastion-ip> 'sudo chown ubuntu:ubuntu -R /etc/ssl/etcd/'
./scripts/my_playbook.sh --firewall-setup <ansible_user>@<bastion-ip>
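
A simple follow-up check (plain ufw, nothing kubespray-specific):

```
# List the rules now active on the master node.
ssh <ansible_user>@<master-node-ip> 'sudo ufw status numbered'
```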

- E: Unable to locate package unzip
- ERROR: Service 'app' failed to build
@@ -174,7 +185,7 @@ Kubernetes container manager failed to resolve package repository hostnames. Tha
- How much memory is left free on my master host ?
If you don't know how much memory is available for the master host's kubernetes-apps, run the following command to display live memory usage:

ssh $PI@$pi top
ssh $PI@$ip top
# Ctrl-C to stop monitoring
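
For a one-shot snapshot instead of the interactive ```top``` view, ```free``` works as well (a sketch; free is part of procps on Raspbian/Ubuntu):

```
ssh $PI@$ip free -m   # total/used/available memory in MiB
```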

- Timeout (12s) waiting for privilege escalation prompt
66 changes: 66 additions & 0 deletions SCHEME
@@ -0,0 +1,66 @@
# #
# # # # ##### ###### #### ##### ##### ## # #
# # # # # # # # # # # # # # # #
### # # ##### ##### #### # # # # # # #
# # # # # # # # ##### ##### ###### #
# # # # # # # # # # # # # # #
# # #### ##### ###### #### # # # # # #


==============
- ISP ROUTER -
_( )_( )_
(_ W A N _)
(_) (__)
==============
|
| Home network
| ,--./,-.
| / # /
L---- | : iMac
| \ \
| `._,._,'
S L Ansible - ssh
S
H
| DMZ IP - Bastion Host
| (eth0)
| .\V/,
| ()_()_)
L ---- (.(_)()_) raspberrypib+
(_(_).)'
`'"'`
L ufw - netplan - isc-dhcp-server
Private |
Network I
(br0) P
V
4
|
L (((( HOSTAPd ))))

O
o
o Gatewayed Host(s)
O
o
o

etcd
.\V/,
__v_ Private ()_()_)
K8s (____\/{ docker IP (.(_)()_) raspberrypia+
(_(_).)'
`'"'`
Calico | (wlan0)
K 8 s L (((( wpa_supplicant ))))
K
8
S
| (wlan0)
L (((( wpa_supplicant ))))
.\V/,
Private ()_()_)
IP (.(_)()_) raspberrypia+
(_(_).)'
`'"'`
5 changes: 3 additions & 2 deletions cluster.yml
@@ -16,8 +16,8 @@
- hosts: bastion[0]
gather_facts: False
roles:
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }

- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -48,6 +48,7 @@
roles:
- { role: kubespray-defaults}
- { role: kubernetes/preinstall, tags: preinstall }
- { role: download, tags: download, when: "not skip_downloads and container_manager == 'crio'" }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
- { role: download, tags: download, when: "not skip_downloads" }
environment: "{{proxy_env}}"
2 changes: 1 addition & 1 deletion contrib/dind/roles/dind-host/tasks/main.yaml
@@ -28,7 +28,7 @@
- /lib/modules:/lib/modules
- "{{ item }}:/dind/docker"
register: containers
with_items: "{{groups.containers}}"
with_items: "{{ groups.containers }}"
tags:
- addresses

2 changes: 1 addition & 1 deletion contrib/inventory_builder/inventory.py
@@ -78,7 +78,7 @@ def __init__(self, changed_hosts=None, config_file=None):
try:
self.hosts_file = open(config_file, 'r')
self.yaml_config = yaml.load(self.hosts_file)
except FileNotFoundError:
except IOError:
pass

if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
4 changes: 2 additions & 2 deletions contrib/metallb/roles/provision/tasks/main.yml
@@ -9,8 +9,8 @@
- name: "Kubernetes Apps | Install and configure MetalLB"
kube:
name: "MetalLB"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/{{ item }}"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/{{ item.item }}"
state: "{{ item.changed | ternary('latest','present') }}"
become: true
with_items: "{{ rendering.results }}"
@@ -12,9 +12,9 @@
kube:
name: glusterfs
namespace: default
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.type}}"
filename: "{{kube_config_dir}}/{{item.dest}}"
state: "{{item.changed | ternary('latest','present') }}"
kubectl: "{{ bin_dir }}/kubectl"
resource: "{{ item.item.type }}"
filename: "{{ kube_config_dir }}/{{ item.item.dest }}"
state: "{{ item.changed | ternary('latest','present') }}"
with_items: "{{ gluster_pv.results }}"
when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined
@@ -6,7 +6,7 @@
- name: "Kubernetes Apps | Install and configure Heketi Bootstrap"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-bootstrap.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Wait for heketi bootstrap to complete."
@@ -6,7 +6,7 @@
- name: "Create heketi storage."
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-storage-bootstrap.json"
state: "present"
vars:
