Commit 30b3278

* 'master' of https://github.com/kubernetes-sigs/kubespray:
  Add missing coredns tag. (kubernetes-sigs#5054)
  Bump minimum K8S version to 1.14 (kubernetes-sigs#5055)
  multus | fix use last version (kubernetes-sigs#5041)
  Fix variable for rbd_provisioner_user_secret (kubernetes-sigs#5042)
  go to k8s 1.15.2, update nodelocaldns to latest bugfix release (kubernetes-sigs#5048)
  Refactor calico route reflector to run in k8s cluster (kubernetes-sigs#4975)
  Fix check for removing etcd member (kubernetes-sigs#5051)
  Refactor remove node to allow removing dead nodes and etcd members (kubernetes-sigs#5009)
  Allow etcd member join by checking cluster health only on first etcd (kubernetes-sigs#5032)
  Ansible version bump for CVE-2019-10156 (kubernetes-sigs#5050)
  Add ability to setup virtual ip for ingress-controller (kubernetes-sigs#5044)
  Optionally refresh kubeadm token every time (kubernetes-sigs#5045)
  Upgrade Cilium network plugin to v1.5.5. (kubernetes-sigs#5014)
  Optionally refresh kubeadm token every time (kubernetes-sigs#5043)
erulabs committed Aug 9, 2019
2 parents d280b93 + 56fa467 commit 30b3278
Showing 45 changed files with 852 additions and 475 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -109,15 +109,15 @@
Supported Components
--------------------

- Core
-- [kubernetes](https://github.com/kubernetes/kubernetes) v1.15.1
+- [kubernetes](https://github.com/kubernetes/kubernetes) v1.15.2
- [etcd](https://github.com/coreos/etcd) v3.3.10
- [docker](https://www.docker.com/) v18.06 (see note)
- [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v0.8.1
- [calico](https://github.com/projectcalico/calico) v3.7.3
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
-- [cilium](https://github.com/cilium/cilium) v1.3.0
+- [cilium](https://github.com/cilium/cilium) v1.5.5
- [contiv](https://github.com/contiv/install) v1.2.1
- [flanneld](https://github.com/coreos/flannel) v0.11.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.5
@@ -134,7 +134,7 @@
Note: The list of validated [docker versions](https://github.com/kubernetes/kube

Requirements
------------
-- **Minimum required version of Kubernetes is v1.13**
+- **Minimum required version of Kubernetes is v1.14**
- **Ansible v2.7.8 (or newer, but [not 2.8.x](https://github.com/kubernetes-sigs/kubespray/issues/4778)) and python-netaddr is installed on the machine
that will run Ansible commands**
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
18 changes: 9 additions & 9 deletions cluster.yml
@@ -19,14 +19,14 @@
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}

-- hosts: k8s-cluster:etcd:calico-rr
+- hosts: k8s-cluster:etcd
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
roles:
- { role: kubespray-defaults}
- { role: bootstrap-os, tags: bootstrap-os}

-- hosts: k8s-cluster:etcd:calico-rr
+- hosts: k8s-cluster:etcd
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
@@ -46,7 +46,7 @@
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: not etcd_kubeadm_enabled| default(false)

-- hosts: k8s-cluster:calico-rr
+- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
@@ -79,6 +79,12 @@
- { role: kubernetes/kubeadm, tags: kubeadm}
- { role: network_plugin, tags: network }

+- hosts: calico-rr
+any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+roles:
+- { role: kubespray-defaults}
+- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr']}

- hosts: kube-master[0]
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
@@ -95,12 +101,6 @@
- { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }

-- hosts: calico-rr
-any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
-roles:
-- { role: kubespray-defaults}
-- { role: network_plugin/calico/rr, tags: network }

- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
2 changes: 1 addition & 1 deletion contrib/metallb/README.md
@@ -2,7 +2,7 @@
```
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
```
-This playbook aims to automate [this](https://metallb.universe.tf/tutorial/layer2/tutorial). It deploys MetalLB into kubernetes and sets up a layer 2 loadbalancer.
+This playbook aims to automate [this tutorial](https://metallb.universe.tf/concepts/layer2/). It deploys MetalLB into Kubernetes and sets up a layer 2 load balancer.

## Install
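A run of this playbook might look like the following sketch (the inventory path and flags are assumptions, not prescribed here):

```
# Deploy MetalLB into the cluster via the contrib playbook
ansible-playbook -i inventory/sample/hosts.ini --become contrib/metallb/metallb.yml
```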
9 changes: 5 additions & 4 deletions docs/calico.md
@@ -119,13 +119,13 @@
recommended here:

You need to edit your inventory and add:

-* `calico-rr` group with nodes in it. At the moment it's incompatible with
-`kube-node` due to BGP port conflict with `calico-node` container. So you
-should not have nodes in both `calico-rr` and `kube-node` groups.
+* `calico-rr` group with nodes in it. `calico-rr` can be combined with
+`kube-node` and/or `kube-master`. The `calico-rr` group must also be a
+child group of the `k8s-cluster` group.
* `cluster_id` by route reflector node/group (see details
[here](https://hub.docker.com/r/calico/routereflector/))

-Here's an example of Kubespray inventory with route reflectors:
+Here's an example of Kubespray inventory with standalone route reflectors:

```
[all]
@@ -154,6 +154,7 @@
node5
[k8s-cluster:children]
kube-node
kube-master
+calico-rr
[calico-rr]
rr0
```
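The `cluster_id` mentioned above can be supplied as a group var on the route reflector group; a minimal sketch (the value is illustrative):

```
[calico-rr:vars]
cluster_id="1.0.0.1"
```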
23 changes: 15 additions & 8 deletions docs/getting-started.md
@@ -51,20 +51,27 @@
You may want to add worker, master or etcd nodes to your existing cluster. This
Remove nodes
------------

-You may want to remove **worker** nodes to your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then stop some kubernetes services and delete some certificates, and finally execute the kubectl command to delete these nodes. This can be combined with the add node function, This is generally helpful when doing something like autoscaling your clusters. Of course if a node is not working, you can remove the node and install it again.
-
-Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
-
-ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -v \
---private-key=~/.ssh/private_key
-
-Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node you want to delete.
+You may want to remove **master**, **worker**, or **etcd** nodes from your
+existing cluster. This can be done by re-running the `remove-node.yml`
+playbook. First, all specified nodes are drained; then some Kubernetes
+services are stopped and some certificates are deleted; finally, kubectl
+is run to delete these nodes. This can be combined with the add-node
+function, which is generally helpful when doing something like autoscaling
+your clusters. Of course, if a node is not working, you can remove it and
+install it again.

+Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node(s) you want to delete.
+```
+ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -v \
+--private-key=~/.ssh/private_key \
+--extra-vars "node=nodename,nodename2"
+```

+If a node is completely unreachable by ssh, add `--extra-vars reset_nodes=no`
+to skip the node reset step. If one node is unreachable but the other nodes
+you wish to remove can connect via SSH, set `reset_nodes=no` as a host var
+on the unreachable node in your inventory.
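A host-var sketch for that partially-reachable case (hostnames are illustrative):

```
[kube-node]
node2
# node3 is unreachable over SSH, so skip the reset step on it only
node3 reset_nodes=no
```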

Connecting to Kubernetes
------------------------

1 change: 1 addition & 0 deletions inventory/local/hosts.ini
@@ -12,3 +12,4 @@
node1
[k8s-cluster:children]
kube-node
kube-master
+calico-rr
1 change: 1 addition & 0 deletions inventory/sample/group_vars/k8s-cluster/addons.yml
@@ -80,6 +80,7 @@
rbd_provisioner_enabled: false
# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
+ingress_publish_status_address: ""
# ingress_nginx_nodeselector:
# beta.kubernetes.io/os: "linux"
# ingress_nginx_tolerations:
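This new variable pairs with the virtual-IP support for the ingress controller; setting it makes the controller report a fixed address in Ingress status. A sketch (the address is illustrative):

```
ingress_nginx_enabled: true
ingress_publish_status_address: "192.168.100.250"
```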
2 changes: 1 addition & 1 deletion inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
@@ -20,7 +20,7 @@
kube_users_dir: "{{ kube_config_dir }}/users"
kube_api_anonymous_auth: true

## Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.15.1
+kube_version: v1.15.2

# kubernetes image repo define
kube_image_repo: "gcr.io/google-containers"
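The version can also be overridden at run time rather than by editing this file; a sketch (the inventory path is illustrative):

```
ansible-playbook -i inventory/mycluster/hosts.yml upgrade-cluster.yml -b \
  -e kube_version=v1.15.2
```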
3 changes: 3 additions & 0 deletions inventory/sample/inventory.ini
@@ -28,6 +28,9 @@
# node5
# node6

+[calico-rr]

[k8s-cluster:children]
kube-master
kube-node
+calico-rr
15 changes: 8 additions & 7 deletions remove-node.yml
@@ -1,6 +1,7 @@
---
- hosts: localhost
become: no
+gather_facts: no
tasks:
- name: "Check ansible version >=2.7.8"
assert:
@@ -12,12 +13,8 @@
vars:
ansible_connection: local

-- hosts: all
-vars:
-ansible_ssh_pipelining: true
-gather_facts: true
-
- hosts: "{{ node | default('etcd:k8s-cluster:calico-rr') }}"
+gather_facts: no
vars_prompt:
name: "delete_nodes_confirmation"
prompt: "Are you sure you want to delete nodes state? Type 'yes' to delete nodes."
@@ -31,16 +28,20 @@
when: delete_nodes_confirmation != "yes"

- hosts: kube-master
+gather_facts: no
roles:
- { role: kubespray-defaults }
- { role: remove-node/pre-remove, tags: pre-remove }

- hosts: "{{ node | default('kube-node') }}"
+gather_facts: no
roles:
- { role: kubespray-defaults }
-- { role: reset, tags: reset }
+- { role: reset, tags: reset, when: reset_nodes|default(True) }

-- hosts: kube-master
+# Currently cannot remove first master or etcd
+- hosts: "{{ node | default('kube-master[1:]:etcd[:1]') }}"
+gather_facts: no
roles:
- { role: kubespray-defaults }
- { role: remove-node/post-remove, tags: post-remove }
2 changes: 1 addition & 1 deletion requirements.txt
@@ -1,4 +1,4 @@
-ansible==2.7.8
+ansible==2.7.12
jinja2==2.10.1
netaddr==0.7.19
pbr==5.2.0
35 changes: 29 additions & 6 deletions roles/download/defaults/main.yml
@@ -49,7 +49,7 @@
download_delegate: "{% if download_localhost %}localhost{% else %}{{ groups['kub
image_arch: "{{host_architecture | default('amd64')}}"

# Versions
-kube_version: v1.15.1
+kube_version: v1.15.2
kubeadm_version: "{{ kube_version }}"
etcd_version: v3.3.10

@@ -73,10 +73,10 @@
cni_version: "v0.8.1"
weave_version: 2.5.2
pod_infra_version: 3.1
contiv_version: 1.2.1
cilium_version: "v1.3.0"
cilium_version: "v1.5.5"
kube_ovn_version: "v0.6.0"
kube_router_version: "v0.2.5"
multus_version: "v3.1.autoconf"
multus_version: "v3.2.1"

crictl_version: "v1.15.0"

@@ -105,49 +105,61 @@
crictl_checksums:
# Checksums
hyperkube_checksums:
arm:
+v1.15.2: eeaa8e071541c7bcaa186ff1d2919d076b27ef70c9e9df70f910756eba55dc99
v1.15.1: fc5af96fd9341776d84c38675be7b8045dee20af327af9331972c422a4109918
v1.15.0: d923c781031bfd97d0fbe50311e4d7c3616aa5b6d466b99049931f09d73d07b9
+v1.14.5: 860b84dd32611a6008fe20fb998a2fc0a25ff44067eae556224827d05429c91e
v1.14.4: 429a10369b2ef35a9c2d662347277339d53fa66ef55ffeabcc7d9b850e31056d
v1.14.3: 3fac785261bcf79f7a80b12c4a1dda893ce8c0879caf57b36d4701730671b574
v1.14.2: 6929a59850c8702c04d62cd343d1143b17456da040f32317e09f8c25a08d2346
v1.14.1: 839a4abfeafbd5f5ab057ad0e8a0b0b488b3cde14a646eba040a7f579875f565
v1.14.0: d090b1da23564a7e9bb8f1f4264f2116536c52611ae203fe2ca13eaad0a8003e
arm64:
+v1.15.2: c4cf69f52c7013faee9d54e0f376e0732a4a7b0f7ffc7241e9b7e28bad0ac77f
v1.15.1: 80ed372c5f6c5178df88616175310057c06bdc9d0905953814a1927eb3aaa657
v1.15.0: 824af7d925b87a5ade63575b98b59ee81005fc76eac1dc399602308d7a60bc3c
+v1.14.5: 90c77847d64eb857c8e686e8593fe7a9e505bcbf960b0407217255827a9da59a
v1.14.4: 9e0b4fde88a07c705e0937cd3161392684e3ca08535d14a99ae3b86bbf4c56b3
v1.14.3: f29211d668cbcf1aa415dfa64aad95ffc53b5410482a23cddb680caec4e907a3
v1.14.2: 959fb7d9c17fc8f7cb1a69920aaf08aefd62c0fbf6b5bdc46250f147ea6a5cd4
v1.14.1: d5236efc2547fd07c7cc2ed9345dfbcd1204385847ca686cf1c62d15056de399
v1.14.0: 708e00a41f6516d525dee00c91ebe3c3bf2feaf9b7f0af7689487e3e17e356c2
amd64:
+v1.15.2: ab885606438748eb89a7738e219f5353d94c40c63a4935a539ce89760280f065
v1.15.1: 22b7b1e7f5f2a452d62e0ca4c2cba67119c51e04219aaeaf8452825f9177069e
v1.15.0: 3cc72cc58517b97c608c7a59a20255675bc70f07217c9e11e58cac7746139283
+v1.14.5: 2c3410518980b8705ba9b7b708076a206f2bde37cb8bf5ba8f15c32c697f4d97
v1.14.4: 5f31434f3a884257a7b0e3178fc869720a7526c8637af5713d23433ddf2592dd
v1.14.3: 6c6cb5c118b2129ba4e56697f42567be3587eb636a477cd342b69f87b3b049d1
v1.14.2: 05546057f2053e085fa8387ab82581c95fe4195cd783408ccbb4fc3487c50176
v1.14.1: fb34b98da9325feca8daa09bb934dbe6a533aad69c2a5599bbed81b99bb9c267
v1.14.0: af8b04504365dbe4ce6a1772f42eb390d4221a21149b522fc8a0c4b1cd3d97aa
kubeadm_checksums:
arm:
+v1.15.2: 4b35ad0031c08a83de7c8d9f9bbed6a30d93a5c74e16ea9e6211ad2e0e12bdd1
v1.15.1: 855abd520291dcef0577a1a2ef87a70f522fd2b22603a12abcd86c2f7ec9c022
v1.15.0: 9464030a1d4e101de5f47348f3514d5a9eb95cbce2e5e31f53ada1ca485cf75e
+v1.14.5: 0bb551f7468de2fa6f98ce60653495327be052364ac9f9e8917a4d1ad864412b
v1.14.4: 36835488d7187406690ee6aa4b3c9c54855cb5c55d786d0574a508b955fe3a46
v1.14.3: 270b8c346aeaa309d11d65695c4a90f6bff5b1ea14bdec3c417ca2dfb3de0db3
v1.14.2: d2a59269aa68a4bace2a80b247b6f9a82f0542ec3004185fb0ba86e181fdfb29
v1.14.1: 4bd111411208f1270ed3af8780b87d24a3c17c9fdbe4b0f8c7a9a21cd765543e
v1.14.0: 11f2cfa8bf7ee177dbac8073ab0f039dc265536baaa8dc0c4dea699f981f6fd1
arm64:
+v1.15.2: d3b6ee2048b366726ca366d2db4c46b2cacc38e8ec09cc35781d16593753d930
v1.15.1: 44fbfad0f1026d249fc4f365f1e9562cd52d75360d4d1032731122ba5a4d57dc
v1.15.0: fe3c79070814fe847a23209b1027672fe5c5e7e5c9611e329225058926836f96
+v1.14.5: 7dd1195d16980c4c888d13e49d97c3513f668e192bf2778bc0f0516e0f7fe2ac
v1.14.4: 60745b3ac761d3aa55ab9a24677ecf4e7f48b5abed34c725047a174456e5a79b
v1.14.3: 8edcc07c65f81eea3fc47cd237dd6560c6907c5e0ca52d71eab53ca1164e7d01
v1.14.2: bff0712b87796509129aa802ad3ac25b8cc83af01762b22b4dcca8dbdb26b520
v1.14.1: 5cf05464168e45ee4719264a267c65f9319fae1ceb9923fedab97a9d6a629e0b
v1.14.0: 7ed9d706e50cd6d3fc618a7af3d19b691b8a5343ddedaeccb4ea09af3ecfae2c
amd64:
+v1.15.2: fe2a13a1dea73249560ea44ab54c0359a9722e9c66832f6bcad86798438cba2f
v1.15.1: 3d42441ae177826f1181e559cd2a729464ca8efadef196cfa0e8053a615333b5
v1.15.0: fc4aa44b96dc143d7c3062124e25fed671cab884ebb8b2446edd10abb45e88c2
+v1.14.5: b3e840f7816f64e071d25f8a90b984eecd6251b68e568b420d85ef0a4dd514bb
v1.14.4: 291790a1cef82c4de28cc3338a199ca8356838ca26f775f2c2acba165b633d9f
v1.14.3: 026700dfff3c78be1295417e96d882136e5e1f095eb843e6575e57ef9930b5d3
v1.14.2: 77510f61352bb6e537e70730b670627963f2c314fbd36a644b0c435b97e9705a
@@ -237,8 +249,10 @@
contiv_ovs_image_repo: "docker.io/contiv/ovs"
contiv_ovs_image_tag: "latest"
cilium_image_repo: "docker.io/cilium/cilium"
cilium_image_tag: "{{ cilium_version }}"
cilium_init_image_repo: "docker.io/library/busybox"
cilium_init_image_tag: "1.28.4"
cilium_init_image_repo: "docker.io/cilium/cilium-init"
cilium_init_image_tag: "2019-04-05"
cilium_operator_image_repo: "docker.io/cilium/operator"
cilium_operator_image_tag: "{{ cilium_version }}"
kube_ovn_db_image_repo: "index.alauda.cn/alaudak8s/kube-ovn-db"
kube_ovn_node_image_repo: "index.alauda.cn/alaudak8s/kube-ovn-node"
kube_ovn_cni_image_repo: "index.alauda.cn/alaudak8s/kube-ovn-cni"
@@ -261,7 +275,7 @@
coredns_version: "1.6.0"
coredns_image_repo: "docker.io/coredns/coredns"
coredns_image_tag: "{{ coredns_version }}"

nodelocaldns_version: "1.15.1"
nodelocaldns_version: "1.15.4"
nodelocaldns_image_repo: "k8s.gcr.io/k8s-dns-node-cache"
nodelocaldns_image_tag: "{{ nodelocaldns_version }}"

@@ -415,6 +429,15 @@
downloads:
groups:
- k8s-cluster

+cilium_operator:
+enabled: "{{ kube_network_plugin == 'cilium' }}"
+container: true
+repo: "{{ cilium_operator_image_repo }}"
+tag: "{{ cilium_operator_image_tag }}"
+sha256: "{{ cilium_operator_digest_checksum|default(None) }}"
+groups:
+- k8s-cluster

multus:
enabled: "{{ kube_network_plugin_multus }}"
container: true
17 changes: 13 additions & 4 deletions roles/etcd/tasks/configure.yml
@@ -64,15 +64,19 @@
when: is_etcd_master and etcd_events_cluster_setup

- name: Configure | Check if etcd cluster is healthy
shell: "{{ bin_dir }}/etcdctl --endpoints={{ etcd_access_addresses }} cluster-health | grep -q 'cluster is healthy'"
shell: "{{ bin_dir }}/etcdctl --no-sync --endpoints={{ etcd_client_url }} cluster-health | grep -q 'cluster is healthy'"
register: etcd_cluster_is_healthy
until: etcd_cluster_is_healthy.rc == 0
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
ignore_errors: false
changed_when: false
check_mode: no
-when: is_etcd_master and etcd_cluster_setup
+delegate_to: "{{ groups['etcd'][0] }}"
+run_once: yes
+when:
+- is_etcd_master
+- etcd_cluster_setup
tags:
- facts
environment:
@@ -81,15 +85,20 @@
ETCDCTL_CA_FILE: "{{ etcd_cert_dir }}/ca.pem"

- name: Configure | Check if etcd-events cluster is healthy
shell: "{{ bin_dir }}/etcdctl --endpoints={{ etcd_events_access_addresses }} cluster-health | grep -q 'cluster is healthy'"
shell: "{{ bin_dir }}/etcdctl --no-sync --endpoints={{ etcd_events_client_url }} cluster-health | grep -q 'cluster is healthy'"
register: etcd_events_cluster_is_healthy
until: etcd_events_cluster_is_healthy.rc == 0
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
ignore_errors: false
changed_when: false
check_mode: no
-when: is_etcd_master and etcd_events_cluster_setup
+delegate_to: "{{ groups['etcd'][0] }}"
+run_once: yes
+when:
+- is_etcd_master
+- etcd_events_cluster_setup
+- etcd_cluster_setup
tags:
- facts
environment:
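Roughly, the health check above runs the v2 etcdctl against the first etcd member, with TLS material passed through environment variables; a manual equivalent might look like this sketch (paths and endpoint are assumed defaults, not taken from the diff):

```
export ETCDCTL_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
export ETCDCTL_CERT_FILE=/etc/ssl/etcd/ssl/admin-node1.pem
export ETCDCTL_KEY_FILE=/etc/ssl/etcd/ssl/admin-node1-key.pem
/usr/local/bin/etcdctl --no-sync --endpoints=https://192.168.0.2:2379 cluster-health
```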
