Add backup/restore test in the CI #1687
Labels: complexity:medium (Something that requires one or few days to fix), topic:tests (What's not tested may be broken)
Comments
TeddyAndrieux added the topic:tests, moonshot, and complexity:medium labels on Sep 13, 2019
alexandre-allard added a commit that referenced this issue on Nov 21, 2019:
We need jq to parse the JSON output of some commands in the CI context, in the multiple-node tests. Refs: #1687
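The commit above adds jq for exactly this kind of extraction. A minimal sketch of the pattern, assuming an illustrative JSON shape (the field names and values here are not the actual CI payload):

```shell
# Hypothetical inventory-style output from a CI command (illustrative only).
nodes_json='{"nodes": [{"name": "node-1", "ip": "10.0.0.2"}, {"name": "node-2", "ip": "10.0.0.3"}]}'

# `-r` makes jq print raw strings (no JSON quotes), which is what shell
# loops and variable assignments expect.
echo "$nodes_json" | jq -r '.nodes[].ip'
```

This prints one IP per line, ready to feed into a `while read` loop in a test script.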
alexandre-allard added a commit that referenced this issue on Nov 21, 2019:
Create the network used by the control plane. This allows using a VIP for the apiserver, which will be used to test the backup/restore mechanism. Refs: #1687
alexandre-allard added a commit that referenced this issue on Nov 21, 2019:
Link the control plane interfaces to the nodes. Refs: #1687
alexandre-allard added a commit that referenced this issue on Nov 21, 2019:
This script creates the configuration for the new network interface used by the control plane network and ensures that the interface is up. It would be better to replace it with a cloud-init script at some point, but for now it does the job. Refs: #1687
alexandre-allard added a commit that referenced this issue on Nov 21, 2019:
Use the dedicated control plane network and VIP in the bootstrap configuration script. Also enable the usage of keepalived. Refs: #1687
alexandre-allard added a commit that referenced this issue on Nov 21, 2019:
Tests were broken by the introduction of the VIP: we were taking the first IP matching the CIDR, but the VIP will always come first, as it is the first IP in the subnet. Refs: #1687
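A sketch of the fix described above: instead of blindly taking the first IP matching the CIDR, skip the VIP when selecting a node address (the addresses below are illustrative, not the CI subnet):

```shell
# The VIP is the first IP of the subnet, so it always sorts first in the
# list of matching addresses (example values for illustration).
vip="192.168.1.1"
node_ips="192.168.1.1
192.168.1.2
192.168.1.3"

# Take the first IP that is NOT the VIP: exact, fixed-string line match.
node_ip="$(printf '%s\n' "$node_ips" | grep -v -x -F "$vip" | head -n 1)"
echo "$node_ip"
```

With the example values this selects `192.168.1.2`, the first real node address after the VIP.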
alexandre-allard added a commit that referenced this issue on Nov 21, 2019:
This script can be used to add a new node to an already running cluster. It is needed for the restore script tests, as we need at least 3 nodes in the etcd cluster to be able to restore the bootstrap node. Refs: #1687
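The 3-node requirement follows from etcd's quorum arithmetic: a cluster of n members needs a majority of n/2 + 1 to stay available, so it tolerates losing n - quorum members. A quick sketch of the computation:

```shell
# etcd quorum arithmetic: with 3 members, quorum is 2, so losing one member
# (the bootstrap node under test) still leaves a working majority.
members=3
quorum=$(( members / 2 + 1 ))
tolerated=$(( members - quorum ))
echo "members=$members quorum=$quorum tolerated_failures=$tolerated"
```

With 2 members the tolerated failure count is 0, which is why the restore test cannot run on a smaller cluster.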
alexandre-allard added a commit that referenced this issue on Nov 21, 2019
alexandre-allard added a commit that referenced this issue on Nov 21, 2019
alexandre-allard added a commit that referenced this issue on Nov 21, 2019
alexandre-allard added a commit that referenced this issue on Nov 22, 2019
alexandre-allard added a commit that referenced this issue on Nov 24, 2019:
This script can be used to add a new node to an already running cluster. It is needed for the restore script tests, as we need at least 3 nodes in the etcd cluster to be able to restore the bootstrap node. Refs: #1687
alexandre-allard added a commit that referenced this issue on Nov 24, 2019
alexandre-allard added a commit that referenced this issue on Nov 24, 2019
alexandre-allard added a commit that referenced this issue on Nov 25, 2019
TeddyAndrieux pushed a commit that referenced this issue on May 19, 2020:
This scenario runs the restore script and ensures that everything works as expected (pods, etc.) after the restore. Refs: #1687
TeddyAndrieux added a commit that referenced this issue on May 19, 2020:
Launch the bootstrap restore tests in CI during the post-merge stages; this CI step first needs a full MetalK8s cluster with at least a 3-node etcd cluster. Refs: #1687
TeddyAndrieux added a commit that referenced this issue on May 19, 2020:
Launch the bootstrap restore tests in CI during the post-merge stages; this CI step first needs a full MetalK8s cluster with at least a 3-node etcd cluster. Refs: #1687
TeddyAndrieux added a commit that referenced this issue on May 19, 2020:
Launch the bootstrap restore tests in CI during the post-merge stages; this CI step first needs a full MetalK8s cluster with at least a 3-node etcd cluster. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 20, 2020:
Deploy two nodes instead of only one during the expansion tests, to get an HA etcd cluster and be able to test the bootstrap restore. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 20, 2020
TeddyAndrieux pushed a commit that referenced this issue on May 20, 2020:
Add a function to retrieve the control plane IP of a node through SSH, using the salt-call command. This function is needed to retrieve the IP of an API server for the restore of the bootstrap node. Refs: #1687
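A dry-run sketch of what such a helper might look like. The node name, the use of sudo, and the grain path queried are all assumptions for illustration (the actual function lives in the test helpers); the command is echoed rather than executed so the sketch runs without a cluster:

```shell
# Hypothetical node name and grain path -- adjust to the real deployment.
node="node-1"
remote_cmd="sudo salt-call --local --out txt grains.get metalk8s:control_plane_ip"

# Dry run: print the command that would be executed. For real use, replace
# `echo` with `ssh "$node"` and parse the salt-call output.
echo "ssh $node $remote_cmd"
```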
TeddyAndrieux pushed a commit that referenced this issue on May 20, 2020:
This `then` step will also be useful in the restore tests, so we move it to a common place. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 20, 2020:
We don't want to check for the presence of kube-dns pods on the bootstrap node, because pods can move to other nodes, especially during the restore tests. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 20, 2020:
This scenario runs the restore script and ensures that everything works as expected (pods, etc.) after the restore. Refs: #1687
TeddyAndrieux added a commit that referenced this issue on May 20, 2020:
Launch the bootstrap restore tests in CI during the post-merge stages; this CI step first needs a full MetalK8s cluster with at least a 3-node etcd cluster. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 20, 2020:
This `then` step will also be useful in the restore tests, so we move it to a common place. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 20, 2020:
We don't want to check for the presence of kube-dns pods on the bootstrap node, because pods can move to other nodes, especially during the restore tests. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 20, 2020:
This scenario runs the restore script and ensures that everything works as expected (pods, etc.) after the restore. Refs: #1687
TeddyAndrieux added a commit that referenced this issue on May 20, 2020:
Launch the bootstrap restore tests in CI during the post-merge stages; this CI step first needs a full MetalK8s cluster with at least a 3-node etcd cluster. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 25, 2020:
Deploy two nodes instead of only one during the expansion tests, to get an HA etcd cluster and be able to test the bootstrap restore. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 25, 2020
TeddyAndrieux pushed a commit that referenced this issue on May 25, 2020:
This `then` step will also be useful in the restore tests, so we move it to a common place. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 25, 2020:
We don't want to check for the presence of kube-dns pods on the bootstrap node, because pods can move to other nodes, especially during the restore tests. Refs: #1687
TeddyAndrieux pushed a commit that referenced this issue on May 25, 2020:
This scenario runs the restore script and ensures that everything works as expected (pods, etc.) after the restore. Refs: #1687
TeddyAndrieux added a commit that referenced this issue on May 25, 2020:
Launch the bootstrap restore tests in CI during the post-merge stages; this CI step first needs a full MetalK8s cluster with at least a 3-node etcd cluster. Refs: #1687
This was referenced Sep 15, 2020
Component: 'tests'

Why this is needed:
Currently, a backup is launched at the end of each bootstrap, but the restore is never tested.

What should be done:
Add a test for bootstrap recovery.

Implementation proposal (strongly recommended):
Spawn a MetalK8s cluster, retrieve the backup file from /var/lib/metalk8s/, destroy the bootstrap node, then spawn a new machine and use it to spawn a new bootstrap node.
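The proposed flow can be sketched as a dry run. Every step is echoed rather than executed, since the real steps need a running MetalK8s cluster; the helper names and the backup file name are assumptions for illustration, not actual CI commands:

```shell
# Dry-run sketch of the proposed backup/restore test flow.
# Swap `echo` for real execution inside the CI job.
run() { echo "+ $*"; }

run spawn_metalk8s_cluster                          # 1. spawn a cluster
run scp bootstrap:/var/lib/metalk8s/backup.tar.gz . # 2. fetch the backup (name assumed)
run destroy_node bootstrap                          # 3. destroy the bootstrap node
run spawn_machine new-bootstrap                     # 4. spawn a fresh machine
run restore_bootstrap new-bootstrap backup.tar.gz   # 5. restore from the backup
```

Keeping each step behind a single `run` wrapper makes it easy to log the exact commands the CI executed when a restore test fails.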