
etcd migrate: instead of scaleup playbook etcd server should be started back #7556

Conversation

michaelgugino
Contributor
The master doesn't need to be restarted, nor its etcd URLs updated, while new etcd
nodes are being added. There is no need to run the scaleup playbook, as the etcd
nodes have already been added to the cluster.

The migrate procedure now does the following:

  • checks whether the etcd data needs to be migrated
  • makes an etcd backup
  • stops the etcd services on all nodes except the first one
  • migrates the etcd data on the first etcd node
  • clears the data on the other etcd nodes
  • updates the etcd cluster configuration, setting ETCD_INITIAL_CLUSTER and
    ETCD_INITIAL_CLUSTER_STATE
  • starts the etcd service again, one node at a time
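The cluster-configuration update step can be sketched as follows. This is a minimal shell sketch: the config file path, member names, and peer URLs are illustrative assumptions, not values taken from the playbook.

```shell
# Hypothetical sketch of the ETCD_INITIAL_CLUSTER(_STATE) update; the config
# path, member names, and URLs below are illustrative, not from the playbook.
conf=$(mktemp)
cat > "$conf" <<'EOF'
ETCD_INITIAL_CLUSTER=etcd1=https://10.0.0.1:2380
ETCD_INITIAL_CLUSTER_STATE=new
EOF

# After the migration, every member must list the full cluster and join it
# as "existing" rather than bootstrapping a new one.
new_cluster="etcd1=https://10.0.0.1:2380,etcd2=https://10.0.0.2:2380,etcd3=https://10.0.0.3:2380"
sed -i "s|^ETCD_INITIAL_CLUSTER=.*|ETCD_INITIAL_CLUSTER=${new_cluster}|" "$conf"
sed -i "s|^ETCD_INITIAL_CLUSTER_STATE=.*|ETCD_INITIAL_CLUSTER_STATE=existing|" "$conf"

cat "$conf"
```

Only after this rewrite is applied on every member is the etcd service started back, one node at a time.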

At that point the only copy of the migrated data is on the first etcd node, which
replicates it to the other nodes.

After the migration is done, the master configs are updated and services are
restarted if needed.

(cherry picked from commit 7e30cda)

@openshift-ci-robot openshift-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Mar 16, 2018
@ingvagabund
Member

ingvagabund commented Mar 16, 2018

We should remove playbooks/openshift-etcd/scaleup.yml as well, imho. It does not seem to be used anywhere else.

```shell
$ grep -rn "scaleup.yml"
openshift-etcd/private/migrate.yml:113:- import_playbook: scaleup.yml
openshift-etcd/scaleup.yml:4:- import_playbook: private/scaleup.yml
openshift-master/scaleup.yml:23:- import_playbook: private/scaleup.yml
```

@michaelgugino
Contributor Author

> We should remove playbooks/openshift-etcd/scaleup.yml as well, imho. It does not seem to be used anywhere else.

@ingvagabund That's because it's an 'entry point.' There should be no other playbooks importing that playbook, it's a playbook that users would run when they wish to add etcd hosts to the cluster.
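The entry-point layout described above can be illustrated with a small, self-contained demo. The `demo/` paths below are hypothetical stand-ins for the real tree under `playbooks/`: the entry point does nothing but import the private playbook, which is why a recursive grep finds no other playbook importing the entry point itself.

```shell
# Hypothetical sketch of the entry-point layout; the demo/ paths are
# illustrative stand-ins for playbooks/openshift-etcd/.
mkdir -p demo/openshift-etcd/private
touch demo/openshift-etcd/private/scaleup.yml
cat > demo/openshift-etcd/scaleup.yml <<'EOF'
---
# Entry point: run directly by users who wish to add etcd hosts, e.g.
#   ansible-playbook -i hosts demo/openshift-etcd/scaleup.yml
- import_playbook: private/scaleup.yml
EOF

# Only the entry point imports the private playbook; nothing imports
# the entry point, so removing it would remove the user-facing command.
grep -rn "import_playbook" demo/
```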

@michaelgugino
Contributor Author

/retest

@openshift-ci-robot

openshift-ci-robot commented Mar 26, 2018

@michaelgugino: The following tests failed, say /retest to rerun them all:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| ci/openshift-jenkins/system-containers | 62770f7 | link | /test system-containers |
| ci/openshift-jenkins/extended_conformance_install_crio | 62770f7 | link | /test crio |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@vrutkovs
Member

Replaced by #7662?
