install the latest excluders #3502
Conversation
aos-ci-test:
fd05bcd - State: success - All Test Contexts: aos-ci-jenkins/OS_unit_tests - Logs: https://aos-ci.s3.amazonaws.com/openshift/openshift-ansible/jenkins-openshift-ansible-2-unit-tests-1010/fd05bcdb0fa633e2508d4f6ed9d95e9ac917d4c3.txt
@@ -20,13 +20,15 @@
- name: Update to latest excluder packages
  package:
    name: "{{ openshift.common.service_type }}-excluder"
    state: latest
Should we be updating the excluder package outside of an upgrade?
If the excluders are enabled, they must be updated during the upgrade. Otherwise, they are of no use.
For atomic-openshift-excluder/origin-excluder this doesn't matter, as those are unversioned excluders.
For atomic-openshift-docker-excluder it's OK because the excludes are bound by the version of the excluder that's specific to a given release; we should never ship a new excluder to the same repo that would introduce an incompatible docker version.
For origin-docker-excluder this will be problematic right now because multiple versions live in one repo, but that won't be the case for 3.6 (I think?). However, we've only provided excluders for 1.4 and soon 1.5, both of which support docker 1.12.
So I think this is OK, assuming we follow through on origin 3.6 living in a unique repo.
@sdodson @ingvagabund Is the openshift_excluder role applied outside of the upgrade playbooks? My concern with updating the excluder packages in status.yml is that we would update the package outside of the upgrade workflow.
The openshift_excluder role is included in the playbooks/common/openshift-cluster playbook, which is included in all upgrade playbooks:
$ cat playbooks/common/openshift-cluster/disable_excluder.yml
---
- name: Record excluder state and disable
hosts: l_oo_all_hosts
gather_facts: no
tasks:
- include_role:
name: openshift_excluder
tasks_from: status
- include_role:
name: openshift_excluder
tasks_from: unexclude
$ grep -rn "common/openshift-cluster/disable_excluder.yml"
playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml:57:- include: ../../../../common/openshift-cluster/disable_excluder.yml
playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml:50:- include: ../../../../common/openshift-cluster/disable_excluder.yml
playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade.yml:49:- include: ../../../../common/openshift-cluster/disable_excluder.yml
playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml:57:- include: ../../../../common/openshift-cluster/disable_excluder.yml
playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml:50:- include: ../../../../common/openshift-cluster/disable_excluder.yml
playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade.yml:49:- include: ../../../../common/openshift-cluster/disable_excluder.yml
playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml:57:- include: ../../../../common/openshift-cluster/disable_excluder.yml
playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml:50:- include: ../../../../common/openshift-cluster/disable_excluder.yml
playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml:49:- include: ../../../../common/openshift-cluster/disable_excluder.yml
It does not look nice; see #3529.
Unfortunately, earlier versions of the excluders lacked a status function, so I'd really like to ensure they're up to date so that we can rely on the status function. I've also just realized that we're not actually calling the status function below, so we should fix that.
https://github.com/openshift/openshift-ansible/pull/3502/files#diff-b6b3dc3686358fd5aacff902dccb98a9R2 explains why I'd updated them in this task file.
@sdodson @ingvagabund: I still think it'd be better to ensure a minimum version of the excluder rather than doing an update each time.
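A minimal sketch of what that minimum-version approach could look like. The `openshift_excluder_min_version` variable and the `>=` package spec are illustrative assumptions, not part of this PR; the yum backend of the `package` module accepts `name >= version` specs, so `state: present` would then enforce a floor version instead of unconditionally updating:

```yaml
# Hypothetical alternative to state: latest — ensure a minimum excluder
# version rather than updating on every run.
# openshift_excluder_min_version is an assumed variable, not defined here.
- name: Ensure excluder package meets the minimum version
  package:
    name: "{{ openshift.common.service_type }}-excluder >= {{ openshift_excluder_min_version }}"
    state: present
```

With this shape, hosts already at or above the floor version are left untouched, which addresses the concern about mutating packages outside the upgrade workflow.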
Let's move the discussion to #3529 (comment).
fd05bcd - State: success - All Test Contexts: "aos-ci-jenkins/OS_3.4_NOT_containerized, aos-ci-jenkins/OS_3.4_NOT_containerized_e2e_tests" - Logs: https://aos-ci.s3.amazonaws.com/openshift/openshift-ansible/jenkins-openshift-ansible-3-test-matrix-CONTAINERIZED=_NOT_containerized,OSE_VER=3.4,PYTHON=System-CPython-2.7,TOPOLOGY=openshift-cluster,TargetBranch=master,nodes=openshift-ansible-slave-1014/fd05bcdb0fa633e2508d4f6ed9d95e9ac917d4c3.txt
fd05bcd - State: success - All Test Contexts: "aos-ci-jenkins/OS_3.4_containerized, aos-ci-jenkins/OS_3.4_containerized_e2e_tests" - Logs: https://aos-ci.s3.amazonaws.com/openshift/openshift-ansible/jenkins-openshift-ansible-3-test-matrix-CONTAINERIZED=_containerized,OSE_VER=3.4,PYTHON=System-CPython-2.7,TOPOLOGY=openshift-cluster-containerized,TargetBranch=master,nodes=openshift-ansible-slave-1014/fd05bcdb0fa633e2508d4f6ed9d95e9ac917d4c3.txt
fd05bcd - State: success - All Test Contexts: "aos-ci-jenkins/OS_3.5_NOT_containerized, aos-ci-jenkins/OS_3.5_NOT_containerized_e2e_tests" - Logs: https://aos-ci.s3.amazonaws.com/openshift/openshift-ansible/jenkins-openshift-ansible-3-test-matrix-CONTAINERIZED=_NOT_containerized,OSE_VER=3.5,PYTHON=System-CPython-2.7,TOPOLOGY=openshift-cluster,TargetBranch=master,nodes=openshift-ansible-slave-1014/fd05bcdb0fa633e2508d4f6ed9d95e9ac917d4c3.txt
fd05bcd - State: success - All Test Contexts: "aos-ci-jenkins/OS_3.5_containerized, aos-ci-jenkins/OS_3.5_containerized_e2e_tests" - Logs: https://aos-ci.s3.amazonaws.com/openshift/openshift-ansible/jenkins-openshift-ansible-3-test-matrix-CONTAINERIZED=_containerized,OSE_VER=3.5,PYTHON=System-CPython-2.7,TOPOLOGY=openshift-cluster-containerized,TargetBranch=master,nodes=openshift-ansible-slave-1014/fd05bcdb0fa633e2508d4f6ed9d95e9ac917d4c3.txt
We want the latest version installed during the upgrade.
Tested the upgrade; the RPMs are updated when the 3.4 versions are installed.
Fixes: 1426070