
Release 1.1 #17

Merged
merged 4 commits into red-hat-storage:ocs-4.2 on Dec 3, 2019

Conversation

@leseb commented Dec 3, 2019

Description of your changes:

Which issue is resolved by this Pull Request:
Resolves #

Checklist:

  • Reviewed the developer guide on Submitting a Pull Request
  • Documentation has been updated, if necessary.
  • Unit tests have been added, if necessary.
  • Integration tests have been added, if necessary.
  • Pending release notes updated with breaking and/or notable changes, if necessary.
  • Upgrade from previous release is tested and upgrade user guide is updated, if necessary.
  • Code generation (make codegen) has been run to update object specifications, if necessary.
  • Comments have been added or updated based on the standards set in CONTRIBUTING.md
  • Add the flag for skipping the CI if this PR does not require a build. See here for more details.

travisn and others added 4 commits December 2, 2019 17:50
Some log messages indicate when the upgrade checks will be
performed. If the Ceph image hasn't changed, the upgrade checks
are skipped. The messages were confusing when running a Rook
upgrade.

Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
(cherry picked from commit 87e2b09)
Clarify log message for Ceph upgrades (bp #4360)
We recently discovered a race condition that can prevent the
child CRDs from being updated when the CR image changes.

The race happens under the following sequence:

* orchestration onAdd() is called when the operator restarts
or its image is updated
* the orchestration then goes into processing mon/mgr/osd
* while this is in progress the cluster CR is updated, so onUpdate()
is called, even though onAdd() has not yet finished for the child
CRDs (mds, rgw)
* onUpdate() runs against the cluster CR and updates the cluster images
* by the time the child CRDs reach onUpdate() from their respective
ParentClusterChanged() methods, the image is already up to date,
so we were exiting early, since our check for updating
these resources is based on the cluster image version

To fix this, we changed the comparison to use isUpgrade instead, and
slightly changed how isUpgrade operates. isUpgrade is now true
even if the Ceph version is identical, which allows us to verify
whether or not the image changed.
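The core of the fix can be illustrated with a minimal Go sketch. The type and function names here (clusterInfo, isUpgrade) are hypothetical stand-ins, not the actual Rook code: the point is that the upgrade check compares container images rather than only Ceph version numbers, so it still fires when onUpdate() has already rewritten the cluster image before the child CRD controllers run their comparison.

```go
package main

import "fmt"

// clusterInfo is a simplified, hypothetical stand-in for the cluster
// state a Rook controller would see.
type clusterInfo struct {
	currentImage   string // image the cluster is currently running
	requestedImage string // image requested in the cluster CR
}

// isUpgrade reports whether the child CRD controllers (mds, rgw)
// should re-run their update logic. It returns true whenever the
// image changed, even if the Ceph version inside both images is
// identical -- this is the behavior change described above.
func isUpgrade(c clusterInfo) bool {
	return c.currentImage != c.requestedImage
}

func main() {
	// Same Ceph version, but a rebuilt image: still treated as an upgrade.
	c := clusterInfo{
		currentImage:   "ceph/ceph:v14.2.4-20191112",
		requestedImage: "ceph/ceph:v14.2.4-20191203",
	}
	fmt.Println(isUpgrade(c)) // prints "true"
}
```

Comparing images instead of versions makes the check independent of ordering between onAdd() and onUpdate(): whichever handler runs first, a changed image is never silently skipped.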

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1775624
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit db61447)
ceph: fix race condition on crd child update (bp #4403)
@leseb leseb merged commit 1492ce7 into red-hat-storage:ocs-4.2 Dec 3, 2019
@openshift-ci-robot openshift-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Dec 3, 2019
leseb pushed a commit that referenced this pull request Sep 30, 2021
Sync from upstream 1.7 to downstream 4.9