forked from rook/rook
Rsync release 1.2 #28
Merged
Conversation
Currently, each time the Ceph operator restarts, the OSDs also restart. This is caused by a change in the --crush-location argument: the location keys are not ordered, so their order can change at each restart and produce a spurious diff in the OSD deployment:

104c105
< "--crush-location=root=default host=hostname datacenter=PAR pod=2 rack=2",
---
> "--crush-location=root=default host=hostname pod=2 rack=2 datacenter=PAR",

This change sorts the topology to keep the command argument stable.

Signed-off-by: n.fraison <n.fraison@criteo.com>
(cherry picked from commit 7bac027)
If an image is specified in the external cluster CR, the 'rook-ceph-config' config map will be created, and child CRs like rgw/mds/nfs can then be created. Also, calling config.GetStore() was a mistake, since it is called by the health check when new mons are being added to clusterInfo.

Closes: #4686
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit f59bdd6)
ceph: fix external cluster config override cm creation (bp #4734)
ceph: ensure crush-location osd command args are always the same to avoid useless OSD restarts (bp #4729)
ceph: fix resources for prepare job (bp #4728)
In order for deployments managed outside the immediate scope of the ClusterController to accurately detect the Ceph version of their images, the ClusterController must store the detected version. This change records the Ceph version as part of the cluster status and modifies the crashcollector deployment to consume this field to record its own Ceph version. This replaces an earlier solution to the same problem in which the image-to-version mapping was stored in a global in-memory map by the operator.

Resolves: #4357
Signed-off-by: Elise Gafford <egafford@redhat.com>
(cherry picked from commit b19f8d4)
ceph: add ceph version to cluster status (bp #4629)
This modification guarantees that at least one MDS pod is placed in each of the available zones in the k8s cluster. This could be improved in the future using Pod Topology Spread Constraints (still in alpha state since Kubernetes v1.16). The modification obtains the number of zones in the k8s cluster and appends an anti-affinity term using the topologyKey topology.kubernetes.io/zone to the same number of MDS pods. If there are more MDS pods than zones, the anti-affinity term is not added to the extra pods.

Important note: the anti-affinity term only works with nodes labeled using the new label topology.kubernetes.io/zone, present in k8s v1.17 clusters. Previous versions of k8s use the label failure-domain.beta.kubernetes.io/zone, so this modification will not work on those clusters.

Resolves #4641
[test ceph]
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
(cherry picked from commit f62b2fa)
Ceph: MDS pod placement adheres to fault domain topology (bp #4680)
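An anti-affinity term of the kind described above looks roughly like the following YAML fragment. This is a hedged sketch of the general Kubernetes mechanism, not the exact spec rook generates; the app label value is an assumption for illustration.

```yaml
# Illustrative podAntiAffinity term appended to an MDS pod spec so that
# MDS pods with the same label are kept in different zones.
# Requires nodes labeled with topology.kubernetes.io/zone (k8s v1.17+).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: rook-ceph-mds   # assumed label; actual selector may differ
        topologyKey: topology.kubernetes.io/zone
```

With a required (hard) term like this, the scheduler refuses to co-locate two matching MDS pods in the same zone, which is why the commit only adds the term to as many pods as there are zones.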
leseb added a commit that referenced this pull request on Sep 30, 2021:
Bug 1983756: ceph: do not build all the args to remote exec cmd