
Rsync release 1.2 #28

Merged
merged 12 commits into from Jan 29, 2020

Conversation

leseb

@leseb leseb commented Jan 29, 2020

No description provided.

leseb and others added 12 commits January 21, 2020 16:52
Now we rely purely on what the user puts in the CR and apply it.
Prior to this commit we defaulted to 0 when the value was not declared,
which was not intentional.

Closes: #4713
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit ef8ef45)
Currently, each time the Ceph operator restarts, the OSDs also restart.
This is caused by the crush-location args: the location keys are not ordered,
so the order can change on each restart and produce a diff like this in the OSD deployment:
104c105
<                             "--crush-location=root=default host=hostname datacenter=PAR pod=2 rack=2",
---
>                             "--crush-location=root=default host=hostname pod=2 rack=2 datacenter=PAR",
Sorting the topology keys ensures the stability of this command arg.

Signed-off-by: n.fraison <n.fraison@criteo.com>

(cherry picked from commit 7bac027)
If an image is specified in the external cluster CR, the
'rook-ceph-config' config map will be created and then child CRs like
rgw/mds/nfs can be created.

Also, calling config.GetStore() was a mistake since it is called by the
health check when new mons are being added to clusterInfo.

Closes: #4686
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit f59bdd6)
ceph: fix external cluster config override cm creation (bp #4734)
ceph: ensure crush-location osd command args is always the same to avoid useless osds restart (bp #4729)

Before, every few seconds the config files for the cluster were written to disk,
even when there were no changes in the cluster's monitors.

Fixes: #4717
Signed-off-by: Elias Wimmer <elias.wimmer@tuwien.ac.at>
(cherry picked from commit 70f312e)
ceph: fixed #4717 Ceph external cluster, repeatedly saving mon endpoints to config (bp #4725)
In order for deployments managed outside the immediate scope of
the ClusterController to accurately detect the Ceph version of
their images, the ClusterController must store the detected
version. This change records the Ceph version as part of the
cluster status and modifies the crashcollector deployment to
consume this field to record its own Ceph version. This replaces
an earlier solution to this problem in which the image-to-version
mapping was stored in a global in-memory map by the operator.

Resolves: #4357
Signed-off-by: Elise Gafford <egafford@redhat.com>
(cherry picked from commit b19f8d4)
ceph: add ceph version to cluster status (bp #4629)
With this modification it is guaranteed that at least one MDS pod will be
placed in each of the available zones in the k8s cluster.
This could be improved in the future using:
<Pod Topology Spread Constraints> (still in alpha since Kubernetes v1.16)

This modification obtains the number of zones in the k8s cluster and appends an
<antiaffinity> term using the topologyKey
<topology.kubernetes.io/zone> to the same number of MDS pods.
If there are more MDS pods than zones, the <antiaffinity> term is not
added to the extra pods.

Important Note:
The antiaffinity term will only work with nodes labeled using the NEW label:
 <topology.kubernetes.io/zone>, present in k8s v1.17 clusters.
Previous versions of k8s clusters use the label:
 <failure-domain.beta.kubernetes.io/zone>
and therefore this modification won't work on those clusters.

Resolves #4641

[test ceph]

Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
(cherry picked from commit f62b2fa)
Ceph: MDS pod placement adheres to fault domain topology (bp #4680)
@openshift-ci-robot openshift-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jan 29, 2020
@leseb leseb merged commit e687f03 into red-hat-storage:ocs-4.3 Jan 29, 2020
leseb added a commit that referenced this pull request Sep 30, 2021
Bug 1983756: ceph: do not build all the args to remote exec cmd