Merge pull request #2748 from travisn/release-0-9-3-version
Set the rook image version to v0.9.3 in the example yamls
travisn committed Mar 2, 2019
2 parents d0c11c5 + 27a0c7c commit 1fa68b3
Showing 11 changed files with 18 additions and 18 deletions.
2 changes: 1 addition & 1 deletion Documentation/ceph-toolbox.md
@@ -36,7 +36,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
-image: rook/ceph:v0.9.2
+image: rook/ceph:v0.9.3
command: ["/tini"]
args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
16 changes: 8 additions & 8 deletions Documentation/ceph-upgrade.md
@@ -33,7 +33,7 @@ those releases.
### Patch Release Upgrades
One of the goals of the 0.9 release is that patch releases are able to be automated completely by
the Rook operator. It is intended that upgrades from one patch release to another are as simple as
-updating the image of the Rook operator. For example, when Rook v0.9.2 is released, the process
+updating the image of the Rook operator. For example, when Rook v0.9.3 is released, the process
should be as simple as running the following:
```
kubectl -n rook-ceph-system set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v0.9.x
@@ -243,11 +243,11 @@ kubectl -n $ROOK_NAMESPACE patch rolebinding rook-ceph-osd-psp -p "{\"subjects\"
```

### 3. Update the Rook operator image
-The largest portion of the upgrade is triggered when the operator's image is updated to v0.9.2, and
+The largest portion of the upgrade is triggered when the operator's image is updated to v0.9.3, and
with the greatly-expanded automatic update features in the new version, this is all done
automatically.
```sh
-kubectl -n $ROOK_SYSTEM_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v0.9.2
+kubectl -n $ROOK_SYSTEM_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v0.9.3
```
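As a quick sanity check after the `set image` step, the new tag can be read back out of the operator's image string. This is only a sketch, not part of this commit; the sample string stands in for real `kubectl` output, and the jsonpath query is an assumption about the deployment layout.

```shell
# Hypothetical check, not part of this commit. The sample string below stands
# in for the output of something like:
#   kubectl -n $ROOK_SYSTEM_NAMESPACE get deploy/rook-ceph-operator \
#     -o jsonpath='{.spec.template.spec.containers[0].image}'
image="rook/ceph:v0.9.3"
# Strip everything up to the last ':' to leave only the tag
tag="${image##*:}"
echo "$tag"    # prints: v0.9.3
```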

Watch now in amazement as the Ceph MONs, MGR, OSDs, RGWs, and MDSes are terminated and replaced with
@@ -285,11 +285,11 @@ being used in the cluster.
kubectl -n $ROOK_NAMESPACE describe pods | grep "Image:.*" | sort | uniq
# This cluster is not yet finished:
# Image: ceph/ceph:v12.2.9-20181026
-# Image: rook/ceph:v0.9.2
+# Image: rook/ceph:v0.9.3
# Image: rook/ceph:v0.8.3
# This cluster is finished:
# Image: ceph/ceph:v12.2.9-20181026
-# Image: rook/ceph:v0.9.2
+# Image: rook/ceph:v0.9.3
```
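The `grep | sort | uniq` pipeline above can be tried offline against canned output. A minimal sketch, not part of this commit, with the live pod descriptions replaced by a hypothetical sample:

```shell
# Hypothetical sample of `kubectl -n $ROOK_NAMESPACE describe pods` output,
# captured mid-upgrade; not part of this commit.
sample='Image: rook/ceph:v0.9.3
Image: ceph/ceph:v12.2.9-20181026
Image: rook/ceph:v0.8.3
Image: rook/ceph:v0.9.3'
# Same pipeline as the doc: duplicate image lines collapse, so each image
# appears once. The lingering v0.8.3 line marks this cluster as unfinished.
echo "$sample" | grep "Image:.*" | sort | uniq
```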

### 6. Remove unused resources
@@ -314,7 +314,7 @@ kubectl -n $ROOK_NAMESPACE patch rolebinding rook-ceph-osd-psp -p "{\"subjects\"
```

### 7. Verify the updated cluster
-At this point, your Rook operator should be running version `rook/ceph:v0.9.2`, and the Ceph daemons
+At this point, your Rook operator should be running version `rook/ceph:v0.9.3`, and the Ceph daemons
should be running image `ceph/ceph:v12.2.9-20181026`. The Rook operator version and the Ceph version
are no longer tied together, and we'll cover how to upgrade Ceph later in this document.
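The decoupling described here is visible in the image strings themselves: the operator and the Ceph daemons run different images whose tags move independently. A trivial sketch, not part of this commit:

```shell
# The two images named in the doc; their tags are independent of each other.
rook_image="rook/ceph:v0.9.3"
ceph_image="ceph/ceph:v12.2.9-20181026"
echo "operator tag: ${rook_image##*:}"   # v0.9.3
echo "ceph tag: ${ceph_image##*:}"       # v12.2.9-20181026
```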

@@ -379,10 +379,10 @@ kubectl -n $ROOK_NAMESPACE describe pods | grep "Image:.*ceph/ceph" | sort | uni
# This cluster is not yet finished:
# Image: ceph/ceph:v12.2.9-20181026
# Image: ceph/ceph:v13.2.4-20190109
-# Image: rook/ceph:v0.9.2
+# Image: rook/ceph:v0.9.3
# This cluster is finished:
# Image: ceph/ceph:v13.2.4-20190109
-# Image: rook/ceph:v0.9.2
+# Image: rook/ceph:v0.9.3
```

#### 2. Update dashboard external service if applicable
2 changes: 1 addition & 1 deletion cluster/examples/coreos/after-reboot-daemonset.yaml
@@ -18,7 +18,7 @@ spec:
effect: NoSchedule
containers:
- name: ceph-after-reboot-check
-image: rook/ceph-toolbox:v0.9.2
+image: rook/ceph-toolbox:v0.9.3
imagePullPolicy: IfNotPresent
command: ["/scripts/status-check.sh"]
env:
2 changes: 1 addition & 1 deletion cluster/examples/coreos/before-reboot-daemonset.yaml
@@ -18,7 +18,7 @@ spec:
effect: NoSchedule
containers:
- name: ceph-before-reboot-check
-image: rook/ceph-toolbox:v0.9.2
+image: rook/ceph-toolbox:v0.9.3
imagePullPolicy: IfNotPresent
command: ["/scripts/status-check.sh"]
env:
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/cassandra/operator.yaml
@@ -186,7 +186,7 @@ subjects:
serviceAccountName: rook-cassandra-operator
containers:
- name: rook-cassandra-operator
-image: rook/cassandra:v0.9.2
+image: rook/cassandra:v0.9.3
imagePullPolicy: "Always"
args: ["cassandra", "operator"]
env:
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/ceph/operator.yaml
@@ -389,7 +389,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
-image: rook/ceph:v0.9.2
+image: rook/ceph:v0.9.3
args: ["ceph", "operator"]
volumeMounts:
- mountPath: /var/lib/rook
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/ceph/toolbox.yaml
@@ -18,7 +18,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
-image: rook/ceph:v0.9.2
+image: rook/ceph:v0.9.3
command: ["/tini"]
args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/cockroachdb/operator.yaml
@@ -98,7 +98,7 @@ spec:
serviceAccountName: rook-cockroachdb-operator
containers:
- name: rook-cockroachdb-operator
-image: rook/cockroachdb:v0.9.2
+image: rook/cockroachdb:v0.9.3
args: ["cockroachdb", "operator"]
env:
- name: POD_NAME
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/edgefs/operator.yaml
@@ -205,7 +205,7 @@ spec:
serviceAccountName: rook-edgefs-system
containers:
- name: rook-edgefs-operator
-image: rook/edgefs:v0.9.2
+image: rook/edgefs:v0.9.3
imagePullPolicy: "Always"
args: ["edgefs", "operator"]
env:
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/minio/operator.yaml
@@ -88,7 +88,7 @@ spec:
serviceAccountName: rook-minio-operator
containers:
- name: rook-minio-operator
-image: rook/minio:v0.9.2
+image: rook/minio:v0.9.3
args: ["minio", "operator"]
env:
- name: POD_NAME
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/nfs/operator.yaml
@@ -88,7 +88,7 @@ spec:
serviceAccountName: rook-nfs-operator
containers:
- name: rook-nfs-operator
-image: rook/nfs:v0.9.2
+image: rook/nfs:v0.9.3
args: ["nfs", "operator"]
env:
- name: POD_NAME
