docs: clear pending release notes and update upgrade guide to 0.8
Signed-off-by: Jared Watts <jbw976@gmail.com>
jbw976 committed Jul 19, 2018
1 parent d537df8 commit 76280b9
Showing 2 changed files with 13 additions and 65 deletions.
26 changes: 13 additions & 13 deletions Documentation/upgrade.md
@@ -14,8 +14,8 @@ The goal is to provide prescriptive guidance and knowledge on how to upgrade a l
We welcome feedback and opening issues!

## Supported Versions
-The supported version for this upgrade guide is **from an 0.7 release to the latest builds**. Until 0.8 is released,
-the latest builds are labeled such as `v0.7.0-280.g41829b1`. Build-to-build upgrades are not guaranteed to work.
+The supported version for this upgrade guide is **from a 0.7 release to a 0.8 release**.
+Build-to-build upgrades are not guaranteed to work.
This guide is intended only for upgrades between the official releases.

For a guide to upgrade previous versions of Rook, please refer to the version of documentation for those releases.
@@ -127,7 +127,7 @@ Any pod that is using a Rook volume should also remain healthy:
## Upgrade Process
The general flow of the upgrade process will be to upgrade the version of a Rook pod, verify the pod is running with the new version, then verify that the overall cluster health is still in a good state.
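The update-then-verify flow described above can be sketched as a small shell helper. This is a hypothetical illustration only (`verify_image` is not part of Rook or this guide); it checks whether a pod's image reference already carries the expected version tag:

```shell
# verify_image is a hypothetical helper: it reports "ok" only when the
# image reference ends in the expected version tag, "stale" otherwise.
verify_image() {
  image="$1"; expected="$2"
  case "$image" in
    *:"$expected") echo "ok" ;;
    *) echo "stale" ;;
  esac
}

verify_image "rook/ceph:v0.8.0" "v0.8.0"   # prints "ok"
verify_image "rook/ceph:master" "v0.8.0"   # prints "stale"
```

In the real upgrade, the image string would come from a `kubectl ... -o jsonpath='{.spec.containers[0].image}'` query like the ones used later in this guide.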

-In this guide, we will be upgrading a live Rook cluster running `v0.7.1` to the next available version of `v0.8`. Until the `v0.8` release is completed, we will instead use the latest `v0.7` tag such as `v0.7.0-280.g41829b1`.
+In this guide, we will be upgrading a live Rook cluster running `v0.7.1` to the next available version of `v0.8`.

Let's get started!

@@ -226,7 +226,7 @@ kubectl delete clusterrolebindings rook-operator

Now we need to create the new Ceph-specific operator.

-**IMPORTANT:** Ensure that you are using the latest manifests from either `master` or the `release-0.8` branch. If you have custom configuration options set in your old `rook-operator.yaml` manifest, you will need to set those values in the new Ceph operator manifest below.
+**IMPORTANT:** Ensure that you are using the latest manifests from the `release-0.8` branch. If you have custom configuration options set in your old `rook-operator.yaml` manifest, you will need to set those values in the new Ceph operator manifest below.

Navigate to the new Ceph manifests directory, apply your custom configuration options if you are using any, and then create the new Ceph operator with the command below.
Note that the new operator by default uses the `rook-ceph-system` namespace, but we will use `sed` to edit it in place to use `rook-system` instead for backwards compatibility with your existing cluster.
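As a rough sketch of the step just described, the namespace rewrite could look like the following. The manifest directory and `operator.yaml` file name are assumptions for illustration, not taken from this page:

```shell
# NS_FIX rewrites the operator's default namespace (rook-ceph-system)
# to rook-system for backwards compatibility with the existing cluster.
NS_FIX='s/namespace: rook-ceph-system/namespace: rook-system/g'
printf 'namespace: rook-ceph-system\n' | sed -e "$NS_FIX"   # prints "namespace: rook-system"
# Against a live cluster, the same rewrite would feed the new operator
# manifest (file name assumed) straight into kubectl:
#   sed -e "$NS_FIX" operator.yaml | kubectl create -f -
```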
@@ -239,7 +239,7 @@ After the operator starts, within several minutes you may see some new OSD pods b
[OSD section](#object-storage-daemons-osds).

#### Operator Health Verification
-To verify the operator pod is `Running` and using the new version of `rook/ceph:master`, use the following commands:
+To verify the operator pod is `Running` and using the new version of `rook/ceph:v0.8.0`, use the following commands:
```bash
OPERATOR_POD_NAME=$(kubectl -n rook-system get pods -l app=rook-ceph-operator -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-system get pod ${OPERATOR_POD_NAME} -o jsonpath='{.status.phase}{"\n"}{.spec.containers[0].image}{"\n"}'
```
@@ -265,7 +265,7 @@ so we will delete the old pod and start the new toolbox.
```bash
kubectl -n rook delete pod rook-tools
```
After verifying the old tools pod has terminated, start the new toolbox.
-You will need to either create the toolbox using the yaml in the master branch or simply set the version of the container to `rook/ceph-toolbox:master` before creating the toolbox.
+You will need to either create the toolbox using the yaml in the `release-0.8` branch or simply set the version of the container to `rook/ceph-toolbox:v0.8.0` before creating the toolbox.
Note the below command uses `sed` to change the new default namespace for the toolbox from `rook-ceph` to `rook` to be backwards compatible with your existing cluster.
```bash
cat toolbox.yaml | sed -e 's/namespace: rook-ceph/namespace: rook/g' | kubectl create -f -
```
@@ -280,10 +280,10 @@ kubectl -n rook delete deploy rook-api

### Monitors
There are multiple monitor pods to upgrade and they are each individually managed by their own replica set.
-**For each** monitor's replica set, you will need to update the pod template spec's image version field to `rook/ceph:master`.
+**For each** monitor's replica set, you will need to update the pod template spec's image version field to `rook/ceph:v0.8.0`.
For example, we can update the replica set for `mon0` with:
```bash
-kubectl -n rook set image replicaset/rook-ceph-mon0 rook-ceph-mon=rook/ceph:master
+kubectl -n rook set image replicaset/rook-ceph-mon0 rook-ceph-mon=rook/ceph:v0.8.0
```

Once the replica set has been updated, we need to manually terminate the old pod which will trigger the replica set to create a new pod using the new version.
@@ -381,9 +381,9 @@ If you have optionally installed either [object storage](./object.md) or a [shar
They are both managed by deployments, which we have already covered in this guide, so the instructions will be brief.

#### Object Storage (RGW)
-If you have object storage installed, first edit the RGW deployment to use the new image version of `rook/ceph:master`:
+If you have object storage installed, first edit the RGW deployment to use the new image version of `rook/ceph:v0.8.0`:
```bash
-kubectl -n rook set image deploy/rook-ceph-rgw-my-store rook-ceph-rgw-my-store=rook/ceph:master
+kubectl -n rook set image deploy/rook-ceph-rgw-my-store rook-ceph-rgw-my-store=rook/ceph:v0.8.0
```

To verify that the RGW pod is `Running` and on the new version, use the following:
@@ -392,9 +392,9 @@
```bash
kubectl -n rook get pod -l app=rook-ceph-rgw -o jsonpath='{range .items[*]}{.met
```

#### Shared File System (MDS)
-If you have a shared file system installed, first edit the MDS deployment to use the new image version of `rook/ceph:master`:
+If you have a shared file system installed, first edit the MDS deployment to use the new image version of `rook/ceph:v0.8.0`:
```bash
-kubectl -n rook set image deploy/rook-ceph-mds-myfs rook-ceph-mds-myfs=rook/ceph:master
+kubectl -n rook set image deploy/rook-ceph-mds-myfs rook-ceph-mds-myfs=rook/ceph:v0.8.0
```

To verify that the MDS pod is `Running` and on the new version, use the following:
@@ -403,7 +403,7 @@
```bash
kubectl -n rook get pod -l app=rook-ceph-mds -o jsonpath='{range .items[*]}{.met
```

## Completion
-At this point, your Rook cluster should be fully upgraded to running version `rook/ceph:master` and the cluster should be healthy according to the steps in the [health verification section](#health-verification).
+At this point, your Rook cluster should be fully upgraded to running version `rook/ceph:v0.8.0` and the cluster should be healthy according to the steps in the [health verification section](#health-verification).

## Upgrading Kubernetes
Rook cluster installations on Kubernetes prior to version 1.7.x use [ThirdPartyResources](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-third-party-resource/), which were deprecated in 1.7 and removed in 1.8. If you are upgrading your Kubernetes cluster, Rook TPRs have to be migrated to CustomResourceDefinitions (CRDs) following the [Kubernetes documentation](https://kubernetes.io/docs/tasks/access-kubernetes-api/migrate-third-party-resource/). Rook TPRs that require migration during upgrade are:
52 changes: 0 additions & 52 deletions PendingReleaseNotes.md
@@ -2,62 +2,10 @@

## Action Required

- Existing clusters that are running previous versions of Rook will need to be upgraded/migrated to be compatible with the `v0.8` operator and to begin using the new `rook.io/v1alpha2` and `ceph.rook.io/v1beta1` CRD types. Please follow the instructions in the [upgrade user guide](Documentation/upgrade.md) to successfully migrate your existing Rook cluster to the new release, as it has been updated with specific steps to help you upgrade to `v0.8`.

## Notable Features

- Rook is now architected to be a general cloud-native storage orchestrator, and can now support multiple types of storage and providers beyond Ceph.
- [CockroachDB](https://www.cockroachlabs.com/) is now supported by Rook with a new operator to deploy, configure and manage instances of this popular and resilient SQL database. Databases can be automatically deployed by creating an instance of the new `cluster.cockroachdb.rook.io` custom resource. See the [CockroachDB user guide](Documentation/cockroachdb.md) to get started with CockroachDB.
- [Minio](https://www.minio.io/) is also supported now with an operator to deploy and manage this popular high performance distributed object storage server. To get started with Minio using the new `objectstore.minio.rook.io` custom resource, follow the steps in the [Minio user guide](Documentation/minio-object-store.md).
- The status of Rook is no longer published for the project as a whole. Going forward, status will be published per storage provider or API group. Full details can be found in the [project status section](./README.md#project-status) of the README.
- [Ceph](https://ceph.com/) support has graduated to Beta.
- Ceph tools can be run [from any rook pod](Documentation/common-issues.md#ceph-tools).
- Output from stderr will be included in error messages returned from the `exec` of external tools.
- The Rook operator no longer creates the CRD or TPR resources at runtime. Instead, those resources are provisioned during deployment via `helm` or `kubectl`.
- The `rook` image is now based on the ceph-container project's `daemon-base` image, so Rook no longer has to manage installs of Ceph in the image.
- Rook CRD code generation is now working with BSD (Mac) and GNU sed.
- The [Ceph dashboard](Documentation/ceph-dashboard.md) can be enabled by the cluster CRD.
- `monCount` has been renamed to `count`, which has been moved into the [`mon` spec](Documentation/ceph-cluster-crd.md#mon-settings). Additionally, the default when unspecified or `0` is now `3`.
- You can now toggle if multiple Ceph mons might be placed on one node with the `allowMultiplePerNode` option (default `false`) in the [`mon` spec](Documentation/ceph-cluster-crd.md#mon-settings).
- One OSD will run per pod to increase the reliability and maintainability of the OSDs. No longer will restarting an OSD pod mean that all OSDs on that node will go down. See the [design doc](design/dedicated-osd-pod.md).
- Added `nodeSelector` to Rook Ceph operator Helm chart.
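The new `mon` settings above can be sketched in a `ceph.rook.io/v1beta1` cluster manifest. This is an assumed illustration only: the metadata names and `dataDirHostPath` are not taken from these notes, and the manifest is written to a temp file purely for demonstration.

```shell
# Write a sketch of a v0.8-style cluster manifest using the new mon
# settings (count replaces monCount; allowMultiplePerNode is new).
cat > /tmp/rook-cluster-sketch.yaml <<'EOF'
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
EOF
grep -c 'count: 3' /tmp/rook-cluster-sketch.yaml   # prints 1
```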

## Breaking Changes

- Removed support for Kubernetes 1.6, including the legacy Third Party Resources (TPRs).
- Various paths and resources have changed to accommodate multiple backends:
  - Examples: The yaml files for creating a Ceph cluster can be found in `cluster/examples/kubernetes/ceph`. The yaml files that are backend-independent will still be found in the `cluster/examples/kubernetes` folder.
  - CRDs: The `apiVersion` of the Rook CRDs is now backend-specific, such as `ceph.rook.io/v1beta1` instead of `rook.io/v1alpha1`.
  - Cluster CRD: The Ceph cluster CRD has had several properties restructured for consistency with other backend CRDs that will be coming soon. Rook will automatically upgrade the previous Ceph CRD versions to the new versions with all the compatible properties. When creating the cluster CRD based on the new `ceph.rook.io` apiVersion, you will need to take note of the new settings structure.
  - Container images: The container images for Ceph and the toolbox are now `rook/ceph` and `rook/ceph-toolbox`. The steps in the [upgrade user guide](Documentation/upgrade.md) will automatically start using these new images for your cluster.
  - Namespaces: The example namespaces are now backend-specific. Instead of `rook-system` and `rook`, you will see `rook-ceph-system` and `rook-ceph`.
  - Volume plugins: The dynamic provisioner and flex driver are now based on `ceph.rook.io` instead of `rook.io`.
- Ceph container images now use CentOS 7 as a base.
- Minimal privileges are configured with a new cluster role for the operator and Ceph daemons, following the new [security design](design/security-model.md).
  - A role binding must be defined for each cluster to be managed by the operator.
- OSD pods are started by a deployment, instead of a daemonset or a replicaset. The new OSD pods will crash loop until the old daemonset or replicasets are removed.

### Removal of the API service and rookctl tool

The [REST API service](https://github.com/rook/rook/issues/1122) has been removed. All cluster configuration is now accomplished through the
[CRDs](https://rook.io/docs/rook/master/crds.html) or with the Ceph tools in the [toolbox](https://rook.io/docs/rook/master/toolbox.html).

The tool `rookctl` has been removed from the toolbox pod. Cluster status and configuration can be queried and changed with the Ceph tools.
Here are some sample commands to help with your transition.

| `rookctl` Command | Replaced by | Description |
| -------------------- | --------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------- |
| `rookctl status` | `ceph status` | Query the status of the storage components |
| `rookctl block` | See the [Block storage](Documentation/block.md) and [direct Block](Documentation/direct-tools.md#block-storage-tools) config | Create, configure, mount, or unmount a block image |
| `rookctl filesystem` | See the [Filesystem](Documentation/filesystem.md) and [direct File](Documentation/direct-tools.md#shared-filesystem-tools) config | Create, configure, mount, or unmount a file system |
| `rookctl object` | See the [Object storage](Documentation/object.md) config | Create and configure object stores and object users |

## Known Issues

## Deprecations

- Legacy CRD types in the `rook.io/v1alpha1` API group have been deprecated. The types from
`rook.io/v1alpha2` should now be used instead.
- The legacy command flag `public-ipv4` in the Ceph components has been deprecated; `public-ip` should now be used instead.
- The legacy command flag `private-ipv4` in the Ceph components has been deprecated; `private-ip` should now be used instead.
