@travisn travisn released this Dec 9, 2018 · 85 commits to master since this release


Major Themes

  • Storage Providers for Cassandra, EdgeFS, and NFS were added
  • Ceph CRDs have been declared stable at v1.
  • Ceph versioning is decoupled from the Rook version. Luminous and Mimic can be run in production, while Nautilus can be run in experimental mode.
  • Ceph upgrades are greatly simplified

Action Required

  • Existing clusters that are running previous versions of Rook will need to be migrated to be compatible with the v0.9 operator and to begin using the new ceph.rook.io/v1 CRD types. Please follow the instructions in the upgrade user guide to successfully migrate your existing Rook cluster to the new release.

Notable Features

  • The minimum version of Kubernetes supported by Rook changed from 1.7 to 1.8.
  • K8s client-go updated from version 1.8.2 to 1.11.3

Ceph

  • The Ceph CRDs are now v1. The operator will automatically convert the CRDs from v1beta1 to v1.
  • Different versions of Ceph can be orchestrated by Rook. Both Luminous and Mimic are now supported, with Nautilus coming soon.
    The version of Ceph is specified in the cluster CRD with the cephVersion.image property. For example, to run Mimic you could use image ceph/ceph:v13.2.2-20181023
    or any other image found on the Ceph DockerHub.
  • The default fsType in the StorageClass examples is now XFS, bringing them in line with Ceph recommendations.
  • Rook Ceph block storage provisioner can now correctly create erasure coded block images. See Advanced Example: Erasure Coded Block Storage for an example usage.
  • Service account (rook-ceph-mgr) added for the mgr daemon to grant the mgr orchestrator modules access to the K8s APIs.
  • The reclaimPolicy parameter of the StorageClass definition is now supported.
  • The toolbox manifest now creates a deployment based on the rook/ceph image instead of creating a pod on a specialized rook/ceph-toolbox image.
  • The frequency of discovering devices on a node is reduced to 60 minutes by default, and is configurable with the setting ROOK_DISCOVER_DEVICES_INTERVAL in operator.yaml.
  • The number of mons can be changed by updating the mon.count in the cluster CRD.
  • RBD mirroring can be enabled by Rook. When the number of RBD mirroring workers is set in the cluster CRD, Rook will start the daemon(s). To configure the pools or images to be mirrored, use the Rook toolbox to run the rbd mirror configuration tool.
  • Object store users can now be created via CRD for Ceph clusters.
  • Ceph MON, OSD, MGR, MDS, and RGW deployments (or DaemonSets) will be updated/upgraded automatically with updates to the Rook operator.
  • Ceph OSDs are created with the ceph-volume tool when configuring devices, adding support for multiple OSDs per device. See the OSD configuration settings for details.
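
Several of the settings above come together in the cluster CRD and the block StorageClass. The following is a minimal sketch based on the example image named in this release; the pool name replicapool, the rook-ceph namespace, and the storage selection values are illustrative, so check the Rook examples for the authoritative manifests.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph          # illustrative namespace
spec:
  cephVersion:
    # Ceph version is now decoupled from the Rook version (Mimic here)
    image: ceph/ceph:v13.2.2-20181023
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                    # mon count can be changed after creation
  storage:
    useAllNodes: true
    useAllDevices: false        # illustrative device selection
    config:
      osdsPerDevice: "1"        # ceph-volume allows multiple OSDs per device
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool        # illustrative pool name
  clusterNamespace: rook-ceph
  fstype: xfs                   # XFS is now the default in the examples
reclaimPolicy: Delete           # reclaimPolicy is now supported
```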

NFS

  • Network File System (NFS) is now supported by Rook with a new operator to deploy and manage this widely used server. NFS servers can be automatically deployed by creating an instance of the new nfsservers.nfs.rook.io custom resource. See the NFS server user guide to get started with NFS.
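
As an illustration, creating an NFS server might look like the sketch below. The apiVersion, kind, and spec fields are assumptions inferred from the nfsservers.nfs.rook.io CRD name and the export model described above; consult the NFS server user guide for the exact schema.

```yaml
apiVersion: nfs.rook.io/v1alpha1   # assumed alpha API group version
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs              # illustrative namespace
spec:
  replicas: 1
  exports:
  - name: share1                   # hypothetical export backed by an existing PVC
    server:
      accessMode: ReadWrite
      squash: "none"
    persistentVolumeClaim:
      claimName: nfs-default-claim # illustrative claim name
```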

Cassandra

  • Cassandra and Scylla are now supported by Rook with the rook-cassandra operator. Users can now deploy, configure, and manage Cassandra or Scylla clusters by creating an instance of the clusters.cassandra.rook.io custom resource. See the user guide to get started.
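
A minimal cluster manifest might look like the following sketch; the apiVersion, spec layout, datacenter, and rack names are assumptions inferred from the clusters.cassandra.rook.io CRD name, so refer to the user guide for the real schema.

```yaml
apiVersion: cassandra.rook.io/v1alpha1  # assumed alpha API group version
kind: Cluster
metadata:
  name: rook-cassandra
  namespace: rook-cassandra             # illustrative namespace
spec:
  version: 3.11.3                       # illustrative Cassandra version
  mode: cassandra                       # the operator also supports Scylla
  datacenter:
    name: dc1                           # hypothetical datacenter name
    racks:
    - name: rack1                       # hypothetical rack with 3 members
      members: 3
```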

EdgeFS Geo-Transparent Storage

  • EdgeFS is now supported by a Rook operator, providing a high-performance, low-latency object storage system with geo-transparent data access via standard protocols. See the user guide to get started.
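
As a rough illustration, an EdgeFS cluster resource might be declared as below. Every field here is an assumption (the API group version, image name, and storage selection are not specified in these notes); the user guide is the authoritative source.

```yaml
apiVersion: edgefs.rook.io/v1alpha1  # assumed alpha API group version
kind: Cluster
metadata:
  name: rook-edgefs
  namespace: rook-edgefs             # illustrative namespace
spec:
  dataDirHostPath: /var/lib/edgefs   # illustrative host data path
  storage:
    useAllNodes: true                # illustrative storage selection
    useAllDevices: false
```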

Breaking Changes

  • The Rook container images are no longer published to quay.io; they are published only to Docker Hub. All manifests have referenced Docker Hub for multiple releases now, so we do not expect this change to directly affect any users.
  • Rook no longer supports Kubernetes 1.7. Users running Kubernetes 1.7 on their clusters are recommended to upgrade to Kubernetes 1.8 or higher. If you are using kubeadm, you can follow this guide to upgrade from Kubernetes 1.7 to 1.8. If you are using kops or kubespray for managing your Kubernetes cluster, follow the respective project's upgrade guide.

Ceph

  • The Ceph CRDs are now v1. With the version change, the kind has been renamed for the following Ceph CRDs:
    • Cluster --> CephCluster
    • Pool --> CephBlockPool
    • Filesystem --> CephFilesystem
    • ObjectStore --> CephObjectStore
    • ObjectStoreUser --> CephObjectStoreUser
  • The rook-ceph-cluster service account was renamed to rook-ceph-osd as this service account only applies to OSDs.
    • On upgrade from v0.8, the rook-ceph-osd service account must be created before starting the operator on v0.9.
    • The serviceAccount property has been removed from the cluster CRD.
  • Ceph mons are named consistently with other daemons with the letters a, b, c, etc.
  • Ceph mons are now created with Deployments instead of ReplicaSets to improve the upgrade implementation.
  • Ceph mon, osd, mgr, mds, and rgw container names in pods have changed with the refactors to initialize the daemon environments via pod InitContainers and run the Ceph daemons directly from the container entrypoint.
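
The kind renames above pair with the apiVersion change from v1beta1 to v1. For example, a block pool that previously used the Pool kind is now written as (pool name and replication settings illustrative):

```yaml
# v0.8 and earlier:
#   apiVersion: ceph.rook.io/v1beta1
#   kind: Pool
# v0.9:
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool            # illustrative pool name
  namespace: rook-ceph
spec:
  replicated:
    size: 3                    # illustrative replication factor
```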

Minio

  • Minio no longer exposes a configurable port for each distributed server instance to use. This was an internal only port that should not need to be configured by the user. All connections from users and clients are expected to come in through the configurable Service instance.

Known Issues

Ceph

  • Upgrades to Nautilus are not supported. Specifically, OSDs configured before the upgrade (without ceph-volume) will fail to start on Nautilus. Nautilus is not officially supported until its release, but is otherwise expected to work in test clusters.