- Storage Providers for Cassandra, EdgeFS, and NFS were added
- The Ceph CRDs have been declared stable at v1.
- Ceph versioning is decoupled from the Rook version. Luminous and Mimic can be run in production, or Nautilus in experimental mode.
- Ceph upgrades are greatly simplified
- Existing clusters running previous versions of Rook will need to be migrated to be compatible with the v0.9 operator and to begin using the new `ceph.rook.io/v1` CRD types. Please follow the instructions in the upgrade user guide to migrate your existing Rook cluster to the new release.
- The minimum version of Kubernetes supported by Rook changed from 1.7 to 1.8.
- K8s client-go updated from version 1.8.2 to 1.11.3
- The Ceph CRDs are now v1. The operator will automatically convert the CRDs from v1beta1 to v1.
- Different versions of Ceph can be orchestrated by Rook. Both Luminous and Mimic are now supported, with Nautilus coming soon. The version of Ceph is specified in the cluster CRD with the `cephVersion.image` property. For example, to run Mimic you could use any Mimic image found on the Ceph DockerHub.
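As a sketch, a minimal cluster CRD specifying the Ceph version might look like the following (the image tag, resource name, and namespace are illustrative assumptions, not taken from the release notes):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph        # namespace is an assumption; match your cluster's namespace
spec:
  cephVersion:
    # A Mimic image; any image from the Ceph DockerHub can be used (tag is illustrative)
    image: ceph/ceph:v13
  dataDirHostPath: /var/lib/rook
```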
- The default `fsType` in the StorageClass examples is now XFS, in line with Ceph recommendations.
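For illustration, a StorageClass using the new XFS default might look like this sketch (the provisioner name, pool, and namespace values are assumptions based on the Rook Ceph block provisioner, not copied from the release notes):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block   # Rook Ceph block provisioner (name assumed)
parameters:
  blockPool: replicapool          # pool name is illustrative
  clusterNamespace: rook-ceph     # assumed namespace
  fstype: xfs                     # XFS is now the default in the examples
```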
- Rook Ceph block storage provisioner can now correctly create erasure coded block images. See Advanced Example: Erasure Coded Block Storage for an example usage.
- A service account (`rook-ceph-mgr`) was added for the mgr daemon to grant the mgr orchestrator modules access to the K8s APIs.
- The `reclaimPolicy` parameter of the `StorageClass` definition is now supported.
- The toolbox manifest now creates a deployment based on the `rook/ceph` image instead of creating a pod on a specialized toolbox image.
- The frequency of discovering devices on a node is reduced to 60 minutes by default and is configurable with an operator setting.
- The number of mons can be changed by updating `mon.count` in the cluster CRD.
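A sketch of changing the mon count in the cluster CRD (resource name and namespace are illustrative assumptions):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 5   # e.g. scale from 3 to 5 mons; the operator reconciles the change
```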
- RBD mirroring is enabled by Rook. By setting the number of RBD mirroring workers, the daemon(s) will be started by Rook. To configure the pools or images to be mirrored, use the Rook toolbox to run the `rbd mirror` configuration tool.
- Object Store User creation via CRD for Ceph clusters.
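As a hedged sketch, creating an object store user via the new CRD might look like the following (the kind follows the v1 Ceph CRD naming; the store name, user name, and namespace are illustrative assumptions):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph          # assumed namespace
spec:
  store: my-store               # name of an existing object store (illustrative)
  displayName: "My Object Store User"
```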
- Ceph MON, OSD, MGR, MDS, and RGW deployments (or DaemonSets) will be updated/upgraded automatically with updates to the Rook operator.
- Ceph OSDs are created with the `ceph-volume` tool when configuring devices, adding support for multiple OSDs per device. See the OSD configuration settings for details.
- Network File System (NFS) is now supported by Rook with a new operator to deploy and manage this widely used server. NFS servers can be automatically deployed by creating an instance of the new `nfsservers.nfs.rook.io` custom resource. See the NFS server user guide to get started with NFS.
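A minimal sketch of the new NFS custom resource, assuming the kind `NFSServer` and a v1alpha1 API version (the field names, PVC name, and namespace are assumptions; consult the NFS server user guide for the authoritative schema):

```yaml
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs           # namespace is illustrative
spec:
  replicas: 1
  exports:
    - name: share1
      server:
        accessMode: ReadWrite
      persistentVolumeClaim:
        claimName: nfs-default-claim   # an existing PVC to export (illustrative)
```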
- Cassandra and Scylla are now supported by Rook with the rook-cassandra operator. Users can now deploy, configure, and manage Cassandra or Scylla clusters by creating an instance of the `clusters.cassandra.rook.io` custom resource. See the user guide to get started.
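A hedged sketch of a Cassandra cluster resource (the kind `Cluster` is inferred from the `clusters.cassandra.rook.io` CRD name; the API version, field names, and values are illustrative assumptions, so see the user guide for the real schema):

```yaml
apiVersion: cassandra.rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook-cassandra
  namespace: rook-cassandra     # assumed namespace
spec:
  version: 3.11.1               # Cassandra version (illustrative)
  mode: cassandra               # or "scylla" (assumed field)
  datacenter:
    name: dc1
    racks:
      - name: rack1
        members: 3              # number of Cassandra members in this rack
```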
EdgeFS Geo-Transparent Storage
- EdgeFS is now supported by a Rook operator, providing a high-performance, low-latency object storage system with Geo-Transparent data access via standard protocols. See the user guide to get started.
- The Rook container images are no longer published to quay.io; they are published only to Docker Hub. All manifests have referenced Docker Hub for multiple releases now, so we do not expect this change to directly affect any users.
- Rook no longer supports Kubernetes 1.7. Users running Kubernetes 1.7 on their clusters are recommended to upgrade to Kubernetes 1.8 or higher. If you are using kubeadm, you can follow its guide to upgrade from Kubernetes 1.7 to 1.8. If you are using kubespray for managing your Kubernetes cluster, follow the respective project's upgrade documentation.
- The Ceph CRDs are now v1. With the version change, the `kind` has been renamed for the Ceph CRDs: each kind now carries a `Ceph` prefix (for example, `Cluster` is now `CephCluster`).
- The `rook-ceph-cluster` service account was renamed to `rook-ceph-osd`, as this service account only applies to OSDs.
- On upgrade from v0.8, the `rook-ceph-osd` service account must be created before starting the operator on v0.9.
- The `serviceAccount` property has been removed from the cluster CRD.
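Since the `rook-ceph-osd` service account must exist before the v0.9 operator starts, a minimal manifest sketch might look like this (the namespace is an assumption; match your cluster's namespace):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-ceph-osd
  namespace: rook-ceph   # assumed namespace
```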
- Ceph mons are named consistently with other daemons with the letters a, b, c, etc.
- Ceph mons are now created with Deployments instead of ReplicaSets to improve the upgrade implementation.
- Ceph mon, osd, mgr, mds, and rgw container names in pods have changed with the refactors to initialize the daemon environments via pod InitContainers and run the Ceph daemons directly from the container entrypoint.
- Minio no longer exposes a configurable port for each distributed server instance to use. This was an internal only port that should not need to be configured by the user. All connections from users and clients are expected to come in through the configurable Service instance.
- Upgrades to Nautilus are not supported. Specifically, OSDs configured before the upgrade (without `ceph-volume`) will fail to start on Nautilus. Nautilus is not officially supported until its release, but is otherwise expected to work in test clusters.