- Ceph: Nautilus is supported, improved automation for Ceph upgrades, experimental CSI driver, NFS, and much more!
- EdgeFS: CRDs declared beta, upgrade guide, new storage protocols, a new management experience, and much more!
- Minio: Responsive operator reconciliation loop and added health checks
- NFS: Dynamic provisioning of volumes
If you are running a previous Rook version, please see the corresponding storage provider upgrade guide:
- The minimum version of Kubernetes supported by Rook changed from 1.8 to 1.10.
- K8s client packages updated from version 1.11.3 to 1.14.0
- The Rook operator switches from the extensions/v1beta1 API to apps/v1 for DaemonSets and Deployments.
- Ceph Nautilus (v14) is now supported by Rook and is the default version deployed by the examples.
- The Ceph-CSI driver is available in experimental mode.
- An operator restart is no longer needed for applying changes to the cluster in the following scenarios:
- When a node is added to the cluster, OSDs will be automatically configured as needed.
- When a device is attached to a storage node, OSDs will be automatically configured as needed.
- Any change to the CephCluster CR will trigger updates to the cluster.
- Upgrading the Ceph version will update all Ceph daemons (in v0.9, mds and rgw daemons were skipped)
- Ceph status is surfaced in the CephCluster CR and periodically updated by the operator (every 60s by default); the refresh interval is configurable.
- The CephNFS CRD will start NFS daemon(s) for exporting CephFS volumes or RGW buckets. See the NFS documentation.
- The flex driver can be configured to properly disable SELinux relabeling and FSGroup with the settings in operator.yaml.
- The number of mons can be increased automatically when new nodes come online. See the preferredCount setting in the cluster CRD documentation.
- New Kubernetes nodes, or nodes that are no longer tainted `NoSchedule`, are added automatically to the existing Rook cluster if `useAllNodes` is set.
- Pod logs can be written to the filesystem on demand as of Ceph Nautilus 14.2.1 (see common issues).
- `rook-version` and `ceph-version` labels are now applied to Ceph daemon Deployments, DaemonSets, Jobs, and StatefulSets. These identify the Rook version which last modified the resource and the Ceph version which Rook has detected in the pod(s) being run by the resource.
- OSDs provisioned by
- The operator will no longer remove OSDs from specified nodes when the node is tainted with automatic Kubernetes taints. OSDs can still be removed by more explicit methods. See the "Node Settings" section of the Ceph Cluster CRD documentation for full details.
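As a sketch of the settings mentioned above, a minimal CephCluster CR might look like the following (the image tag, counts, and paths are illustrative values, not requirements):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # Nautilus is now the default version deployed by the examples
    image: ceph/ceph:v14.2.1
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    # allow the operator to grow the mon count as new nodes come online
    preferredCount: 5
    allowMultiplePerNode: false
  storage:
    # new (or no-longer-tainted) nodes are picked up automatically
    useAllNodes: true
    useAllDevices: false
```

Any change to this CR triggers an update to the cluster without requiring an operator restart.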
- All EdgeFS CRDs are now declared Beta v1. All users are recommended to follow the documented migration procedure.
- Automatic host validation and preparation of sysctl settings
- Support for OpenStack/SWIFT CRD
- Support for S3 bucket as DNS subdomain
- Support for Block (iSCSI) CSI Provisioner
- Support for Prometheus Dashboard and REST APIs
- Support for Management GUI with automated CRD wizards
- Support for failure domains and zone-based provisioning
- Support for Multi-Namespace clusters with single operator instance
- Support for embedded mode and low-resource deployments with a minimum of 1GB of memory and 2 CPU cores
- Many bug fixes and usability improvements
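With the CRDs declared beta, an EdgeFS cluster is addressed through the `v1beta1` API. A minimal sketch of such a cluster CR, assuming the field names of the EdgeFS examples (image tag and paths are illustrative):

```yaml
apiVersion: edgefs.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-edgefs
  namespace: rook-edgefs
spec:
  # EdgeFS container image to deploy
  edgefsImageName: edgefs/edgefs:latest
  dataDirHostPath: /var/lib/edgefs
  storage:
    useAllNodes: true
    useAllDevices: true
```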
- Rook no longer supports Kubernetes 1.8 and 1.9.
- The build process no longer publishes the alpha, beta, and stable channels. The only channels published are
- The stability of storage providers is determined by the CRD versions rather than the overall product build, thus the channels were renamed to match this expectation.
- Rook no longer supports running more than one monitor on the same node when
- The example operator and CRD yaml files have been refactored to simplify configuration. See the examples help topic for more details.
- The common resources are now factored into separate example manifests:
  - `common.yaml`: creates the namespace, RBAC, CRD definitions, and other common operator and cluster resources
  - `operator.yaml`: only contains the operator deployment
  - `cluster.yaml`: only contains the cluster CRD
- Multiple examples of the operator and CRDs are provided for common usage scenarios.
- By default, a single namespace (`rook-ceph`) is configured instead of two namespaces (`rook-ceph-system` and `rook-ceph`). New and upgraded clusters can still be configured with the operator and cluster in two separate namespaces. Existing clusters will maintain their namespaces on upgrade.
- Rook will no longer create a directory-based OSD in the `dataDirHostPath` if no directories or devices are specified or if there are no disks on the host.
- Containers in `rbd-mirror` pods have been removed and/or changed names.
- Config paths in `rgw` containers are now always under `/var/lib/ceph`, as close to Ceph's default path as possible, regardless of the `dataDirHostPath` setting.
- `rbd-mirror` pod labels now read `rbd-mirror`.
- An object store created from Rook v1.0 will be configured incorrectly when running Ceph Luminous or Mimic. Users upgrading from v0.9 are recommended either to create the object store before upgrading, or to update to Nautilus before creating an object store.
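Given the caveat above, object stores are best created once the cluster is on Nautilus. A minimal CephObjectStore sketch (the store name, pool sizes, and port are illustrative values):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    # RGW instances serving the S3 API for this store
    type: s3
    port: 80
    instances: 1
```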