Congratulations and thanks to every one of you in the OpenEBS community for reaching this significant milestone!
Prerequisites to install
- Kubernetes 1.12+ is installed
- Make sure that you run the below installation steps with cluster admin context. The installation will involve creating a new Service Account and assigning it to the OpenEBS components.
- Make sure iSCSI Initiator is installed on the Kubernetes nodes.
- NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some disks from being discovered, update the filters in the NDM ConfigMap to exclude those paths before installing OpenEBS.
- NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant it access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
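As an illustration of the filter mentioned above, the NDM ConfigMap shipped in the operator YAML contains a path filter whose exclude list can be extended before installation. This is a sketch, not the full ConfigMap; the default exclude values may differ in your copy of the operator YAML:

```yaml
# Excerpt of the openebs-ndm-config ConfigMap (node-disk-manager.config key).
# Paths listed under "exclude" are skipped during device discovery.
filterconfigs:
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
```

Add any device paths you want NDM to ignore (for example, an OS disk) to the comma-separated exclude list before applying the operator YAML.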
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.0.0.yaml
Using helm stable charts
helm repo update
helm install --namespace openebs --name openebs stable/openebs
For more details refer to the documentation at: https://docs.openebs.io/
Upgrade to 1.0 is supported only from 0.9 and follows an approach similar to earlier releases:
- Upgrade OpenEBS Control Plane components. This involves a pre-upgrade step.
- Upgrade Jiva PVs to 1.0, one at a time.
- Upgrade CStor Pools to 1.0 and its associated Volumes, one at a time.
The detailed steps are provided here.
For upgrading from releases prior to 0.9, please refer to the respective release upgrade here.
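After completing the upgrade steps above, a quick sanity check can confirm that the control plane and volumes came up on the new version. This is a hypothetical verification sketch, not part of the official upgrade procedure; it assumes OpenEBS is installed in the `openebs` namespace:

```shell
# List control-plane pods with the image each is running,
# so you can confirm they were rolled to the 1.0.0 images.
kubectl get pods -n openebs \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

# Check that upgraded cStor volumes report a healthy status.
kubectl get cstorvolumes -n openebs

# Check that the Jiva controller/replica pods are running.
kubectl get pods -n openebs | grep -i jiva
```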
OpenEBS Release 1.0 has multiple enhancements and bug fixes which include:
- Major enhancements to Node Device Manager (NDM) to help with managing the lifecycle of block devices attached to the Kubernetes nodes.
- The first and most widely deployed OpenEBS Data Engine - Jiva - has graduated to stable. Jiva is ideal for use cases where Kubernetes nodes have storage available via hostpaths. Jiva volumes support thin provisioning as well as backup and restore via Velero.
- cStor Data Engine continues to be the preferred solution for use cases that require instant snapshot and clone of volumes. This release includes further fixes to the rebuild and backup/restore scenarios.
- The latest volume type - OpenEBS Local PV - has graduated to beta, with some users already running it in production. The current release enhances Local PV support through tighter integration with NDM and the ability to create Local PVs on attached block devices.
Note: If you have automated tools built around OpenEBS cStor Data Engine, please pay closer attention to the following changes:
- The Storage Devices are now represented using a CR called blockdevice. To list the blockdevices in your cluster, run:
kubectl get blockdevices -n <openebs namespace>
- The StoragePoolClaim (SPC) that is used to set up the cStor Pools will have to be provided with blockdevice CRs in place of disk CRs. For more details and examples, check the documentation.
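To illustrate the SPC change described above, here is a minimal sketch of a StoragePoolClaim that references blockdevice CRs instead of disk CRs. The claim name and the blockdevice name are hypothetical placeholders; use the names reported by `kubectl get blockdevices` in your cluster:

```yaml
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool          # hypothetical pool name
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
      # Replace with a blockdevice name from your cluster
      - blockdevice-example-0123456789abcdef
```

The key difference from pre-1.0 manifests is the `blockDevices.blockDeviceList` field, which replaces the earlier disk-based list.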
For detailed change summary and steps to upgrade from previous version, please refer to: Release 1.0 Change Summary
For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.