1.8.0
Change Summary
For a detailed change summary, please refer to Release 1.8 Change Summary.
Special thanks to our first-time contributors in this release: @Pensu, @novasharper, @nerdeveloper, @nicklasfrahm
OpenEBS v1.8 includes a critical fix (#2956) for Jiva volumes running in versions 1.6 and 1.7. You must use these pre-upgrade steps to check if your Jiva volumes are impacted. If they are, please reach out to us on the OpenEBS Slack or the Kubernetes Slack #openebs channel so we can help you with the upgrade.
Here are some of the key highlights in this release:
Key Improvements
- Added support for configuring a capacity threshold limit for a cStor Pool. The default threshold limit is set at 85%. The threshold setting was introduced to avoid a scenario where pool capacity is fully utilized, resulting in the failure of all kinds of operations - including pool expansion. #2937 (@mynktl, @shubham14bajpai)
- Validated that OpenEBS cStor can be used with K3OS (k3os-v0.9.0). #2686 (@gprasath)
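As a sketch of the new threshold setting, the snippet below shows where it would sit in a StoragePoolClaim. The `roThresholdLimit` field name is assumed from #2937 and should be verified against the CRDs shipped with your installed OpenEBS version; all other names are illustrative.

```yaml
# Illustrative StoragePoolClaim excerpt. The roThresholdLimit field
# (percentage of pool capacity) is assumed from #2937 - verify it
# against the CRDs in your cluster before use.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool        # hypothetical pool name
spec:
  type: disk
  poolSpec:
    poolType: striped
    # Writes are refused once pool usage crosses this limit, leaving
    # headroom for pool-level operations such as expansion.
    roThresholdLimit: 85
  blockDevices:
    blockDeviceList:
      - blockdevice-example-1  # hypothetical block device names
      - blockdevice-example-2
```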
Key Bug Fixes
- Fixes an issue where Jiva volumes could cause data loss when a node restarts during an ongoing space reclamation at its replica. #2956 (@utkarshmani1997, @payes)
- Fixes an issue where cStor restore from a scheduled backup fails if the first scheduled backup was aborted. #2926 (@mynktl)
- Fixes an issue where upgrade scripts were failing on Mac. #2952 (@novasharper)
- Fixes documentation references to the deprecated `disk` custom resource in example YAMLs. (@nerdeveloper)
- Fixes documentation to include a troubleshooting section for working with OpenEBS API server ports blocked due to advanced network configuration. #2843 (@nicklasfrahm)
Alpha Features
Active development is underway on the following alpha features:
- MayaStor
- ZFS Local PV
- CSI Driver for cStor and Jiva
- ARM builds tracked under #1295
Some notable changes are:
- Support for generating automated ARM builds for Jiva. (@shubham14bajpai)
- Support for generating automated ppc64le builds for Node Disk Manager. (@Pensu)
- Support for volume expansion of ZFS Local PV and add automated e2e tests. (@pawanpraka1, @w3aman )
- Support for declarative scale up and down of cStor volume replicas, increasing the e2e coverage and fixing the issues uncovered. (@mittachaitu, @gprasath, @nsathyaseelan)
- Incorporated feedback on the cStor Custom Resource Schema and continued work towards the v1 schema. (@sonasingh46, @prateekpandey14, @mittachaitu)
Major Limitations and Notes
For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
- The recommended approach for deploying cStor Pools is to specify the list of block devices to be used in the StoragePoolClaim (SPC). The automatic selection of block devices has very limited support. Automatic provisioning of cStor pools with block devices of different capacities is not recommended.
- When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem, partitioned or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools, please reach out to the community at https://slack.openebs.io.
- If you are using cStor pools with ephemeral devices, then starting with 1.2, the cStor Pool will not be automatically re-created on the new devices after a node restart. This check has been put in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, and involve changing the status of the corresponding CSP to `Init`.
- Capacity over-provisioning is enabled by default on cStor pools. If you don't have alerts set up for monitoring the usage of the cStor pools, the pools can become fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855.
- The new version of the cStor schema is being worked on to address user feedback around ease of use for cStor provisioning, and to make it easier to perform Day 2 operations on cStor Pools using GitOps. Note that existing StoragePoolClaim pools will continue to function as-is. Along with stabilizing the new schema, we have also started working on a migration feature, which will easily migrate clusters to the new schema in upcoming releases. Once the proposed changes are complete, seamless migration from the older CRs to the new ones will be supported. To track the progress of the proposed changes, please refer to this design proposal. Note: We recommend that users try out the new schema on greenfield clusters to provide feedback. Get started with these [instructions](https://blog.mayadata.io/openebs/cstor-pool-operations-via-cspc-in-openebs).
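One way to guard against runaway over-provisioning on cStor pools, along the lines suggested in #2855, is a namespace-level ResourceQuota on PVC storage requests. This is a standard Kubernetes mechanism; the namespace name and size below are illustrative, not prescribed by OpenEBS.

```yaml
# Illustrative ResourceQuota: caps the total storage that PVCs in the
# "openebs-workloads" namespace (a hypothetical name) can request, so
# the sum of provisioned volumes stays below usable pool capacity.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cstor-storage-quota
  namespace: openebs-workloads
spec:
  hard:
    requests.storage: 100Gi   # keep below aggregate cStor pool capacity
```

Apply one such quota per namespace that provisions cStor volumes, sized against the pools backing that namespace's StorageClass.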
Getting Started
Prerequisite to install
- Kubernetes 1.13+ is installed
- Make sure that you run the below installation steps with the cluster-admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
- Make sure iSCSI Initiator is installed on the Kubernetes nodes.
- Node-Disk-Manager (NDM) helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude those paths before installing OpenEBS.
- NDM runs as a privileged pod, since it needs to access device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running on RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
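As a sketch, the `path-filter` entry in the NDM ConfigMap (part of the operator YAML) is where excluded device paths go; the exclude list below is illustrative and should be adjusted to your environment.

```yaml
# Illustrative excerpt of the NDM ConfigMap's filter configuration.
# Devices whose paths match any entry in "exclude" are skipped
# during discovery and never registered as block devices.
filterconfigs:
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
```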
Install using kubectl
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.8.0.yaml
Install using helm stable charts
helm repo update
helm install --namespace openebs --name openebs stable/openebs --version 1.8.0
For more details refer to the documentation at https://docs.openebs.io/
Upgrade
Upgrade to 1.8 is supported only from 1.0 or higher and follows a process similar to earlier releases. The detailed steps are provided here.
- Upgrade OpenEBS Control Plane components.
- Upgrade Jiva PVs to 1.8, one at a time.
- Upgrade cStor Pools to 1.8 and their associated Volumes, one at a time.
For upgrading from releases prior to 1.0, please refer to the respective release upgrade here.
Support
If you are having issues setting up or upgrading, you can contact us via:
- OpenEBS Slack community
  - Already signed up? Head to our discussions at #openebs-users
- Kubernetes Slack community
  - Already signed up? Head to our discussions at #openebs
- Raise an issue