0.7.1

Pre-release

@kmova released this 01 Nov 03:00

Getting Started

Prerequisites to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with a cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.
  • Make sure the iSCSI initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some disks from being discovered, update the NDM filters to exclude those paths before installing OpenEBS, as shown in the sketch after this list.
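For reference, below is a minimal sketch of the NDM filter configuration. The ConfigMap name (openebs-ndm-config) and the exact keys vary by NDM version, so verify them against the operator YAML you are installing before editing:

apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config   # name assumed; check your copy of the operator YAML
  namespace: openebs
data:
  node-disk-manager.config: |
    filterconfigs:
      - key: path-filter
        name: path filter
        state: true
        include: ""
        # device paths that NDM should skip during discovery
        exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"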

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.1.yaml

Using helm stable charts

helm install --namespace openebs --name openebs stable/openebs

Using OpenEBS Helm Charts (to be deprecated in upcoming releases)

helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs
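
Whichever method is used, the install can be verified by checking that the OpenEBS control-plane pods reach the Running state; the operator also typically creates a few default StorageClasses:

kubectl get pods -n openebs
kubectl get sc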

Sample StoragePoolClaim, StorageClass, and PVC configurations that make use of the new features can be found here: Sample YAMLs
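As an illustration, here is a minimal sketch of a cStor StoragePoolClaim, a StorageClass referencing it, and a PVC, in the style of the 0.7 sample YAMLs. All names (cstor-disk, openebs-cstor-disk, demo-vol-claim) and sizes are placeholders; verify field names against the published samples:

apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk
spec:
  name: cstor-disk
  type: disk
  maxPools: 3
  poolSpec:
    poolType: striped
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-disk
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim
spec:
  storageClassName: openebs-cstor-disk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4G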

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

Minor enhancements

  • Support for using OpenEBS PVs as Block Devices for Application Pods
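
A raw block PV is requested through the standard Kubernetes volumeMode: Block field and consumed via volumeDevices. A minimal sketch follows; the names are placeholders, and on Kubernetes 1.9/1.10 the BlockVolume feature gate must also be enabled:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-claim
spec:
  volumeMode: Block
  storageClassName: openebs-cstor-disk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4G
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:
        # the PV appears inside the container as a raw device node
        - devicePath: /dev/xvda
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-claim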

Bug Fixes

  • Fixed an issue with PVs not getting created when the requested capacity had an "i" suffix (for example, "4Gi")
  • Fixed an issue with the cStor Target Pod getting stuck in the terminating state due to a shared hostPath
  • Fixed an issue with the FSType from the StorageClass not being configured on the PV (see the sketch after this list)
  • Fixed an issue with NDM discovering the capacity of disks via CDB16
  • Fixed an issue with PV name generation exceeding 64 characters; the PVC UUID is now used as the PV name
  • Fixed an issue with the cStor Pool Pod terminating when there is an abrupt connection break
  • Fixed an issue with cStor Volume clean-up failures blocking new volumes from being created
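
For the FSType fix above, the filesystem type is specified as a CAS policy in the cas.openebs.io/config annotation on the StorageClass. A minimal sketch, where ext4 and the names are only example values:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-ext4
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk"
      - name: FSType
        value: "ext4"
provisioner: openebs.io/provisioner-iscsi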

Detailed release notes are maintained in the Project Tracker Wiki.

Limitations

  • The Jiva target-to-replica message protocol has been enhanced to handle write errors. This change in the data exchange makes older replicas incompatible with a newer target, and vice versa. The upgrade involves shutting down all the replicas before launching them with the new version. Since the volume requires the target and at least two replicas to be online, the chances of the volume getting into a read-only state during the upgrade are high. Manual intervention will be required to recover the volume.
  • For OpenEBS volumes configured with more than one replica, more than half of the replicas must be online for the volume to allow reads and writes. In upcoming releases, the cStor data engine will allow volumes to be read and written as long as at least one replica is in the ready state.
  • This release contains preview support for cloning an OpenEBS volume from a snapshot. Cloned volumes support only a single replica and are intended for temporarily spinning up a new application pod to recover lost data from a previous snapshot.
  • While testing on different platforms with a three-node, three-replica OpenEBS volume and shutting down one of the three nodes, there were intermittent cases where one of the two remaining replicas also had to be restarted.
  • The OpenEBS target (controller) pod depends on Kubernetes node tolerations to be rescheduled in the event of a node failure. For this to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes. If the OpenEBS target (controller) is not rescheduled, or is not back to running within 120 seconds, the volume gets into a read-only state and manual intervention is required to make it read-write again.
  • The current version of OpenEBS volumes is not optimized for performance-sensitive applications.

For a more comprehensive list of open issues uncovered through e2e testing, please refer to open issues.