Prerequisites to install
- Kubernetes 1.9.7+ is installed
- Make sure that you run the below installation steps with the cluster-admin context. The installation involves creating a new Service Account and assigning it to the OpenEBS components.
- Make sure iSCSI Initiator is installed on the Kubernetes nodes.
- NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some disks from being discovered, update the filters on NDM to exclude those paths before installing OpenEBS.
- NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately.
Using kubectl

```
kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.0.yaml
```
Using helm stable charts
```
helm repo update
helm install --namespace openebs --name openebs stable/openebs
```
Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs
For more details refer to the documentation at: https://docs.openebs.io/
Support for creating instant Snapshots on cStor volumes that can be used for both data protection and data warm-up use cases. The snapshots can be taken using `kubectl` by providing a `VolumeSnapshot` YAML as shown below:

```yaml
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: <snapshot-name>
  namespace: default
spec:
  persistentVolumeClaimName: <cstor-pvc-name>
```
cStor Volume Snapshot creation is controlled by the cStor Target Pod, which flushes all pending IOs to the Replicas and requests each Replica to take a snapshot. A cStor Volume Snapshot can be taken as long as two Replicas are Healthy.
Since snapshots can be managed via `kubectl`, you can set up your own K8s CronJob to take periodic snapshots.
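As a rough sketch of such a CronJob (the image, names, and schedule below are illustrative assumptions, not part of this release; a real setup also needs a service account with permission to create `volumesnapshots` and unique snapshot names per run):

```yaml
# Illustrative sketch only: image, names, and RBAC are assumptions.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cstor-pvc-snapshot
  namespace: default
spec:
  schedule: "0 */6 * * *"            # every six hours
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: snapshot
            image: bitnami/kubectl   # any image with kubectl works
            command:
            - /bin/sh
            - -c
            - |
              cat <<EOF | kubectl apply -f -
              apiVersion: volumesnapshot.external-storage.k8s.io/v1
              kind: VolumeSnapshot
              metadata:
                name: snap-$(date +%s)
                namespace: default
              spec:
                persistentVolumeClaimName: <cstor-pvc-name>
              EOF
```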
Support for creating clone PVs from a previously taken snapshot. Clones can be used both for recovering data from a previously taken snapshot and for optimizing application startup time. Startup time can be reduced in use cases where an application pod requires some kind of seed data to be available. With clone support, users can set up a seed volume and fill it with data, then create a snapshot of the seed data. When the applications are launched, their PVCs can be set up to create cloned PVs from the seed-data snapshot. cStor Clones are also optimized to minimize capacity overhead: they are reference-based and need capacity only for new or modified data.
Clones can be created using the PVC YAML as shown below:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <cstor-clone-pvc-name>
  namespace: default
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: <snapshot-name>
spec:
  storageClassName: openebs-snapshot-promoter
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi
```
Note that the requested storage specified (like `5Gi` in the above example) should match the requested storage on the source PVC `<cstor-pvc-name>`, i.e., the PVC on which the snapshot was taken.
Support for providing the runtime status of the cStor Volumes and Pools via `kubectl describe` commands. For example:
- cStor Volume Status can be obtained by `kubectl describe cstorvolume <cstor-pv-name> -n <openebs-namespace>`. The status reported will contain information as shown below and more:

```
Status:
  Phase: Healthy
  Replica Statuses:
    Replica Id: 15041373535437538769
    Status: Healthy
    Up Time: 14036
    Replica Id: 6608387984068912137
    Status: Healthy
    Up Time: 14035
    Replica Id: 17623616871400753550
    Status: Healthy
    Up Time: 14034
```
- Similarly, each cStor Pool's status can be fetched with `kubectl describe csp <pool-name> -n <openebs-namespace>`. The details shown include:

```
status:
  capacity:
    free: 9.62G
    total: 9.94G
    used: 322M
  phase: Healthy
```
- The above describe command for a cStor Volume can also show details like the number of IOs currently in flight to the Replicas, and how the Volume status has changed over time via Kubernetes events.
- The interval between status updates can be configured via the Storage Policy `ResyncInterval`. A default sync interval applies when this policy is not set.
- The status of the Volume or Pool can be in one of several phases.
Support for new Storage Policies for cStor Volumes such as:
Target Affinity: (Applicable to both jiva and cStor Volumes) Stateful workloads access OpenEBS Storage by connecting to the Volume Target Pod. This policy can be used to co-locate the volume target pod on the same node as the workload, to avoid conditions like:
- network disconnects between the workload node and target node
- shutting down of the node on which volume target pod is scheduled for maintenance.
In the above cases, if the restoration of network, pod or node takes more than 120 seconds, the workload loses connectivity to the storage.
This feature makes use of the Kubernetes Pod Affinity feature, which depends on Pod labels. Users will need to add the following label to both the Application and the PVC.
```yaml
labels:
  openebs.io/target-affinity: <application-unique-label>
```
Example of using this policy can be found here.
Note that this policy only applies to Deployments or StatefulSets with a single workload instance.
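For illustration (all names, the StorageClass, and the label value below are hypothetical placeholders), the same label would appear on both the application Pod and the PVC it consumes:

```yaml
# Hypothetical example: names, image, and StorageClass are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol-claim
  labels:
    openebs.io/target-affinity: demo-app   # same value as on the app pod
spec:
  storageClassName: openebs-cstor-sc       # assumed cStor StorageClass name
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  labels:
    openebs.io/target-affinity: demo-app   # matches the PVC label
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-vol-claim
```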
Target Namespace: (Applicable to cStor volumes only). By default, the cStor target pods are scheduled in the dedicated openebs namespace. The target pod is also provided with the openebs service account so that it can access the required Kubernetes Custom Resources.
This policy allows the cluster administrator to specify whether the Volume Target pods should be deployed in the namespace of the workloads themselves. This can help with setting resource limits on the target pods based on the namespace in which they are deployed.
To use this policy, the Cluster administrator could either use the existing openebs service account or create a new service account with limited access and provide it in the StorageClass as follows:
```yaml
annotations:
  cas.openebs.io/config: |
    - name: PVCServiceAccountName
      value: "user-service-account"
```
The sample service account can be found here.
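Put together, a StorageClass using this policy might look like the sketch below (the StorageClass and service account names are illustrative assumptions; the provisioner is the standard OpenEBS iSCSI provisioner):

```yaml
# Illustrative sketch: StorageClass and service account names are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-in-workload-ns
  annotations:
    cas.openebs.io/config: |
      - name: PVCServiceAccountName
        value: "user-service-account"
provisioner: openebs.io/provisioner-iscsi
```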
Support for sending anonymous analytics to a Google Analytics server. This feature can be disabled by setting the maya-apiserver environment variable `OPENEBS_IO_ENABLE_ANALYTICS` to `false`. Very minimal information, like the K8s version and the type of volumes being deployed, is collected. No sensitive information, such as names or IP addresses, is collected. The details collected can be viewed here.
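For example, the flag could be set in the maya-apiserver container spec (this fragment assumes the default operator YAML install):

```yaml
# Fragment of the maya-apiserver container spec; assumes the default
# operator install. Set the flag to "false" to disable analytics.
env:
- name: OPENEBS_IO_ENABLE_ANALYTICS
  value: "false"
```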
Enhancements
- Enhance the metrics reported from cStor and jiva Volumes to include target and replica status.
- Enhance the volume metrics exporter to provide metrics in json format.
- Enhance the maya-apiserver API to include a stats api that will pass through the request to the respective volume exporter metrics API.
- Enhance cStor storage engine to be resilient against replica failures and cover several corner cases associated with rebuild replica.
- Enhance jiva storage engine to clear up the space occupied by temporary snapshots taken by the replicas during replica rebuild.
- Enhance jiva storage engine to support sync and unmap IOs.
- Enhance CAS Templates to allow invoking REST API calls to non Kubernetes services.
- Enhance CAS Templates to support an option to disable a Run Task.
- Enhance CAS Templates to include .CAST object which will be available for Run Tasks. .CAST contains information like openebs and kubernetes versions.
- Enhance the maya-apiserver installer code to remove the dependency on the config map and to determine and load the CAS Templates based on maya-apiserver version. When maya-apiserver is upgraded from 0.7 to 0.8 - a new set of default CAS Templates will be available for 0.8.
- Enhance mayactl (CLI) to include bash or zsh completion support. Users need to run `source <(mayactl completion bash)`.
- Enhance the volume provisioning to add Prometheus annotations for scrape and port to the volume target pods.
- Enhance the build scripts to push commit tagged images and also add support for GitLab based CI.
- Enhance the CI scripts in each of the repos to cover new features.
- 250+ PRs merged from the community fixing the documentation, code style/lint and add missing unit tests across various repositories.
Major Bugs Fixed
- Fixed an issue where cStor pool can become inaccessible if two pool pods attempt to access the same disks. This can happen during pool pod termination/eviction, followed by immediately scheduling a new pod on the same node.
- Fixed an issue where a cStor pool can restart if one of the cStor volume target pods is restarted.
- Fixed an issue with auto-creation of cStor Pools using SPC and type as mirrored. The type was being ignored during the pool creation.
- Fixed an issue with recreating the cStor Pool by automatically selecting Disks. A check has been added to only pick the Active Disks on the node.
- Fixed an issue with provisioning of cStor Volumes when a Pool is offline: the Replica is now created even if the Pool is offline during provisioning. After the Pool comes back online, the Replica will be created on the Pool.
- None from 0.7.0
- For previous releases, please refer to the respective release notes and upgrade steps. Upgrade to 0.8.0 is supported only from 0.7.0.
Limitations / Known Issues
- The current version of OpenEBS volumes is not optimized for performance-sensitive applications.
- cStor Target or Pool pods can at times be stuck in a Terminating state. They will need to be manually cleaned up using `kubectl delete` with a 0-second grace period. Example:

```
kubectl delete deploy <volume-target-deploy> -n openebs --force --grace-period=0
```
- cStor Pool pods can consume more memory under continuous load. This can exceed the memory limit and cause pod evictions. It is recommended that you create the cStor pools with memory limits and requests set.
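One hedged sketch of setting such limits on the StoragePoolClaim is shown below. The `PoolResourceRequests`/`PoolResourceLimits` policy names are taken from the OpenEBS storage policy documentation and may not be supported in every release; verify against the docs for your version before relying on this.

```yaml
# Assumption: the PoolResourceRequests / PoolResourceLimits policy names and
# values are illustrative; confirm support in your OpenEBS release first.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-pool-limited
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
          memory: 1Gi
      - name: PoolResourceLimits
        value: |-
          memory: 2Gi
spec:
  type: disk
  poolSpec:
    poolType: striped
```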
- Jiva Volumes are not recommended if your use case requires snapshots and clone capabilities.
- Jiva Replicas use a sparse file to store the data. When the application causes too many fragments (extents) to be created on the sparse file, a replica restart can take longer, as the replica needs more time to reattach to the target. This issue was seen when 31K fragments had been created.
- Volume Snapshots are dependent on the functionality provided by Kubernetes. The support is currently alpha. The only operations supported are:
- Create Snapshot, Delete Snapshot and Clone from a Snapshot
- Creation of the Snapshot uses a reconciliation loop, which means that a Create Snapshot operation will be retried on failure until the Snapshot has been successfully created. This may not be a desirable option in cases where point-in-time snapshots are expected.
- If you are using a K8s version earlier than 1.12, in certain cases it will be observed that when the node hosting the target pod goes offline, the target pod can take more than 120 seconds to get rescheduled. This is because target pods are configured with Tolerations based on Node Conditions, and TaintNodesByCondition is available only from K8s 1.12. If running an earlier version, you may have to enable the alpha feature gate for TaintNodesByCondition. If there is active load on the volume when the target pod goes offline, the volume will be marked read-only.
For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.
Additional details and notes on upgrading and uninstalling are available on the Project Tracker Wiki.