
v1.7.0-rc.1

@WanzenBug released this 16 Nov 13:11 · 169 commits to master since this release

This is the first release candidate for the upcoming 1.7.0 release of the Piraeus Operator. Please help by testing it!

It's been quite some time since the last release, and a lot of new features and improvements have landed in the meantime.

The most exciting feature is certainly the option to run Piraeus without an additional database for the LINSTOR Controller. LINSTOR 1.16.0 added experimental support for using the Kubernetes API directly to store its internal state. The current plan is to support both Etcd and Kubernetes API as datastore, with the eventual goal of removing Etcd support once we are happy with the stability of this new backend. Read more on this topic here.
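
If you want to try the new backend, a minimal sketch of an installation without Etcd could look like the snippet below. The value names (`etcd.enabled`, `operator.controller.dbConnectionURL`) are assumptions based on the chart layout, so verify them against the k8s backend documentation in this repository before use:

```sh
# Sketch only: disable the bundled Etcd and point the LINSTOR Controller at
# the Kubernetes API. Verify both value names against the chart documentation.
helm install piraeus-op ./charts/piraeus \
  --set etcd.enabled=false \
  --set operator.controller.dbConnectionURL=k8s
```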

Apart from that, the Operator now applies Kubernetes node labels to the LINSTOR node objects as auxiliary properties. What that means is that LINSTOR CSI can now make scheduling decisions based on existing node labels, like the commonly used topology.kubernetes.io/zone. To take full advantage of this, we enabled the topology feature for CSI by default, and also updated the CSI driver to properly respect both StorageClass parameters (replicasOnDifferent, etc.) as well as topology information. We now recommend using volumeBindingMode: WaitForFirstConsumer in all storage classes.
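
As an illustration, a storage class combining these recommendations could look like the sketch below. The provisioner name is the LINSTOR CSI driver; the class name and the `autoPlace` replica count are example values:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-replicated # example name
provisioner: linstor.csi.linbit.com
# Delay binding until a pod is scheduled, so topology can be considered.
volumeBindingMode: WaitForFirstConsumer
parameters:
  autoPlace: "2"
  # Spread replicas across zones, using the node labels that the Operator
  # now mirrors into LINSTOR auxiliary properties.
  replicasOnDifferent: topology.kubernetes.io/zone
```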

Another important change is the removal of the Stork scheduler. In the past it caused issues by improperly restarting pods, scheduling to unusable nodes and just plain not working on newer Kubernetes versions. With Kubernetes now supporting volumeBindingMode: WaitForFirstConsumer and the LINSTOR CSI version being better at scheduling volumes, we felt it was safe to disable Stork by default. You can still enable it in the chart if you wish.
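
If you still rely on Stork, re-enabling it should be a single chart value. The value name `stork.enabled` is an assumption here, so check the chart's values.yaml before relying on it:

```sh
# Assumed value name (stork.enabled); verify against the chart's values.yaml.
helm upgrade piraeus-op ./charts/piraeus --set stork.enabled=true
```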

This is also the first Piraeus Operator release that supports creating backups of your volumes and storing them in S3 or another LINSTOR cluster. Currently this is only available using the LINSTOR CLI; take a look at the linstor remote ... and linstor backup ... commands. In a future release, this should be more tightly integrated with the Kubernetes infrastructure. In order to securely store any access tokens for remote locations, LINSTOR needs to be configured with a master passphrase. If no passphrase is defined, the Helm chart will create one for you.
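
A rough sketch of that CLI flow follows; the argument order and names are illustrative, so consult `linstor remote create s3 --help` and `linstor backup create --help` for the exact syntax:

```sh
# Register an S3 endpoint as a backup target (illustrative arguments).
linstor remote create s3 my-remote s3.us-east-1.amazonaws.com my-bucket us-east-1 $ACCESS_KEY $SECRET_KEY
# Ship a backup of a resource to that remote.
linstor backup create my-remote my-resource
```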


Known issues

  • A bug in LINSTOR 1.16.0 when setting a master passphrase means that a restarted controller gets stuck with a `node not authorized` error. As a workaround, restart the piraeus-op-ns-node DaemonSet: `kubectl rollout restart daemonset/piraeus-op-ns-node`.

All Changes

Added

  • pv-hostpath: automatically determine on which nodes PVs should be created if no override is given.
  • Automatically add labels on Kubernetes Nodes to LINSTOR satellites as Auxiliary Properties. This enables using Kubernetes labels for volume scheduling, for example `replicasOnSame: topology.kubernetes.io/zone`.
  • Support LINSTOR's k8s backend by adding the necessary RBAC resources and documentation.
  • Automatically create a LINSTOR passphrase when none is configured (a sketch for supplying your own follows this list).
  • Automatic eviction and deletion of offline satellites if the Kubernetes node object was also deleted.
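
For those who prefer to supply their own passphrase rather than the generated one, a hedged sketch: both the Secret data key (`MASTER_PASSPHRASE`) and the chart value that references it (`operator.controller.luksSecret`) are assumptions here, so verify them against the chart documentation:

```yaml
# Assumed layout: a Secret holding the master passphrase under the
# MASTER_PASSPHRASE key, referenced via operator.controller.luksSecret.
apiVersion: v1
kind: Secret
metadata:
  name: linstor-passphrase
stringData:
  MASTER_PASSPHRASE: my-very-secret-passphrase
```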

Changed

  • Enable CSI topology by default, allowing better volume scheduling with volumeBindingMode: WaitForFirstConsumer.
  • Disable STORK by default. Instead, we recommend using volumeBindingMode: WaitForFirstConsumer in storage classes.