Releases: libopenstorage/operator

Portworx Enterprise Operator 1.9.0

02 Aug 20:23
2021e3a

Updates

  • Daemonset to Operator migration is now Generally Available. This includes the following features:
    • The ability to perform a dry run of the migration
    • Migration of generic Helm chart installations from DaemonSet to the Operator
    • Support for the OnDelete migration strategy
    • Support for various configurations such as external KVDB, custom volumes, environment variables, service type, and annotations
  • You can now use a generic Helm chart to install Portworx with the Operator. Note: Only AWS EKS has been validated for cloud deployments.
  • Added support for enabling pprof to collect Portworx Operator container profiles (memory, CPU, and so on).
  • The Operator now creates example CSI storage classes.
  • The Operator now enables the CSI snapshot controller by default on Kubernetes 1.17 and newer.

Bug Fixes

  • Fixed an issue where KVDB pods were repeatedly created when a pod was in the evicted or outOfPods status.

Portworx Enterprise Operator 1.8.1

22 Jun 14:07
e7f8498

Updates

  • Added support for running the Operator in IPv6 environments.
  • You can now enable the CSI topology feature by setting the .Spec.CSI.Topology.Enabled flag to true in the StorageCluster CRD (the default is false). This feature is supported only on FlashArray direct access volumes.
  • The Operator now uses the custom portworx SecurityContextConstraints instead of privileged on OpenShift.
  • You can now add custom annotations to any service created by the Operator.
  • You can now configure the ServiceType of any service created by the Operator.
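
A minimal sketch of the topology setting above (the cluster name, namespace, and API version shown here are assumptions, not taken from these notes):

```yaml
# Hypothetical StorageCluster fragment enabling the CSI topology feature.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster          # placeholder name
  namespace: kube-system    # placeholder namespace
spec:
  csi:
    enabled: true
    topology:
      enabled: true         # defaults to false; FlashArray direct access volumes only
```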

Bug Fixes

  • Fixed a pod recreation race condition during OCP upgrades by introducing exponential back-off to pod recreation when the operator.libopenstorage.org/cordoned-restart-delay-secs annotation is not set.
  • Fixed incorrect CSI provisioner arguments when the custom image registry path contains ":".

Portworx Enterprise Operator 1.8.0

14 Apr 22:52

Updates

  • DaemonSet to Operator migration is now in Beta.
  • Added support for passing custom labels to Portworx API service from StorageCluster.
  • Operator now enables the Autopilot component to communicate securely using tokens when PX-Security is enabled in the Portworx cluster.
  • Added the preserveFullCustomImageRegistry field to the StorageCluster spec to preserve the full image path when using a custom image registry.
  • The Operator now retrieves the version manifest through a proxy if PX_HTTP_PROXY is configured.
  • Stork, Stork scheduler, CSI, and PVC controller pods are now deployed with topologySpreadConstraints to distribute pod replicas across Kubernetes failure domains.
  • Added support for installing health monitoring sidecars from StorageCluster.
  • Added support for installing snapshot controller and CRD from StorageCluster.
  • The feature gate for CSI is now deprecated and replaced by setting spec.csi.enabled in StorageCluster.
  • Added support for enabling hostPID on Portworx pods using the annotation portworx.io/host-pid="true" in the StorageCluster.
  • Operator now sets fsGroupPolicy in the CSIDriver object to File. Previously it was not set explicitly, and the default value was ReadWriteOnceWithFsType.
  • Added the skip-resource annotation to PX-Security Kubernetes secrets to skip backing them up to the cloud.
  • Operator now sets the dnsPolicy of Portworx pod to ClusterFirstWithHostNet by default.
  • When using Cloud Storage, the Operator validates that the node groups in the StorageCluster use only one common label selector key across all node groups. It also validates that the value matches spec.cloudStorage.nodePoolLabel, if that field is present. If the value is not present, the Operator automatically populates it with the value of the common label selector.
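
A minimal sketch combining two of the settings above, the spec.csi.enabled field and the portworx.io/host-pid annotation (the cluster name and namespace are placeholders):

```yaml
# Hypothetical StorageCluster fragment; field names follow the release notes above.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: kube-system
  annotations:
    portworx.io/host-pid: "true"   # run Portworx pods with hostPID
spec:
  csi:
    enabled: true                  # replaces the deprecated CSI feature gate
```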

Bug Fixes

  • Fixed a Pod Disruption Budget issue blocking OpenShift upgrades on Metro DR setups.
  • Fixed Stork scheduler's pod anti-affinity by adding the label name: stork-scheduler to Stork scheduler deployments.
  • When a node-level spec specifies a cloud storage configuration, the Operator no longer sets the cluster-level default storage configuration. Before this fix, the node-level cloud storage configuration would be overwritten.

Portworx Enterprise Operator 1.7.1

15 Mar 17:11

Bug Fixes

  • Restored the v1alpha1 version of the StorageCluster and StorageNode CRDs. This version was previously removed, which broke upgrades to Portworx Operator 1.7.0 on OpenShift. If you are already on Operator 1.7.0 and stuck in a Pending state, remove the Operator and reinstall the latest 1.7.1 operator.

Portworx Enterprise Operator 1.7.0

02 Feb 01:36

Updates

  • The Operator can now deploy a metrics collector for Portworx versions 2.9.1 and above to upload Portworx metrics to Pure1.

Bug Fixes

  • Portworx installations no longer fail on PKS 1.12.1.
  • The telemetry container now uses a proxy specified by the PX_HTTP(S)_PROXY environment variable in the StorageCluster.
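
The PX_HTTP(S)_PROXY variables mentioned above are ordinary environment variables on the StorageCluster; a minimal sketch, assuming they are passed through spec.env and using a placeholder proxy address:

```yaml
# Hypothetical StorageCluster fragment setting proxy environment variables.
spec:
  env:
  - name: PX_HTTP_PROXY
    value: "http://proxy.example.com:3128"   # placeholder proxy endpoint
  - name: PX_HTTPS_PROXY
    value: "http://proxy.example.com:3128"
```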

Known Issues

  • If telemetry is enabled and the cluster is uninstalled with deleteStrategy set to Uninstall, telemetry and the metrics collector will not push metrics correctly after reinstallation. This is intended to prevent unauthorized users from retrieving the certificate; please contact a support engineer for help.

Portworx Enterprise Operator 1.6.1

23 Nov 03:04

Updates

  • Added support for running the Operator on Kubernetes 1.22 and OpenShift 4.9.
  • The Operator now deploys a newer version of the Prometheus Operator (v0.50.0) when running on Kubernetes 1.22 and newer.

Bug Fixes

  • Fixed crashing PVC controller pods on Kubernetes 1.22 and newer.

Portworx Enterprise Operator 1.6.0

28 Oct 22:48

Updates

  • Added support for managing Prometheus AlertManager.
  • The Operator now pauses Portworx upgrades if an OpenShift upgrade is ongoing or has been newly started.
  • Added support for assigning CPU and memory resource requests to Portworx pods.
  • Introduced a separate Kubernetes service for internal KVDB pods so that metrics can be pulled from only the internal KVDB nodes.
  • Removed privileged permissions from CSI pods.
  • kubectl explain can now be used to describe the schema of StorageCluster and StorageNode objects just like all other Kubernetes objects.
  • Discontinued serving v1alpha1 API versions of StorageCluster and StorageNode CRDs. Only the v1 version will be served for the Portworx Operator CRDs.

Bug Fixes

  • Portworx and the internal KVDB no longer lose quorum during a Kubernetes upgrade.
  • The Operator now allows you to pass a username and password along with certificates when setting up an external ETCD for Portworx.
  • The StorageNode status no longer erroneously displays Initializing when the Portworx node is online.
  • Portworx can now be installed on RKE2 clusters.
  • Removed a protobuf deprecation warning from Operator logs.
  • The Operator will now deploy Portworx by default on IBM OpenShift clusters where all nodes have both master and worker labels.

Portworx Enterprise Operator 1.5.2

21 Sep 22:55

Updates

  • By default, the Operator deploys portworx-proxy components in the kube-system namespace when Portworx is installed outside of kube-system and when not using 9001 as the start port. You can disable portworx-proxy components by adding the portworx.io/portworx-proxy: "false" annotation to the StorageCluster object.
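
Disabling the proxy components as described above is a one-line annotation; a sketch (the cluster name, namespace, and API version are placeholders):

```yaml
# Hypothetical StorageCluster fragment disabling portworx-proxy.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: portworx          # Portworx installed outside kube-system
  annotations:
    portworx.io/portworx-proxy: "false"
```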

Portworx Enterprise Operator 1.5.1

10 Sep 21:34

Updates

  • Updated the default telemetry image to version 3.0.3

Bug Fixes

  • Disabled Pure1 telemetry by default on new and existing installs. Users can still enable it if they want to use Pure1 telemetry.
  • If CSI was not originally set up, CSI is not enabled by default on existing Portworx clusters when they're upgraded. Only new Portworx installations will have CSI enabled by default.

Portworx Enterprise Operator 1.5.0

30 Jul 20:27
4588465

Updates

  • Pure1 telemetry is now enabled by default, starting with Portworx Enterprise 2.8.0.
  • The Operator now allows you to specify cache devices during installation.
  • The Operator now supports heterogeneous node configurations for cloud storage.
  • Added support for passing custom annotations to Portworx pods from StorageCluster.
  • Added support for explicitly specifying the cloud provider when using cloud storage.
  • The Operator can now overwrite existing Portworx volumes with user-defined custom volumes.
  • Simplified the OpenShift console StorageCluster Create form.
  • Upgraded CSI driver object to v1.

Bug Fixes

  • The Operator will no longer attempt to install Portworx if a DaemonSet installation already exists.
  • The Operator now supports using an annotation to overwrite images in "k8s.gcr.io" and other registries with a custom image registry. For instance: operator.libopenstorage.org/common-image-registries: "gcr.io,k8s.gcr.io".
  • The Operator no longer creates Portworx and KVDB pods repeatedly after a node is cordoned and drained.
  • OpenShift no longer fails upgrade operations due to legacy immutable file attributes on config files in /etc/pwx. This issue has been fixed in Portworx 2.6.6 and above.
  • Portworx pods and other resources no longer randomly get deleted by the Kubernetes garbage collector.
  • Changed the default port on AKS to avoid port collision.
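
The registry-overwrite annotation above is set on the StorageCluster metadata; a minimal sketch with placeholder name, namespace, and API version:

```yaml
# Hypothetical StorageCluster fragment redirecting common registries
# (such as k8s.gcr.io) to a custom image registry.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: kube-system
  annotations:
    operator.libopenstorage.org/common-image-registries: "gcr.io,k8s.gcr.io"
```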