
Releases: libopenstorage/operator

Portworx Enterprise Operator 23.3.1

29 Mar 22:55

Improvements

Portworx Operator has upgraded or enhanced functionality in the following areas:

  • PWX-30005: With Operator 23.3.0, when telemetry is enabled on an air-gapped cluster, the telemetry pods remain in the init state because they cannot reach the Pure1 telemetry endpoint. This does not impact Portworx pods. With version 23.3.1, the Operator checks for Pure1 connectivity when telemetry is enabled for the first time and a telemetry cert has not yet been created. If Portworx cannot reach Pure1, the Operator disables telemetry.

Portworx Enterprise Operator 23.3.0

22 Mar 02:16

Notes

  • Starting with 23.3.0, the naming scheme for Operator releases has changed. Release numbers are now based on the year and month of the release.
  • You need to upgrade to Operator 23.3.0 to avoid ImagePullError after April 3rd due to changes in the Kubernetes registry path. Kubernetes is freezing k8s.gcr.io and moving to the registry.k8s.io repository on April 3rd. For more information, see the Kubernetes blog.

New features

Portworx Operator is proud to introduce the following new features:

  • Enabled Pure1 telemetry by default for all clusters when you generate a spec from PX-Central. However, for air-gapped clusters or when the PX_HTTPS_PROXY variable is configured, telemetry must be explicitly disabled during spec generation. To learn how to disable telemetry, see the air-gapped installation flow.

    • During an upgrade to Portworx Operator 23.3.0, telemetry will be enabled by default unless telemetry is disabled in the StorageCluster spec or the PX_HTTPS_PROXY variable is configured. To learn more about PX_HTTPS_PROXY support, see Enable Pure1 integration.
  • Added the following new fields to the StorageCluster spec for configuring Prometheus:

    • spec.monitoring.prometheus.resources: Provides the ability to configure Prometheus resource usage, such as memory and CPU usage. If the resources field is not configured, default limits will be set to CPU 1, memory 800M, and ephemeral storage 5G.
    • spec.monitoring.prometheus.securityContext.runAsNonRoot: Provides the ability to run Prometheus as a non-root user; the default value is set to true.
  • Added a new environment variable KUBELET_DIR. This variable can be used to specify a custom kubelet directory path.

  • Added an annotation portworx.io/scc-priority to the StorageCluster spec for configuring the priority of Portworx security context constraints (SCC). A combined StorageCluster sketch covering these new fields follows this list.
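
The following minimal StorageCluster sketch pulls these new fields together. The cluster name, namespace, and all values shown are placeholders for illustration, not recommended settings:

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                      # placeholder name
  namespace: kube-system
  annotations:
    portworx.io/scc-priority: "2"       # example SCC priority (OpenShift only)
spec:
  env:
  - name: KUBELET_DIR                   # custom kubelet directory path
    value: /var/lib/kubelet-custom      # placeholder path
  monitoring:
    telemetry:
      enabled: false                    # explicitly disable Pure1 telemetry (e.g., air-gapped clusters)
    prometheus:
      enabled: true
      resources:                        # overrides the defaults of CPU 1, memory 800M, ephemeral storage 5G
        limits:
          cpu: "1"
          memory: 800M
      securityContext:
        runAsNonRoot: true              # default value
```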

Improvements

Portworx Operator has upgraded or enhanced functionality in the following areas:

  • PWX-28147: When upgrading to Operator version 23.3.0, all CSI sidecar images will be updated to the latest versions.
  • PWX-28077: Operator will now update the Prometheus and Alertmanager CRDs.

Fixes

  • PWX-28343: During the Operator upgrade, the old telemetry registration pods were not being deleted.
    Resolution: Changed the update deployment strategy of px-telemetry-registration to Recreate. Now the old pods are deleted before the new ones are created.
  • PWX-29531: The prometheus-px-prometheus pods were not being created in OpenShift due to failed SCC validation.
    Resolution: This issue has been fixed.
  • PWX-29565: Upgrading OpenShift from version 4.11.x to 4.12.3 was failing for the Portworx cluster.
    Resolution: Changed the Portworx SCC default priority to nil.
  • PWX-28101: If the kubelet path was not set to the default path, the CSI driver would fail to start and the PVC could not be provisioned.
    Resolution: The KUBELET_DIR environment variable can now be used to specify a custom path for the CSI driver.

Portworx Enterprise Operator 1.10.5

10 Mar 01:15

Updates

  • Added the new spec.updateStrategy.rollingUpdate.minReadySeconds flag. During rolling updates, the Operator now waits until updated pods have been ready for at least minReadySeconds before updating the next batch of pods, where the size of the pod batch is specified through the spec.updateStrategy.rollingUpdate.maxUnavailable flag.
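
A sketch of the corresponding StorageCluster fragment; the values are illustrative only:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # size of each pod batch
      minReadySeconds: 30      # pods must stay ready this long before the next batch is updated
```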

Portworx Enterprise Operator 1.10.4

22 Feb 18:45

Updates

  • Added a new annotation portworx.io/is-oke=true to the StorageCluster spec to support Portworx deployment on Oracle Container Engine for Kubernetes (OKE) clusters.
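
A minimal sketch of the annotation in a StorageCluster manifest; the name and namespace are placeholders:

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                # placeholder
  namespace: kube-system
  annotations:
    portworx.io/is-oke: "true"    # tells the Operator this is an OKE cluster
```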

Bug fixes

  • Fixed a bug where the Portworx PVC controller leader election resources conflicted with the resources used by the Kubernetes controller manager.
  • Fixed the Anthos Telemetry installation failure. Operator now allows two sidecar containers to run on the same node.

Portworx Enterprise Operator 1.10.3

29 Jan 00:16

Bug fixes

  • In Operator version 1.10.2, the Portworx pod was being scheduled on a random node because of a missing node name in the Portworx pod template. This issue is fixed in Operator version 1.10.3.

Portworx Enterprise Operator 1.10.2

25 Jan 03:23

Updates

  • Stork now uses KubeSchedulerConfiguration for Kubernetes version 1.23 or newer, so that pods are evenly distributed across all nodes in your cluster.
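
For reference, a workload opts into Stork scheduling by setting schedulerName in its pod spec; the pod name and image below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-using-portworx      # placeholder
spec:
  schedulerName: stork          # scheduled by Stork, which uses KubeSchedulerConfiguration on Kubernetes 1.23+
  containers:
  - name: app
    image: nginx                # placeholder image
```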

Portworx Enterprise Operator 1.10.1

06 Dec 04:52

Updates

  • Added support for Kubernetes version 1.25, which includes:
    • Removed PodSecurityPolicy when deploying Portworx with Operator.
    • Upgraded the API version of PodDisruptionBudget from policy/v1beta1 to policy/v1.
  • Added a UI option in the spec generator to configure Kubernetes version when you choose to deploy Portworx version 2.12.
  • Operator is now deployed without verbose logging by default. To enable verbose logging, add the --verbose argument to the Operator deployment (see the sketch after this list).
  • For CSI deployment, the px-csi-ext pod now sets Stork as a scheduler in the px-csi-ext deployment spec.
  • Operator now chooses maxStorageNodesPerZone’s default value to efficiently manage the number of storage nodes in a cluster.
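
For the verbose-logging item above, a sketch of the relevant fragment of the Operator Deployment; the container name and image tag may differ in your installation:

```yaml
spec:
  template:
    spec:
      containers:
      - name: portworx-operator
        image: portworx/px-operator:1.10.1   # illustrative tag
        args:
        - --verbose                          # re-enables verbose logging (off by default)
```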

Portworx Enterprise Operator 1.9.2

08 Nov 01:00

Updates

  • Upgraded the base image to address vulnerabilities.

Portworx Enterprise Operator 1.10.0

25 Oct 06:54

Notes

IMPORTANT: To enable telemetry for DaemonSet-based Portworx installations, you must migrate to an Operator-based installation, then upgrade to Portworx version 2.12 before enabling Pure1 integration. For more details, see this document.

Updates

  • Pure1 integration has been re-architected to be more robust and use less memory. It is supported on Portworx version 2.12 clusters deployed with Operator version 1.10.
  • To reduce memory usage, added a new argument disable-cache-for to exclude Kubernetes objects from the controller-runtime cache. For example, --disable-cache-for="Event,ConfigMap,Pod,PersistentVolume,PersistentVolumeClaim".
  • Operator now blocks Portworx installation if Portworx is uninstalled without a wipe and then reinstalled with a different name.
  • For a new installation, Operator now sets the maximum number of storage nodes per zone so that the cluster's three storage nodes are uniformly spread across zones (see the sketch after this list).
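
To override the computed default, maxStorageNodesPerZone can be set explicitly in the StorageCluster spec; the value below is illustrative:

```yaml
spec:
  cloudStorage:
    maxStorageNodesPerZone: 1   # e.g., three zones × 1 node = 3 storage nodes cluster-wide
```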

Bug Fixes

  • Fixed a bug where DaemonSet migration was failing if the Portworx cluster ID was too long.

Portworx Enterprise Operator 1.9.1

08 Sep 23:46

Updates

  • Added support for Kubernetes version 1.24:
    • Added docker.io prefix for component images deployed by Operator.
    • To determine Kubernetes master nodes, Operator now uses the control-plane node role instead of master.

Bug Fixes

  • In Operator 1.9.0, when you enabled the CSI snapshot controller explicitly in the StorageCluster, the csi-snapshot-controller sidecar containers might have been removed during an upgrade or restart operation. This issue is fixed in Operator 1.9.1.