
Conversation

@dalbani (Contributor) commented Nov 18, 2020

Following the discussion in #1197, I tried to find a way to have a single values.yaml in the operator Helm chart, while staying compatible with both configTarget: ConfigMap and configTarget: OperatorConfigurationCRD.

The main benefit is that an OperatorConfigurationCRD-based deployment of the operator can now be made with Helm(file) when the chart is located in a repository or fetched via an HTTP URL.
Up to now, the helm install command had to be run from a checkout of the Git repository. In my situation, where all our deployments are centralised with Helmfile, that's not an option.

The second benefit is that only a single values.yaml has to be maintained. In this pull request, I copied values-crd.yaml over values.yaml, and you can see that there are indeed a few discrepancies (only cosmetic?) between them.

The consequence of this pull request is that OperatorConfigurationCRD-based deployments become the norm.
ConfigMap-based deployments are described in the documentation as "deprecated", right?
But it is of course still possible to deploy with a ConfigMap by passing the value configTarget=ConfigMap to Helm.
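
To make the idea concrete, here is a minimal sketch (not the exact diff in this PR; the guard, helper name, and value keys are my assumptions) of how a template can branch on configTarget so that one values.yaml drives both targets:

```yaml
# templates/operatorconfiguration.yaml (sketch): render the CRD-based
# configuration only when configTarget selects it
{{- if eq .Values.configTarget "OperatorConfigurationCRD" }}
apiVersion: "acid.zalan.do/v1"
kind: OperatorConfiguration
metadata:
  name: {{ template "postgres-operator.fullname" . }}
configuration:
  kubernetes:
    {{- toYaml .Values.configKubernetes | nindent 4 }}
{{- end }}
```

A ConfigMap-based install then stays one flag away, e.g. helm install postgres-operator <chart-ref> --set configTarget=ConfigMap.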

@FxKu (Member) commented Nov 24, 2020

I like the idea of reducing it down to one values.yaml. It's something we can merge after the 1.6 release.

@FxKu added this to the 1.7 milestone on Dec 16, 2020
@dalbani (Contributor, Author) commented Dec 21, 2020

I have just rebased my branch on the latest 1.6 release.

@FxKu mentioned this pull request on Jan 19, 2021
@FxKu modified the milestones: 1.6.1, 1.7 on Feb 15, 2021
@FxKu modified the milestones: 1.7, 1.8 on Mar 26, 2021
@toabi commented Apr 14, 2021

This would be cool. But doesn't it fail if the CRD structure is not equal to the ConfigMap structure?

I just tried to install it with Helm, which uses the default values.yaml, and I overrode everything that had the wrong type ("true" => true). But in the end the default values.yaml still creates these two entries in templates/operatorconfiguration.yaml:

```
error validating data: [
ValidationError(OperatorConfiguration.configuration.kubernetes): unknown field "spilo_allow_privilege_escalation" in do.zalan.acid.v1.OperatorConfiguration.configuration.kubernetes
ValidationError(OperatorConfiguration.configuration): unknown field "major_version_upgrade" in do.zalan.acid.v1.OperatorConfiguration.configuration]
```

It's a pity; I guess I'll have to go with the ConfigMap approach for now.

@lchdev commented Apr 14, 2021

(Posting my 2 cents on this PR since the associated issue has already been closed)

I'd like to give credit to the work done by @dalbani in providing a solution to all those "double default values" issues. I've installed the postgres-operator through Helm using the OperatorConfigurationCRD target, because using ConfigMap is deprecated.

But because I need to override some config parameters (that's the whole point of those values files), this forced me to:

  1. Keep a full copy of the values-crd.yaml file (with all parameters), even if only 3 or 4 are changed
  2. Update it every time a new value appears in a new release (or some default changes), otherwise the update fails because of incompatible types

If the ConfigMap approach is really "deprecated", the CRD approach should be favored and streamlined, without requiring the associated values-crd.yaml to be maintained manually.
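
With a single values.yaml shipped in the chart, an override file would only need the handful of parameters that actually change. A hypothetical example (the file name is made up; the two option names are existing operator settings, used purely for illustration):

```yaml
# my-values.yaml (hypothetical): keep only the overrides,
# not a full copy of the chart defaults
configTarget: "OperatorConfigurationCRD"
configKubernetes:
  enable_pod_antiaffinity: true
configGeneral:
  workers: 16
```

Applied with something like helm upgrade --install postgres-operator <chart-ref> -f my-values.yaml, chart upgrades would no longer require re-syncing a full copy of the defaults.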

@FxKu (Member) commented May 31, 2021

@dalbani can you rebase the branch? It's one of the next things I'd like to merge 😃

We then have to adjust the docs where values-crd.yaml is mentioned.

@dalbani (Contributor, Author) commented Jun 1, 2021

@FxKu: I have just rebased my branch.

@FxKu (Member) commented Jun 1, 2021

Thanks a lot. Could you change the following paragraph in the developer docs? My suggestion:

For the CRD-based configuration, please update the following files:
* the default [OperatorConfiguration](../manifests/postgresql-operator-default-configuration.yaml)
* the CRD's [validation](../manifests/operatorconfiguration.crd.yaml)
* the CRD's validation in the [Helm chart](../charts/postgres-operator/crds/operatorconfigurations.yaml)

Also add new options to the Helm chart's [values file](../charts/postgres-operator/values.yaml).
It follows the OperatorConfiguration CRD layout. Nested values will be flattened for the ConfigMap.
Last but not least, update the [ConfigMap](../manifests/configmap.yaml) manifest example as well.

And remove the mention of the CRD and the values-crd file in the quickstart. Then we can merge it.*

  • EDIT: after this is fixed ;)
```
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [
ValidationError(ConfigMap.data.cluster_labels): invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string",
ValidationError(ConfigMap.data.postgres_superuser_teams): invalid type for io.k8s.api.core.v1.ConfigMap.data: got "array", expected "string",
ValidationError(ConfigMap.data.protected_role_names): invalid type for io.k8s.api.core.v1.ConfigMap.data: got "array", expected "string",
ValidationError(ConfigMap.data.team_api_role_configuration): invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string"]
```

I can see that your helper function flattenValuesForConfigMap is not used in any of the templates. How did you use it? 😃

@FxKu (Member) commented Jun 1, 2021

OK, I think I got it working by adding fromYaml to the flattenValuesForConfigMap helper function -> `{{- range $key, $value := fromYaml . }}`. And in the ConfigMap template I use, for example:

```
{{ (include "flattenValuesForConfigMap" (toYaml .Values.configKubernetes)) | indent 2 }}
```

Or is there an easier way to implement it, @dalbani?

@dalbani (Contributor, Author) commented Jun 1, 2021

@FxKu: sorry for the missing use of flattenValuesForConfigMap in my latest push; I mixed things up.
It was indeed intended to be used as you mentioned.
I have also included your suggestion in _helpers.tpl.
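
For readers following along, here is a rough sketch of what a flattening helper along these lines could look like in _helpers.tpl, matching the include/indent usage shown above. It is an illustration based on this thread, not the exact merged code; note how boolean and numeric values would fall through every branch, which is precisely the bug reported further down:

```yaml
{{/* flattenValuesForConfigMap (sketch): turn a YAML-encoded map of options
     into flat key: "value" lines for a ConfigMap's data section */}}
{{- define "flattenValuesForConfigMap" }}
{{- range $key, $value := fromYaml . }}
  {{- if or (kindIs "string" $value) (kindIs "int" $value) }}
{{ $key }}: {{ $value | quote }}
  {{- else if kindIs "slice" $value }}
{{ $key }}: {{ join "," $value | quote }}
  {{- else if kindIs "map" $value }}
    {{- $pairs := list }}
    {{- range $k, $v := $value }}
      {{- $pairs = append $pairs (printf "%s:%s" $k $v) }}
    {{- end }}
{{ $key }}: {{ join "," $pairs | quote }}
  {{- end }}
  {{- /* bool and float64 values match none of the branches above and are dropped */}}
{{- end }}
{{- end }}
```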

@dalbani (Contributor, Author) commented Jun 2, 2021

Is the failure of the end-to-end test related to my changes, by the way?

@FxKu (Member) commented Jun 2, 2021

@dalbani no, not related. Unfortunately, one e2e test is very flaky on GitHub. Could you please do the two documentation updates that I mentioned before? Then we would be done 😃

@dalbani (Contributor, Author) commented Jun 2, 2021

@FxKu: I have just updated the documentation as you proposed.

@Jan-M (Member) commented Jun 2, 2021

Thanks for the effort put into improving the Helm deployment and manifest maintenance. Looks interesting too; I will approve, mostly based on the other positive feedback, given my own lack of Helm experience. It seems not to touch anything else.

@Jan-M (Member) commented Jun 2, 2021

👍

@sdudoladov (Member)

👍

@sdudoladov merged commit b300fca into zalando:master on Jun 3, 2021
@sdudoladov (Member)

@dalbani thank you for the effort 👍

@stefankuksenko commented Aug 31, 2021

The flattenValuesForConfigMap function just skips boolean parameters and doesn't append them to the ConfigMap.

I tried to use this chart with the default values.yaml, only changing the configTarget value to ConfigMap. In the rendered ConfigMap I don't see the boolean parameters at all.

Helm version: 3.6.3
k8s version: 1.21.2 (k3s)
chart version: 1.7.0

custom values:

```yaml
configTarget: "ConfigMap"
```

rendered ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-operator
  namespace: operators
  labels:
    app.kubernetes.io/instance: postgres-operator
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: postgres-operator
    helm.sh/chart: postgres-operator-1.7.0
  annotations:
    meta.helm.sh/release-name: postgres-operator
    meta.helm.sh/release-namespace: operators
data:
  aws_region: eu-central-1
  cluster_domain: cluster.local
  cluster_labels: 'application:spilo'
  cluster_name_label: cluster-name
  connection_pooler_default_cpu_limit: '1'
  connection_pooler_default_cpu_request: 500m
  connection_pooler_default_memory_limit: 100Mi
  connection_pooler_default_memory_request: 100Mi
  connection_pooler_image: 'registry.opensource.zalan.do/acid/pgbouncer:master-18'
  connection_pooler_mode: transaction
  connection_pooler_schema: pooler
  connection_pooler_user: pooler
  db_hosted_zone: db.example.com
  docker_image: 'registry.opensource.zalan.do/acid/spilo-13:2.1-p1'
  etcd_host: ''
  external_traffic_policy: Cluster
  logical_backup_docker_image: 'registry.opensource.zalan.do/acid/logical-backup:v1.7.0'
  logical_backup_job_prefix: logical-backup-
  logical_backup_provider: s3
  logical_backup_s3_access_key_id: ''
  logical_backup_s3_bucket: my-bucket-url
  logical_backup_s3_endpoint: ''
  logical_backup_s3_region: ''
  logical_backup_s3_secret_access_key: ''
  logical_backup_s3_sse: AES256
  logical_backup_schedule: 30 00 * * *
  major_version_upgrade_mode: 'off'
  master_dns_name_format: '{cluster}.{team}.{hostedzone}'
  minimal_major_version: '9.5'
  pam_role_name: zalandos
  pdb_name_format: 'postgres-{cluster}-pdb'
  pod_antiaffinity_topology_key: kubernetes.io/hostname
  pod_deletion_wait_timeout: 10m
  pod_label_wait_timeout: 10m
  pod_management_policy: ordered_ready
  pod_role_label: spilo-role
  pod_service_account_name: postgres-pod
  pod_terminate_grace_period: 5m
  postgres_superuser_teams: postgres_superusers
  protected_role_names: admin
  ready_wait_interval: 3s
  ready_wait_timeout: 30s
  repair_period: 5m
  replica_dns_name_format: '{cluster}-repl.{team}.{hostedzone}'
  replication_username: standby
  resource_check_interval: 3s
  resource_check_timeout: 10m
  resync_period: 30m
  role_deletion_suffix: _deleted
  secret_name_template: '{username}.{cluster}.credentials.{tprkind}.{tprgroup}'
  storage_resize_mode: pvc
  super_username: postgres
  target_major_version: '13'
  team_admin_role: admin
  team_api_role_configuration: 'log_statement:all'
  watched_namespace: '*'
```

@dalbani (Contributor, Author) commented Aug 31, 2021

Hmm, not good indeed that some values are missing.
And it's more than booleans actually: of all the value types involved (string, bool, float64, map, slice), float64 is missing as well 😱
I'll have a look.
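
One possible shape of the fix, under the same assumptions as the sketch above: handle lists and maps explicitly and let a final catch-all branch stringify every remaining scalar, so bool and float64 values can no longer fall through (an illustration, not the actual patch):

```yaml
{{- define "flattenValuesForConfigMap" }}
{{- range $key, $value := fromYaml . }}
  {{- if kindIs "slice" $value }}
{{ $key }}: {{ join "," $value | quote }}
  {{- else if kindIs "map" $value }}
    {{- $pairs := list }}
    {{- range $k, $v := $value }}
      {{- $pairs = append $pairs (printf "%s:%s" $k $v) }}
    {{- end }}
{{ $key }}: {{ join "," $pairs | quote }}
  {{- else }}
    {{- /* strings, bools and numbers (fromYaml parses numbers as float64) */}}
{{ $key }}: {{ $value | toString | quote }}
  {{- end }}
{{- end }}
{{- end }}
```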
