Keep single values.yaml in operator chart while supporting ConfigMap & CRD #1224
Conversation
I like the idea of reducing it down to one values.yaml. It's something we can merge after the 1.6 release.
I have just rebased my branch on the latest 1.6 release.
This would be cool. But doesn't it fail if the CRD structure is not equal to the ConfigMap structure? I just tried to install it with Helm, which will use the default values.yaml… and I overrode everything which has the wrong type.
It's a pity, I guess I'll have to go with the ConfigMap approach for now.
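For illustration of the type mismatch discussed above (the key names below are only examples, not taken from this PR): a ConfigMap can only carry string values, while the OperatorConfigurationCRD target uses typed fields, so the same setting has to be written differently depending on the target:

# Typed form, as an OperatorConfigurationCRD deployment expects it
configKubernetes:
  enable_pod_antiaffinity: false        # boolean
  cluster_labels:
    application: spilo                  # map

# String form, as a ConfigMap-based deployment needs it
configKubernetes:
  enable_pod_antiaffinity: "false"      # string
  cluster_labels: "application:spilo"   # flattened into a single string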
(Posting my 2 cents on this PR since the associated issue has already been closed.) I'd like to give credit to the work done by @dalbani to provide a solution to all those "double default values" issues. I've installed the postgres-operator through Helm. But because I need to override some config parameters (that's the whole point of those values files), this forced me to:
If the ConfigMap approach is really "deprecated", the CRD approach should be favored and streamlined, not requiring one to manually maintain the associated
@dalbani, can you rebase the branch? It's one of the next things I'd like to merge 😃 We then have to adjust the docs where the
@FxKu: I have just rebased my branch.
Thanks a lot. Could you change the following paragraph in the developer docs? My suggestion:
And remove the mention of the CRD and the values-crd file in the quickstart. Then we can merge it.
I can see that your helper function
OK, I think I got it working by adding
Or is there an easier way to implement it, @dalbani?
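For reference, here is a minimal sketch of what such a flattening helper could look like; the helper name, the configGeneral section and the exact flattening rules are assumptions made for this sketch, not the code from this PR:

{{/* _helpers.tpl: turn a typed value into the string form a ConfigMap needs (sketch only) */}}
{{- define "postgres-operator.flattenValue" -}}
{{- $value := . -}}
{{- if kindIs "slice" $value -}}
{{- join "," $value -}}
{{- else if kindIs "map" $value -}}
{{- $pairs := list -}}
{{- range $k, $v := $value -}}
{{- $pairs = append $pairs (printf "%s:%s" $k (toString $v)) -}}
{{- end -}}
{{- join "," $pairs -}}
{{- else -}}
{{- $value -}}
{{- end -}}
{{- end -}}

{{/* configmap.yaml: render every entry of a values section through the helper, quoted as a string */}}
data:
{{- range $key, $value := .Values.configGeneral }}
  {{ $key }}: {{ include "postgres-operator.flattenValue" $value | quote }}
{{- end }}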
@FxKu: sorry for the missing use of
Is the failure of the end-to-end test related to my changes, by the way?
@FxKu: I have just updated the documentation as you proposed.
Thanks for the effort put into improving Helm deployment and manifest maintenance. Looks interesting too; I will approve mostly based on the other positive feedback, given my lack of Helm experience. It seems not to touch anything else.
👍 |
@dalbani thank you for the effort 👍
I tried to use this chart with the default values.yaml and only changed configTarget: "ConfigMap". Helm version: 3.6.3. Rendered ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-operator
namespace: operators
labels:
app.kubernetes.io/instance: postgres-operator
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgres-operator
helm.sh/chart: postgres-operator-1.7.0
annotations:
meta.helm.sh/release-name: postgres-operator
meta.helm.sh/release-namespace: operators
data:
aws_region: eu-central-1
cluster_domain: cluster.local
cluster_labels: 'application:spilo'
cluster_name_label: cluster-name
connection_pooler_default_cpu_limit: '1'
connection_pooler_default_cpu_request: 500m
connection_pooler_default_memory_limit: 100Mi
connection_pooler_default_memory_request: 100Mi
connection_pooler_image: 'registry.opensource.zalan.do/acid/pgbouncer:master-18'
connection_pooler_mode: transaction
connection_pooler_schema: pooler
connection_pooler_user: pooler
db_hosted_zone: db.example.com
docker_image: 'registry.opensource.zalan.do/acid/spilo-13:2.1-p1'
etcd_host: ''
external_traffic_policy: Cluster
logical_backup_docker_image: 'registry.opensource.zalan.do/acid/logical-backup:v1.7.0'
logical_backup_job_prefix: logical-backup-
logical_backup_provider: s3
logical_backup_s3_access_key_id: ''
logical_backup_s3_bucket: my-bucket-url
logical_backup_s3_endpoint: ''
logical_backup_s3_region: ''
logical_backup_s3_secret_access_key: ''
logical_backup_s3_sse: AES256
logical_backup_schedule: 30 00 * * *
major_version_upgrade_mode: 'off'
master_dns_name_format: '{cluster}.{team}.{hostedzone}'
minimal_major_version: '9.5'
pam_role_name: zalandos
pdb_name_format: 'postgres-{cluster}-pdb'
pod_antiaffinity_topology_key: kubernetes.io/hostname
pod_deletion_wait_timeout: 10m
pod_label_wait_timeout: 10m
pod_management_policy: ordered_ready
pod_role_label: spilo-role
pod_service_account_name: postgres-pod
pod_terminate_grace_period: 5m
postgres_superuser_teams: postgres_superusers
protected_role_names: admin
ready_wait_interval: 3s
ready_wait_timeout: 30s
repair_period: 5m
replica_dns_name_format: '{cluster}-repl.{team}.{hostedzone}'
replication_username: standby
resource_check_interval: 3s
resource_check_timeout: 10m
resync_period: 30m
role_deletion_suffix: _deleted
secret_name_template: '{username}.{cluster}.credentials.{tprkind}.{tprgroup}'
storage_resize_mode: pvc
super_username: postgres
target_major_version: '13'
team_admin_role: admin
team_api_role_configuration: 'log_statement:all'
watched_namespace: '*'
Hmm, not good indeed that some values are missing.
Following the discussion in #1197, I tried to find a way to have a single values.yaml in the operator Helm chart, while still being compatible with both configTarget: ConfigMap and configTarget: OperatorConfigurationCRD.

The main benefit is that an OperatorConfigurationCRD based deployment of the operator can now be made with Helm(file) when the chart is located in a repository or reachable via an HTTP URL. Up to now, the helm install command had to be run from the tree checked out of Git. In my situation, where all our deployments are centralised in Helm, that's not an option.

The second benefit is that only a single values.yaml has to be maintained. In this pull request, I copied values-crd.yaml over values.yaml, and you can see that there are indeed a few discrepancies (only cosmetic?) between them.

The consequence of this pull request is that OperatorConfigurationCRD based deployments become the norm. ConfigMap based deployments are described in the documentation as "deprecated", right? But it's of course still possible to deploy with a ConfigMap by passing the value configTarget=ConfigMap to Helm.
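For completeness, switching back to the deprecated target stays a one-line override in the values passed to Helm (the file name below is only an example):

# my-values.yaml: opt back into the deprecated ConfigMap-based configuration
configTarget: "ConfigMap"

which can then be applied with helm install -f my-values.yaml, or directly with --set configTarget=ConfigMap as mentioned above.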