feat: deploy cloud-sql-proxy in its own deployment #340
Conversation
Force-pushed 2fb2d27 to 55866c2
Thank you @zaibon
Following up on our discussion on Discord:
a) Did you manage to make it work with the migration job as well?
b) Should we add inter-pod affinity to make sure the proxy is scheduled on the same node the controlplane is running on?
  name: {{ include "chainloop.sql-proxy.fullname" . }}
  labels:
    {{- include "chainloop.sql-proxy.labels" . | nindent 4 }}
spec:
the affinity code might look like:
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/component: controlplane
but I think it will only guarantee that one of the controlplane replicas shares a node with the proxy, not that both replicas do.
In any case I believe it is still worth it and we can just keep an eye on it. WDYT?
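For reference, the API server rejects a required pod-affinity term without a topologyKey, so a complete version of the block above, matching what ends up in the rendered manifest further down, would look something like:

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/component: controlplane
        topologyKey: kubernetes.io/hostname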
Signed-off-by: Christophe de Carvalho <christophe@archipelo.co>
Force-pushed 55866c2 to 5552b3e
a) Yes, I found the issue: the proxy needed to listen on 0.0.0.0 to accept connections from other pods. After that change, the migration job executed successfully.
b) It has been added.
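For context, the v1 proxy binds its TCP listener to 127.0.0.1 unless an explicit address is given in the -instances flag, so the fix boils down to the following (placeholder project/region/instance names; the real connection string is in the diff below):

command:
  - "/cloud_sql_proxy"
  # default form: binds 127.0.0.1 only, so other pods cannot reach it
  # - "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:5432"
  # bind all interfaces so the Service can route traffic to the proxy
  - "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:0.0.0.0:5432"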
--- /tmp/before.yaml 2023-09-07 18:04:03
+++ /tmp/after.yaml 2023-09-07 18:04:11
+---
+# Source: chainloop/templates/controlplane/service_sql-proxy.yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: foo-chainloop-sql-proxy
+ labels:
+ app.kubernetes.io/name: chainloop
+ helm.sh/chart: chainloop-1.15.0
+ app.kubernetes.io/instance: foo
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/part-of: chainloop
+ app.kubernetes.io/component: sql-proxy
+spec:
+ type: ClusterIP
+ ports:
+ - port: 5432
+ targetPort: 5432
+ protocol: TCP
+ name: tcp
+ selector:
+ app.kubernetes.io/name: chainloop
+ app.kubernetes.io/instance: foo
+ app.kubernetes.io/component: sql-proxy
---
# Source: chainloop/templates/cas/deployment.yaml
@@ -356,7 +380,7 @@
- name: chainloop
securityContext:
null
- image: "ghcr.io/chainloop-dev/chainloop/artifact-cas:v0.16.2"
+ image: "ghcr.io/chainloop-dev/chainloop/artifact-cas:v0.17.1"
imagePullPolicy:
ports:
- name: http
@@ -439,36 +463,10 @@
securityContext:
null
containers:
-
- - name: cloud-sql-proxy
- # It is recommended to use the latest version of the Cloud SQL proxy
- # Make sure to update on a regular schedule!
- image: gcr.io/cloudsql-docker/gce-proxy:1.28.0 # make sure to use the latest version
- command:
- - "/cloud_sql_proxy"
- # If connecting from a VPC-native GKE cluster, you can use the
- # following flag to have the proxy connect over private IP
- # - "-ip_address_types=PRIVATE"
-
- # By default, the proxy will write all logs to stderr. In some
- # environments, anything printed to stderr is considered an error. To
- # disable this behavior and write all logs to stdout (except errors
- # which will still go to stderr), use:
- - "-log_debug_stdout"
- - "-instances=archipelo-dev:europe-west1:XXXXXX=tcp:5432"
- securityContext:
- runAsNonRoot: true
- resources:
- limits:
- cpu: 250m
- memory: 512Mi
- requests:
- cpu: 250m
- memory: 512Mi
- name: chainloop
securityContext:
null
- image: "ghcr.io/chainloop-dev/chainloop/control-plane:v0.16.2"
+ image: "ghcr.io/chainloop-dev/chainloop/control-plane:v0.17.1"
imagePullPolicy:
ports:
- name: http
@@ -523,6 +521,72 @@
- name: gcp-secretmanager-serviceaccountkey
secret:
secretName: foo-chainloop-controlplane-gcp-secretmanager-serviceaccountkey
+---
+# Source: chainloop/templates/controlplane/deployment_sqlproxy.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: foo-chainloop-sql-proxy
+ labels:
+ app.kubernetes.io/name: chainloop
+ helm.sh/chart: chainloop-1.15.0
+ app.kubernetes.io/instance: foo
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/part-of: chainloop
+ app.kubernetes.io/component: sql-proxy
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: chainloop
+ app.kubernetes.io/instance: foo
+ app.kubernetes.io/component: sql-proxy
+ template:
+ metadata:
+ annotations:
+ kubectl.kubernetes.io/default-container: cloud-sql-proxy
+ labels:
+ app.kubernetes.io/name: chainloop
+ app.kubernetes.io/instance: foo
+ app.kubernetes.io/component: sql-proxy
+ spec:
+ affinity:
+ podAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ app.kubernetes.io/component: controlplane
+ topologyKey: kubernetes.io/hostname
+ serviceAccountName: chainloop-zaibon-controleplane
+ securityContext:
+ null
+ containers:
+
+ - name: cloud-sql-proxy
+ # It is recommended to use the latest version of the Cloud SQL proxy
+ # Make sure to update on a regular schedule!
+ image: gcr.io/cloudsql-docker/gce-proxy:1.33.10 # make sure to use the latest version
+ command:
+ - "/cloud_sql_proxy"
+ # If connecting from a VPC-native GKE cluster, you can use the
+ # following flag to have the proxy connect over private IP
+ # - "-ip_address_types=PRIVATE"
+
+ # By default, the proxy will write all logs to stderr. In some
+ # environments, anything printed to stderr is considered an error. To
+ # disable this behavior and write all logs to stdout (except errors
+ # which will still go to stderr), use:
+ - "-log_debug_stdout"
+ - "-instances=archipelo-dev:europe-west1:ardb-fresh-zaibon-7c3c491f=tcp:0.0.0.0:5432"
+ securityContext:
+ runAsNonRoot: true
+ resources:
+ limits:
+ cpu: 250m
+ memory: 512Mi
+ requests:
+ cpu: 250m
+ memory: 512Mi
---
# Source: chainloop/templates/cas/hpa.yaml
@@ -723,39 +787,13 @@
securityContext:
null
containers:
-
- - name: cloud-sql-proxy
- # It is recommended to use the latest version of the Cloud SQL proxy
- # Make sure to update on a regular schedule!
- image: gcr.io/cloudsql-docker/gce-proxy:1.28.0 # make sure to use the latest version
- command:
- - "/cloud_sql_proxy"
- # If connecting from a VPC-native GKE cluster, you can use the
- # following flag to have the proxy connect over private IP
- # - "-ip_address_types=PRIVATE"
-
- # By default, the proxy will write all logs to stderr. In some
- # environments, anything printed to stderr is considered an error. To
- # disable this behavior and write all logs to stdout (except errors
- # which will still go to stderr), use:
- - "-log_debug_stdout"
- - "-instances=archipelo-dev:europe-west1:ardb-fresh-zaibon-7c3c491f=tcp:5432"
- securityContext:
- runAsNonRoot: true
- resources:
- limits:
- cpu: 250m
- memory: 512Mi
- requests:
- cpu: 250m
- memory: 512Mi
- name: migrate
- image: "ghcr.io/chainloop-dev/chainloop/control-plane-migrations:v0.16.2"
+ image: "ghcr.io/chainloop-dev/chainloop/control-plane-migrations:v0.17.1"
imagePullPolicy:
args:
- migrate
- apply
- --url
- - "postgres://chainloop:0f9RxHXbX2QH2Hvs@127.0.0.1:5432/chainloop?sslmode=disable"
+ - "postgres://chainloop:0f9RxHXbX2QH2Hvs@foo-chainloop-sql-proxy:5432/chainloop?sslmode=disable"
- --dir
- file:///migrations
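With the sidecars removed, anything that previously dialed 127.0.0.1:5432 now reaches the proxy through the new Service. A minimal sketch of a client container pointing at it (the env var names here are hypothetical, for illustration; the host is the Service rendered above):

env:
  - name: DB_HOST   # hypothetical variable name
    value: foo-chainloop-sql-proxy
  - name: DB_PORT
    value: "5432"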
LGTM, just one comment about the requirement of the topologyKey
Thank you!
- labelSelector:
    matchLabels:
      app.kubernetes.io/component: controlplane
  topologyKey: kubernetes.io/hostname
Is the topology key required?
Yes, it is indeed; the API requires a topologyKey on required pod-affinity terms.
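For context, topologyKey defines the co-location domain: kubernetes.io/hostname pins the proxy to the same node as a controlplane replica. If node-level pinning ever proves too strict, a looser zone-level variant (assuming the nodes carry the standard zone label) would be:

podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app.kubernetes.io/component: controlplane
      topologyKey: topology.kubernetes.io/zone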
Thanks
Refs #325