
Best practice for cloudsql proxy during migrations? #389

Closed
tobsch opened this issue Nov 27, 2018 · 4 comments


tobsch commented Nov 27, 2018

Hi there,

as a GCP user I am used to having the cloudsql proxy running alongside my rails apps to establish a connection to the cloudsql server.

When running rails migrations through kubernetes-deploy with the proxy as a sidecar, the migration finishes successfully and its container terminates, but the cloud sql proxy container stays alive, so the pod never completes.

I assume you also use GCP at Shopify. Is there any best practice for shutting down the proxy?

Tobias


ibawt commented Nov 28, 2018

oh we use a service to back the cloudsql connections with a dedicated proxy deployment.

To use it as a sidecar, hrm... that will be annoying, as it won't stop.


KnVerey commented Nov 29, 2018

oh we use a service to back the cloudsql connections with a dedicated proxy deployment.

To clarify, that service is not managed by kubernetes-deploy directly. We instead deploy a custom resource that manages both the cloudsql instance itself and the local proxy service.

We don't have any advice for you based on experience, since our setup isn't in fact the same, but here's one possibility I thought of: if you are running a version that lets you enable PID namespace sharing, you could run a second sidecar that scans the process tree to watch the rails container, and signals the proxy container to exit gracefully once the rails container is gone (and then exits gracefully itself). It's not something we've tried, but it might work. Sorry we don't have more practical advice for you.
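
To make that idea a bit more concrete, here is a rough sketch of what the shared-PID-namespace setup could look like (again, untested; shareProcessNamespace is the relevant pod-spec field, but the watcher container, its script, and the image placeholders are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: migration-shared-pid
spec:
  shareProcessNamespace: true  # containers can see and signal each other's processes
  restartPolicy: Never
  containers:
  - name: rails
    image: YOUR_IMAGE
    command: ["rails", "db:migrate"]
  - name: cloudsql-proxy
    image: YOUR_PROXY_IMAGE
  - name: proxy-watcher
    image: busybox
    command:
      - sh
      - -c
      - |
        # Illustrative only: wait for the ruby process to appear, wait for it to
        # finish, then signal the proxy so the pod can complete. Signalling across
        # containers needs matching users (or CAP_KILL) even with a shared PID namespace.
        until pgrep -x ruby > /dev/null; do sleep 2; done
        while pgrep -x ruby > /dev/null; do sleep 5; done
        # the [-] stops this script's own command line from matching the pattern
        pkill -f 'cloud[-]sql[-]proxy'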

KnVerey closed this as completed Nov 29, 2018

KnVerey commented Dec 4, 2018

FYI a KEP was recently merged that addresses this problem: kubernetes/community#2148. It mentions the (not ideal, but workable without PID namespace sharing) solution of using files on a shared volume to communicate lifecycle events between containers.
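
For illustration, that shared-volume approach could look roughly like this (the marker file name, wrapper scripts, and image placeholders are made up for the sketch, not taken from the KEP; the /cloud_sql_proxy -instances invocation is the v1 proxy style, and the proxy image needs a shell for the wrapper to work):

apiVersion: v1
kind: Pod
metadata:
  name: migration-shared-volume
spec:
  restartPolicy: Never
  volumes:
  - name: lifecycle
    emptyDir: {}   # scratch volume both containers can see
  containers:
  - name: migration
    image: YOUR_IMAGE
    volumeMounts:
    - name: lifecycle
      mountPath: /lifecycle
    # run the migration, then drop a marker file so the proxy wrapper knows to exit
    command: ["/bin/bash", "-c", "rails db:migrate; touch /lifecycle/done"]
  - name: cloudsql-proxy
    image: YOUR_PROXY_IMAGE
    volumeMounts:
    - name: lifecycle
      mountPath: /lifecycle
    # start the proxy in the background and stop it once the marker file appears
    command: ["/bin/sh", "-c", "/cloud_sql_proxy -instances=YOUR_INSTANCE=tcp:5432 & PID=$!; until [ -f /lifecycle/done ]; do sleep 2; done; kill $PID"]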


micke commented Jul 7, 2023

For those who might find this issue through a search engine:

We've solved this by doing two things:

  1. Enabling quitquitquit on the proxy's container.
  2. We've also released a gem that handles the situation where the proxy takes a bit longer than the application container to start. It adds a rake task called db:await that can be called before db:migrate; it waits for the database (proxy) to become available.

An example kubernetes job tying this together might look something like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: migrations
spec:
  template:
    spec:
      serviceAccountName: YOUR_SERVICE_ACCOUNT_NAME
      containers:
      - name: migration
        image: YOUR_IMAGE
        command: ["/bin/bash"]
        # db:await waits for the proxy to accept connections, db:migrate runs the
        # migrations, then curl tells the proxy's admin endpoint to shut down
        args: ["-c", "rails db:await db:migrate && curl -X POST localhost:9091/quitquitquit"]
        env: []
      - name: cloud-sql-proxy
        image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.4.0-alpine
        args:
          - "--private-ip"
          - "--structured-logs"
          - "--auto-iam-authn"
          - "--port=5432"
          - "--quitquitquit"
          - "DATABASE_NAME"
        securityContext:
          runAsNonRoot: true
      restartPolicy: Never
  backoffLimit: 0

Replace YOUR_SERVICE_ACCOUNT_NAME, YOUR_IMAGE and DATABASE_NAME with your own values.
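
To tie it together, one way to run the job and block until the migrations have finished (standard kubectl; migration-job.yaml is whatever file you save the manifest above as, and the job name matches it):

kubectl apply -f migration-job.yaml
kubectl wait --for=condition=complete job/migrations --timeout=10m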
