Staying alive on SIGTERM #86
What action do you want the Proxy process to take when it receives a
SIGTERM? From your post it seems that you want it to do nothing instead.
Can you configure whatever is sending the SIGTERM to send nothing, or some
dummy signal, instead?
I suppose we could add a flag which causes the Proxy to exit once it sees
there are no active connections after receiving the SIGTERM, but to be
honest it would likely be insufficient for many applications: unfortunately,
many applications do not use connection pooling, so even during the web
app's shutdown sequence the connection count through the Proxy may
temporarily dip to zero. It seems like it would be tricky to get the Proxy
to act correctly, and it would require a few awkward flags to do well.
Is it possible to arrange for that SIGTERM to just not happen? Since the
Proxy process is stateless it is totally fine to SIGKILL it (assuming
nothing is utilizing the "state" stored in the database connections made
through the process, of course).
In any case, if you have a simple proposal, please feel free to send a pull
request and I will be happy to take a look at it.
Thanks for the thoughtful reply! I'm not sure how to tell Kubernetes not to send a SIGTERM, but I'll investigate a little and get back to you.
If possible, I'd suggest writing a shell script that traps SIGTERM and emits a different signal to the cloudsql proxy.
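A minimal sketch of such a wrapper, assuming SIGINT is chosen as the substitute signal (the function name and the choice of signal are illustrative, not from this thread):

```shell
#!/bin/sh
# Hypothetical wrapper: run the given command (e.g. the cloudsql proxy)
# as a background child, and on SIGTERM send it SIGINT instead.
run_with_trap() {
  "$@" &                                   # start the wrapped command
  child=$!
  trap 'kill -INT "$child" 2>/dev/null' TERM
  wait "$child"                            # returns the child's exit status
}

# Usage (path and flags are placeholders):
# run_with_trap /cloud_sql_proxy -dir=/cloudsql -instances=...
```

Because the wrapper is PID 1 in the container, it receives the SIGTERM from Kubernetes and can decide what (if anything) to forward to the proxy.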
@jesseshieh I am facing the same issue as you. Were you able to solve it? If so, could you tell me how?
I haven't solved it yet, but @hfwang's suggestion sounds good to me.
It turned out that the entrypoint of my main container was not in exec form, so SIGTERM was not forwarded to nginx, which kept running until SIGKILL finally stopped it.
We have the same setup: a Kubernetes pod with a web app container plus a cloudsql proxy container. You can easily trap the SIGTERM signal in your deployment the following way: command: ["/bin/bash", "-c", "trap 'sleep 15; exit 0' SIGTERM; /cloud_sql_proxy -dir=/cloudsql -instances=..."] This delay ensures the web app is shut down before the cloudsql proxy container (e.g. during rolling updates). Previously you'd need a custom container for this.
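Expressed as a container spec fragment, that workaround looks roughly like this (the image name is a placeholder; the instance string is elided as in the comment above):

```yaml
# Sketch of the sidecar container described above.
containers:
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy   # placeholder image
    command: ["/bin/bash", "-c",
              "trap 'sleep 15; exit 0' SIGTERM; /cloud_sql_proxy -dir=/cloudsql -instances=..."]
```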
I'd like to see the proxy stop accepting new connections (but keep active ones alive). That way I can SIGTERM it and immediately start a new (version of the) proxy without interrupting service.
A preStop hook executes before the container is sent SIGTERM (see https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods). You can also add communication between containers using shared volumes (https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/). That could be used to tell the cloudsql proxy's preStop hook when to complete: create a file in the shared volume at the end of your web server's shutdown, and have the proxy's preStop hook wait in a sleep loop until that file exists before stopping.
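A sketch of that arrangement for the proxy container, assuming a shared volume mounted at /shared and a marker file named web-done (both the mount path and file name are hypothetical):

```yaml
# Hypothetical preStop hook: block until the web container has written
# /shared/web-done to the shared volume, then allow the proxy to stop.
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c",
                "while [ ! -f /shared/web-done ]; do sleep 1; done"]
```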
@park9140 or @mhindery, were you able to get either of your solutions working? Ultimately I was able to get a working graceful shutdown by:
I have separate preStop hooks on my webapp containers that are correctly sleeping to drain connections, so I originally thought I just needed cloud SQL proxy to not exit on SIGTERM. I would much prefer a cleaner solution like you mentioned above. Am I missing something about how to get those working? Thanks!
If you're going down the route of compiling your own Proxy, you might as
well just write Go code to catch the signal and handle it in some way there.
You can use the os/signal library to watch for an interrupt and handle it
that way. If the code is generic enough (and those on this issue seem to
like the functionality) I'm happy to accept a pull request.
See here for an example handler:
https://stackoverflow.com/questions/11268943/golang-is-it-possible-to-capture-a-ctrlc-signal-and-run-a-cleanup-function-in
I can't tell from the os/signal documentation whether it would trap SIGTERM
as well, but you can easily test it.
Thank you for the information. I had the same problem and got it to work with a sleep in preStop. Maybe there could be an environment variable or command-line parameter specifying a wait time before shutting down on SIGTERM?
At least for our use case, a graceful shutdown (stop listening for incoming connections and finish processing the current ones) on SIGTERM would solve the problem, as we use connection pooling in our application.
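That behavior (stop accepting new connections, drain the existing ones) could be sketched in Go along these lines; the listener address, the immediate shutdown, and the printed message are illustrative, not the proxy's actual code:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

func main() {
	// Illustrative listener standing in for the proxy's listen socket.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}

	var wg sync.WaitGroup
	done := make(chan struct{})

	// Accept loop: exits once the listener is closed.
	go func() {
		for {
			conn, err := ln.Accept()
			if err != nil {
				close(done) // listener closed: stop accepting
				return
			}
			wg.Add(1)
			go func(c net.Conn) {
				defer wg.Done()
				defer c.Close()
				// ... proxy the connection to Cloud SQL here ...
			}(conn)
		}
	}()

	// On SIGTERM (signal wiring elided here), close the listener so no
	// new connections are accepted, then wait for active ones to finish.
	ln.Close()
	<-done
	wg.Wait()
	fmt.Println("all connections drained, exiting")
}
```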
Simplest solution to stop TERM killing the proxy in Kubernetes is to set up the container with a signal trap, as shown above. But I agree that the ideal solution would be to implement this inside the proxy.
I'll close this thread; this will be resolved together with #128.
Hi, I'm wondering if it's possible to add an option to keep cloudsql-proxy from exiting on receiving a SIGTERM.
I'm running cloudsql-proxy on Kubernetes in a pod alongside a web app. When Kubernetes deletes a pod, it sends a SIGTERM to both cloudsql-proxy and my web app and then sends a SIGKILL 30 seconds later. Upon receiving the SIGTERM, my web app performs a graceful shutdown by draining the requests in flight, but cloudsql-proxy shuts down immediately. This means that the requests being drained fail if they need any more access to the database.
It'd be great if I could configure cloudsql-proxy to stay alive after receiving a SIGTERM so my web app can drain requests properly. Eventually, cloudsql-proxy can exit upon receiving a SIGKILL.