Add instructions on how to use with Kubernetes. #3
Conversation
The instructions are a nice touch, but I use CloudSQL with GKE/Kubernetes, and I don't need to supply credentials when using it from a Google container node. When you created your GKE cluster, did you enable the CloudSQL scopes/permissions? You won't have to provide credentials if the scope is enabled.
Thanks for the info. It seems like the guidance is unclear on whether GKE clusters should be using credentials from the metadata server. This issue has some discussion: I'll try to get some clarity on this and modify the docs to explain that you only need the credentials if your cluster doesn't already have the scope or if you're running off of GKE.
I personally think it would be a lot easier to not have to manage secrets if the scope is enabled. This seems to be Google's preferred way here: https://cloud.google.com/sql/docs/compute-engine-access#gce-connect-proxy "Ensure that your Compute Engine instance has the sql-admin scope enabled." "Your Compute Engine instance can be configured using Docker if needed."
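For concreteness, here's a sketch of how a cluster could be created with that scope enabled (the cluster name and zone are placeholders, not from this thread; `sql-admin` is gcloud's alias for the `sqlservice.admin` scope):

```sh
# Hypothetical example: create a GKE cluster with the Cloud SQL scope
# enabled so that pods can reach Cloud SQL via the node's credentials.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --scopes sql-admin
```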
Here's some more guidance: It seems like they want to eventually block access to the metadata server from the pod, which makes sense from a design perspective, mainly because credentials from the metadata server apply to the whole node, not just one pod.
Thanks for the post on the issue. I don't think Google wants to eliminate the automation of setting up secrets through the metadata server; it looks like they just want to control it on a per-node or per-pod basis? In any case, the issue is more than 10 months old. Are you reading this issue as Google wanting to get rid of the "it just works" philosophy, or as wanting to give fine-grained access control over scopes?
Sorry, I had some reading comprehension issues on your original response :) Here's the comment from 10 months ago that I was reading:

> We would want a way for different pods to have different sets of scopes, without restricting scheduling. So, we could not use the single service account attached to each node, which has all scopes on it. We would need to give tokens to the pod which were generated via some other mechanism. This could be some sort of multi-tenant version of metadata, as yet not designed or implemented. But it might work better to use the kubernetes 'secrets' and 'serviceAccounts' mechanisms.

My interpretation of kubernetes/kubernetes#8867 from that comment is that they're just changing the way secrets are shared. I'm sure the Google command utilities will be updated so that "it just works" instead of forcing customers to change their setup to manually read credentials. Even if they block the metadata server, they could still pass the secrets as ENV variables for the pods to access; they would just need to modify the GCP tools that use the gcloud auth libraries to authenticate using secrets passed the new way? Additionally, the commenter alluded to a multi-tenant version of metadata, yet to be designed or built.

Making people configure credentials and such feels like the AWS way of doing things ;) In any case, the issue is months old. Do we know if there's been any update or momentum on securing the metadata service?
Hey, disclaimer: I actually work at Google, but not directly on Kubernetes, so my responses here shouldn't be considered absolute truth. Here's my interpretation: most GCP client libraries attempt to automatically retrieve OAuth2 credentials from the metadata server, which makes authenticating from GCE easy. Kubernetes wants to disallow access to the metadata server because the credentials there are node-level, not pod-level; for example, it's not possible to allow one pod to access Cloud SQL but disallow another. Kubernetes has a "Secret" API that is recommended for use cases like this, but it takes a little more effort than simply using the metadata server when running on GCE/GKE. Using the Secret API is probably considered a best practice right now, and might become the only option in the future. I'm sure we'll figure out a way to make this easier before metadata access is disallowed completely, but for now we might as well document the "best practice" method.
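To illustrate the Secret-based approach, a minimal sketch (the secret name, key filename, and key path are placeholders, not from this thread):

```sh
# Hypothetical example: store a service-account key as a Kubernetes Secret
# so that pods can consume it without touching the metadata server.
kubectl create secret generic cloudsql-credentials \
    --from-file=credentials.json=/path/to/service-account-key.json
```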
Got it. I am definitely not a Google employee (just some random startup guy on the street!), so it's good to have some internal insight. I guess the cafeteria discussions in MTV have been pointing this way... That's very interesting. I've been provisioning a service account for my docker-compose development setup. What I've been doing is setting the contents of the JSON credentials file as an ENV variable:
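(The original snippet didn't survive in this thread; a minimal sketch of what such a setup might look like, assuming a hypothetical `GOOGLE_CREDENTIALS_JSON` variable and a docker-compose service named `app`:)

```yaml
# Hypothetical docker-compose snippet: pass the service-account JSON,
# exported on the host (e.g. export GOOGLE_CREDENTIALS_JSON="$(cat key.json)"),
# into the container as an environment variable.
services:
  app:
    build: .
    environment:
      GOOGLE_CREDENTIALS_JSON: ${GOOGLE_CREDENTIALS_JSON}
```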
Then, in my Docker container, I have a bash script as my default CMD:
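(That script didn't survive either; a plausible sketch, reusing the hypothetical `GOOGLE_CREDENTIALS_JSON` variable from above, with a placeholder file path:)

```bash
#!/bin/bash
# Hypothetical entrypoint: materialize the JSON key from the environment,
# point the Google auth libraries at it, then run the container's command.
set -e
mkdir -p /etc/google
echo "$GOOGLE_CREDENTIALS_JSON" > /etc/google/credentials.json
export GOOGLE_APPLICATION_CREDENTIALS=/etc/google/credentials.json
exec "$@"
```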
Apparently, the Google Application Default Credentials don't really work that well with the GCP command utilities, according to some Stack Overflow issues. If I understand you correctly, I should start modifying my GKE controllers to somehow pass service accounts into my containers, using the Kubernetes Secrets API or ENV variables? I like passing secrets via ENV variables because it makes it easier to test deployments locally (since I don't have Kubernetes running on my laptop; I'm just using docker-compose). There doesn't seem to be any security upside to using the Kubernetes API, since the pod can freely access the secrets volume anyway? Thanks for your thoughts. Just some musings from a GKE customer.
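(For completeness: Kubernetes can also expose a Secret to a pod as an environment variable rather than a volume, which would leave the entrypoint script above unchanged. A sketch, reusing the hypothetical names from earlier; the image name is a placeholder:)

```yaml
# Hypothetical pod snippet: expose the secret created earlier as an
# environment variable instead of mounting it as a volume.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app-image   # placeholder
    env:
    - name: GOOGLE_CREDENTIALS_JSON
      valueFrom:
        secretKeyRef:
          name: cloudsql-credentials
          key: credentials.json
```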
Changed the docs to use the b.gcr.io/cloudsql-docker/gce-proxy image name. |
cc @Carrotman42
Hey Kevin,
I added some instructions on how to use your proxy with Kubernetes. The main difference is that k8s disallows using the metadata server for auth, so we have to create a secret and mount it as a volume.
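(Not the exact manifest from this PR, but a sketch of the idea, using the `b.gcr.io/cloudsql-docker/gce-proxy` image mentioned above; the secret name, paths, and instance connection string are placeholders:)

```yaml
# Hypothetical pod snippet: mount the credentials Secret into the proxy
# container and point the proxy at it with -credential_file.
apiVersion: v1
kind: Pod
metadata:
  name: cloudsql-proxy
spec:
  containers:
  - name: cloudsql-proxy
    image: b.gcr.io/cloudsql-docker/gce-proxy
    command:
    - /cloud_sql_proxy
    - -instances=my-project:us-central1:my-db=tcp:3306
    - -credential_file=/secrets/cloudsql/credentials.json
    volumeMounts:
    - name: cloudsql-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
  volumes:
  - name: cloudsql-credentials
    secret:
      secretName: cloudsql-credentials
```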
I had one question about the proxy container, though: do you publish it anywhere other than the gcr.io/google_appengine namespace? We probably shouldn't document that one, since the App Engine team might change the naming. If not, we can just document how to build your own container.