
Add instructions on how to use with Kubernetes. #3

Merged
2 commits merged into GoogleCloudPlatform:master on Mar 14, 2016

Conversation

dlorenc (Contributor) commented Mar 9, 2016

cc @Carrotman42
Hey Kevin,

I added some instructions on how to use your proxy with Kubernetes. The main difference is that k8s disallows using the metadata server for auth, so we have to create a secret and mount it as a volume.

I had one question about the proxy container though, do you publish that anywhere other than the gcr.io/google_appengine namespace? We probably shouldn't document that since the App Engine team might change the naming. If not, we can just document how to build your own container.
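
For concreteness, the flow looks roughly like this — a minimal sketch, not the exact manifests from this PR; the secret name, mount path, binary path (/cloud_sql_proxy), and flags (-instances, -credential_file) are illustrative, and it assumes a service-account key downloaded as credentials.json:

# 1. Create a secret from a downloaded service-account JSON key.
kubectl create secret generic cloudsql-oauth-credentials \
    --from-file=credentials.json=$HOME/credentials.json

# 2. Run the proxy with the secret mounted as a volume.
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cloudsql-proxy-example
spec:
  containers:
  - name: cloudsql-proxy
    image: b.gcr.io/cloudsql-docker/gce-proxy
    command: ["/cloud_sql_proxy",
              "-instances=my-project:us-central1:my-instance=tcp:3306",
              "-credential_file=/secrets/cloudsql/credentials.json"]
    volumeMounts:
    - name: cloudsql-oauth-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
  volumes:
  - name: cloudsql-oauth-credentials
    secret:
      secretName: cloudsql-oauth-credentials
EOF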

jasonjei commented

The instructions are a nice touch, but I use CloudSQL with GKE/Kubernetes, and I don't need to supply credentials when using it from a Google container node. When you created your GKE cluster, did you enable the CloudSQL scopes/permissions? You won't have to provide GOOGLE_APPLICATION_CREDENTIALS if you enable the CloudSQL scope. If you already created the cluster, I think you just have to modify the template. You might want to include a note in your instructions to modify the cluster template, or to select the CloudSQL scope when creating the container cluster.
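
For reference, a sketch of what that looks like at cluster-creation time (the cluster name and zone are placeholders; check the flags against your gcloud version):

# Sketch: create a container cluster with the Cloud SQL admin scope enabled.
# Note: passing --scopes may replace the default scopes, so list any
# defaults you still need alongside sql-admin.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --scopes sql-admin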

dlorenc (Contributor, Author) commented Mar 10, 2016

Thanks for the info. It seems like the guidance is unclear on whether or not GKE clusters should be using credentials from the metadata server. This issue has some discussion:

kubernetes/kubernetes#8512

I'll try to get some clarity on this and update the docs to explain that you only need the credentials if your cluster doesn't already have the scope, or if you're running outside of GKE.

jasonjei commented

I personally think it would be a lot easier not having to manage secrets if the scope is enabled. This seems to be Google's preferred way here:

https://cloud.google.com/sql/docs/compute-engine-access#gce-connect-proxy

"Ensure that your Compute Engine instance has the sql-admin scope enabled."

"Your Compute Engine instance can be configured using Docker if needed.
Depending on how you will be connecting to your Compute Engine instance, you might need to enable some non-default scopes on the instance. Enabling scopes must be done at instance creation time, so make sure that you determine what scopes are needed for your instance before you create it. Learn more about scopes."

dlorenc (Contributor, Author) commented Mar 10, 2016

Here's some more guidance:
kubernetes/kubernetes#8867

It seems like they want to eventually block access to the metadata server from the pod, which makes sense from a design perspective, mainly because credentials from the metadata server apply to the whole node, not just one pod.
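
To make the node-versus-pod point concrete: today, any pod on a node can fetch a token for the node's service account straight from the metadata server — a sketch using the well-known metadata endpoint:

# From inside any pod on a GCE/GKE node, this returns an OAuth2 access
# token for the node's service account -- there is no per-pod distinction.
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"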

jasonjei commented

Thanks for the post on the issue. I don't think Google wants to eliminate the automatic setup of secrets through the metadata server; it looks like they just want to control it on a per-node or per-pod basis? In any case, the issue is more than 10 months old. Are you reading this as Google wanting to get rid of the "it just works" philosophy, or as wanting to give fine-grained access control over scopes?

jasonjei commented

Sorry, I had some reading comprehension issues on your original response :)

Here's the comment from 10 months ago that I was reading: "We would want a way for different pods to have different sets of scopes, without restricting scheduling. So, we could not use the single service account attached to each node, which has all scopes on it. We would need to give tokens to the pod which were generated via some other mechanism. This could be some sort of multi-tenant version of metadata, as yet not designed or implemented. But it might work better to use the kubernetes 'secrets' and 'serviceAccounts' mechanisms."

My interpretation of kubernetes/kubernetes#8867 from that comment is that they're just changing the way secrets are shared. I'm sure the Google command utilities will be updated so that "it just works" instead of forcing customers to change their setup to manually read credentials. Even if they block metadata, they could still pass the secrets as ENV variables for the pods to access; they would just need to modify the GCP tools using the gcloud auth libraries to auth with secrets passed the new way? Additionally, the commenter alluded to a multi-tenant version of metadata, yet to be designed or built. Making people configure credentials and such feels like the AWS way of doing things ;)

In any case, the issue is months old. Do we know if there's been any update or momentum to secure the metadata service?

dlorenc (Contributor, Author) commented Mar 10, 2016

Hey,

disclaimer: I actually work at Google, but not directly on Kubernetes, so my responses here shouldn't be taken as the absolute truth.

Here's my interpretation:

Most GCP client libraries attempt to automatically retrieve oauth2 credentials from the metadata server, which makes authenticating from GCE easy.

Kubernetes wants to disallow access to the metadata server, because credentials there are node-level, not pod-level. For example, it's not possible to allow one pod to access Cloud SQL, but disallow another.

Kubernetes has a "Secret" API that is recommended for use-cases like this, but it's a little more effort than simply using the metadata server when running on GCE/GKE.

Using the Secret API is probably considered a best practice right now, and might become the only option in the future.

I'm sure we'll figure out a way to make this easier in the future before completely disallowing metadata, but for now we might as well document the "best practice" method.
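
For the application side, the wiring looks roughly like this — a minimal sketch with placeholder names; it assumes a secret named gcp-credentials already exists and that the client library honors GOOGLE_APPLICATION_CREDENTIALS:

# Sketch: mount a secret and point Application Default Credentials at it,
# instead of relying on the metadata server (names/paths are placeholders).
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret
spec:
  containers:
  - name: app
    image: my-app-image
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secrets/gcp/credentials.json
    volumeMounts:
    - name: gcp-credentials
      mountPath: /secrets/gcp
      readOnly: true
  volumes:
  - name: gcp-credentials
    secret:
      secretName: gcp-credentials
EOF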

jasonjei commented

Got it. I am definitely not a Google employee, so it is good to have some internal insight. Just some random startup guy on the street! I guess the cafeteria discussions in MTV have been pointing this way...

That's very interesting. So I've been provisioning a service account for my docker-compose development setup. What I've been doing is setting the contents of the JSON credentials file as an ENV variable:

nginx:
  build: nginx
  ports:
    - "80:80"
    - "443:443"
  links:
    - app
    - middleware
  environment:
    - DNSDOCK_NAME=nginx
    - DNSDOCK_IMAGE=nginx
    - APP_HOST=http://app:3000/
    - MIDDLEWARE_HOST=http://middleware:9000/
    - "GOOGLE_SERVICE_ACCOUNT={ \"type\": \"service_account\", \"project_id\": \"XXXXXXXX\", \"private_key_id\": \"XXXXXXXXX\", \"private_key\": \"-----BEGIN PRIVATE KEY-----\\nXXXXXXXXXXXXXXXX=\\n-----END PRIVATE KEY-----\\n\", \"client_email\": \"XXXX@XXXX.iam.gserviceaccount.com\", \"client_id\": \"XXXXXX\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://accounts.google.com/o/oauth2/token\", \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/XXXXX%40XXXXXXXX\" }" 
    - DOWNLOAD_CERT=1

Then, in my Docker container, I have a bash script as my default CMD:

if [ -n "$GOOGLE_SERVICE_ACCOUNT" ] ; then
    echo "$GOOGLE_SERVICE_ACCOUNT" >/root/gcredentials.json
    gcloud auth activate-service-account --key-file=/root/gcredentials.json
fi

if [ -n "$DOWNLOAD_CERT" ] ; then
    if test -f "/etc/nginx/supersecret.crt"; then rm /etc/nginx/supersecret.crt;fi
    if test -f "/etc/nginx/supersecret.key"; then rm /etc/nginx/supersecret.key;fi

    gsutil cp gs://secrets/certs/supersecret.crt /etc/nginx/
    gsutil cp gs://secrets/certs/supersecret.key /etc/nginx/
fi

Apparently, the Google Application Default Credentials don't really work that well with the GCP command-line utilities... According to some Stack Overflow threads, GOOGLE_APPLICATION_CREDENTIALS is ignored by tools like gcloud and gsutil, and I actually have to issue the activate-service-account command. Would you kindly let your colleagues know about that, despite Google's docs stating that the ENV variable is the first thing checked when looking for credentials? 😆 (https://developers.google.com/identity/protocols/application-default-credentials#howtheywork)

If I understand you correctly, I should start modifying my GKE controllers to pass service accounts into my containers, using the Kubernetes Secrets API or an ENV variable? Somehow I like passing secrets via ENV variables because it makes it easier for me to test deployments locally (since I don't have Kubernetes running on my laptop; just docker-compose). There doesn't seem to be any security upside to using the Kubernetes Secrets API, since the pod can freely read the secrets volume anyway? A sketch of the ENV route follows below.
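
For what it's worth, Kubernetes can also surface a secret directly as an ENV variable — a sketch with placeholder names, mirroring the docker-compose setup above and assuming a cluster version that supports env valueFrom:

# Sketch: expose a secret key as an env var (names are placeholders).
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-env-secret
spec:
  containers:
  - name: app
    image: my-app-image
    env:
    - name: GOOGLE_SERVICE_ACCOUNT
      valueFrom:
        secretKeyRef:
          name: gcp-credentials
          key: credentials.json
EOF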

Anyway, thanks for your thoughts. Just some musings from a GKE customer.

dlorenc (Contributor, Author) commented Mar 14, 2016

Changed the docs to use the b.gcr.io/cloudsql-docker/gce-proxy image name.

Carrotman42 added a commit that referenced this pull request Mar 14, 2016
Add instructions on how to use with Kubernetes.
@Carrotman42 Carrotman42 merged commit d7d544a into GoogleCloudPlatform:master Mar 14, 2016
@kurtisvg kurtisvg mentioned this pull request Feb 25, 2021
elsbrock pushed a commit to elsbrock/cloudsql-proxy that referenced this pull request Sep 13, 2021