
Running cloudsql-proxy as Kubernetes DaemonSet #49

Closed
ppawiggers opened this Issue Oct 7, 2016 · 3 comments


ppawiggers commented Oct 7, 2016

I'd like to run the cloudsql-proxy container as a DaemonSet instead of a sidecar in my pods, because I have multiple (different) pods on a node, which all need to connect to a Cloud SQL instance. So instead of:

volumes:
- emptyDir: {}
  name: cloudsql-sockets

I use:

volumes:
- hostPath:
    path: /cloudsql
  name: cloudsql-sockets

So the other pods can just mount the hostPath /cloudsql/ (read-only) to load the UNIX sockets.
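Put together, a minimal sketch of such a DaemonSet might look like the following. The image tag, instance connection name, and `-dir` flag are illustrative assumptions, not taken from this thread, and the `apiVersion` shown is the modern `apps/v1` (clusters from this era used `extensions/v1beta1`):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloudsql-proxy
spec:
  selector:
    matchLabels:
      app: cloudsql-proxy
  template:
    metadata:
      labels:
        app: cloudsql-proxy
    spec:
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.09   # placeholder tag
        command: ["/cloud_sql_proxy",
                  "-dir=/cloudsql",
                  "-instances=PROJECT:REGION:INSTANCE"]  # placeholder instance
        volumeMounts:
        - name: cloudsql-sockets
          mountPath: /cloudsql
      volumes:
      # Sockets land on the node filesystem so other pods can mount them read-only.
      - name: cloudsql-sockets
        hostPath:
          path: /cloudsql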

However, when I try to start the cloudsql-proxy container, it gives me this error:

Error syncing pod, skipping: failed to "StartContainer" for "cloudsql-proxy" with RunContainerError: "runContainer: Error response from daemon: mkdir /cloudsql: read-only file system"

According to the Kubernetes docs, when using hostPath, only root can write to it, so containers that want to write to the mounted hostPath must also run as root. Is this not the case for the cloudsql-proxy container?
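One way to test that would be forcing the proxy container to run as root via a securityContext. This is only a sketch of the idea; `runAsUser: 0` is an assumption about how the image is built, not something confirmed by the proxy's docs:

```yaml
containers:
- name: cloudsql-proxy
  securityContext:
    runAsUser: 0   # run as root so the container can write to the hostPath mount
```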

One solution would be using TCP sockets, but I prefer UNIX sockets.

apelisse (Contributor) commented Oct 7, 2016

May I ask why you prefer UNIX sockets over TCP sockets?

Here is how I would have done that:

  • Create sqlproxy pod (potentially with multiple tcp/instances)
  • Expose multiple ports from your pod (with instance name in the port name, as seen below)
  • Create one service per instance with the same port (or one service with multiple ports):
apiVersion: v1
kind: Service
metadata:
  name: sqlproxy-service-INSTANCENAME
spec:
  ports:
  - name: sqlport
    port: 3306
    targetPort: sqlproxy-port-INSTANCENAME
  selector:
    app: sqlproxy

Then, simply connect to the SQL proxy of your choice:

mysql -h sqlproxy-service-INSTANCENAME ...

You end up with only one sqlproxy pod running in the cluster, on one node (it doesn't matter which), and of course this kind of service is not accessible from the outside. You can scale the sqlproxy by increasing the number of replicas.
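A matching pod spec for that service might look like this sketch; the image tag, instance connection name, and port name are placeholders rather than anything from this thread:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sqlproxy
  labels:
    app: sqlproxy            # matches the Service selector above
spec:
  containers:
  - name: sqlproxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.09   # placeholder tag
    command: ["/cloud_sql_proxy",
              "-instances=PROJECT:REGION:INSTANCENAME=tcp:0.0.0.0:3306"]
    ports:
    - name: sqlport-a        # named port referenced by the Service targetPort
      containerPort: 3306
```

Note that Kubernetes port names are limited to 15 lowercase characters, so a long instance name would need to be shortened in the port name (`sqlproxy-port-INSTANCENAME` would typically be too long as written).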

On the other hand, if you absolutely want to use UNIX sockets and the DaemonSet, then this documentation shows you how to run the container in a privileged mode: http://kubernetes.io/docs/user-guide/security-context/. I'll try to get the documentation of HostPath updated to point to this URL.
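Per that security-context documentation, the relevant fragment would be along these lines (a sketch, assuming privileged mode is what unblocks writing to the hostPath mount):

```yaml
containers:
- name: cloudsql-proxy
  securityContext:
    privileged: true   # lets the container write to the hostPath volume
```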


ppawiggers commented Oct 11, 2016

I prefer UNIX sockets because I don't like managing (duplicate) ports. However, the solution you suggest, using named ports and attaching a service to them, would solve this problem. Thanks, I will close this issue.

@ppawiggers ppawiggers closed this Oct 11, 2016

pkyeck commented Apr 10, 2017

@apelisse I like your idea of having one service for all instances. Could you also post the YAML file for the pod? I'm currently trying to figure out how to expose different ports for the different instances. Thanks!

