How do I get the agent to talk to containerd.sock using tls certs? #3108
Comments
@dhop90 As far as I know, k8s doesn't support the Docker API. K8s removed Docker as a runtime. For Dozzle to work, it would need to implement the k8s API.
Thanks, that's what I figured. Put me down as a use case that would like to continue to use remote hosts, since the agent will not work in k8s.
Wait, remote hosts work? How? They have the same API. This is news to me. Can you share your configuration?
I mount a directory with certs on the pod running Dozzle at /certs and set:

DOZZLE_REMOTE_HOST=tcp://qnap.domain.duckdns.org:2375,tcp://kube0.domain.duckdns.org:2376,tcp://mini.domain.duckdns.org:2376,tcp://kube1.domain.duckdns.org:2376,tcp://kube2.domain.duckdns.org:2376,tcp://kube3.domain.duckdns.org:2376,tcp://kube4.domain.duckdns.org:2376,tcp://kube5.domain.duckdns.org:2376,tcp://kube6.domain.duckdns.org:2376,tcp://kube7.domain.duckdns.org:2376,tcp://kube8.domain.duckdns.org:2376,tcp://kube9.domain.duckdns.org:2376,tcp://kube10.domain.duckdns.org:2376,tcp://kube11.domain.duckdns.org:2376,tcp://pi-dns.domain.duckdns.org:2376,tcp://pi-gway.domain.duckdns.org:2376,tcp://pi-pool.domain.duckdns.org:2376,tcp://pi-homebridge.domain.duckdns.org:2376,tcp://pi-zeek.domain.duckdns.org:2376,tcp://dell.domain.duckdns.org:2376,tcp://thinkcentre.domain.duckdns.org:2376

On each node Docker itself is listening on TCP, e.g.:

@kube1:~ $ systemctl status docker

logs:
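For reference, a trimmed sketch of how that could be wired into the Dozzle pod spec, with only two hosts shown and an assumed Secret named docker-certs holding the certs (none of these names come from the issue):

# Illustrative pod-spec fragment: Dozzle UI with remote TLS-protected Docker hosts.
containers:
  - name: dozzle
    image: amir20/dozzle:latest
    env:
      - name: DOZZLE_REMOTE_HOST
        value: tcp://kube1.domain.duckdns.org:2376,tcp://kube2.domain.duckdns.org:2376
    volumeMounts:
      - name: certs
        mountPath: /certs        # the mounted certs directory mentioned above
        readOnly: true
volumes:
  - name: certs
    secret:
      secretName: docker-certs   # assumed Secret containing the TLS certs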
Ah nice! So you are not using the agent at all. I need to do a little testing to update k8s support. I am just a little confused because, based on your configuration, every node exposes the Docker API over TCP. Is this true? Does the Docker API actually work against containerd? If true, then you want to use swarm mode.
It would seem that containerd.sock works with the Docker API; I did not enable anything special. I did install cri-dockerd (https://github.com/Mirantis/cri-dockerd). This adapter provides a shim for Docker Engine that lets you control Docker via the Kubernetes Container Runtime Interface. I'll try using swarm mode.
I use FromEnv, which supports the standard Docker client env variables (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH, and DOCKER_API_VERSION).
You can try setting up one agent with those env variables. You probably have to mount the certs. If that works, then in theory swarm should work. This would actually give you a significant performance boost, as the agents would handle all the heavy work in swarm mode, and the inter-agent communication is all protobuf, which is much faster than JSON. I am still reading about the Docker Engine in k8s. I thought Docker was removed from k8s, so I am a little confused about what is happening. I'll have to set up k8s locally to really understand how it works. I won't be able to look at it too much today, but if k8s works with the Docker API, then agents should work too.
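A minimal sketch of that single-agent test, assuming one node's Docker endpoint and a Secret-mounted cert directory (hostname, Secret name, and paths are illustrative, not confirmed in the thread):

# Illustrative container spec: one Dozzle agent using the Docker client env vars.
containers:
  - name: dozzle-agent
    image: amir20/dozzle:latest
    args: ["agent"]
    env:
      - name: DOCKER_HOST
        value: tcp://kube1.domain.duckdns.org:2376   # assumed node endpoint
      - name: DOCKER_TLS_VERIFY
        value: "1"
      - name: DOCKER_CERT_PATH
        value: /certs                                # directory with ca.pem, cert.pem, key.pem
    ports:
      - containerPort: 7007                          # agent's default port
    volumeMounts:
      - name: certs
        mountPath: /certs
        readOnly: true
volumes:
  - name: certs
    secret:
      secretName: docker-certs                       # assumed Secret with client certs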
When creating a single deployment pod using the following env variable, I get these log messages:

time="2024-07-12T22:25:06Z" level=info msg="Dozzle version v8.0.5"

The above log messages repeat. What/where is tasks.dozzle configured? I created a dozzle service using this manifest:

apiVersion: v1

Any thoughts? I also tried creating pods with a DaemonSet, but I haven't figured out how to set DOCKER_HOST per pod using these env variables, and I'm getting this error.
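One common way to set DOCKER_HOST per pod in a DaemonSet (not something confirmed in this thread, just a sketch using the Kubernetes Downward API) is to expose the node's IP as an env variable and build the address from it:

# Sketch: per-node DOCKER_HOST in a DaemonSet via the Downward API.
# Port 2376 and the TLS settings are assumptions carried over from the earlier config.
containers:
  - name: dozzle-agent
    image: amir20/dozzle:latest
    args: ["agent"]
    env:
      - name: NODE_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP     # IP of the node this pod landed on
      - name: DOCKER_HOST
        value: tcp://$(NODE_IP):2376     # k8s expands $(NODE_IP) from the entry above
      - name: DOCKER_TLS_VERIFY
        value: "1"
      - name: DOCKER_CERT_PATH
        value: /certs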
tasks.dozzle is how Docker Swarm sets up cluster DNS; that's how I discover all the other instances. I think this would have to change for k8s. From my earlier comment, I meant you should first create an agent with those env variables. At the moment I am trying to set up k8s to play around with it, but it's no easy task. I am going to try it, but I am out of the office next week, so this will have to wait a little. :) But based on what you found, I would imagine UI --> multiple agents should work if you use DOZZLE_REMOTE_AGENT.
I am reading more about cri-dockerd. It sounds like this brings Docker back as a runtime. Is that right? Honestly, I am having a hard time setting up an environment to test k8s with containerd. Based on your screenshot above, you are running Docker, which is how Dozzle is working. But I am not too sure I understand whether this would work with containerd out of the box.
Hey, after learning a little more, I think you are overcomplicating things. With agents, I don't know k8s, but you just want to mount the Docker socket into each agent. So something like a DaemonSet mounting /var/run/docker.sock. I don't think you need the TCP + TLS setup at all. Maybe in the future I can figure out how to do this more natively in k8s. I am off for the day! Let me know if it works.
Not an expert, but I was successfully able to get this working in k8s with an agent. This only works with the Docker runtime, of course. My deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dozzle
  labels:
    app: dozzle
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dozzle
  template:
    metadata:
      labels:
        app: dozzle
    spec:
      containers:
        - name: dozzle
          image: amir20/dozzle:latest
          command: ["/dozzle"]
          args: ["agent"]
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: docker-socket
              mountPath: /var/run/docker.sock
      volumes:
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
            type: Socket
---
apiVersion: v1
kind: Service
metadata:
  name: dozzle-service
  labels:
    app: dozzle-service
spec:
  type: LoadBalancer
  selector:
    app: dozzle
  ports:
    - protocol: TCP
      port: 7007
      targetPort: 7007

Then I started it without k8s, in Docker, using:
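The exact command isn't shown here; as a rough, compose-style sketch of what that UI container could look like (the agent address is a placeholder for the k8s LoadBalancer/Service address, not something given in the thread):

# Assumed docker-compose sketch: Dozzle UI outside k8s, pointed at the agent
# exposed by the dozzle-service Service above.
services:
  dozzle:
    image: amir20/dozzle:latest
    environment:
      # Replace with the real LoadBalancer IP or hostname of dozzle-service
      - DOZZLE_REMOTE_AGENT=dozzle-service.example.com:7007
    ports:
      - 8080:8080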
Closing this, @dhop90. Will eventually also support k8s.
@amir20, you were correct, I was making it more complicated than it needed to be. Got it working with these manifests:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dozzle
  namespace: dozzle
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: dozzle
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.service: dozzle
    spec:
      containers:
        - env:
            - name: wud.watch
              value: "false"
            - name: DOZZLE_LEVEL
              value: debug
            - name: DOZZLE_REMOTE_AGENT
              value: kube0:7007,kube1:7007,kube2:7007,kube3:7007,kube4:7007,kube5:7007,kube6:7007,kube7:7007,kube8:7007,kube9:7007,kube10:7007,kube11:7007,dell:7007,mini:7007,pi-zeek:7007,pi-dns:7007,pi-pool:7007,pi-homebridge:7007,pi-gway:7007,thinkcentre:7007,pi-dns:7007
          image: amir20/dozzle:v8.0.5
          imagePullPolicy: IfNotPresent
          name: dozzle
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: dockersock
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - hostPath:
            path: /run/containerd/containerd.sock
          name: dockersock
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    kompose.cmd: ../manifests/kompose/kompose convert --controller daemonSet
    kompose.version: 1.33.0 (3ce457399)
  labels:
    io.kompose.service: dozzle
  name: dozzle-agent
  namespace: dozzle
spec:
  selector:
    matchLabels:
      io.kompose.service: dozzle-agent
  template:
    metadata:
      labels:
        io.kompose.network/agent-default: "true"
        io.kompose.service: dozzle-agent
    spec:
      containers:
        - args:
            - agent
          image: amir20/dozzle:v8.0.5
          name: dozzle-agent
          ports:
            - containerPort: 7007
              hostPort: 7007
              protocol: TCP
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: dockersock
      restartPolicy: Always
      volumes:
        - hostPath:
            path: /var/run/docker.sock
          name: dockersock
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
🚀 I am astonished by how much configuration there is in k8s. If you get a chance, I think an addition to the docs would be amazing.
I'm not able to get agents to work in my Kubernetes (k8s) environment. I'm using Docker with containerd=/run/containerd/containerd.sock and TLS certs, hence docker.sock is not available. In my scenario, all Docker communication is done over a TLS connection to containerd.sock. How do I get the agent to talk to containerd.sock using TLS certs?
Originally posted by @dhop90 in #3066 (comment)