
document how to run kind in a kubernetes pod #303

Open
BenTheElder opened this issue Feb 15, 2019 · 7 comments

Comments

@BenTheElder
Member

commented Feb 15, 2019

xref: #284
additionally these mounts are known to be needed:

      volumeMounts:
        - mountPath: /lib/modules
          name: modules
          readOnly: true
        - mountPath: /sys/fs/cgroup
          name: cgroup
  volumes:
    - name: modules
      hostPath:
        path: /lib/modules
        type: Directory
    - name: cgroup
      hostPath:
        path: /sys/fs/cgroup
        type: Directory

thanks to @maratoid

/kind documentation
/priority important-longterm

We probably need a new page in the user guide for this.

EDIT: Additionally, for any docker-in-docker usage, the Docker storage directory (typically /var/lib/docker) should be a volume. Many attempts at running kind in Kubernetes miss this one. An emptyDir volume is typically suitable for it.
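As a sketch, that storage mount can look like the following pod-spec fragment (the volume name "dind-storage" is illustrative, not from this thread):

```yaml
# Container level: mount a volume over Docker's storage directory.
      volumeMounts:
        - name: dind-storage
          mountPath: /var/lib/docker
# Pod spec level: emptyDir is discarded together with the pod.
  volumes:
    - name: dind-storage
      emptyDir: {}
```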

EDIT2: You also probably want to set the pod's DNS config to some upstream resolvers, so that your inner cluster's pods don't try to talk to the outer cluster's DNS, which is probably on a clusterIP and not necessarily reachable.

  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 1.1.1.1
      - 1.0.0.1
@fejta-bot


commented May 16, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@BenTheElder

Member Author

commented May 16, 2019

/remove-lifecycle stale

@BenTheElder

Member Author

commented Jul 2, 2019

This came up again in #677, and again today in another deployment.
/assign

@BenTheElder

Member Author

commented Jul 19, 2019

See #717 (comment) regarding possible inotify watch limits on the host and a workaround.

This issue may also apply to other Linux hosts (non-Kubernetes).
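Workarounds for inotify watch limits generally involve raising them on the host. A sketch of the relevant sysctl settings (the keys are standard Linux sysctls; the values here are illustrative, not taken from this thread):

```
# e.g. in /etc/sysctl.conf or an /etc/sysctl.d/ drop-in (illustrative values)
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```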

@radu-matei

Contributor

commented Aug 6, 2019

For future reference, here's a working pod spec for running kind in a pod:
(Add your own image)
(cc @BenTheElder - is this a sane pod spec for kind?)

That being said, there should also be documentation for:

  • why kind needs the volume mounts and what impact they have on the underlying node infrastructure
  • what happens when the pod is terminated before deleting the cluster (in the context of #658 (comment))
  • configuring garbage collection for unused images to avoid node disk pressure (#663)
  • anything else?
apiVersion: v1
kind: Pod
metadata:
  name: dind-k8s
spec:
  containers:
    - name: dind
      image: <image>
      securityContext:
        privileged: true
      volumeMounts:
        - mountPath: /lib/modules
          name: modules
          readOnly: true
        - mountPath: /sys/fs/cgroup
          name: cgroup
        - name: dind-storage
          mountPath: /var/lib/docker
  volumes:
  - name: modules
    hostPath:
      path: /lib/modules
      type: Directory
  - name: cgroup
    hostPath:
      path: /sys/fs/cgroup
      type: Directory
  - name: dind-storage
    emptyDir: {}
@howardjohn


commented Aug 8, 2019

Make sure you do kind delete cluster! See #759

@BenTheElder

Member Author

commented Aug 14, 2019

That's pretty sane. As @howardjohn notes, please make sure you clean up the top-level containers in that pod (i.e. kind delete cluster in an exit trap or similar). DNS may also give you issues.
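The exit-trap pattern can be sketched as a minimal bash entrypoint. This assumes `kind` is on PATH in the real pod; the kind commands are echoed here so the trap behavior is visible without a cluster:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Tears down the inner cluster; safe to run more than once.
cleanup() {
  # In a real entrypoint: kind delete cluster || true
  echo "cleanup: kind delete cluster"
}
# EXIT covers normal and error exits; TERM covers pod termination,
# so cleanup runs within the pod's termination grace period.
trap cleanup EXIT TERM

echo "setup: kind create cluster"
echo "run: tests against the cluster"
```

On pod deletion, kubelet sends SIGTERM and waits out terminationGracePeriodSeconds before SIGKILL, so the deletion must fit in that window.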

why kind needs the volume mounts and what impact they have on the underlying node infrastructure

  • /lib/modules is not strictly necessary, but a number of things want to probe its contents, and it's harmless to mount. For clarity, I would make this mount read-only. No impact.
  • cgroups are mounted because cgroup v1 containers don't nest cleanly; if we were just doing docker-in-docker, we wouldn't need this.

what happens when the pod is terminated before deleting the cluster (in the context of #658 (comment))

It depends on your setup; with these mounts, IIRC, the processes/containers can leak. Don't do this. Have an exit handler; deleting the containers should happen within the termination grace period.

configuring garbage collection for unused images to avoid node disk pressure (#663)

You shouldn't need this in CI; kind clusters should be ephemeral. Please, please use them ephemerally. There are a number of ways in which kind is not optimized for production long-lived clusters. For temporary clusters used during a test, this is a non-issue.

Also note that turning on disk eviction risks your pods being evicted based on the host's disk usage. There's a reason this is off by default. Eventually we will ship an alternative to make long-lived clusters better, but for now it's best not to depend on long-lived clusters or image GC.

anything else?

DNS (see above). Your outer cluster's in-cluster DNS servers are typically on a clusterIP that won't necessarily be visible to the containers in the inner cluster. Ideally, configure the "host machine" Pod's DNS to your preferred upstream DNS provider (see above).
