
Compatibility with containerd? #1265

Open · vikas027 opened this issue Aug 16, 2021 · 8 comments

Labels: enhancement (New feature or request) · future (Feature work that we haven't prioritized)

Comments

@vikas027

vikas027 commented Aug 16, 2021

I use the Bottlerocket AMI on EKS clusters, which uses containerd as the runtime and therefore has no Docker daemon or Docker socket.

Custom container actions fail with errors like the ones below. As a workaround, is there a way I can use a pre-pulled Docker image instead of the runner trying to build an image on the fly?

Build container for action use: '/runner/_work/_actions/aevea/commitsar/v0.16.0/Dockerfile'.
  /usr/local/bin/docker build -t 60e226:7de2787af7e04b038ce49eb6a1a987d8 -f "/runner/_work/_actions/aevea/commitsar/v0.16.0/Dockerfile" "/runner/_work/_actions/aevea/commitsar/v0.16.0"
  Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
  Warning: Docker build failed with exit code 1, back off 8.072 seconds before retry.
  /usr/local/bin/docker build -t 60e226:7de2787af7e04b038ce49eb6a1a987d8 -f "/runner/_work/_actions/aevea/commitsar/v0.16.0/Dockerfile" "/runner/_work/_actions/aevea/commitsar/v0.16.0"
  Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
  Warning: Docker build failed with exit code 1, back off 4.07 seconds before retry.
  /usr/local/bin/docker build -t 60e226:7de2787af7e04b038ce49eb6a1a987d8 -f "/runner/_work/_actions/aevea/commitsar/v0.16.0/Dockerfile" "/runner/_work/_actions/aevea/commitsar/v0.16.0"
  Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: Docker build failed with exit code 1
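
For context on the workaround question above: a container step can reference a pre-built registry image with the docker:// syntax, which skips the on-the-fly docker build. A minimal sketch, assuming commitsar publishes an aevea/commitsar image (the image reference and tag here are assumptions); note the runner still needs a reachable Docker daemon to run the container, so this sidesteps the build but not the missing docker.sock:

# .github/workflows/commitsar.yml (sketch)
name: commitsar
on: [pull_request]
jobs:
  commit-lint:
    runs-on: [self-hosted]
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # commitsar inspects the full commit history
      # docker:// runs a pre-built image; no docker build is attempted
      - uses: docker://aevea/commitsar:latest
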
@ethomson
Contributor

Thanks @vikas027 - I agree that this is something we need to improve. We're looking at improving our support for different containers and runtimes now, and this is something we'll take into account. We appreciate the feedback and hope to have some news here soon.

@jmorcar

jmorcar commented Aug 12, 2022

Any update on this? Running Docker-in-Docker is a security risk, and the Docker runtime is deprecated in Kubernetes-based cloud environments, so docker.sock will no longer be found.

There is a related issue about the alternative (deploying the Docker CLI without a Docker daemon instance); the conclusion there is that Docker always needs a daemon running with its docker.sock. This confirms there is no way to use Docker-in-Docker to pull images on nodes that have migrated from dockershim to containerd.

@tmehlinger

tmehlinger commented Mar 20, 2023

Dockershim was deprecated in Kubernetes 1.20 and removed in 1.24, following the guidance set by the Kubernetes team. This absolutely needs to be prioritized.

https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/
https://docs.aws.amazon.com/eks/latest/userguide/dockershim-deprecation.html

@jre21

jre21 commented Mar 20, 2023

@TingluoHuang To underscore the urgency here, Kubernetes' version policy is to support the three most recent minor releases, which are currently 1.24-1.26. In other words, the Docker runtime is no longer available in any Kubernetes version with upstream support.

@corrigat

corrigat commented Mar 25, 2023

Chiming in as an affected consumer, with EKS dropping K8s 1.22 in June. I'm reading what may be some misunderstandings of the capabilities of containerd and the desire to use Docker in runner pods. If runner pods were able to call docker and launch containers before, they will still be able to after upgrading to the latest K8s version. At least they will if you're using the summerwind-dind container, or if you built your own and borrowed the pieces that install dockerd and supervisord. The ARC dind containers are already launched with the privileged security context, so dockerd will still work as long as the node has Docker Engine installed and the RunnerDeployment mounts the socket and /var/lib/docker into the runner (see the sketch after this paragraph).
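
For anyone replicating this, here is a minimal sketch of the RunnerDeployment side, assuming the summerwind ARC CRDs; the volume names are made up, and dockerEnabled and the paths may need adjusting for your ARC version and node image:

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: host-docker-runner        # hypothetical name
spec:
  replicas: 1
  template:
    spec:
      repository: example-org/example-repo  # assumption: point at your repo
      dockerEnabled: false        # skip the dind sidecar; use the node's dockerd
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
        - name: docker-lib
          mountPath: /var/lib/docker
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
        - name: docker-lib
          hostPath:
            path: /var/lib/docker

This only keeps working while the node actually runs Docker Engine, which is exactly what disappears after the dockershim removal unless you install dockerd in the AMI yourself.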

$ kubectl version --short
Working in namespace default!
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.4
Kustomize Version: v4.5.7
Server Version: v1.25.6-eks-48e63af
...
$ kubectl exec -it corrigat-testing -- /bin/bash
builder@corrigat-testing:/$ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
...
builder@corrigat-testing:/$ docker run alpine:latest echo hello
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
63b65145d645: Pull complete
Digest: sha256:ff6bdca1701f3a8a67e328815ff2346b0e4067d32ec36b7992c1fdc001dc8517
Status: Downloaded newer image for alpine:latest
hello

The difference in K8s without dockershim is that we can no longer share the imagefs directory or runtime socket with the runner pods. This does not entirely break the ability to process workflow jobs, and even workflows building or using containers with Docker will continue to work, but without containerd support the effect is that image pulls in new and ephemeral runners will always require a full image fetch from the registry.

Mounting the Docker socket and imagefs into runner pods has been a great help: we have processed 2,325,917 workflow runs with ARC (accurate as of now), and saved at least two thirds of that count times our 3 GB image size in image pulls and the associated time. It would be fantastic to be able to continue that with containerd. Additionally, imagefs storage sizing will need to be reconsidered to accommodate the additional copies of container images stored locally in each pod's overlayfs on worker nodes.

I've played with just continuing to install Docker in our worker-node AMI, but kubelet doesn't try to manage anything Docker-related under disk pressure, so the node just fills up and dies (a possible stopgap is sketched below).
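
One stopgap sketch for that problem, since kubelet will not garbage-collect Docker's storage: a DaemonSet that periodically prunes unused images through the host's Docker socket. All names, the CLI image, the retention window, and the interval here are assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: docker-image-pruner       # hypothetical name
spec:
  selector:
    matchLabels:
      app: docker-image-pruner
  template:
    metadata:
      labels:
        app: docker-image-pruner
    spec:
      containers:
        - name: pruner
          image: docker:24-cli    # assumption: any image with the docker CLI works
          command: ["/bin/sh", "-c"]
          # Remove images unused for 72h, then sleep an hour and repeat.
          args:
            - while true; do docker image prune -af --filter "until=72h"; sleep 3600; done
          volumeMounts:
            - name: docker-sock
              mountPath: /var/run/docker.sock
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock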

Looking forward to updates here.

@piroinno

Any updates on this?

@artemry-nv

Any chance of seeing this feature soon?

@Davrosss

Would love to see this feature as well. For now we have reverted to EC2-based runners for container jobs.
