
docker disappeared #382

Open
g00nix opened this issue Mar 26, 2019 · 73 comments

Comments

g00nix commented Mar 26, 2019

Docker disappeared from microk8s:

root@kube:~# microk8s.docker
microk8s.docker: command not found
root@kube:~# ls /snap/bin/  
microk8s.config  microk8s.ctr  microk8s.disable  microk8s.enable  microk8s.inspect  microk8s.istioctl  microk8s.kubectl  microk8s.reset  microk8s.start  microk8s.status  microk8s.stop
root@kube:~# cat /etc/issue
Ubuntu 18.04.2 LTS \n \l

Is it still going to be used in the project? Is there an alternative for inspecting what kube is doing in the background?

If this was a planned change, is there documentation/release notes?

mattiasarro commented Mar 26, 2019

They have replaced docker daemon with containerd.

For me, this is a problem, since I was relying on exposing the Docker daemon on a TCP port. I did this by adding -H tcp://0.0.0.0:$PORT to /var/snap/microk8s/current/args/dockerd, which meant I could execute docker commands from outside the machine (with docker -H tcp://$HOST:$PORT).

Now I don't really know if I can connect to the containerd daemon the same way, and if so, how.
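For what it's worth, containerd speaks gRPC over a local unix socket rather than Docker's HTTP API, so the `-H tcp://` workflow does not carry over directly. Local inspection still works through the bundled ctr client (a sketch; flags and namespace defaults may differ between MicroK8s versions):

```shell
# List images known to MicroK8s' containerd via the bundled ctr client.
# microk8s.ctr is preconfigured to talk to the right containerd socket.
microk8s.ctr images ls

# List containers in the k8s.io namespace that kubelet uses.
microk8s.ctr -n k8s.io containers ls
```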

g00nix (Author) commented Mar 26, 2019

For me, this is a problem

I can't say it's a "problem" for me. It was just a surprise. We were also using that docker to upload images into the registry, but we just went with apt install docker for that purpose.

Well, if that is the case, I guess it's up to the devs if they close the ticket or not.

ktsakalozos (Member) commented Mar 26, 2019

Hi,

Indeed, in the 1.14 release containerd replaced dockerd. We gave a heads-up on this change some time ago in this topic https://discuss.kubernetes.io/t/containerd-and-security-updates-on-the-next-microk8s-release/4844 and on the #microk8s channel at https://k8s.slack.com/. For those who cannot make the transition, dockerd is available from the 1.13 channel:

snap install microk8s --classic --channel=1.13/stable

One of the reasons why we moved ahead with containerd is that a few users wanted to deploy MicroK8s next to a local dockerd, and that was causing unexpected issues. As @gun1x mentions, this should not be a problem anymore.

@mattiasarro, I am not aware of how you are using dockerd. Could you provide more info so we can offer some suggestions?

Apologies for any inconvenience we may have caused.

termie commented Mar 26, 2019

This also took me a bit by surprise; I don't think I had read that snaps would auto-update, and I would have pinned a specific version channel if I had. The microk8s docker did indeed cause some conflicts for me when I was first setting it up (and has some restrictions that make using it as a normal docker install kind of weird), so I'm generally for this change, but it has left a bunch of my systems in a (so far) unrecoverable state. I am hunting through logs to figure out why kubelet-daemon refuses to start after a reboot and will file another issue.

carlososiel commented Mar 26, 2019

I cannot configure containerd correctly to use insecure registries after this update. I configured containerd as the documentation says, and kubelet keeps trying to pull over https instead of http.

mattiasarro commented Mar 27, 2019

Thanks for the quick response @ktsakalozos!

I am developing on macOS and running microk8s on an Ubuntu-based PC, to which I connect directly over ethernet. I have enabled the private registry on microk8s (microk8s.enable registry) and would like to push Docker images to it. So I had exposed the dockerd daemon on a TCP port by editing /var/snap/microk8s/current/args/dockerd and adding a line -H 0.0.0.0:$PORT to it, which made it available on $PORT of my Ubuntu instance. That meant I could do the following on my Mac:

docker -H $UBUNTU_MACHINE_IP:$PORT build -t mytag .
docker -H $UBUNTU_MACHINE_IP:$PORT tag mytag localhost:32000/mytag
docker -H $UBUNTU_MACHINE_IP:$PORT push localhost:32000/mytag

and now the image is available to the microk8s cluster.

Do you know if it's possible to expose containerd in a similar way and push images to it over the network?

ktsakalozos (Member) commented Mar 27, 2019

@mattiasarro in short, you will have to use the image-building toolchain of your liking and upload the image you build into the insecure registry MicroK8s provides.

Here is a way to do this. I have on my host a VM with MicroK8s:

> multipass list
Name                    State             IPv4             Release
microk8s-vm             RUNNING           10.141.241.134   Ubuntu 18.04 LTS

In that VM I have the registry enabled:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get all -n container-registry
NAME                           READY   STATUS    RESTARTS   AGE
pod/registry-7d65c894c-d88jh   1/1     Running   0          39m

NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/registry   NodePort   10.152.183.66   <none>        5000:32000/TCP   39m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/registry   1/1     1            1           39m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/registry-7d65c894c   1         1         1       39m

On the host machine I have docker running (installed via apt install docker.io); you can use any image-building toolchain. The dockerd running on the host should trust the registry on the microk8s-vm. To do this:

echo "{
  \"insecure-registries\" : [\"10.141.241.134:32000\"]
}" | sudo tee /etc/docker/daemon.json

systemctl restart docker

Note that we use the IP of the VM as this is where the registry is.
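The same file can be written without the escaped quotes by using a quoted heredoc; a small sketch (writing to a scratch path here for illustration; the real target is /etc/docker/daemon.json and needs sudo):

```shell
# Equivalent daemon.json written via a quoted heredoc (no escaped quotes).
# Using a scratch path for illustration; the real target is /etc/docker/daemon.json.
conf="$(mktemp -d)/daemon.json"
cat > "$conf" <<'EOF'
{
  "insecure-registries" : ["10.141.241.134:32000"]
}
EOF
# Sanity-check the JSON before restarting dockerd with it.
python3 -m json.tool "$conf" > /dev/null && echo "daemon.json OK"
```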

To put an image in the registry we:

docker pull busybox
docker tag busybox 10.141.241.134:32000/my-busybox
docker push 10.141.241.134:32000/my-busybox

Now that the registry is populated we can create a pod in MicroK8s with:

multipass@microk8s-vm:~$ cat > bbox.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: localhost:32000/my-busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
multipass@microk8s-vm:~$ microk8s.kubectl apply -f ./bbox.yaml 
pod/busybox created

If you apt-get install docker.io inside the VM and configure the docker daemon to a) trust the registry at localhost:32000 and b) listen on 0.0.0.0:{A_PORT}, you will have exactly the same behavior as before.
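For step (b), a possible sketch, assuming the stock Ubuntu docker.io packaging (where dockerd is launched by systemd with -H fd://, which conflicts with a "hosts" key in daemon.json). Port 2375 is illustrative, and this exposes an unauthenticated API, so only do it on a trusted network:

```shell
# Make the VM's dockerd listen on TCP as well as the default unix socket.
# Override the systemd unit rather than daemon.json to avoid the -H conflict.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```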

g00nix (Author) commented Mar 27, 2019

We gave the heads up on this change in the on this topic https://discuss.kubernetes.io/t/containerd-and-security-updates-on-the-next-microk8s-release/4844 and on the #microk8s channel at https://k8s.slack.com/ some time ago.

Do you think it is possible to document changes in the release history on git? People tend to pick their preferred platforms (for example Facebook, Telegram, IRC), so they might miss the news if it is announced on Slack, and I also doubt everybody is subscribed to discuss.kubernetes.io. Documenting changes within git would help a lot, since everybody goes back to the git repo when there is an issue.

I think this would be great as the user base of microk8s grows. It's a very cool product and I think that helping the community with extended documentation would help microk8s become even better.

RemcoPerlee commented Mar 29, 2019

I can't seem to configure this correctly. I've used the steps from @ktsakalozos, which seemed to work; however, the Pod stays in the status Waiting: ContainerCreating, and I have no idea what's going on behind the scenes...

Any way I can debug this (what logs to look at, etc)?

The kubectl logs state the following:

Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: ContainerCreating

When I try /snap/bin/microk8s.ctr image pull localhost:32000/my-busybox
I get an error, but it might be my lack of understanding of ctr:
ctr: failed to resolve reference "localhost:32000/my-busybox": object required
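The `object required` error typically means the image reference has no tag; unlike docker, ctr does not assume :latest. A sketch of the pull with an explicit tag (--plain-http because the registry is insecure; exact flags may vary by containerd version):

```shell
# ctr needs a fully qualified reference, including the tag.
microk8s.ctr image pull --plain-http localhost:32000/my-busybox:latest
```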

g00nix changed the title docker dissapeared → docker dissappeared Mar 29, 2019
g00nix changed the title docker dissappeared → docker disappeared Mar 29, 2019

g00nix (Author) commented Mar 29, 2019

Quick offtopic: now everybody knows I am bad with grammar. 🤣

RemcoPerlee commented Mar 29, 2019

Well, I gave up and decided to roll back to 1.13, but now my cluster won't boot again. Can I state here that I really dislike breaking changes like this?

ktsakalozos (Member) commented Mar 29, 2019

@RemcoPerlee could you run microk8s.inspect and attach the produced tarball here? For pods that do not start, please share the output of microk8s.kubectl logs and microk8s.kubectl describe.

RemcoPerlee commented Mar 29, 2019

Let's see (I did try to reinstall it all, so this is a relatively clean setup - I hope):

Logs for Busybox:
microk8s.kubectl logs busybox
Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: ContainerCreating

Describe of the pod:

Name:               busybox
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               jarvis/192.168.1.120
Start Time:         Fri, 29 Mar 2019 20:57:54 +0000
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"busybox","namespace":"default"},"spec":{"containers":[{"command":["sl...
Status:             Pending
IP:
Containers:
  busybox:
    Container ID:
    Image:         localhost:32000/my-busybox
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z9grs (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-z9grs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z9grs
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m53s  default-scheduler  Successfully assigned default/busybox to jarvis
  Normal  Pulling    2m53s  kubelet, jarvis    Pulling image "localhost:32000/my-busybox"

[inspection-report-20190329_210125.tar.gz](https://github.com/ubuntu/microk8s/files/3024538/inspection-report-20190329_210125.tar.gz)

RemcoPerlee commented Apr 1, 2019

@ktsakalozos hey, any ideas or should I do a full reinstall of the server?

termie commented Apr 2, 2019

ktsakalozos (Member) commented Apr 2, 2019

@termie @RemcoPerlee can you please provide me with instructions on exactly what you did so I can reproduce the issue you are facing? Please be as detailed as you possibly can. Also attach any yaml manifests you apply to expose this error. Thank you.

MingyiZhang commented Apr 3, 2019

Hi @ktsakalozos, I want to remove a docker image from the private registry at localhost:32000. If there is no microk8s.docker, how can I do that?

g00nix (Author) commented Apr 3, 2019

Hi @ktsakalozos, I want to remove a docker image from the private registry at localhost:32000. If there is no microk8s.docker, how can I do that?

Did you read every comment within this issue? The answer is written above.

RemcoPerlee commented Apr 3, 2019

@ktsakalozos I understand why you're asking this, and thanks in advance. It is, however, pretty difficult to provide that information. The best I can come up with is the most recent action, where I uninstalled microk8s and docker.io. I restarted more or less from scratch, using all the defaults and the instructions above to get busybox running. The logs for that are attached above. Unless you can come up with something very clever, I'm about to give this up as a lost cause and reinstall the entire machine :(

MingyiZhang commented Apr 3, 2019

Hi @ktsakalozos, I want to remove a docker image from the private registry at localhost:32000. If there is no microk8s.docker, how can I do that?

did you read every comment within this issue? the answer is written above.

Could you please be more specific? I only know how to push images to the registry, but how do I remove them?

ktsakalozos (Member) commented Apr 3, 2019

Hi @MingyiZhangQimia, the registry is nothing special; you would use it as you would any private registry. Here is a link from Google: https://stackoverflow.com/questions/37033055/how-can-i-use-the-docker-registry-api-v2-to-delete-an-image-from-a-private-regis

ktsakalozos (Member) commented Apr 3, 2019

@RemcoPerlee I am sorry. I cannot reproduce the issue you are getting. :(

RemcoPerlee commented Apr 3, 2019

@ktsakalozos no worries, thanks for trying. I don't get it either; it must be something with the machine, though it's almost default. Going to plan a day to restart from scratch.

MingyiZhang commented Apr 4, 2019

@ktsakalozos Thank you!

RemcoPerlee commented Apr 5, 2019

@ktsakalozos ok, did a full reinstall of the server, defaults all the way, Ubuntu 18.04.

Some initial config I need:

apt install avahi-daemon nfs-common docker.io
apt-get install iptables-persistent
sudo iptables -P FORWARD ACCEPT
iptables-save

Jenkins:

apt install openjdk-8-jdk
wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
apt update
apt install jenkins

MicroK8s:

snap install microk8s --classic
microk8s.enable dashboard registry dns

docker:

echo "{
  \"insecure-registries\" : [\"192.168.1.120:32000\"]
}" | sudo tee /etc/docker/daemon.json
systemctl restart docker
sudo groupadd docker
sudo usermod -aG docker remco

testing:

docker pull busybox
docker tag busybox 192.168.1.120:32000/my-busybox
docker push 192.168.1.120:32000/my-busybox

made a bbox.yaml, containing:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: localhost:32000/my-busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

kubectl apply -f ./bbox.yaml

and checked the dashboard at: https://jarvis.local:16443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default

same result: ContainerCreating

kubectl logs busybox
Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: ContainerCreating

kubectl describe:

Name:               busybox
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               jarvis/192.168.1.120
Start Time:         Fri, 05 Apr 2019 19:39:50 +0000
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"busybox","namespace":"default"},"spec":{"containers":[{"command":["sl...
Status:             Pending
IP:
Containers:
  busybox:
    Container ID:
    Image:         localhost:32000/my-busybox
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gz792 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-gz792:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gz792
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  9m13s  default-scheduler  Successfully assigned default/busybox to jarvis
  Normal  Pulling    9m13s  kubelet, jarvis    Pulling image "localhost:32000/my-busybox"

inspection-report-20190405_195447.tar.gz

ktsakalozos (Member) commented Apr 6, 2019

@RemcoPerlee can you share the logs of the registry? I want to see if we get a request from containerd to pull the image. Could you also run kubelet with -v=4 so as to collect more logs? Can you also share your /etc/hosts? Finally, is there anything special about this node you can think of?
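For anyone following along, one way to raise kubelet verbosity in MicroK8s is to append the flag to its args file and restart the kubelet service (a sketch; the snap service name is an assumption and may differ between releases):

```shell
# Append -v=4 to kubelet's arguments and restart it to collect verbose logs.
echo '-v=4' | sudo tee -a /var/snap/microk8s/current/args/kubelet
sudo systemctl restart snap.microk8s.daemon-kubelet
# Follow the kubelet logs:
journalctl -u snap.microk8s.daemon-kubelet -f
```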

etheleon commented May 1, 2019

Hi @MingyiZhangQimia,

I also ran into a similar problem of wanting to delete a particular image. Here's how I solved it; hope it helps.

First, list the repositories and associated tags:

curl -k -s -X GET http://localhost:32000/v2/_catalog \
| jq '.repositories[]'  \
| sort \
| xargs -I _ curl -s -k -X GET http://localhost:32000/v2/_/tags/list

which returns:

{"name":"<repository_name>","tags":["<tag1>", "<tag2>"]}

Next, get the image's digest:

curl -v \
-H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
-X HEAD localhost:32000/v2/<repository>/manifests/<tag1>

Example output:

Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 32000 (#0)
> HEAD /v2/<repo>/manifests/latest HTTP/1.1
> Host: localhost:32000
> User-Agent: curl/7.47.0
> Accept: application/vnd.docker.distribution.manifest.v2+json
>
< HTTP/1.1 200 OK
< Content-Length: 7470
< Content-Type: application/vnd.docker.distribution.manifest.v2+json
< Docker-Content-Digest: sha256:d971fc1f9473c43ae330948190420a557c65018b5386b78aced0e6c9103c10983
< Docker-Distribution-Api-Version: registry/2.0
< Etag: "sha256:d971fc1f9473c43ae330948190420a557c65018b5386b78aced0e6c9103c10983"
< X-Content-Type-Options: nosniff
< Date: Wed, 01 May 2019 07:40:49 GMT

Keep the value of Docker-Content-Digest, i.e.: sha256:d971fc1f9473c43ae330948190420a557c65018b5386b78aced0e6c9103c10983
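As the warning above suggests, `-I` is the cleaner way to fetch the headers, and the digest can then be extracted mechanically. A sketch, shown against a canned response here so the parsing is visible (in practice, pipe the curl output into the awk step):

```shell
# Extract the Docker-Content-Digest header from a registry response.
# "headers" holds a canned response for illustration; in practice use:
#   headers=$(curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
#       localhost:32000/v2/<repository>/manifests/<tag>)
headers='HTTP/1.1 200 OK
Docker-Content-Digest: sha256:d971fc1f9473c43ae330948190420a557c65018b5386b78aced0e6c9103c10983
Docker-Distribution-Api-Version: registry/2.0'

# Split on ": " so the sha256:... value stays intact in field 2.
digest=$(printf '%s\n' "$headers" | awk -F': ' '/^Docker-Content-Digest:/ {print $2}')
echo "$digest"
```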

Send the DELETE request:

curl -v \
-H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
-X DELETE localhost:32000/v2/<repository>/manifests/<digest>

NOTE: You might get an error message:

{"errors":[{"code":"UNSUPPORTED","message":"The operation is unsupported."}]}

Sadly, deletion is not enabled in 1.14 by default.

You'll have to edit the registry's deployment to include REGISTRY_STORAGE_DELETE_ENABLED="yes":

kubectl edit deployment registry -n container-registry
    spec:
      containers:
      - env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: "yes"
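If you prefer not to use the interactive editor, the same environment variable can be set with a single command (a sketch using kubectl's set env subcommand):

```shell
microk8s.kubectl -n container-registry set env deployment/registry REGISTRY_STORAGE_DELETE_ENABLED=yes
```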

Rerun the DELETE request:

curl -v \
-H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
-X DELETE localhost:32000/v2/<repository>/manifests/<digest>

Finally, check that the image is deleted:

curl -k -s -X GET http://localhost:32000/v2/_catalog \
| jq '.repositories[]'  \
| sort \
| xargs -I _ curl -s -k -X GET http://localhost:32000/v2/_/tags/list

gousse commented May 2, 2019

Like @etheleon said, you have to add the env REGISTRY_STORAGE_DELETE_ENABLED="yes",

but there is an easier way to manipulate the registry after this:
you can also use the reg tool: https://github.com/genuinetools/reg

To list images:
$ reg ls -f localhost:32000

To delete an image:
$ reg rm -f localhost:32000/my-busybox

ktsakalozos (Member) commented May 2, 2019

Thank you @etheleon, @gousse. Do you think we should run with the deletion option enabled by default? If not please -1 the PR above.

@gousse we could pull in the reg binary during the microk8s.enable registry so that it is available as a microk8s.reg command. We do something very similar in the case of linkerd: https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/enable.linkerd.sh#L17

gousse commented May 2, 2019

@ktsakalozos I think it is a great idea to ship the reg tool with the registry plugin.
Manipulating the registry is far easier with it!

ktsakalozos (Member) commented May 2, 2019

@gousse, would you (or anyone else) be interested in providing a PR? I am here to help.

etheleon commented May 2, 2019

@ktsakalozos volunteer! Let me put together the PR

etheleon added a commit to etheleon/microk8s that referenced this issue May 4, 2019

RXminuS commented May 16, 2019

To be honest, I hate the private registry with retagging of images! All of our deployment manifests are set up to use certain image names (we have people working on macOS, Windows and Linux), and normally they just pull the Docker images and it works in Kubernetes. Now all of a sudden we need to re-tag images just so they can be available in microk8s, and THEN we also need to update the manifests to use different image tags. This was the whole reason we weren't using Minikube but Docker for Mac/Windows and the setup with docker.sock on microk8s. I think the user story needs to focus back on being able to interact with the Docker ecosystem, regardless of whether or not it uses containerd directly under the hood... this feels like a leaky abstraction.

RXminuS commented May 16, 2019

Maybe a stupid idea... but since containerd is OCI-compliant, can't we still just provide a docker.sock that the Docker client can talk to? I just want to be able to run docker build and docker pull; I don't care if the images end up in microk8s or Docker.

ktsakalozos (Member) commented May 16, 2019

@RXminuS we have put up this doc https://microk8s.io/docs/registry-images with a few ways you can work with MicroK8s and containerd. In "Working with locally built images without a registry" we discuss how you can populate containerd with an image you just built (note it will not work for the latest tag). It is not as simple as docker build/pull, since you need to docker save and then ctr image import, but it can be automated with a script.
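The save-and-import cycle just mentioned can be wrapped in a few lines (a sketch; the image name is illustrative and it assumes docker on the host next to MicroK8s):

```shell
#!/bin/sh
# Import a locally built docker image into MicroK8s' containerd.
# Usage: ./import-image.sh myimage:dev
set -e
IMAGE="${1:?usage: $0 <image:tag>}"
tarball=$(mktemp)
docker save "$IMAGE" -o "$tarball"
# kubelet looks for images in containerd's k8s.io namespace.
microk8s.ctr -n k8s.io image import "$tarball"
rm -f "$tarball"
```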

Of course, you can provide a docker socket to kubelet and it would use your docker instead of the containerd. To do that you will need to edit /var/snap/microk8s/current/args/kubelet to remove the entries for containerd (container-runtime*) and make it look like https://github.com/ubuntu/microk8s/blob/1.13/microk8s-resources/default-args/kubelet adjusting the paths to your setup. You will need to do a microk8s.stop and microk8s.start to reload the configuration.

Both solutions (import images to containerd and use an external container runtime) could be automated via microk8s. commands. If you see value in such automation please let us know and even propose a patch.

Let me know what works for you. I am very interested in your feedback.

surajbarkale commented May 16, 2019

@ktsakalozos I have similar concerns as @RXminuS, even with the proposed workflow in the docs. At present my development workflow is:

# At start
kubectl apply -f deploy-app.yaml #refers to app:latest
# After a code change
docker build -t app . && kubectl delete pod -l my=app

This is very fast in practice. If I follow "Working with locally built images without a registry", it would change to:

docker build -t app:local . && docker save app:local | microk8s.ctr -n k8s.io image import && kubectl delete pod -l my=app

I am not sure if ctr image import works as above, since there are no docs for it. The attractive option for me is to have kubelet use the local docker daemon instead of containerd. I would appreciate it if you could provide an option to use the local docker daemon instead of containerd.

surajbarkale commented May 17, 2019

@ktsakalozos I am unable to make microk8s work with docker installed from the snap by changing /var/snap/microk8s/current/args/kubelet as you instructed. I am getting the following error in the pod event log:

  Warning  FailedCreatePodSandBox  53s                kubelet, bionic    Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "0747a44fa4578f51d2fcc69337c314d4c0aa37a127e1d95ebb9efef69b57cd0c" network for pod "alpine": NetworkPlugin kubenet failed to set up pod "alpine_default" network: Error adding container to network: failed to Statfs "/proc/14902/ns/net": permission denied, failed to clean up sandbox container "0747a44fa4578f51d2fcc69337c314d4c0aa37a127e1d95ebb9efef69b57cd0c" network for pod "alpine": NetworkPlugin kubenet failed to teardown pod "alpine_default" network: Error removing container from network: failed to Statfs "/proc/14902/ns/net": permission denied]
  Warning  MissingClusterDNS       52s (x2 over 54s)  kubelet, bionic    pod: "alpine_default(b113113a-7843-11e9-87dd-3e791e06be64)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.

Here is my edited /var/snap/microk8s/current/args/kubelet file:

--kubeconfig=${SNAP}/configs/kubelet.config
--cert-dir=${SNAP_DATA}/certs
--client-ca-file=${SNAP_DATA}/certs/ca.crt
--anonymous-auth=false
--network-plugin=kubenet
--root-dir=${SNAP_COMMON}/var/lib/kubelet
--fail-swap-on=false
--pod-cidr=10.1.1.0/24
--non-masquerade-cidr=10.152.183.0/24
--cni-bin-dir=${SNAP}/opt/cni/bin/
--feature-gates=DevicePlugins=true
--eviction-hard="memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi"
--container-runtime=docker
--node-labels="microk8s.io/cluster=true"

ktsakalozos (Member) commented May 20, 2019

@surajbarkale I was not able to get it working with the docker snap; however, grabbing docker with sudo apt-get install docker.io worked with the configuration you suggested.

beetahnator commented Jul 9, 2019

I was running into the same issue that @surajbarkale was facing when trying to use the system docker, even after replacing the snap docker with apt-get's docker.io package.

The following fixed it for me:

$ sudo aa-remove-unknown
Removing '/snap/core/6818/usr/lib/snapd/snap-confine'
Removing '/snap/core/6964/usr/lib/snapd/snap-confine'
Removing '/snap/core/7169/usr/lib/snapd/snap-confine'
Removing 'docker-default'
Removing 'snap-update-ns.docker'
Removing 'snap.docker.compose'
Removing 'snap.docker.docker'
Removing 'snap.docker.dockerd'
Removing 'snap.docker.help'
Removing 'snap.docker.hook.install'
Removing 'snap.docker.hook.post-refresh'
Removing 'snap.docker.machine'

gpintore82 commented May 10, 2020

Any chance to have a branch or version with docker?

ktsakalozos (Member) commented May 11, 2020

Any chance to have a branch or version with docker?

It is unlikely we will be shipping docker in MicroK8s. We only need containerd, which is what docker itself uses to manage containers.

mavenir-labs commented Jul 1, 2020

Hi,
I have been struggling to set up my private secure registry with containerd, without success. I have followed an enormous number of howtos; the only way I got it to work is by providing user/pass on the command line, which is not acceptable to our devs. My private registry is JFrog.
In addition to microk8s we also have several k8s clusters running k8s 1.17 with containerd v1.3.4 (the same as microk8s), and I don't have issues with containerd in those clusters accessing our JFrog docker registry.
If anyone has experience with a similar setup, please share.
Thanks,
Leon

SemanticBeeng commented Aug 25, 2020

We gave the heads up on this change in the on this topic https://discuss.kubernetes.io/t/containerd-and-security-updates-on-the-next-microk8s-release/4844 and on the #microk8s channel at https://k8s.slack.com/ some time ago.

This sounds like a problem, but I am still trying to understand it because I have not developed with containerd.

So basically we can use docker for a production kubernetes, but it is not possible in development now with microk8s?
That does not feel right, because it affects the way we do both development and deployment, no?

What is the best way/resource to make sense of how a docker based development workflow is affected by the switch to containerd and how to adapt?

Can microk8s still claim that "MicroK8s is pure upstream Kubernetes, not a subset." after this?

ktsakalozos (Member) commented Aug 26, 2020

Hi @SemanticBeeng

Some time ago Docker donated its runtime [2], called containerd, to the CNCF. Kubernetes needs only that runtime (containerd) to function. K8s does not need the docker daemon; it only needs the containerd daemon. Have a look at [1].

We do not package docker in MicroK8s. Your developer workload may involve docker for testing and building images or you may use some other tool-chain for building OCI container images or you may not need to build images at all. If you want to install docker next to MicroK8s go ahead, you could even hack MicroK8s to use your local docker, it is up to you.

You may want to have a look at the MicroK8s docs [3] where we discuss how one can work with the registries including the built in one.

Can microk8s still claim that is "MicroK8s is pure upstream Kubernetes, not a subset." after this?

I hope it is clear now why "MicroK8s is pure upstream Kubernetes, not a subset.".

[1] https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/
[2] https://www.docker.com/docker-news-and-press/docker-extracts-and-donates-containerd-its-core-container-runtime-accelerate
[3] https://microk8s.io/docs/registry-images

SemanticBeeng commented Aug 26, 2020

put up this doc https://microk8s.io/docs/working with a few ways you can work in MicroK8s and containerd.

Link broken atm.

tvansteenburgh (Member) commented Aug 26, 2020

Link broken atm.

I think that originally pointed to https://microk8s.io/docs/registry-images

SemanticBeeng commented Aug 26, 2020

Your developer workload may involve docker for testing and building images or you may use some other tool-chain for building OCI container images or you may not need to build images at all. If you want to install docker next to MicroK8s go ahead, you could even hack MicroK8s to use your local docker, it is up to you.

@ktsakalozos thanks, this helps.

I am studying a few more resources (below); indeed, based on the evolution of containerd, the change makes perfect sense.
But it is also quite clear that this change affects the docker-based development workflow, and the impact is quite severe.

For example, IntelliJ IDEA has powerful support for remote debugging of docker containers, and the version they just released improves it further; we can run and debug Python in a remote container this way.
But that now seems impossible in microk8s with containerd.

If you know more about containerd-based development workflows with kubernetes in general, or microk8s and docker in particular, that can help with the above ... grievance, then please do advise. After all, microk8s is for development, so the workflow should be productive. (https://microk8s.io/docs/registry-images is not sufficient in this sense.)

Quite likely most of the entries above are from people who use docker-based workflows and could use a shift to kubernetes-cluster-based workflows.
Below, from [6] onward, I will collect references about kubernetes-based development workflows; I hope it helps people who come to this thread from that angle.

[1] https://kubernetes.io/blog/2017/11/containerd-container-runtime-options-kubernetes/
[2] https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/
[3] https://www.docker.com/docker-news-and-press/docker-extracts-and-donates-containerd-its-core-container-runtime-accelerate
[4] https://www.docker.com/blog/introducing-containerd/
[5] https://www.docker.com/blog/what-is-containerd-runtime/
[6] https://www.novatec-gmbh.de/en/blog/debugging-on-kubernetes-the-perfect-developer-machine/
[7] "Fast development workflow with Docker and Kubernetes" https://www.telepresence.io/tutorials/docker
[8] "Telepresence pod cannot start on GKE with ContainerD" telepresenceio/telepresence#828
[9] "How to Develop and Debug Python Applications in Kubernetes" https://okteto.com/blog/how-to-develop-python-apps-in-kubernetes/index.html

AmilaDevops commented Jan 12, 2021

Hi @ktsakalozos @SemanticBeeng, I'm new to microk8s as a user; my microk8s uses containerd as the container runtime.
But I really want to see what containers I'm running (not pods, just the containers themselves) and also want to log in to one of those containers, as I would with the docker exec command.

But in microk8s there is no output after I execute the commands I tried (screenshot omitted).

Could anyone tell me what's going on and how I can see my running containers?
