
docker disappeared #382

Open
gun1x opened this issue Mar 26, 2019 · 64 comments

Comments

@gun1x

commented Mar 26, 2019

Docker disappeared from microk8s:

root@kube:~# microk8s.docker
microk8s.docker: command not found
root@kube:~# ls /snap/bin/  
microk8s.config  microk8s.ctr  microk8s.disable  microk8s.enable  microk8s.inspect  microk8s.istioctl  microk8s.kubectl  microk8s.reset  microk8s.start  microk8s.status  microk8s.stop
root@kube:~# cat /etc/issue
Ubuntu 18.04.2 LTS \n \l

Is it still going to be used in the project? Is there an alternative for inspecting what kube is doing in the background?

If this was a planned change, is there documentation/release notes?

@mattiasarro


commented Mar 26, 2019

They have replaced the Docker daemon with containerd.

For me, this is a problem, since I was relying on exposing the Docker daemon on a TCP port. I did this by adding -H tcp://0.0.0.0:$PORT to /var/snap/microk8s/current/args/dockerd, which meant I could execute docker commands from outside the machine (with docker -H tcp://$HOST:$PORT).
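For readers still on 1.13 who want the same setup, the change boiled down to one extra line in the snap's dockerd args file, followed by a daemon restart (a sketch; the port is just an example):

```
# /var/snap/microk8s/current/args/dockerd (1.13 and earlier; sketch)
-H tcp://0.0.0.0:2375
```

After restarting the docker daemon, `docker -H tcp://$HOST:2375 ps` from another machine should reach it.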

Now I don't really know if I can connect to the containerd daemon the same way, and if so, how.

@gun1x

Author

commented Mar 26, 2019

For me, this is a problem

I can't say it's a "problem" for me. It was just a surprise. We were using that docker binary to also upload images into the registry, but we just went for apt install docker for that purpose.

Well, if that is the case, I guess it's up to the devs if they close the ticket or not.

@ktsakalozos

Member

commented Mar 26, 2019

Hi,

Indeed, in the 1.14 release containerd replaced dockerd. We gave a heads-up on this change in this topic https://discuss.kubernetes.io/t/containerd-and-security-updates-on-the-next-microk8s-release/4844 and on the #microk8s channel at https://k8s.slack.com/ some time ago. For those who cannot make the transition, dockerd is available from the 1.13 channel:

snap install microk8s --classic --channel=1.13/stable

One of the reasons why we moved ahead with containerd is that a few users wanted to deploy MicroK8s next to a local dockerd and that was causing unexpected issues. As @gun1x mentions this should not be a problem anymore.

@mattiasarro, I am not aware of how you are using dockerd. Could you provide more info so we can offer some suggestions?

Apologies for any inconvenience we may have caused.

@termie


commented Mar 26, 2019

This also took me a bit by surprise; I don't think I had read that snaps would auto-update, and I would have pinned a specific version channel if I had. The microk8s docker did indeed cause some conflicts for me when I was first setting it up (and it has some restrictions that make using it as a normal docker install kind of weird), so I'm generally for this change, but it has left a bunch of my systems in a (so far) unrecoverable state. I am hunting through logs to figure out why kubelet-daemon refuses to start after a reboot and will file another issue.

@carlososiel


commented Mar 26, 2019

I cannot configure containerd correctly to use insecure registries after this update. I configured containerd as the documentation says, but kubelet keeps trying to pull over https instead of http.

@mattiasarro


commented Mar 27, 2019

Thanks for the quick response @ktsakalozos!

I am developing on macOS and running microk8s on an Ubuntu-based PC, to which I connect directly over ethernet. I have enabled the private registry on microk8s (microk8s.enable registry) and would like to push Docker images to it. So I have exposed the dockerd daemon on a TCP port by editing /var/snap/microk8s/current/args/dockerd and adding a line -H 0.0.0.0:$PORT to it, which makes it available on $PORT of my Ubuntu instance. This means that on my Mac I can do:

docker -H $UBUNTU_MACHINE_IP:$PORT build -t mytag .
docker -H $UBUNTU_MACHINE_IP:$PORT tag mytag localhost:32000/mytag
docker -H $UBUNTU_MACHINE_IP:$PORT push localhost:32000/mytag

and now the image is available to the microk8s cluster.

Do you know if it's possible to expose containerd in a similar way and push images to it over the network?

@ktsakalozos

Member

commented Mar 27, 2019

@mattiasarro in short, you will have to use the image-building toolchain of your liking and upload the images you build into the insecure registry MicroK8s provides.

Here is a way to do this. I have on my host a VM with MicroK8s:

> multipass list
Name                    State             IPv4             Release
microk8s-vm             RUNNING           10.141.241.134   Ubuntu 18.04 LTS

In that VM I have the registry enabled:

> multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get all -n container-registry
NAME                           READY   STATUS    RESTARTS   AGE
pod/registry-7d65c894c-d88jh   1/1     Running   0          39m

NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/registry   NodePort   10.152.183.66   <none>        5000:32000/TCP   39m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/registry   1/1     1            1           39m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/registry-7d65c894c   1         1         1       39m

On the host machine I have docker running (installed via apt install docker.io), but you can use any image-building toolchain. The dockerd running on the host should trust the registry on the microk8s-vm. To do this:

echo "{
  \"insecure-registries\" : [\"10.141.241.134:32000\"]
}" | sudo tee /etc/docker/daemon.json

sudo systemctl restart docker

Note that we use the IP of the VM as this is where the registry is.

To put an image in the registry we:

docker pull busybox
docker tag busybox 10.141.241.134:32000/my-busybox
docker push 10.141.241.134:32000/my-busybox

Now that the registry is populated we can create a pod in MicroK8s with:

multipass@microk8s-vm:~$ cat > bbox.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: localhost:32000/my-busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
multipass@microk8s-vm:~$ microk8s.kubectl apply -f ./bbox.yaml 
pod/busybox created

If you apt-get install docker.io inside the VM and configure the docker daemon to a) trust the registry at localhost:32000 and b) listen on 0.0.0.0:{A_PORT}, you can get exactly the same behavior you had so far.
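For (a) and (b) together, the VM's /etc/docker/daemon.json could look roughly like this (a sketch; port 2375 is an example, and note that "hosts" replaces dockerd's default listeners, so the unix socket must be listed explicitly; on Ubuntu the packaged systemd unit may also pass its own -H flag, which conflicts with "hosts" and then needs a unit override):

```json
{
  "insecure-registries": ["localhost:32000"],
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
```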

@gun1x

Author

commented Mar 27, 2019

We gave the heads up on this change in the on this topic https://discuss.kubernetes.io/t/containerd-and-security-updates-on-the-next-microk8s-release/4844 and on the #microk8s channel at https://k8s.slack.com/ some time ago.

Do you think it is possible to document changes in the release history on git? People tend to pick their own socializing platforms (for example Facebook, Telegram, IRC), so they might miss the news if it's announced on Slack, and I also doubt everybody is subscribed to discuss.kubernetes.io. Documenting changes within git would help a lot, since everybody goes back to the git repo when there is an issue.

I think this would be great as the user base of microk8s grows. It's a very cool product and I think that helping the community with extended documentation would help microk8s become even better.

@RemcoPerlee


commented Mar 29, 2019

I can't seem to configure this correctly. I've used the steps from @ktsakalozos which seemed to work, however the Pod stays in the status Waiting: ContainerCreating, and I have no idea what's going on behind the scenes...

Any way I can debug this (what logs to look at, etc)?

The kubectl logs state the following:

Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: ContainerCreating

When I try /snap/bin/microk8s.ctr image pull localhost:32000/my-busybox
I get an error, but it might be my lack of understanding of ctr:
ctr: failed to resolve reference "localhost:32000/my-busybox": object required

@gun1x gun1x changed the title docker dissapeared docker dissappeared Mar 29, 2019

@gun1x gun1x changed the title docker dissappeared docker disappeared Mar 29, 2019

@gun1x

Author

commented Mar 29, 2019

Quick offtopic: now everybody knows I am bad with grammar. 🤣

@RemcoPerlee


commented Mar 29, 2019

Well, I gave up and decided to roll back to 1.13, but now my cluster won't boot again. Can I state here that I really dislike breaking changes like this?

@ktsakalozos

Member

commented Mar 29, 2019

@RemcoPerlee could you run microk8s.inspect and attach the produced tarball here? For pods that do not start, please share the output of microk8s.kubectl logs and microk8s.kubectl describe.

@RemcoPerlee


commented Mar 29, 2019

Let's see (I did try to reinstall it all, so this is a relatively clean setup - I hope):

Logs for busybox:

microk8s.kubectl logs busybox
Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: ContainerCreating

Describe of the pod:

Name:               busybox
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               jarvis/192.168.1.120
Start Time:         Fri, 29 Mar 2019 20:57:54 +0000
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"busybox","namespace":"default"},"spec":{"containers":[{"command":["sl...
Status:             Pending
IP:
Containers:
  busybox:
    Container ID:
    Image:         localhost:32000/my-busybox
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z9grs (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-z9grs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z9grs
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m53s  default-scheduler  Successfully assigned default/busybox to jarvis
  Normal  Pulling    2m53s  kubelet, jarvis    Pulling image "localhost:32000/my-busybox"

[inspection-report-20190329_210125.tar.gz](https://github.com/ubuntu/microk8s/files/3024538/inspection-report-20190329_210125.tar.gz)
@RemcoPerlee


commented Apr 1, 2019

@ktsakalozos hey, any ideas or should I do a full reinstall of the server?

@termie


commented Apr 2, 2019

@ktsakalozos

Member

commented Apr 2, 2019

@termie @RemcoPerlee can you please provide me with instructions on exactly what you did so I can reproduce the issue you are facing? Please be as detailed as you possibly can. Also attach any yaml manifests you apply that expose this error. Thank you.

@MingyiZhangQimia


commented Apr 3, 2019

Hi @ktsakalozos, I want to remove a docker image from the private registry at localhost:32000. If there is no microk8s.docker, how can I do that?

@gun1x

Author

commented Apr 3, 2019

Hi @ktsakalozos, I want to remove a docker image from the private registry at localhost:32000. If there is no microk8s.docker, how can I do that?

Did you read every comment in this issue? The answer is written above.

@RemcoPerlee


commented Apr 3, 2019

@ktsakalozos I understand why you're asking this, and thanks in advance. It is, however, pretty difficult to provide that information. The best I can come up with is the most recent action, where I uninstalled microk8s and docker.io. I restarted more or less from scratch, using all the defaults and the instructions above to get busybox running. The logs for that are attached above. Unless you can come up with something very clever, I'm about to give this up as a lost cause and reinstall the entire machine :(

@MingyiZhangQimia


commented Apr 3, 2019

Hi @ktsakalozos, I want to remove a docker image from the private registry at localhost:32000. If there is no microk8s.docker, how can I do that?

Did you read every comment in this issue? The answer is written above.

Could you please be more specific? I only know how to push images to the registry; how do I remove one?

@ktsakalozos

Member

commented Apr 3, 2019

Hi @MingyiZhangQimia, the registry is not anything special; you would use it as you would any private registry. Here is a link from Google: https://stackoverflow.com/questions/37033055/how-can-i-use-the-docker-registry-api-v2-to-delete-an-image-from-a-private-regis

@ktsakalozos

Member

commented Apr 3, 2019

@RemcoPerlee I am sorry. I cannot reproduce the issue you are getting. :(

@RemcoPerlee


commented Apr 3, 2019

@ktsakalozos no worries, thanks for trying. I don't get it either; it must be something with the machine, though it's almost default. Gonna plan a day to restart from scratch.

@MingyiZhangQimia


commented Apr 4, 2019

@ktsakalozos Thank you!

@RemcoPerlee


commented Apr 5, 2019

@ktsakalozos ok, did a full reinstall of the server, defaults all the way, Ubuntu 18.04.

Some initial config I need:

apt install avahi-daemon nfs-common docker.io
apt-get install iptables-persistent
sudo iptables -P FORWARD ACCEPT
iptables-save

Jenkins:

apt install openjdk-8-jdk
wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
apt update
apt install jenkins

MicroK8s:

snap install microk8s --classic
microk8s.enable dashboard registry dns

docker:

echo "{
  \"insecure-registries\" : [\"192.168.1.120:32000\"]
}" | sudo tee /etc/docker/daemon.json
systemctl restart docker
sudo groupadd docker
sudo usermod -aG docker remco

testing:

docker pull busybox
docker tag busybox 192.168.1.120:32000/my-busybox
docker push 192.168.1.120:32000/my-busybox

made a bbox.yaml, containing:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: localhost:32000/my-busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

kubectl apply -f ./bbox.yaml

and checked the dashboard at: https://jarvis.local:16443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default

same result: ContainerCreating

kubectl logs busybox
Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: ContainerCreating

kubectl describe:

Name:               busybox
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               jarvis/192.168.1.120
Start Time:         Fri, 05 Apr 2019 19:39:50 +0000
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"busybox","namespace":"default"},"spec":{"containers":[{"command":["sl...
Status:             Pending
IP:
Containers:
  busybox:
    Container ID:
    Image:         localhost:32000/my-busybox
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gz792 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-gz792:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gz792
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  9m13s  default-scheduler  Successfully assigned default/busybox to jarvis
  Normal  Pulling    9m13s  kubelet, jarvis    Pulling image "localhost:32000/my-busybox"

inspection-report-20190405_195447.tar.gz

@ktsakalozos

Member

commented Apr 6, 2019

@RemcoPerlee can you share the logs of the registry? I want to see whether we get a request from containerd to pull the image. Could you also run kubelet with -v=4 so as to collect more logs? Can you also share your /etc/hosts? Finally, is there anything special about this node that you can think of?

@muralaris


commented Apr 15, 2019

I am facing a similar issue when I want my microk8s to use an external private registry.
Any success?

I also get 'ctr: failed to resolve reference "docker.io/library/busybox": object required' for the docker.io repo too.

@muralaris


commented Apr 16, 2019

I was successful with the default insecure registry bundled with microk8s.
I enabled the microk8s registry, tagged my images as 192.168.99.101:32000/busybox, and pushed them into the registry. Kubernetes was then able to pull images from this local registry.

My problem with an SSL-enabled private docker registry isn't solved yet.

I tried to configure an SSL-enabled private docker registry with microk8s.
I have added my certs to /etc/docker/certs.d//ca.crt
I also added changes to containerd-template.toml and restarted the service.
I also tried creating secrets and injecting them into my pod, but in vain.

But I always end up with the following error in 'kubectl describe pods ':
Warning Failed 54m (x4 over 56m) kubelet, localhost.localdomain Failed to pull image "myreg:5000/busybox:latest": rpc error: code = Unknown desc = failed to resolve image "myreg:5000/busybox:latest": no available registry endpoint: failed to do request: Head https://myreg:5000/v2/busybox/manifests/latest: x509: certificate signed by unknown authority

I can pull images from docker.io, though.

Any help appreciated for enabling an SSL-enabled private docker registry in microk8s.
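I haven't verified this against the containerd version MicroK8s bundles, but newer containerd releases let the CRI plugin trust a custom CA per registry in the toml, which would be the containerd-side equivalent of /etc/docker/certs.d (a sketch; the registry name and the file path are placeholders):

```toml
# sketch: per-registry CA trust; requires a containerd/CRI version that supports registry configs
[plugins.cri.registry.configs."myreg:5000".tls]
  ca_file = "/var/snap/microk8s/common/myreg-ca.crt"
```

If the bundled containerd is too old for this, adding the CA to the system trust store and restarting containerd is the usual fallback.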

@wdiestel


commented Apr 21, 2019

I get the same issue with a freshly installed Ubuntu 18.04 server with microk8s and the registry enabled. I think the issue is within the network setup. I can push images only via the service IP at port 5000, but kubernetes can't pull from this address:

wolfram@kubo:~$ curl http://localhost:5000/v2/_catalog
curl: (7) Failed to connect to localhost port 5000: Connection refused
wolfram@kubo:~$ curl http://localhost:32000/v2/_catalog
^C
wolfram@kubo:~$ curl http://10.152.183.77:5000/v2/_catalog
{"repositories":["voko/akrido","voko/cikado","voko/grilo"]}
wolfram@kubo:~$ curl http://10.152.183.77:32000/v2/_catalog
curl: (7) Failed to connect to 10.152.183.77 port 32000: Network is unreachable

Inspecting services
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Service snap.microk8s.daemon-etcd is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system info
  Copy network configuration to the final report tarball
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Inspect kubernetes cluster

@wdiestel


commented Apr 21, 2019

When using 127.0.0.1 instead of localhost, and adjusting this also in
/var/snap/microk8s/current/args/containerd-template.toml, at least ctr could access the registry, but it seems that during the pull it tries to switch from http to https, which fails:

$ microk8s.ctr --debug image pull 127.0.0.1:32000/voko/grilo:latest
DEBU[0000] fetching                                      image="127.0.0.1:32000/voko/grilo:latest"
DEBU[0000] resolving                                    
DEBU[0000] do request                                    request.headers=map[Accept:[application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, *]] request.method=HEAD url="https://127.0.0.1:32000/v2/voko/grilo/manifests/latest"
ctr: failed to resolve reference "127.0.0.1:32000/voko/grilo:latest": failed to do request: Head https://127.0.0.1:32000/v2/voko/grilo/manifests/latest: http: server gave HTTP response to HTTPS client

So I think either the registry must be fixed to stay on http, or it must be configured to use https on the exposed port instead.
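A third option is to tell containerd explicitly to use plain http for that one registry, via a mirror entry in /var/snap/microk8s/current/args/containerd-template.toml (a sketch of the CRI plugin's mirror syntax; restart MicroK8s after editing so the change is picked up):

```toml
# sketch: force a plain-http endpoint for a single insecure registry
[plugins.cri.registry.mirrors."127.0.0.1:32000"]
  endpoint = ["http://127.0.0.1:32000"]
```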

@wdiestel


commented Apr 21, 2019

Does someone know where the source code for this registry is located? The docker image seems to have no Dockerfile or github repo linked: https://hub.docker.com/r/cdkbot/registry-amd64

@ktsakalozos

Member

commented Apr 22, 2019

Hi @wdiestel

The registry code should be https://github.com/docker/distribution/tree/release/2.6

On the issue you are seeing, you may also want to look at this issue: #196 (comment)

@wdiestel


commented Apr 22, 2019

Thanks @ktsakalozos for the link.
The issue with localhost can be fixed manually by using the IP instead, as described above, but not the HTTP issue:
The Dockerfile in the GitHub repo seems to be based on Alpine, whereas according to docker image history the registry image from cdkbot is based on Ubuntu Xenial. I think the issue lies between the specific setup of the registry container and containerd pulling images with an HTTP HEAD first instead of a GET.
So, while the Ubuntu Dockerfile for the registry is missing, the options I see are:

  1. get the config.yml out of the container, tweak it to https, and build a new registry container on top, or
  2. use the docker registry image based on Alpine instead of the microk8s addon and set it up for HTTPS, or
  3. wait for the next release where this issue with the registry is hopefully fixed.

I think I will choose option (2) or (3), e.g. using the older release including dockerd meanwhile...

@jacksontj


commented Apr 23, 2019

I have run into this same issue (http: server gave HTTP response to HTTPS client) and am unable to find a workaround. I'm in a bit of a bind, since downgrading to 1.13 doesn't work (some other bug, with docker failing with docker-runc) and I can't find a way to configure microk8s to either (1) serve the registry over http or (2) configure k8s to talk to it over https. So at this point I'm a bit frustrated: without wanting to upgrade (snap upgraded me to 1.14 because it was "stable"), I'm now unable to use microk8s locally at all with a local registry :/

@jacksontj


commented Apr 24, 2019

For anyone else who hits this http-vs-https local registry issue: a workaround is described in #384

@ktsakalozos

Member

commented Apr 24, 2019

@jacksontj @wdiestel @DuaneNielsen and all, I am working on a documentation page discussing how you would interact with registries. The PR where you can leave comments is #446 and the document is https://github.com/ubuntu/microk8s/blob/feature/working-with/docs/working.md

I would very much appreciate your feedback. I would like to hear what does not work for you in the "Working with MicroK8s' registry add-on" section (exactly which step is failing and how I can reproduce the failure). Also, if you have any suggestions on how to improve the containerd-template.toml, I would be grateful.

Thank you

@ktsakalozos

Member

commented Apr 24, 2019

@muralaris @DuaneNielsen could you elaborate on the workflow that includes microk8s.ctr image pull? I would like to understand it and see if it makes sense to add it to #446. Have a look at https://github.com/ubuntu/microk8s/blob/feature/working-with/docs/working.md .

Some points that may get you going with the microk8s.ctr image pull:

  • When pulling from an insecure registry you have to use the --plain-http=true flag. For example:
microk8s.ctr  image pull  10.141.241.175:32000/mynginx:registry
ctr: failed to resolve reference "10.141.241.175:32000/mynginx:registry": failed to do request: Head https://10.141.241.175:32000/v2/mynginx/manifests/registry: http: server gave HTTP response to HTTPS client
> microk8s.ctr  image pull --plain-http 10.141.241.175:32000/mynginx:registry
10.141.241.175:32000/mynginx:registry:                                            resolved       |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:2ec3c026183996087f26c6b38334af80892837ecae1050e0dce599bb5f8e4fd7: done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:d9e2304ab8d0497bd9764b74eee2a6a44436a0cd8b39a26f03d8253af7474c44:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:08d74e155349e3312e6eba3ac5dea0263c43f7ba13ce66245708e18ede53f200:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:bdf0201b3a056acc4d6062cc88cd8a4ad5979983bfb640f15a145e09ed985f92:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:a9e2a0b350608599c47f0ccfd851edde45b352142060dbd1d0e9df57cfed8ccd:    done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:0be75340bd9bc5c3b9476910537275090a9181838415b558d3a16e8155a69860:   done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 0.3 s                                                                    total:  6.7 Mi (22.2 MiB/s)                                      
unpacking linux/amd64 sha256:2ec3c026183996087f26c6b38334af80892837ecae1050e0dce599bb5f8e4fd7...
done
  • When pulling an image you have to provide a tag. For example:
> microk8s.ctr -n default image pull  docker.io/library/busybox
ctr: failed to resolve reference "docker.io/library/busybox": object required
 > microk8s.ctr -n default image pull  docker.io/library/busybox:latest
docker.io/library/busybox:latest:                                                 resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:954e1f01e80ce09d0887ff6ea10b13a812cb01932a0781d6b0cc23f743a874fd:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:f79f7a10302c402c052973e3fa42be0344ae6453245669783a9e16da3d56d5b4: done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:fc1a6b909f82ce4b72204198d49de3aaf757b3ab2bb823cb6e47c416b97c5985:    done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:af2f74c517aac1d26793a6ed05ff45b299a037e1a9eefeae5eacda133e70a825:   done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 4.1 s                                                                    total:  739.4  (180.3 KiB/s)                                     
unpacking linux/amd64 sha256:954e1f01e80ce09d0887ff6ea10b13a812cb01932a0781d6b0cc23f743a874fd...
done
  • Kubernetes is using the k8s.io namespace of containerd. For example:
> microk8s.ctr namespaces ls
NAME    LABELS 
default        
> microk8s.enable dns
Enabling DNS
Applying manifest
service/kube-dns created
serviceaccount/kube-dns created
configmap/kube-dns created
deployment.extensions/kube-dns created
Restarting kubelet
DNS is enabled
> microk8s.ctr namespaces ls
NAME    LABELS 
default        
k8s.io         
> microk8s.ctr -n k8s.io images ls
REF                                                                                                                     TYPE                                                      DIGEST                                                                  SIZE      PLATFORMS                                                   LABELS                          
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7                                                                  application/vnd.docker.distribution.manifest.v2+json      sha256:f5bddc71efe905f4e4b96f3ca346414be6d733610c1525b98fff808f93966680 12.5 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:f5bddc71efe905f4e4b96f3ca346414be6d733610c1525b98fff808f93966680 application/vnd.docker.distribution.manifest.v2+json      sha256:f5bddc71efe905f4e4b96f3ca346414be6d733610c1525b98fff808f93966680 12.5 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
k8s.gcr.io/pause:3.1                                                                                                    application/vnd.docker.distribution.manifest.list.v2+json sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 309.7 KiB linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea                                application/vnd.docker.distribution.manifest.list.v2+json sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 309.7 KiB linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
sha256:5d049a8c4eec92b21ca4be399c260166d96569a1a52d497f4a0365bb55c1a18c                                                 application/vnd.docker.distribution.manifest.v2+json      sha256:f5bddc71efe905f4e4b96f3ca346414be6d733610c1525b98fff808f93966680 12.5 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 309.7 KiB linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 

@etheleon commented May 1, 2019

Hi @MingyiZhangQimia,

I also ran into a similar problem of wanting to delete a particular image. Here's how I solved it; hope it helps.

First, list the repositories and their associated tags:

curl -k -s -X GET http://localhost:32000/v2/_catalog \
| jq '.repositories[]'  \
| sort \
| xargs -I _ curl -s -k -X GET http://localhost:32000/v2/_/tags/list

which returns:

{"name":"<repository_name>","tags":["<tag1>", "<tag2>"]}

Next, get the image's digest:

curl -v -I \
-H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
localhost:32000/v2/<repository>/manifests/<tag1>

Example output:

*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 32000 (#0)
> HEAD /v2/<repo>/manifests/latest HTTP/1.1
> Host: localhost:32000
> User-Agent: curl/7.47.0
> Accept: application/vnd.docker.distribution.manifest.v2+json
>
< HTTP/1.1 200 OK
< Content-Length: 7470
< Content-Type: application/vnd.docker.distribution.manifest.v2+json
< Docker-Content-Digest: sha256:d971fc1f9473c43ae330948190420a557c65018b5386b78aced0e6c9103c10983
< Docker-Distribution-Api-Version: registry/2.0
< Etag: "sha256:d971fc1f9473c43ae330948190420a557c65018b5386b78aced0e6c9103c10983"
< X-Content-Type-Options: nosniff
< Date: Wed, 01 May 2019 07:40:49 GMT

Keep the Docker-Content-Digest value, i.e.: sha256:d971fc1f9473c43ae330948190420a557c65018b5386b78aced0e6c9103c10983

Send the DELETE request:

curl -v \
-H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
-X DELETE localhost:32000/v2/<repository>/manifests/<digest>

NOTE: You might get an error message:

{"errors":[{"code":"UNSUPPORTED","message":"The operation is unsupported."}]}

Sadly, deletion is not enabled by default in 1.14.

You'll have to edit the registry's deployment to include REGISTRY_STORAGE_DELETE_ENABLED="yes".

kubectl edit deployment registry -n container-registry
    spec:
      containers:
      - env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: "yes"
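
As an alternative to editing the deployment interactively, the same env var can presumably be appended non-interactively with a JSON patch. This is a sketch, not a verified microk8s command; it assumes the registry container is the first (index 0) container in the pod template:

```shell
# Sketch: add REGISTRY_STORAGE_DELETE_ENABLED to the registry deployment
# with a JSON patch instead of `kubectl edit`. The /containers/0/ path
# assumes the registry container is first in the pod spec.
PATCH='[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":"yes"}}]'
# Uncomment to apply against a live cluster:
# kubectl -n container-registry patch deployment registry --type=json -p="$PATCH"
echo "$PATCH"
```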

Rerun the DELETE request:

curl -v \
-H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
-X DELETE localhost:32000/v2/<repository>/manifests/<digest>

Finally check if the image is deleted

curl -k -s -X GET http://localhost:32000/v2/_catalog \
| jq '.repositories[]'  \
| sort \
| xargs -I _ curl -s -k -X GET http://localhost:32000/v2/_/tags/list
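
For convenience, the lookup and delete steps above can be wrapped into small shell helpers. This is a sketch: `manifest_url`, `digest_of`, and `delete_image` are hypothetical names, and it assumes the default localhost:32000 registry with deletion already enabled:

```shell
# Sketch: wraps the manual curl steps above into reusable helpers.
# Assumes the registry listens on localhost:32000 (override via REGISTRY)
# and that REGISTRY_STORAGE_DELETE_ENABLED is already "yes".
REGISTRY="${REGISTRY:-localhost:32000}"
ACCEPT="Accept: application/vnd.docker.distribution.manifest.v2+json"

manifest_url() {                      # manifest_url <repository> <reference>
  echo "http://${REGISTRY}/v2/${1}/manifests/${2}"
}

digest_of() {                         # digest_of <repository> <tag>
  # HEAD request; extract the Docker-Content-Digest header value.
  curl -sI -H "$ACCEPT" "$(manifest_url "$1" "$2")" \
    | awk 'tolower($1) == "docker-content-digest:" { print $2 }' \
    | tr -d '\r'
}

delete_image() {                      # delete_image <repository> <tag>
  local digest
  digest="$(digest_of "$1" "$2")"
  [ -n "$digest" ] || { echo "no digest for ${1}:${2}" >&2; return 1; }
  curl -s -X DELETE -H "$ACCEPT" "$(manifest_url "$1" "$digest")"
}
```

Usage would then be e.g. `delete_image my-busybox latest`.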

@gousse commented May 2, 2019

As @etheleon said, you have to add the env var REGISTRY_STORAGE_DELETE_ENABLED="yes".

After that, there is an easier way to manipulate the registry:
you can also use the reg tool: https://github.com/genuinetools/reg

To list images:

$ reg ls -f localhost:32000

To delete an image:

$ reg rm -f localhost:32000/my-busybox

@ktsakalozos (Member) commented May 2, 2019

Thank you @etheleon, @gousse. Do you think we should run with the deletion option enabled by default? If not, please -1 the PR above.

@gousse we could pull in the reg binary during the microk8s.enable registry so that it is available as a microk8s.reg command. We do something very similar in the case of linkerd: https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/enable.linkerd.sh#L17

@gousse commented May 2, 2019

@ktsakalozos I think it is a great idea to ship the reg tool with the registry plugin.
Manipulating the registry is far easier with it!

@ktsakalozos (Member) commented May 2, 2019

@gousse, would you (or anyone else) be interested in providing a PR? I am here to help.

@etheleon commented May 2, 2019

@ktsakalozos volunteer! Let me put together the PR

etheleon added a commit to etheleon/microk8s that referenced this issue May 4, 2019

@RXminuS commented May 16, 2019

To be honest, I hate the private registry with retagging of images! All of our deployment manifests are set up to use certain image names (we have people working on macOS, Windows and Linux), and normally they just pull the Docker images and it works in Kubernetes. Now all of a sudden we need to re-tag images just so they can be available in microk8s, and THEN we also need to update the manifests to use different image tags. This was the whole reason we were using Docker for Mac/Windows and the docker.sock setup with microk8s rather than Minikube. I think the user story needs to focus back on being able to interact with the Docker ecosystem, regardless of whether containerd is used directly under the hood; this feels like a leaky abstraction.

@RXminuS commented May 16, 2019

Maybe a stupid idea, but since containerd is OCI-compliant, can't we still just provide a docker.sock that the Docker client can talk to? I just want to be able to run docker build and docker pull; I don't care if the images end up in microk8s or Docker.

@ktsakalozos (Member) commented May 16, 2019

@RXminuS we have put up this doc https://microk8s.io/docs/working with a few ways you can work with MicroK8s and containerd. In "Working with locally built images without a registry" we discuss how you can populate containerd with an image you just built (note it will not work for the latest tag). It is not as simple as docker build/pull, since you need to docker save and then ctr image import, but it can be automated with a script.
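
For instance, the save-and-import step can be wrapped like this (a sketch; `import_image` is a hypothetical helper name, not a microk8s command, and it assumes docker and microk8s.ctr are both on the PATH):

```shell
#!/usr/bin/env bash
# Hypothetical helper following the docker save / ctr import flow above.
import_image() {
  local image="${1:-}"                 # e.g. app:local -- avoid the :latest tag
  [ -n "$image" ] || { echo "usage: import_image <image:tag>" >&2; return 1; }
  local tar
  tar="$(mktemp)"                             # temporary tarball for the image
  docker save "$image" -o "$tar"              # export from the local docker daemon
  microk8s.ctr -n k8s.io image import "$tar"  # load into containerd's k8s.io namespace
  rm -f "$tar"
}
```

With that in place, the rebuild loop becomes `docker build -t app:local . && import_image app:local`.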

Of course, you can provide a docker socket to kubelet and it would use your docker instead of the containerd. To do that you will need to edit /var/snap/microk8s/current/args/kubelet to remove the entries for containerd (container-runtime*) and make it look like https://github.com/ubuntu/microk8s/blob/1.13/microk8s-resources/default-args/kubelet adjusting the paths to your setup. You will need to do a microk8s.stop and microk8s.start to reload the configuration.

Both solutions (importing images into containerd and using an external container runtime) could be automated via microk8s.* commands. If you see value in such automation please let us know, or even propose a patch.

Let me know what works for you. I am very interested in your feedback.

@surajbarkale commented May 16, 2019

@ktsakalozos I have the same concerns as @RXminuS, even with the proposed workflow in the docs. At present my development workflow is:

# At start
kubectl apply -f deploy-app.yaml #refers to app:latest
# After a code change
docker build -t app . && kubectl delete pod -l my=app

This is very fast in practice. If I follow "Working with locally built images without a registry" it should change to:

docker build -t app:local . && docker save app:local | microk8s.ctr -n k8s.io image import - && kubectl delete pod -l my=app

I am not sure if ctr image import works as above, since there are no docs for it. The attractive option for me is to have kubelet use the local docker daemon instead of containerd. I would appreciate it if you could provide an option for that.

@surajbarkale commented May 17, 2019

@ktsakalozos I am unable to make microk8s work with docker installed from snap after changing /var/snap/microk8s/current/args/kubelet as you instructed. I am getting the following error in the pod event log:

  Warning  FailedCreatePodSandBox  53s                kubelet, bionic    Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "0747a44fa4578f51d2fcc69337c314d4c0aa37a127e1d95ebb9efef69b57cd0c" network for pod "alpine": NetworkPlugin kubenet failed to set up pod "alpine_default" network: Error adding container to network: failed to Statfs "/proc/14902/ns/net": permission denied, failed to clean up sandbox container "0747a44fa4578f51d2fcc69337c314d4c0aa37a127e1d95ebb9efef69b57cd0c" network for pod "alpine": NetworkPlugin kubenet failed to teardown pod "alpine_default" network: Error removing container from network: failed to Statfs "/proc/14902/ns/net": permission denied]
  Warning  MissingClusterDNS       52s (x2 over 54s)  kubelet, bionic    pod: "alpine_default(b113113a-7843-11e9-87dd-3e791e06be64)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.

Here is my edited /var/snap/microk8s/current/args/kubelet file:

--kubeconfig=${SNAP}/configs/kubelet.config
--cert-dir=${SNAP_DATA}/certs
--client-ca-file=${SNAP_DATA}/certs/ca.crt
--anonymous-auth=false
--network-plugin=kubenet
--root-dir=${SNAP_COMMON}/var/lib/kubelet
--fail-swap-on=false
--pod-cidr=10.1.1.0/24
--non-masquerade-cidr=10.152.183.0/24
--cni-bin-dir=${SNAP}/opt/cni/bin/
--feature-gates=DevicePlugins=true
--eviction-hard="memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi"
--container-runtime=docker
--node-labels="microk8s.io/cluster=true"

@ktsakalozos (Member) commented May 20, 2019

@surajbarkale I was not able to get this working with the docker snap; however, grabbing docker with sudo apt-get install docker.io worked with the configuration you suggested.

@mazamats commented Jul 9, 2019

I was running into the same issue that @surajbarkale was facing when trying to use the system docker, even after replacing the snap docker with apt-get's docker.io package.

The following fixed it for me:

$ sudo aa-remove-unknown
Removing '/snap/core/6818/usr/lib/snapd/snap-confine'
Removing '/snap/core/6964/usr/lib/snapd/snap-confine'
Removing '/snap/core/7169/usr/lib/snapd/snap-confine'
Removing 'docker-default'
Removing 'snap-update-ns.docker'
Removing 'snap.docker.compose'
Removing 'snap.docker.docker'
Removing 'snap.docker.dockerd'
Removing 'snap.docker.help'
Removing 'snap.docker.hook.install'
Removing 'snap.docker.hook.post-refresh'
Removing 'snap.docker.machine'