
Can't push image to local registry #196

Closed
lethevimlet opened this issue Nov 19, 2018 · 34 comments · Fixed by #1470

@lethevimlet

lethevimlet commented Nov 19, 2018

I enabled the registry with
microk8s.enable registry

Then I successfully built my image from a Dockerfile.
microk8s.docker build -t localhost:32000/test:1 .

But I can't push a custom image to the local registry
microk8s.docker push localhost:32000/test:1

It stays stuck at the following message:
The push refers to a repository [localhost:32000/test]

Am I missing something? Any ideas about what could be wrong with the registry or my setup?

@ktsakalozos
Member

Hi @lethevimlet ,

Could you share the report produced by microk8s.inspect? Would you also be able to enable debug logs on dockerd and the docker client? https://github.com/ubuntu/microk8s#configuring-microk8s-services
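
For reference, a minimal sketch of how to turn on dockerd debug logging; the config path and unit name are assumptions based on the snap's default args directory, so adjust if yours differs:

sudo vi /var/snap/microk8s/current/args/docker-daemon.json   # add "debug": true to the JSON
sudo systemctl restart snap.microk8s.daemon-docker
journalctl -u snap.microk8s.daemon-docker -f                 # follow the output while you retry the push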

Thanks

@lethevimlet
Author

lethevimlet commented Nov 20, 2018

Unfortunately, the report produced by microk8s.inspect contains sensitive information that I can't share, since that's against our privacy policy, but I can give you more information about the setup if that helps replicate the issue.

I'm using Ubuntu Server 16.04.5 LTS
MicroK8s was installed with the following command:
snap install microk8s --classic

microk8s.inspect shows the following errors for docker:

nov 19 18:05:51 kubernetes microk8s.daemon-docker[13518]: time="2018-11-19T18:05:51.402240293+01:00" level=info msg="failed to mount layer sha256:------------------------------------------------------ (sha256:------------------------------------------------------ ) from docker.io/library/node: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
nov 19 18:02:56 kubernetes microk8s.daemon-docker[13518]: time="2018-11-19T18:02:56.785042907+01:00" level=error msg="Attempting next endpoint for push after error: Get https://localhost:32000/v1/_ping: net/http: TLS handshake timeout"
nov 19 18:04:51 kubernetes microk8s.daemon-docker[13518]: time="2018-11-19T18:04:51.926528866+01:00" level=error msg="Attempting next endpoint for push after error: Get https://localhost:32000/v2/: net/http: TLS handshake timeout"
nov 19 18:05:06 kubernetes microk8s.daemon-docker[13518]: time="2018-11-19T18:05:06.927008039+01:00" level=error msg="Attempting next endpoint for push after error: Get http://localhost:32000/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
nov 19 18:05:16 kubernetes microk8s.daemon-docker[13518]: time="2018-11-19T18:05:16.928215983+01:00" level=error msg="Attempting next endpoint for push after error: Get https://localhost:32000/v1/_ping: net/http: TLS handshake timeout"
nov 19 18:05:45 kubernetes microk8s.daemon-docker[13518]: time="2018-11-19T18:05:45.450180199+01:00" level=warning msg="failed to retrieve docker-init version: unknown output format: tini version 0.13.0\n"
nov 19 18:05:46 kubernetes microk8s.daemon-docker[13518]: time="2018-11-19T18:05:46.091115677+01:00" level=error msg="Not continuing with push after error: Get https://localhost:32000/v2/: net/http: TLS handshake timeout"
nov 19 18:05:47 kubernetes microk8s.daemon-docker[13518]: time="2018-11-19T18:05:47.247681559+01:00" level=error msg="Upload failed: denied: requested access to the resource is denied"

@ktsakalozos
Member

Do you think you could share a docker image I could build that would cause this error?

Just to give you some context, the docker registry is set to be insecure: https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/default-args/docker-daemon.json and this is how we test the registry before every release https://github.com/ubuntu/microk8s/blob/master/tests/validators.py#L182

@ktsakalozos
Member

I am basically asking for some instructions on how to reproduce the error.

@ktsakalozos
Member

Just to double check, you do not have a second dockerd installed on your system, right?

@ktsakalozos
Member

On a cluster that looks like this:

NAMESPACE            NAME                                       READY   STATUS    RESTARTS   AGE
container-registry   pod/registry-5f6c6bf97f-ds8wd              1/1     Running   0          20m
kube-system          pod/hostpath-provisioner-98d6db847-kpkvh   1/1     Running   0          20m
kube-system          pod/kube-dns-67b548dcff-lh8gm              3/3     Running   0          20m

NAMESPACE            NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
container-registry   service/registry     NodePort    10.152.183.188   <none>        5000:32000/TCP   20m
default              service/kubernetes   ClusterIP   10.152.183.1     <none>        443/TCP          22m
kube-system          service/kube-dns     ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP    20m

NAMESPACE            NAME                                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
container-registry   deployment.apps/registry               1         1         1            1           20m
kube-system          deployment.apps/hostpath-provisioner   1         1         1            1           20m
kube-system          deployment.apps/kube-dns               1         1         1            1           20m

NAMESPACE            NAME                                             DESIRED   CURRENT   READY   AGE
container-registry   replicaset.apps/registry-5f6c6bf97f              1         1         1       20m
kube-system          replicaset.apps/hostpath-provisioner-98d6db847   1         1         1       20m
kube-system          replicaset.apps/kube-dns-67b548dcff              1         1         1       20m

With a Dockerfile like this:

> cat ./Dockerfile 
FROM node:7
ADD hello /
CMD ["/hello"]
> cat ./hello 
#!/bin/bash
echo "Hello world"

I saw no issues:

> microk8s.docker build -t  localhost:32000/test:2 .
2018/11/20 19:03:16.368270 cmd.go:203: DEBUG: restarting into "/snap/core/current/usr/bin/snap"
DEBUG: security tag: snap.microk8s.docker
DEBUG: executable:   /snap/core/current/usr/lib/snapd/snap-exec
DEBUG: confinement:  classic
DEBUG: base snap:    core
DEBUG: ruid: 1000, euid: 0, suid: 0
DEBUG: rgid: 1000, egid: 0, sgid: 0
DEBUG: apparmor label on snap-confine is: /snap/core/5897/usr/lib/snapd/snap-confine
DEBUG: apparmor mode is: enforce
DEBUG: skipping sandbox setup, classic confinement in use
DEBUG: creating user data directory: /home/jackal/snap/microk8s/310
DEBUG: requesting changing of apparmor profile on next exec to snap.microk8s.docker
DEBUG: loading bpf program for security tag snap.microk8s.docker
DEBUG: read 14 bytes from /var/lib/snapd/seccomp/bpf//snap.microk8s.docker.bin
DEBUG: execv(/snap/core/current/usr/lib/snapd/snap-exec, /snap/core/current/usr/lib/snapd/snap-exec...)
DEBUG:  argv[1] = microk8s.docker
DEBUG:  argv[2] = build
DEBUG:  argv[3] = -t
DEBUG:  argv[4] = localhost:32000/test:2
DEBUG:  argv[5] = .
[sudo] password for jackal:
Sending build context to Docker daemon 3.072 kB
Step 1/3 : FROM node:7
7: Pulling from library/node
ad74af05f5a2: Pull complete
2b032b8bbe8b: Pull complete
a9a5b35f6ead: Pull complete
3245b5a1c52c: Pull complete
afa075743392: Pull complete
9fb9f21641cd: Pull complete
3f40ad2666bc: Pull complete
49c0ed396b49: Pull complete
Digest: sha256:af5c2c6ac8bc3fa372ac031ef60c45a285eeba7bce9ee9ed66dad3a01e29ab8d
Status: Downloaded newer image for node:7
 ---> d9aed20b68a4
Step 2/3 : ADD hello /
 ---> f4a0967abe99
Removing intermediate container 9a1cfb300d23
Step 3/3 : CMD /hello
 ---> Running in 24b2e8a68aa8
 ---> cde4562d5aa5
Removing intermediate container 24b2e8a68aa8
Successfully built cde4562d5aa5
> microk8s.docker push localhost:32000/test:2
2018/11/20 19:05:44.078466 cmd.go:203: DEBUG: restarting into "/snap/core/current/usr/bin/snap"
DEBUG: security tag: snap.microk8s.docker
DEBUG: executable:   /snap/core/current/usr/lib/snapd/snap-exec
DEBUG: confinement:  classic
DEBUG: base snap:    core
DEBUG: ruid: 1000, euid: 0, suid: 0
DEBUG: rgid: 1000, egid: 0, sgid: 0
DEBUG: apparmor label on snap-confine is: /snap/core/5897/usr/lib/snapd/snap-confine
DEBUG: apparmor mode is: enforce
DEBUG: skipping sandbox setup, classic confinement in use
DEBUG: creating user data directory: /home/jackal/snap/microk8s/310
DEBUG: requesting changing of apparmor profile on next exec to snap.microk8s.docker
DEBUG: loading bpf program for security tag snap.microk8s.docker
DEBUG: read 14 bytes from /var/lib/snapd/seccomp/bpf//snap.microk8s.docker.bin
DEBUG: execv(/snap/core/current/usr/lib/snapd/snap-exec, /snap/core/current/usr/lib/snapd/snap-exec...)
DEBUG:  argv[1] = microk8s.docker
DEBUG:  argv[2] = push
DEBUG:  argv[3] = localhost:32000/test:2
The push refers to a repository [localhost:32000/test]
b98f59145c38: Pushed
ab90d83fa34a: Pushed
8ee318e54723: Pushed
e6695624484e: Pushed
da59b99bbd3b: Pushed
5616a6292c16: Pushed
f3ed6cb59ab0: Pushed
654f45ecb7e3: Pushed
2c40c66f7667: Pushed
2: digest: sha256:b49cb1d22939115eb2c10460db08cbe80e2d0fc8df845df5d2c2b8f5251bf2f9 size: 2213

@lethevimlet
Author

lethevimlet commented Nov 20, 2018

Apparently, no registry container is running and this is why push fails regardless of the image used.

NAME                                                READY   STATUS    RESTARTS   AGE
pod/heapster-v1.5.2-XXXXXXXX-gxhnx                  4/4     Running   12         29h
pod/hostpath-provisioner-XXXXXXXX-XXXXXXXX          1/1     Running   3          29h
pod/kube-dns-XXXXXXXX-XXXXXXXX                      3/3     Running   9          29h
pod/kubernetes-dashboard-XXXXXXXX-XXXXXXXX          1/1     Running   3          29h
pod/monitoring-influxdb-grafana-v4-XXXXXXXX-XXXXXXXX   2/2   Running   6          29h

NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
service/heapster               ClusterIP   XX.XX.XX.XX   <none>        80/TCP              29h
service/kube-dns               ClusterIP   XX.XX.XX.XX   <none>        53/UDP,53/TCP       29h
service/kubernetes-dashboard   ClusterIP   XX.XX.XX.XX   <none>        443/TCP             29h
service/monitoring-grafana     ClusterIP   XX.XX.XX.XX   <none>        80/TCP              29h
service/monitoring-influxdb    ClusterIP   XX.XX.XX.XX   <none>        8083/TCP,8086/TCP   29h

NAME                                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/heapster-v1.5.2                  1         1         1            1           29h
deployment.apps/hostpath-provisioner             1         1         1            1           29h
deployment.apps/kube-dns                         1         1         1            1           29h
deployment.apps/kubernetes-dashboard             1         1         1            1           29h
deployment.apps/monitoring-influxdb-grafana-v4   1         1         1            1           29h

NAME                                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/heapster-v1.5.2-XXXXXXXX                  0         0         0       29h
replicaset.apps/heapster-v1.5.2-XXXXXXXX                  0         0         0       29h
replicaset.apps/heapster-v1.5.2-XXXXXXXX                  1         1         1       29h
replicaset.apps/hostpath-provisioner-XXXXXXXX             1         1         1       29h
replicaset.apps/kube-dns-XXXXXXXX                         1         1         1       29h
replicaset.apps/kubernetes-dashboard-XXXXXXXX             1         1         1       29h
replicaset.apps/monitoring-influxdb-grafana-v4-XXXXXXXX   1         1         1       29h

I've created a separate VM with a private Docker registry, added it to the insecure-registries array, and had no issues pushing images to it. I even had luck using those images with microk8s.

It seems as if, although I enabled the registry with microk8s.enable registry, the actual registry container was never created.

@ktsakalozos
Member

ktsakalozos commented Nov 21, 2018

This is strange. It is as if the API server did not apply the registry manifest (https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/registry.yaml). Can you share the logs of the API server from when you run microk8s.enable registry?
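
A minimal way to capture them, assuming the usual unit name for the snapped API server, would be something like:

journalctl -u snap.microk8s.daemon-apiserver -f    # in one terminal
microk8s.enable registry                           # in another terminal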

Thanks

@lethevimlet
Author

lethevimlet commented Nov 21, 2018

Logs for the apiserver after running microk8s.enable registry:

journal.log

-- Logs begin at mar 2018-11-20 20:48:57 CET, end at mié 2018-11-21 12:42:00 CET. --
nov 20 20:49:06 kubernetes systemd[1]: Started Service for snap application microk8s.daemon-apiserver.
nov 20 20:49:24 kubernetes microk8s.daemon-apiserver[1384]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
nov 20 20:49:24 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:24.608395    1384 server.go:681] external host was not specified, using XX.XX.XX.XX
nov 20 20:49:24 kubernetes microk8s.daemon-apiserver[1384]: W1120 20:49:24.608461    1384 authentication.go:383] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
nov 20 20:49:24 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:24.608561    1384 server.go:152] Version: v1.12.2
nov 20 20:49:25 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:25.704859    1384 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
nov 20 20:49:25 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:25.704911    1384 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
nov 20 20:49:25 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:25.706066    1384 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
nov 20 20:49:25 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:25.706092    1384 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
nov 20 20:49:25 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:25.941279    1384 master.go:240] Using reconciler: lease
nov 20 20:49:28 kubernetes microk8s.daemon-apiserver[1384]: W1120 20:49:28.360365    1384 genericapiserver.go:325] Skipping API batch/v2alpha1 because it has no resources.
nov 20 20:49:29 kubernetes microk8s.daemon-apiserver[1384]: W1120 20:49:29.022057    1384 genericapiserver.go:325] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
nov 20 20:49:29 kubernetes microk8s.daemon-apiserver[1384]: W1120 20:49:29.036866    1384 genericapiserver.go:325] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
nov 20 20:49:29 kubernetes microk8s.daemon-apiserver[1384]: W1120 20:49:29.077151    1384 genericapiserver.go:325] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
nov 20 20:49:30 kubernetes microk8s.daemon-apiserver[1384]: W1120 20:49:30.263110    1384 genericapiserver.go:325] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
nov 20 20:49:30 kubernetes microk8s.daemon-apiserver[1384]: [restful] 2018/11/20 20:49:30 log.go:33: [restful/swagger] listing is available at https://XX.XX.XX.XX:6443/swaggerapi
nov 20 20:49:30 kubernetes microk8s.daemon-apiserver[1384]: [restful] 2018/11/20 20:49:30 log.go:33: [restful/swagger] https://XX.XX.XX.XX:6443/swaggerui/ is mapped to folder /swagger-ui/
nov 20 20:49:32 kubernetes microk8s.daemon-apiserver[1384]: [restful] 2018/11/20 20:49:32 log.go:33: [restful/swagger] listing is available at https://XX.XX.XX.XX:6443/swaggerapi
nov 20 20:49:32 kubernetes microk8s.daemon-apiserver[1384]: [restful] 2018/11/20 20:49:32 log.go:33: [restful/swagger] https://XX.XX.XX.XX:6443/swaggerui/ is mapped to folder /swagger-ui/
nov 20 20:49:32 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:32.807223    1384 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
nov 20 20:49:32 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:32.807291    1384 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.390671    1384 deprecated_insecure_serving.go:50] Serving insecurely on [::]:8080
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.395054    1384 secure_serving.go:116] Serving securely on [::]:6443
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.396218    1384 autoregister_controller.go:136] Starting autoregister controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.396798    1384 cache.go:32] Waiting for caches to sync for autoregister controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.398486    1384 apiservice_controller.go:90] Starting APIServiceRegistrationController
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.399012    1384 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.399450    1384 available_controller.go:278] Starting AvailableConditionController
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.399839    1384 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.401587    1384 crd_finalizer.go:242] Starting CRDFinalizer
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.404412    1384 controller.go:84] Starting OpenAPI AggregationController
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.404866    1384 crdregistration_controller.go:112] Starting crd-autoregister controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.405399    1384 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.442156    1384 customresource_discovery_controller.go:199] Starting DiscoveryController
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.442211    1384 naming_controller.go:284] Starting NamingConditionController
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.442231    1384 establishing_controller.go:73] Starting EstablishingController
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.497554    1384 cache.go:39] Caches are synced for autoregister controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.499634    1384 cache.go:39] Caches are synced for APIServiceRegistrationController controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.500983    1384 cache.go:39] Caches are synced for AvailableConditionController controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.517416    1384 controller_utils.go:1034] Caches are synced for crd-autoregister controller
nov 20 20:49:39 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:39.407677    1384 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
nov 20 20:49:48 kubernetes microk8s.daemon-apiserver[1384]: E1120 20:49:48.529749    1384 rest.go:485] Address {XX.XX.XX.XX  0xc42b228c10 0xc425403c00} isn't valid (pod ip doesn't match endpoint ip, skipping:  vs XX.XX.XX.XX (kube-system/heapster-v1.5.2-7bb8ccfdf9-gxhnx))
nov 20 20:49:48 kubernetes microk8s.daemon-apiserver[1384]: E1120 20:49:48.529793    1384 rest.go:495] Failed to find a valid address, skipping subset: &{[{XX.XX.XX.XX  XXXXXXXX XXXXXXXX }] [] [{ 8082 TCP}]}
nov 20 20:49:57 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:57.547902    1384 controller.go:608] quota admission added evaluator for: { endpoints}
nov 21 12:39:49 kubernetes microk8s.daemon-apiserver[1384]: I1121 12:39:49.716090    1384 controller.go:608] quota admission added evaluator for: { serviceaccounts}
nov 21 12:41:42 kubernetes microk8s.daemon-apiserver[1384]: I1121 12:41:42.928283    1384 controller.go:608] quota admission added evaluator for: {extensions deployments}
nov 21 12:41:42 kubernetes microk8s.daemon-apiserver[1384]: I1121 12:41:42.943395    1384 controller.go:608] quota admission added evaluator for: {apps replicasets}

systemctl.log

● snap.microk8s.daemon-apiserver.service - Service for snap application microk8s.daemon-apiserver
   Loaded: loaded (/etc/systemd/system/snap.microk8s.daemon-apiserver.service; enabled; vendor preset: enabled)
   Active: active (running) since mar 2018-11-20 20:49:06 CET; 15h ago
 Main PID: 1384 (kube-apiserver)
    Tasks: 21
   Memory: 324.8M
      CPU: 35min 4.601s
   CGroup: /system.slice/snap.microk8s.daemon-apiserver.service
           └─1384 /snap/microk8s/266/kube-apiserver --insecure-bind-address=0.0.0.0 --cert-dir=/var/snap/microk8s/266 --etcd-servers=unix://etcd.socket:2379 --service-cluster-ip-range=XX.XX.XX.XX/24 --authorization-mode=AlwaysAllow --basic-auth-file=/snap/microk8s/266/basic_auth.csv --token-auth-file=/snap/microk8s/266/known_token.csv --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --service-account-key-file=/var/snap/microk8s/266/certs/serviceaccount.key --client-ca-file=/var/snap/microk8s/266/certs/ca.crt --tls-cert-file=/var/snap/microk8s/266/certs/server.crt --tls-private-key-file=/var/snap/microk8s/266/certs/server.key --requestheader-client-ca-file=/var/snap/microk8s/266/certs/ca.crt

nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.499634    1384 cache.go:39] Caches are synced for APIServiceRegistrationController controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.500983    1384 cache.go:39] Caches are synced for AvailableConditionController controller
nov 20 20:49:38 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:38.517416    1384 controller_utils.go:1034] Caches are synced for crd-autoregister controller
nov 20 20:49:39 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:39.407677    1384 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
nov 20 20:49:48 kubernetes microk8s.daemon-apiserver[1384]: E1120 20:49:48.529749    1384 rest.go:485] Address {XX.XX.XX.XX  0xc42b228c10 0xc425403c00} isn't valid (pod ip doesn't match endpoint ip, skipping:  vs XX.XX.XX.XX (kube-system/heapster-v1.5.2-7bb8ccfdf9-gxhnx))
nov 20 20:49:48 kubernetes microk8s.daemon-apiserver[1384]: E1120 20:49:48.529793    1384 rest.go:495] Failed to find a valid address, skipping subset: &{[{XX.XX.XX.XX  XXXXXXXX XXXXXXXX}] [] [{ 8082 TCP}]}
nov 20 20:49:57 kubernetes microk8s.daemon-apiserver[1384]: I1120 20:49:57.547902    1384 controller.go:608] quota admission added evaluator for: { endpoints}
nov 21 12:39:49 kubernetes microk8s.daemon-apiserver[1384]: I1121 12:39:49.716090    1384 controller.go:608] quota admission added evaluator for: { serviceaccounts}
nov 21 12:41:42 kubernetes microk8s.daemon-apiserver[1384]: I1121 12:41:42.928283    1384 controller.go:608] quota admission added evaluator for: {extensions deployments}
nov 21 12:41:42 kubernetes microk8s.daemon-apiserver[1384]: I1121 12:41:42.943395    1384 controller.go:608] quota admission added evaluator for: {apps replicasets}

@lethevimlet
Author

lethevimlet commented Nov 21, 2018

Actually, I just checked across all namespaces and the registry created by microk8s.enable registry is running:

NAMESPACE            NAME                                                  READY   STATUS    RESTARTS   AGE
container-registry   pod/registry-XXXXXX-XXXXXX                         1/1     Running   1          156m
default              pod/pod1                                              1/1     Running   3          21h
kube-system          pod/heapster-v1.5.2-7XXXXXX-XXXXXX                  4/4     Running   16         45h
kube-system          pod/hostpath-provisioner-XXXXXX-XXXXXX             1/1     Running   4          45h
kube-system          pod/kube-dns-XXXXXX-XXXXXX                         3/3     Running   12         45h
kube-system          pod/kubernetes-dashboard-XXXXXX-XXXXXX             1/1     Running   4          45h
kube-system          pod/monitoring-influxdb-grafana-v4-XXXXXX-XXXXXX   2/2     Running   8          45h

But the problem persists: I can't push any image to that registry.

@taintedkernel

taintedkernel commented Nov 30, 2018

I'm having the same issue and seeing similar evidence on my system, with log message errors (Address isn't valid, Failed to find a valid address, etc.). I also see the container-registry pod running.

Also running Ubuntu Server 16.04.5 LTS
Installed microk8s with snap install microk8s --classic --edge

I tried the "hello world" Docker container above and it resulted in the same thing: The push refers to a repository and then it sits idle.

@cneberg

cneberg commented Dec 3, 2018

I tried the "hello world" Docker container above and it resulted in the same thing: The push refers to a repository and then it sits idle.

Same hang here, also on Ubuntu Server 16.04.

@cneberg

cneberg commented Dec 3, 2018

OK, here is the reason: for me, localhost resolves to the IPv6 address first, and the command below hangs.

wget http://localhost:32000/

--2018-12-03 13:59:58--  http://localhost:32000/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:32000... connected.
HTTP request sent, awaiting response...

But if I comment out the line below in my /etc/hosts:

#::1 localhost ip6-localhost ip6-loopback

it works now.
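
For anyone who wants to double-check, getent reads /etc/hosts the same way most tools do, so after the edit it should show only the IPv4 entry:

$ getent hosts localhost
127.0.0.1       localhost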

@taintedkernel

@cneberg great find, I found the same thing on my end. But I'm not sure how that works, because ping, dig, etc. all return the IPv4 address.

I tried re-building, tagging, and pushing with 127.0.0.1 instead of localhost and it worked just fine, without modifying /etc/hosts.
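
For completeness, roughly what I mean (a sketch reusing the test image name from earlier in this thread):

microk8s.docker tag localhost:32000/test:1 127.0.0.1:32000/test:1
microk8s.docker push 127.0.0.1:32000/test:1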

@cneberg

cneberg commented Dec 29, 2018

Someone replied to this thread and I received the message from Gelinger Media through email from GitHub (I'm not sure how it was sent because it's not in the issue history), but I'm replying here in the hope it gets back to them.

It appears that in your example pushing example-php-dbconnect you tagged the image with port 3200 rather than 32000. In your later example you push my-busybox to port 32000 and it appears to work.


From: Gelinger Media

We have the same problem here with custom-built images

microk8s.docker push localhost:3200/heptio/example-php-dbconnect
The push refers to repository [localhost:3200/heptio/example-php-dbconnect]
Get http://localhost:3200/v2/: dial tcp 127.0.0.1:3200: connect: connection refused

In comparison, the sample pull-tag-push from the docs worked:

root@kubernetes-2gb-nbg1-1:/kuber_files/kuber_files/example-lamp/php# microk8s.docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
Digest: sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796
Status: Image is up to date for busybox:latest
root@kubernetes-2gb-nbg1-1:~/kuber_files/kuber_files/example-lamp/php# microk8s.docker tag busybox localhost:32000/my-busybox
root@kubernetes-2gb-nbg1-1:~/kuber_files/kuber_files/example-lamp/php# microk8s.docker push localhost:32000/my-busybox
The push refers to repository [localhost:32000/my-busybox]
23bc2b70b201: Layer already exists
latest: digest: sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 size: 527

Any ideas?

@maherkamal

Any update on this issue? When is it expected to be resolved?

@ktsakalozos
Member

I am going to close this issue since I was not able to reproduce the initial case.

I am also not sure what can be done in the case of IPv6. If anyone has a suggestion, please step forward so we can discuss how to proceed.

@maherkamal, if you are running into trouble pushing to the registry please open a new issue describing your setup and a way to reproduce.

Thank you all.

@maherkamal

I disabled IPv6 on my Linux machine and everything is working fine. Thank you @ktsakalozos.
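
For reference, this is roughly what I mean by disabling IPv6, via sysctl (a sketch only; whether it persists across reboots depends on your setup):

sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1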

@koxu1996

koxu1996 commented Jul 8, 2019

I don't like the workaround of disabling the IPv6 loopback, and it seems there is a better way:

docker build . -t 127.0.0.1:32000/myimage:1.2.3
docker push 127.0.0.1:32000/myimage:1.2.3

Notice that I used 127.0.0.1 instead of localhost.

@nicks
Contributor

nicks commented Oct 17, 2019

The 127.0.0.1:32000 work-around didn't work for me, because 127.0.0.1 isn't listed as an insecure registry :(

@suharevA

Rebuilt with the 127.0.0.1 tag instead of localhost and everything worked fine.

@justinjohn83

justinjohn83 commented Jan 20, 2020

For me, I was able to push to the private registry on 127.0.0.1 in Vagrant, but when the pod was deployed it failed to pull with http: server gave HTTP response to HTTPS client. This happened even after following the instructions in https://microk8s.io/docs/registry-private, updating the /var/snap/microk8s/current/args/containerd-template.toml configuration, and reloading microk8s. I also tried just changing the pull policy to Never and using the locally built image, but that does not seem to be supported. The only combination that worked for me was to restore containerd-template.toml to use localhost:32000, add localhost:32000 to /etc/docker/daemon.json, and then comment out the localhost IPv6 line in the /etc/hosts file on the Vagrant VM as suggested by @cneberg.
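
The /etc/docker/daemon.json entry I mean is the standard insecure-registries setting, roughly like this (a sketch assuming no other options are already in the file), followed by a restart of the Docker daemon:

{
  "insecure-registries": ["localhost:32000"]
}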

@candlerb

candlerb commented Feb 24, 2020

I see the same problem. Reproduced by snap install microk8s (1176) and snap install docker (423).

The initial /etc/hosts inside the Vagrant VM looks like this:

127.0.0.1       localhost
127.0.1.1       vagrant.vm      vagrant

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

The local registry hangs when you try to communicate with it over IPv6, but not over IPv4:

$ curl localhost:32000/v2/
<< HANGS >>

$ curl -v '[::1]:32000/v2/'
*   Trying ::1...
* TCP_NODELAY set
* Connected to ::1 (::1) port 32000 (#0)
> GET /v2/ HTTP/1.1
> Host: [::1]:32000
> User-Agent: curl/7.58.0
> Accept: */*
>
<< HANGS >>

$ curl 127.0.0.1:32000/v2/
{}

I can remove localhost from the ::1 line in /etc/hosts, and then curl localhost works:

$ curl localhost:32000/v2/
{}

However, even after sudo snap restart docker, docker is still trying to connect to ::1 when I do a docker push localhost:32000/.... This is shown by tcpdump:

# tcpdump -i lo -nn tcp port 32000 or icmp
...
14:06:26.534307 IP6 ::1.40744 > ::1.32000: Flags [S], seq 3165876644, win 43690, options [mss 65476,sackOK,TS val 2360741131 ecr 0,nop,wscale 7], length 0
14:06:26.534321 IP6 ::1.32000 > ::1.40744: Flags [S.], seq 2136322806, ack 3165876645, win 43690, options [mss 65476,sackOK,TS val 2360741131 ecr 2360741131,nop,wscale 7], length 0
14:06:26.534332 IP6 ::1.40744 > ::1.32000: Flags [.], ack 1, win 342, options [nop,nop,TS val 2360741131 ecr 2360741131], length 0
<< HANGS - until docker times out >>

There seem to be two issues here.

The first (and most important) is that the microk8s bundled registry accepts but blackholes requests on ::1. The second is that the docker snap doesn't honour /etc/hosts (I am guessing it is constrained to whatever is in the snap environment).

To work around this with docker, I can tag and push images to 127.0.0.1:32000 instead of localhost:32000. However, I then get problems with microk8s pulling from this registry:

    spec:
      containers:
      - name: xxx
        image: 127.0.0.1:32000/xxx:DEV

It gives errors saying that it got an HTTP response when trying to talk to an HTTPS endpoint.

  Normal   Pulling    9s         kubelet, vagrant   Pulling image "127.0.0.1:32000/xxx:DEV"
  Warning  Failed     9s         kubelet, vagrant   Failed to pull image "127.0.0.1:32000/xxx:DEV": rpc error: code = Unknown desc = failed to resolve image "127.0.0.1:32000/xxx:DEV": no available registry endpoint: failed to do request: Head https://127.0.0.1:32000/v2/xxx/manifests/DEV: http: server gave HTTP response to HTTPS client
  Warning  Failed     9s         kubelet, vagrant   Error: ErrImagePull
  Normal   BackOff    9s         kubelet, vagrant   Back-off pulling image "127.0.0.1:32000/xxx:DEV"
  Warning  Failed     9s         kubelet, vagrant   Error: ImagePullBackOff

To fix this, edit /var/snap/microk8s/current/args/containerd-template.toml:

    [plugins.cri.registry]
      [plugins.cri.registry.mirrors]
        [plugins.cri.registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
        [plugins.cri.registry.mirrors."localhost:32000"]
          endpoint = ["http://127.0.0.1:32000"]
        [plugins.cri.registry.mirrors."127.0.0.1:32000"]
          endpoint = ["http://127.0.0.1:32000"]

(and microk8s.stop; microk8s.start). And finally, microk8s can pull from the local registry.

It is not a good user experience to have to do all this debugging to get to this point :-(

Could I request that this ticket be re-opened? It is very simple to reproduce that the local registry is listening on ::1, accepts connections and hangs. If that problem were fixed, none of the workarounds would be required.

@candlerb

Extra data point:

$ kubectl -it exec deployment/registry /bin/bash -n container-registry
# apt-get update
...
# apt-get install -y curl net-tools
...
# netstat -natp | grep registry
tcp6       0      0 :::5000                 :::*                    LISTEN      1/registry
# curl localhost:5000/v2/
{}
# curl '[::1]:5000/v2/'
{}

So the problem is not with the registry container itself: it's with the k8s infrastructure which is forwarding port 32000 to container port 5000.

$ kubectl get all -n container-registry
NAME                           READY   STATUS    RESTARTS   AGE
pod/registry-d7d7c8bc9-7zpdh   1/1     Running   3          137m

NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/registry   NodePort   10.152.183.66   <none>        5000:32000/TCP   137m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/registry   1/1     1            1           137m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/registry-d7d7c8bc9   1         1         1       137m

Relates to:

The latter claims to be fixed by recent commit kubernetes/kubernetes#87699, so should eventually get picked up in a future k8s release.

From reading the above: it seems that k8s binds a socket purely as a way of reserving the port number - the actual (IPv4) traffic is redirected through iptables and never touches the socket. Unfortunately, there are no IPv6 rules and so IPv6 traffic does still hit the socket :-( The commit changes it to listen on IPv4 only (for a v4 cluster anyway).

So another workaround might be to add a manual ip6tables rule rejecting connections to [::1]:32000 - clients should then fall back to IPv4.
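
Something along these lines, untested and only a sketch of the kind of rule I mean (REJECT with a TCP reset should make clients fall back to the IPv4 address):

sudo ip6tables -A INPUT -i lo -p tcp --dport 32000 -j REJECT --reject-with tcp-reset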

@CrossBound

Commenting out the IPv6 entries in my /etc/hosts file resolved the problem for me.

@Envek

Envek commented Mar 11, 2020

In my case the registry didn't work after microk8s.enable registry either.

Commenting out the IPv6 entries in my /etc/hosts file resolved it. Thanks, folks, for pointing it out.

@alexgottscha

This seriously needs some built-in documentation - here we are several years later and people (like me) are only getting the solution from a closed GitHub issue?

@nicks
Contributor

nicks commented Jul 31, 2020

@invertigo "several years"??? O.o

@ktsakalozos if you're willing to re-open this issue, I have a suggestion on how to move forward. What we did in Tilt is try to detect whether someone is using a microk8s registry and, if they are, double-check that localhost resolves to an IPv4 address. If it doesn't resolve, we error out and don't let them push to the registry.

Here's what it looks like: https://github.com/tilt-dev/tilt/pull/2370/files

I think it would make sense to put a similar check here: https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/enable.registry.sh#L6. Happy to send a PR if that makes sense.
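
Roughly the shape of the check I have in mind (hypothetical shell, not the actual Tilt code; getent ahostsv4 returns only IPv4 results):

# warn if localhost does not resolve to an IPv4 loopback address
if ! getent ahostsv4 localhost | grep -q '^127\.'; then
  echo "Warning: localhost does not resolve to an IPv4 address;" >&2
  echo "pushes to localhost:32000 may hang (see issue #196)." >&2
fi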

@ktsakalozos
Member

@nicks sounds like a plan. Do you think we should fail the installation of the registry or throw a warning on what to do?

@stale

stale bot commented Jun 26, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the inactive label Jun 26, 2021
@stale stale bot closed this as completed Jul 27, 2021
@jdkern11

jdkern11 commented Jul 1, 2022

If you are still having issues despite commenting out the IPv6 entry, double-check the /etc/docker/daemon.json file. I copy-pasted the required lines in using vim, and I think a stray character in there was breaking things, because it worked once I deleted the excess whitespace.
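
A quick way to check for stray characters is to run the file through a JSON parser, for example:

python3 -m json.tool /etc/docker/daemon.json

If it prints the file back, the JSON is valid; otherwise it points at the offending line.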

@SYBIOTE

SYBIOTE commented Feb 14, 2023

Still having this issue in WSL2 on Windows in 2023. I commented out the IPv6 entry and set my daemon.json file from Docker Desktop, so I don't think that is the cause. I still can't push images to the local microk8s registry.

@thebirdgr

I had this issue. It turns out I had the minikube registry running as well, which was interfering with the microk8s registry on localhost even though they were on different ports (5000 vs 32000). Once I stopped my minikube cluster I was able to push to the microk8s registry.
I used these commands to see what was happening:
microk8s kubectl describe <resource-type> <resource-name>
microk8s kubectl get all -A

The following page helped me understand what to look for.

microk8s kubectl get all --namespace=container-registry
---------------------------------------------------------------
NAME                            READY   STATUS    RESTARTS   AGE
pod/registry-77c7575667-j5m9d   0/1     Pending   0          2d1h

@hakonosterbo

hakonosterbo commented Mar 10, 2023

Still having this issue in WSL2 on Windows in 2023. I commented out the IPv6 entry and set my daemon.json file from Docker Desktop, so I don't think that is the cause. I still can't push images to the local microk8s registry.

Did you find a solution on WSL? I am in the exact same situation.

As a workaround for now I created a registry in docker:

docker run -d -p 5000:5000 --restart=always --name registry registry:2

and pushed to that instead.

docker image tag localhost:32000/myimage:test localhost:5000/myimage:test
docker push localhost:5000/myimage:test

and updated the image path in my deployment.
