
Pull from KIND local container image registry: server gave HTTP response to HTTPS client #2604

Closed
keypointt opened this issue Jan 26, 2022 · 19 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments


keypointt commented Jan 26, 2022

Hi,

Following guide at https://kind.sigs.k8s.io/docs/user/local-registry/, I created a local registry, and set reg_port='5050'.

All works fine on my local laptop for docker pull/push, but when I deploy my app into the Kubernetes cluster and run it in a pod, I get this error:

#3 [internal] load metadata for localhost:5050/app:0.1
#3 ERROR: failed to do request: Head https://localhost:5050/v2/app/manifests/0.1: dial tcp [::1]:5050: connect: connection refused

#8 [internal] load build context
#8 DONE 0.0s

Then I googled and enabled insecure registry connections by adding insecure_skip_verify = true, following some Stack Overflow posts:

# create a cluster with the local registry enabled in containerd
cat <<EOF | kind create cluster --name myapp --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:5000"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."localhost:${reg_port}".tls]
      insecure_skip_verify = true
nodes:
- role: control-plane
- role: worker
EOF

But when I exec into the pod and run img pull against the local registry, I still get the http/https error (img pull from DockerHub works):

bash-5.0$ img pull kind-registry:5000/app:0.1
Pulling kind-registry:5000/app:0.1
Error: failed to do request: Head https://kind-registry:5000/v2/app/manifests/0.1: http: server gave HTTP response to HTTPS client

I'm thinking of setting up HTTPS for the local registry I created, but I'm curious: is there a better way to address this issue of pulling/pushing from a pod against the local registry?

Thank you very much!

@keypointt keypointt added the kind/support Categorizes issue or PR as a support question. label Jan 26, 2022

keypointt commented Jan 26, 2022

I found this discussion in the Slack channel: https://kubernetes.slack.com/archives/CEKK1KTN2/p1642486484105500 , and experimented with the ideas proposed there.

The ideas from that discussion did not work for me.

I also tried:

1. Commenting out the tls part:

 #[plugins.cri.registry.configs."my.registry.domain".tls]
 # insecure_skip_verify = true

2. Adding an auth section to the config:

containerdConfigPatches:
- |-
  [plugins.cri.registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:5000"]
  [plugins."io.containerd.grpc.v1.cri".registry.auths]
  [plugins.cri.registry.auths."localhost:${reg_port}"]
    username = "admin"
    password = "password"

But neither of the above worked...


BenTheElder commented Jan 28, 2022

For the registry configured in https://kind.sigs.k8s.io/docs/user/local-registry/, the appropriate containerd config is included in the script.

But when I exec to the pod and run img pull against local registry, still error on http/https. (but img pull from DockerHub is working)

That's not going to work, because localhost is local to each container. That's why we configure containerd to treat localhost:${reg_port} as ${reg_name}:5000: so you can use a consistent hostname when pushing / deploying in cluster. The actual networking is not reaching the same localhost from every location (that will not work: localhost on your host is not localhost on your cluster node, which is not localhost in your pod; they're all distinct).

You can have your in-cluster app pull at ${reg_name}:5000, or similarly configure a mirror mapping (not sure how img handles this).
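As a sketch of such a mirror mapping (assuming the guide's default reg_name of kind-registry; substitute your own values), an extra entry alongside the existing localhost one in containerdConfigPatches could look like:

```toml
# hypothetical extra mirror: resolve the registry's own container name over plain HTTP
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."kind-registry:5000"]
  endpoint = ["http://kind-registry:5000"]
```

With that in place, containerd rewrites pulls of kind-registry:5000 to a plain-HTTP endpoint instead of defaulting to HTTPS.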

@BenTheElder

this part of the script:

containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:5000"]

is where we tell containerd "when you want to pull an image from localhost:${reg_port}, actually use http://${reg_name}:5000"

@keypointt

Thank you Benjamin!
I'll try it out.


keypointt commented Feb 7, 2022

I just gave it a shot, telling my in-cluster app to pull at ${reg_name}:5050, and it failed with failed to do request: Head "https://kind-registry:5050/v2/.
It seems img is by default connecting to kind-registry:5050 over HTTPS when making the pull...


Normal   Pulling    5m56s (x4 over 7m25s)   kubelet            Pulling image "kind-registry:5050/img:mytag"
Warning  Failed     5m56s (x4 over 7m25s)   kubelet            Failed to pull image "kind-registry:5050/img:mytag": rpc error: code = Unknown desc = failed to pull and unpack image "kind-registry:5050/img:mytag": failed to resolve reference "kind-registry:5050/img:mytag": failed to do request: Head "https://kind-registry:5050/v2/img/manifests/mytag": dial tcp [fc00:f853:ccd:e793::4]:5050: connect: connection refused
Warning  Failed     5m56s (x4 over 7m25s)   kubelet            Error: ErrImagePull
Warning  Failed     5m44s (x6 over 7m24s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2m21s (x21 over 7m24s)  kubelet            Back-off pulling image "kind-registry:5050/img:mytag"

my config

reg_name='kind-registry'
reg_port='5050'
running="$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)"
if [ "${running}" != 'true' ]; then
  docker run \
    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --name "${reg_name}" \
    registry:2
fi

@BenTheElder

I just gave it a shot telling my in-cluster app pull at ${registry_name}:5050, and failed on failed to do request: Head "https://kind-registry:5050/v2/.

Inside the docker network / containers the port is always 5000; the reg_port in the script is only the port being forwarded on the host.

@BenTheElder

You probably need to:

  1. When pushing from inside the cluster, use ${reg_name}:5000.
  2. Set the --insecure-registry flag for img (https://github.com/genuinetools/img#push-an-image). docker and other tools tend to default to insecure for localhost but not for any other address, which is why we set up the mirror config and port forward; neither of those applies to an application inside a pod in the cluster.

@keypointt

Thank you Benjamin for this instant reply, will try it out now.


keypointt commented Feb 7, 2022

Hi Benjamin, I updated it to use ${reg_name}:5000, but still the same error.

Normal   Pulling    23s (x2 over 35s)  kubelet            Pulling image "kind-registry:5000/img:mytag"
Warning  Failed     23s (x2 over 35s)  kubelet            Failed to pull image "kind-registry:5000/img:mytag": rpc error: code = Unknown desc = failed to pull and unpack image "kind-registry:5000/img:mytag": failed to resolve reference "kind-registry:5000/img:mytag": failed to do request: Head "https://kind-registry:5000/v2/img/manifests/mytag": http: server gave HTTP response to HTTPS client

I also updated to img push --insecure-registry for the in-cluster app, but the error above happens during the pod creation step, before the img push --insecure-registry step is even reached.

Update: to add more context, it's an Airflow scheduler pod creating another Airflow executor pod, and the error is from creating the executor pod, which maybe makes it more complicated.

A quick search suggests I may be missing some configuration to specify an insecure registry, but I didn't find much for KIND's containerd. It seems I need some config in [plugins."io.containerd.grpc.v1.cri".registry] to specify [plugins."io.containerd.grpc.v1.cri".registry.mirrors."${reg_name}:5000"] endpoint = ["http://${reg_name}:5000"], referring to https://mrzik.medium.com/how-to-configure-private-registry-for-kubernetes-cluster-running-with-containerd-cf74697fa382

It seems ${reg_name}:5000 is currently being mapped to the https endpoint = ["https://${reg_name}:5000"] while it really should be the http endpoint = ["http://${reg_name}:5000"].

then my config will be like

  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:5000"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."${reg_name}:5000"]
    endpoint = ["http://${reg_name}:5000"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."localhost:${reg_port}".tls]
      insecure_skip_verify = true

but not sure, maybe you have some quick idea on top of your mind how to configure it?


zknill commented Feb 16, 2022

I had this, or a very similar problem. @keypointt

For me, when I first started the kind docker container the containerdConfigPatches were applied to the file /etc/containerd/config.toml but were not reflected in the config that crictl info reported.

docker exec -it kind-control-plane cat /etc/containerd/config.toml

version = 2

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    # trimmed
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
          endpoint = ["http://kind-registry:5000"]
docker exec -it kind-control-plane crictl info

// trimmed
"registry": {
  "configPath": "",
  "mirrors": null,  // <-- this should have been set based on config above in toml
  "configs": null,
  "auths": null,
  "headers": null
}

I pushed the image gcr.io/google-samples/hello-app:1.0 to my local registry at localhost:5000 for testing.

When I ran;

docker exec -it kind-control-plane bash -c "crictl -r unix:///var/run/containerd/containerd.sock pull localhost:5000/hello-app:1.0"

FATA[0000] pulling image: rpc error: code = Unknown desc = failed to pull and unpack image "localhost:5000/hello-app:1.0": failed to resolve reference "localhost:5000/hello-app:1.0": failed to do request: Head "http://localhost:5000/v2/hello-app/manifests/1.0": dial tcp [::1]:5000: connect: connection refused 

✅ To fix the problem, I had to restart the docker container: docker restart kind-control-plane

Now;

docker exec -it kind-control-plane crictl info

//trimmed
"registry": {
  "configPath": "",
  "mirrors": {   // <-- mirrors is now set
    "localhost:5000": {
      "endpoint": [
        "http://kind-registry:5000"
      ]
    }
  },
  "configs": null,
  "auths": null,
  "headers": null
},

And;

docker exec -it kind-control-plane bash -c "crictl -r unix:///var/run/containerd/containerd.sock pull localhost:5000/hello-app:1.0"

Image is up to date for sha256:33daf70e76896d4205853d9d941db1afd6edd19036d544df2de770bb7b7cbb7c

Containers were now pulled from my local registry.

anthony-arnold commented Feb 17, 2022

@zknill

to fix the problem, I had to restart the docker container: docker restart kind-control-plane

I can confirm that restarting the control plane container worked for me too. After restart, crictl info reported the correct registry settings.

After doing so, Helm is all funky. I installed a helm chart successfully (the nginx sample from the helm create template), but kubectl get pods reports no pods. Previously it had at least shown the pods failing to pull the requested image; now they're just not there.

@anthony-arnold
According to #2262 this is fixed in #2382.


BenTheElder commented Feb 17, 2022

For me, when I first started the kind docker container the containerdConfigPatches were applied to the file /etc/containerd/config.toml but were not reflected in the config that crictl info reported.

aha, that's #2262

this is fixed at HEAD but a pile of things has led to us only recently reaching a ~releasable state. if you install kind from main it should be fixed. we should have a release soon.

EDIT: jinx :^)

@anthony-arnold
An interim fix in case you don't want to install from main is to do docker exec -it kind-control-plane systemctl restart containerd. Although I found that I actually needed to restart the worker node(s) because they're the ones doing the pulling.

@BenTheElder

you should be able to do this with a shell loop: for no in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do docker exec $no systemctl restart containerd; done

(you might want for no in $(kubectl --context=kind-kind get nodes -o jsonpath='{.items[*].metadata.name}'); do docker exec $no systemctl restart containerd; done, replacing kind-kind with kind- + the --name= value from kind create cluster --name=...)


keypointt commented Feb 19, 2022

Hi folks, thank you so much! Restarting the docker nodes seems to work for me.

Meanwhile, after the restart, I got a similar issue, though different from Zak's.

In my case it's requesting https, while in Zak's case it was http.

Extracted from the full log below, my case fails with failed to do request: Head https://localhost:5050


> [ 1/10] FROM localhost:5050/myapp:0.7.4:
------
------
> [internal] load metadata for localhost:5050/myapp:0.7.4:
------
#4 ERROR: failed to do request: Head https://localhost:5050/v2/myapp/manifests/0.7.4: dial tcp [::1]:5050: connect: connection refused
#4 resolve localhost:5050/myapp:0.7.4 done
#4 [ 1/10] FROM localhost:5050/myapp:0.7.4
#8 DONE 0.0s
#8 [internal] load build context
#3 ERROR: failed to do request: Head https://localhost:5050/v2/myapp/manifests/0.7.4: dial tcp [::1]:5050: connect: connection refused
#3 [internal] load metadata for localhost:5050/myapp:0.7.4
#2 DONE 0.0s
#2 transferring context: 369B done
#2 [internal] load .dockerignore
#1 DONE 0.0s
#1 transferring dockerfile: 3.33kB done
#1 [internal] load build definition from Dockerfile_dynamic
Error: getting image "localhost:5050/mymodel:tag0.1" failed: image "localhost:5050/mymodel:tag0.1": not found
Pushing localhost:5050/mymodel:tag0.1...
Docker image successfully updated
Task exited without error
=========

I believe the issue I have is because I'm using img (https://github.com/genuinetools/img#push-an-image), which defaults to https.

I'll try to:

  1. make another attempt using img push $MODEL_IMAGE_URI --insecure-registry, as Benjamin suggested previously
  2. or get rid of img in my app, if that's possible

Will report back and then close issue :)

@keypointt

Marking this issue as closed, since the issue is no longer related to KIND itself.

Thank you all for your help! 👍

hockic commented Nov 20, 2023

The following configuration works for me:

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
        endpoint = ["http://kind-registry:5000"]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."kind-registry:5000"]
        endpoint = ["http://kind-registry:5000"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs."kind-registry:5000".tls]
        insecure_skip_verify = true
        cert_file = ""
        key_file = ""
        ca_file = ""

@anselmobattisti

In my case, nothing above worked.

What worked for me was defining the digest in the image parameter:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: imgblack
spec:
  template:
    spec:
      containers:
        - image: localhost:5001/textproc/textproc@sha256:984ffb065dd9616e883819488b0dd5bfbe47e7f944bcd1fb2ac908f0fe1ed98d
          env:
            - name: FUNC_TYPE
              value: "black"
