Document Private Registry Authentication #499

Closed
pixie79 opened this Issue Jul 17, 2014 · 44 comments

Comments

@pixie79

pixie79 commented Jul 17, 2014

How do we set up Kubernetes to work with private registries?

Currently we can put a remote registry into the image tag for the pod, but there appears to be nowhere to enter the login details if those images are private.

Ideally, I guess, we should be able to set these in the global config script, as the login details for a single registry would be the same for each pod, rather than repeating them in each pod's spec.

@proppy

Contributor

proppy commented Jul 17, 2014

If you're using Google Cloud Platform, one possibility is to use the google/docker-registry image to push your images to Google Cloud Storage.

You should then be able to add google/docker-registry to one of your pods and pull from localhost:5000/myimagename.
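
A rough sketch of that arrangement, with the registry run as its own pod and exposed on the node via a host port so the Docker daemon can reach it as localhost:5000 (the GCS_BUCKET variable is an assumption about how google/docker-registry is configured; the bucket name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: docker-registry
spec:
  containers:
  - name: registry
    image: google/docker-registry
    env:
    - name: GCS_BUCKET          # assumed config knob pointing the registry at your bucket
      value: my-images-bucket   # placeholder
    ports:
    - containerPort: 5000
      hostPort: 5000            # exposes the registry as localhost:5000 on the node

With that running on a node, another pod's container can use image: localhost:5000/myimagename and the pull is served out of Google Cloud Storage.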

@brendandburns

Contributor

brendandburns commented Jul 17, 2014

There's an example of a container manifest which uses a private registry here:

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/build/master-manifest.yaml
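
The relevant piece of such a manifest is simply an image reference that names the registry host; a trimmed, illustrative fragment:

containers:
- name: myapp
  image: my-registry.example.com:5000/myapp:v1   # registry host baked into the image name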

@brendandburns

Contributor

brendandburns commented Jul 17, 2014

In case it wasn't clear: if you use the Google Cloud Storage private registry, the credentials are supplied via the service account that is available from your GCE VM.

--brendan

@smarterclayton

Contributor

smarterclayton commented Jul 17, 2014

In general, we'd want to support a wide range of authentication mechanisms for private registries. OAuth is a good first step, but in multi-tenant setups you can't rely on host trust relationships. Also, folks should be able to use API keys from private repos on the Docker Hub, and eventually even passwords.

  1. Infrastructure trust with one or more remote registries (the apiserver communicates the pod "owner's" identity to the kubelet; the host is set up to pass that identity through Docker to the remote repo)
  2. Delegated trust between individual users and registries (user A creates a token enabling access to image B in repo C; that token can be passed down to the kubelet via the manifest and then on to Docker)
  3. Direct credentials for connecting to registries (user/passwords, client certs)

There's a lot of complexity here in ops shops. Kerberos and SSL client certs are the most common complex solutions, but an OAuth trust relationship, properly configured, would be better than most of the others.

@bcwaldon

Contributor

bcwaldon commented Jul 18, 2014

It may be helpful in the short term to fail softly in the event that a pull operation fails due to an auth issue. I can easily run docker login registry.example.com and pull the images I need manually, and k8s should be able to run them just fine without a successful pull.
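
That workaround looks roughly like this on each node (registry and image names are hypothetical):

docker login registry.example.com                    # prompts for credentials
docker pull registry.example.com/myteam/myimage:v1

With the image already in the node's local Docker cache, the kubelet can start the container without a fresh pull, subject to the image pull policy.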

@thockin

Member

thockin commented Jul 18, 2014

This soft failure mode devolves into #504

@bgrant0607

Member

bgrant0607 commented Feb 28, 2015

@erictune @derekwaynecarr @pmorie What else needs to be done on this?

@smarterclayton

Contributor

smarterclayton commented Feb 28, 2015

As per our discussion at the face-to-face, one option would be to deliver a secret type that would convey authorization to pull the image. The kubelet would need to know something about that secret type, and have a way of defending against malicious use of the secret.

@pmorie

Member

pmorie commented Feb 28, 2015

Do we want to do pulls in a user container as a long-term goal?

@smarterclayton

Contributor

smarterclayton commented Feb 28, 2015

I don't know. It seems to me that the goal of the platform is to abstract image pulling - it's not something a user is concerned with. It should be fast and ideally delivered by an efficient network abstraction. Cgroup constraints on pulls in some cases - sure. Special pullers - maybe.

Tim mentioned something last week that has also come up from the network filesystems folks - having the images already mounted on disk so that you don't even need to download anything. In that model we would still need to check some permissions, and the user container wouldn't play into it much.

@bgrant0607

Member

bgrant0607 commented Feb 28, 2015

+1 to remote mounting of images. That's the only way we're going to get to ~instantaneous container start.

Pulls (and builds) that remain will need to be constrained, however.

@pmorie

Member

pmorie commented Feb 28, 2015

+1 also to remote mounting; we'll need to distribute secrets for whatever ultimately does the pull for private registries.

@smarterclayton

Contributor

smarterclayton commented Feb 28, 2015

A bunch of folks are about to start working on this on our end - I don't know who will own the proposal but expect one soon.

@hobti01

hobti01 commented Apr 15, 2015

@smarterclayton Did a proposal come from this issue?

@smarterclayton

Contributor

smarterclayton commented Apr 15, 2015

@liggitt is working on service accounts right now - his plan was to follow that up with being able to pull images with the secrets associated with the service account.

@vially

vially commented Jun 8, 2015

Any update on this?

@liggitt

Member

liggitt commented Jun 8, 2015

  1. Pods now have an imagePullSecrets field that lists the secrets to use for pulling the container images.
  2. That field must reference secrets of type "kubernetes.io/dockercfg", with a ".dockercfg" key containing dockercfg file credentials.
  3. If a pod references a service account, that service account's imagePullSecrets are automatically added to the pod's imagePullSecrets field.
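
Put together, a minimal sketch of the two objects (registry host, secret name, and image are placeholders; the .dockercfg value is the base64 of your dockercfg JSON, on one line):

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
type: kubernetes.io/dockercfg
data:
  .dockercfg: <base64-encoded dockercfg JSON>
---
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: registry.example.com/myteam/app:v1
  imagePullSecrets:
  - name: myregistrykey
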
@liggitt

Member

liggitt commented Jun 8, 2015

@deads2k any pointers to docs or better explanations?

@bgrant0607

Member

bgrant0607 commented Jun 23, 2015

@brendandburns brendandburns modified the milestones: v1.0, v1.0-candidate Jun 24, 2015

@bgrant0607 bgrant0607 changed the title from Private Registry Authentication to Document Private Registry Authentication Jun 24, 2015

@erictune

Member

erictune commented Jun 25, 2015

@bgrant0607 what else do you want to see in that documentation?

@bgrant0607

Member

bgrant0607 commented Jun 26, 2015

On second glance, the coverage seems fine, actually.

It wasn't clear to me that the bullet list was a table of contents for the succeeding sections. Also, I'd describe imagePullSecrets before either copying .dockercfg to nodes or pre-pulling (that seems like a niche case). And it's not "ImagePullKeys" -- it's "imagePullSecrets".

If we add a convenience command for bundling secrets, we should document that there, also. @liggitt

@bgrant0607

Member

bgrant0607 commented Jul 6, 2015

I think this is done.

@bgrant0607 bgrant0607 closed this Jul 6, 2015

@mattma

mattma commented Aug 11, 2015

Following the documentation, I have successfully added a secret:

➜ kubectl get secrets
NAME            TYPE                      DATA
myregistrykey   kubernetes.io/dockercfg   1

My question: all the examples of using imagePullSecrets are for a pod, so how can I use it in an RC? E.g., this is my xxx-rc.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: $NAME
    namespace: default
  name: $NAME
spec:
  replicas: 2
  selector:
    name: $NAME
    version: $NAME
  template:
    metadata:
      labels:
        name: $NAME
        version: $NAME
    spec:
      containers:
      - image: $PRIVATE_REPO:tag
        name: $NAME
        ports:
        - containerPort: 8000
      imagePullSecrets:
         - name: myregistrykey

@smarterclayton

Contributor

smarterclayton commented Aug 11, 2015

It works in the RC the same way - just make sure it's in the right place in
the pod template.
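
For reference, in an RC the field belongs under the pod template's spec, as a sibling of containers (a trimmed sketch; names are placeholders):

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    name: myapp
  template:
    metadata:
      labels:
        name: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myteam/app:v1
      imagePullSecrets:        # pod-level field, not container-level
      - name: myregistrykey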

@mattma

mattma commented Aug 12, 2015

@smarterclayton With my setup above, it did not work.

➜ kubectl get po
NAME             READY     STATUS                           RESTARTS   AGE
kube-dns-5ny1k   3/3       Running                          0          1h
xx-pdl0j         0/1       Error: image xxx:xxx not found   0          37m
xx-zj2rf         0/1       Error: image xxx:xxx not found   0          37m

Could you tell me what the right place in the RC is?

If I go onto the machine manually, I can use docker pull to pull from the Docker Hub private repo.

➜ kubectl describe secrets myregistrykey
Name:       myregistrykey
Namespace:  default
Labels:     <none>
Annotations:    <none>

Type:   kubernetes.io/dockercfg

Data
====
.dockercfg: 128 bytes

@smarterclayton

Contributor

smarterclayton commented Aug 12, 2015

It's possible it also needs to be added to the service account, @deads2k?

@mattma

mattma commented Aug 12, 2015

The docs are not clear about what to do to fix this use case.

On the other hand, the docs explain very clearly how to address the "Use a hosted private Docker registry" case. Is this a bug?

@deads2k

Contributor

deads2k commented Aug 12, 2015

It's possible it also needs to be added to the service account, @deads2k?

ImagePullSecrets in a podspec are not gated by the list of secrets on the service account. What you specify is what gets used. I don't think we have good logging around exactly which secret is attempted from the keyring, but that might be the next step.

@erictune

Member

erictune commented Aug 13, 2015

@mattma I think it is possible that the pod template and RC aren't being created the way you think they are. Would you please follow the steps in https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/application-troubleshooting.md#my-pod-is-running-but-not-doing-what-i-told-it-to-do and see if that fixes it? If that still doesn't help, please post the exact RC yaml or json that you are trying to create (anonymize if necessary).

@mattma

mattma commented Aug 13, 2015

@erictune Thank you for the tips. I will give it a try.

@mattma

mattma commented Aug 13, 2015

Containers:
  ts:
    Image:      xxx/xxx
    State:      Waiting
      Reason:       Image: xxx/xxx is not ready on the node
    Ready:      False
    Restart Count:  0
Conditions:
  Type      Status
  Ready     False
Events:
  FirstSeen             LastSeen            Count   From            SubobjectPath   Reason      Message
  Thu, 13 Aug 2015 11:16:05 -0700   Thu, 13 Aug 2015 11:16:05 -0700 1   {scheduler }                scheduled   Successfully assigned xxx-xxx to 172.17.8.101
  Thu, 13 Aug 2015 11:16:05 -0700   Thu, 13 Aug 2015 11:17:16 -0700 8   {kubelet 172.17.8.101}          failedSync  Error syncing pod, skipping: secrets "myregistrykey" not found

My RC file is posted above in its full version. I am sure it is related to the secret. How do I add a service account?

@erictune

Member

erictune commented Aug 14, 2015

@mattma Adding a service account won't fix the problem, because you already have the imagePullSecrets field on your pod.

Two things might be going on:

  1. You might be using the new dockercfg format, which we don't support yet. See #12626 for details. (@deads2k does #12626 affect imagePullSecrets, in terms of not supporting the new dockercfg format?)
  2. You might have bad data in your secret. Do this:

kubectl get secrets myregistrykey -o yaml | grep dockercfg: | cut -f 2 -d : | base64 -D > actual.dockercfg

and make sure that file matches what you are expecting.

@deads2k

Contributor

deads2k commented Aug 14, 2015

secrets "myregistrykey" not found means that the call to /api/v1/namespaces/<pod-namespace>/secrets/myregistrykey, returned a 404. Since you explicitly set the secret, the kubelet is currently coded to not even attempt the image pull. #12736 relaxes the behavior, but without that image pull secret, you probably won't progress.

I don't think #12626 applies in this case since the secret wasn't found at all. We don't attempt to parse until the keyring is created.

@deads2k

Contributor

deads2k commented Aug 14, 2015

For reference, I would expect an unmarshalling failure due to the new dockercfg format to surface here: https://github.com/kubernetes/kubernetes/blob/master/pkg/credentialprovider/keyring.go#L282, but the not found comes from here: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kubelet.go#L1290

@mattma

mattma commented Aug 16, 2015

@erictune @deads2k

I tore down the whole Kubernetes cluster, rebuilt it, and regenerated the base64 token from .dockercfg. It seems to be working this time around.

What possibly went wrong?

I was previously using coreos-alpha@773.1.0 (released a week ago); now I have upgraded to the new coreos-alpha@774.0.0 (just released). It may be a bug in the previous system, but I really doubt it. It could be my error too. Anyway, it works as the docs said. Thanks.

@drora

drora commented Aug 18, 2015

How is it working for you? No matter which approach I try, it fails to pull from my private repo:
Error: image privateRepo/someTag:latest not found.

  • Minions with .dockercfg didn't work; I tried all the suggested permutations/locations.
  • Pre-fetching is out of the question.
  • imagePullSecrets: can anyone publish their working YAMLs?
    Below is my setup; can anybody spot something wrong?

my setup (aws ec2 + coreos alpha-v774.0.0):

secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: staging
data:
  .dockercfg: ${MY_BASE64}
type: kubernetes.io/dockercfg

dbRC.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: db
  namespace: staging
  labels:
    name: db
spec:
  replicas: 2
  selector:
    name: db
  template:
    metadata:
      labels:
         name: db
    spec:
      imagePullSecrets:
      - name: myregistrykey
      containers:
      - name: db
        image: privateRepo/someTag:latest

kubectl version

Client Version: version.Info{Major:"1", Minor:"0.1", GitVersion:"v1.0.1", GitCommit:"", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"0", Minor:"19", GitVersion:"v0.19.3", GitCommit:"3103c8ca0f24514bc39b6e2b7d909bbf46af8d11", GitTreeState:"clean"}

@deads2k

Contributor

deads2k commented Aug 18, 2015

can anyone publish their working YAMLs? Below is my setup; can anybody spot something wrong?

Can you confirm that your base64 secret matches correctly using the command referenced here: #499 (comment)? The imagePullSecret is loaded into a keyring that matches based on the URL of your pull spec, so that needs to match correctly as well.
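
Concretely, the host key inside the dockercfg has to line up with the registry named in the image reference; a sketch with a hypothetical registry:

.dockercfg content, before base64 encoding (one line):

{"https://registry.example.com":{"auth":"<base64 of user:password>","email":"me@example.com"}}

An image reference that this key will match:

image: registry.example.com/myteam/app:v1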

@drora

drora commented Aug 19, 2015

Apparently it was a base64 issue. Thanks for the lead, @deads2k.
When I compared the original file to the stored one as suggested, I noticed that another JSON object called "auth" was encapsulating my .dockercfg file's JSON structure at the top level.
As it turns out (I could be stating the obvious here, but what the hell, it could help somebody), base64 relies on the local encoding, which varies between different Linux distros (e.g. UTF-8, UTF-16, etc.).
Here's what fixed it for me:

  1. Remove all whitespace from your .dockercfg file (the result is a one-line JSON document).
  2. Use the same OS to encode your .dockercfg file as on your target kubelet host.
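
A sketch of an encoding step consistent with that advice (base64 -w 0 is the GNU flag that keeps the output on one line; other systems may differ):

tr -d '[:space:]' < ~/.dockercfg | base64 -w 0 > dockercfg.b64

Paste the single-line contents of dockercfg.b64 as the .dockercfg value in the secret.
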
@iameli

iameli commented Sep 10, 2015

I asked a StackOverflow question with largely the same problem as @drora -- cross-referencing here because I used this thread to troubleshoot.

@streamnsight

streamnsight commented Sep 27, 2015

Got this working when I:

  1. removed the 'auths' wrapping on the object in the config.json
  2. added 'https://' in front of the URL
  3. made it one line (not sure this is so critical, but it seems to cause issues (invalid JSON) if not on one line)

-> base64-encode it and make sure it is still one line when listing it as the entry for .dockercfg.
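
For contrast, the two shapes (hypothetical credentials): the newer ~/.docker/config.json wraps entries in "auths" with a bare host, while the .dockercfg the secret expects is the unwrapped map with a scheme-prefixed URL, all on one line:

config.json (new format):

{"auths":{"registry.example.com":{"auth":"...","email":"me@example.com"}}}

.dockercfg (what the secret expects):

{"https://registry.example.com":{"auth":"...","email":"me@example.com"}}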

@plamer

plamer commented Oct 2, 2015

@streamnsight thank you! I was going nuts with this until I saw your directions :)

@ntquyen

ntquyen commented Oct 18, 2015

@streamnsight Thanks a lot! Your solution saved my day!

@chesleybrown

chesleybrown commented Dec 17, 2015

Thank you @streamnsight. I think I had to remove all spaces from it as well.

@streamnsight

streamnsight commented Dec 17, 2015

I pushed PR #17286 for the documentation over a month ago, but it still has not made it into the repo.
@erictune @brendandburns @smarterclayton

metadave pushed a commit to metadave/kubernetes that referenced this issue Feb 22, 2017

[incubator/elasticsearch] Remove helm.sh/created annotations (#499)
* [incubator/elasticsearch] Remove helm.sh/created annotations

* elasticsearch: bump version to 0.1.3