
Document Private Registry Authentication #499

Closed
pixie79 opened this issue Jul 17, 2014 · 44 comments
Labels
area/images-registry area/security kind/documentation Categorizes issue or PR as related to documentation. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.
Comments

@pixie79

pixie79 commented Jul 17, 2014

How do we set up Kubernetes to work with private registries?

Currently we can put a remote registry into the image tag for the pod but there appears to be nowhere to enter the login details if these images are private.

Ideally I guess we should be able to set these in the global config script, as the login details for a single registry would be the same for each pod, rather than repeating them in each pod's spec.

@proppy
Contributor

proppy commented Jul 17, 2014

If you're using Google Cloud Platform, one possibility is to use the google/docker-registry image to push your images to Google Cloud Storage.

You should then be able to add google/docker-registry to one of your pods and pull from localhost:5000/myimagename.

@brendandburns
Contributor

There's an example of a container manifest which uses a private registry
here:

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/build/master-manifest.yaml


@brendandburns
Contributor

In case it wasn't clear: if you use the Google Cloud Storage private registry, the credentials are supplied via the service account that is available on your GCE VM.

--brendan


@smarterclayton
Contributor

In general we'd want to support a wide range of authentication methods for private registries. OAuth is a good first step, but in multi-tenant setups you can't rely on host trust relationships. Also, folks should be able to use API keys from private repos in the Docker Hub, and eventually even passwords.

  1. Infrastructure trust with one or more remote registries (the apiserver communicates the pod "owner's" identity to the kubelet, the host is set up to pass that identity through Docker to the remote repo)
  2. Delegated trust between individual users and registries (user A creates a token enabling access to image B in repo C, that token can be passed down to the Kubelet via the manifest and then on to Docker)
  3. Use direct credentials to connect to registries (user/passwords, client certs)

Lots of complexity here in ops shops. Kerberos and SSL client certs are the most common complex solutions, but an OAuth trust relationship, properly configured, would be better than most of the others.

@bcwaldon

It may be helpful in the short term to fail softly in the event that a pull operation fails due to an auth issue. I can easily run docker login registry.example.com and pull the images I need manually, and k8s should be able to run them just fine without a successful pull.

@thockin
Member

thockin commented Jul 18, 2014

This soft failure mode devolves into #504


@bgrant0607 bgrant0607 added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Dec 4, 2014
@bgrant0607 bgrant0607 added the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Feb 28, 2015
@bgrant0607
Member

@erictune @derekwaynecarr @pmorie What else needs to be done on this?

@smarterclayton
Contributor

As per our discussion at the face to face, one option would be to deliver a secret type that would convey authorization to pull the image. The kubelet would need to know something about that secret type, and have a way of defending against malicious use of the secret.

@pmorie
Member

pmorie commented Feb 28, 2015

Do we want to do pulls in a user container as a long-term goal?

@smarterclayton
Contributor

I don't know. It seems to me that the goal of the platform is to abstract image pulling - it's not something a user is concerned with. It should be fast and ideally delivered by an efficient network abstraction. Cgroup constraints on pulls in some cases - sure. Special pullers - maybe.

Tim mentioned something last week that has also come up from the network filesystems guys - having a mount already established on disk of images so that you don't even need to download anything. In that model we would need to still check some permissions, and the user container wouldn't play into it too much.


@bgrant0607
Member

+1 to remote mounting of images. That's the only way we're going to get to ~instantaneous container start.

Pulls (and builds) that remain will need to be constrained, however.

@pmorie
Member

pmorie commented Feb 28, 2015

+1 also to remote mounting; we'll need to distribute secrets for whatever ultimately does the pull for private registries.

@smarterclayton
Contributor

A bunch of folks are about to start working on this on our end - I don't know who will own the proposal but expect one soon.


@hobti01

hobti01 commented Apr 15, 2015

@smarterclayton Did a proposal come from this issue?

@smarterclayton
Contributor

@liggitt is working on service accounts right now - his plan was to follow that up with being able to pull images with the secrets associated with the service account.


@vially

vially commented Jun 8, 2015

Any update on this?

@liggitt
Member

liggitt commented Jun 8, 2015

  1. Pods now have an imagePullSecrets field that lists the secrets to use for pulling the container images
  2. That field must reference secrets of type "kubernetes.io/dockercfg", with a ".dockercfg" key containing dockercfg file credentials.
  3. If a pod references a service account, that service account's imagePullSecrets are automatically added to the pod's imagePullSecrets field.
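A minimal sketch of those three points in YAML (all names here are illustrative placeholders, not from this thread: the secret name myregistrykey, the pod mypod, and the registry URL are assumptions, and the data value stands in for real base64-encoded credentials):

```yaml
# Sketch only: a kubernetes.io/dockercfg secret plus a pod referencing it.
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
type: kubernetes.io/dockercfg
data:
  .dockercfg: <base64-encoded contents of your .dockercfg file>
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: registry.example.com/private/image:tag
  imagePullSecrets:
  - name: myregistrykey
```

If the pod's service account lists its own imagePullSecrets, those are merged into the pod's list automatically, so the explicit reference above becomes optional.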

@liggitt
Member

liggitt commented Jun 8, 2015

@deads2k any pointers to docs or better explanations?

@bgrant0607 bgrant0607 added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. kind/documentation Categorizes issue or PR as related to documentation. labels Jun 23, 2015
@brendandburns brendandburns removed the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jun 24, 2015
@bgrant0607 bgrant0607 changed the title Private Registry Authentication Document Private Registry Authentication Jun 24, 2015
@erictune
Member

@bgrant0607 what else do you want to see in that documentation?

@bgrant0607
Member

On second glance, the coverage seems fine, actually.

It wasn't clear to me that the bullet list was a table of contents for the succeeding sections. Also, I'd describe imagePullSecrets before either copying .dockercfg to nodes or pre-pulling (that seems like a niche case). And it's not "ImagePullKeys" -- it's "imagePullSecrets".

If we add a convenience command for bundling secrets, we should document that there, also. @liggitt

@bgrant0607
Member

I think this is done.

@mattma

mattma commented Aug 11, 2015

Following the documentation, I have successfully added a secret

➜ kubectl get secrets
NAME            TYPE                      DATA
myregistrykey   kubernetes.io/dockercfg   1

My question: all examples of using imagePullSecrets are in a Pod; how can I use it in an RC? E.g., this is my xxx-rc.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: $NAME
    namespace: default
  name: $NAME
spec:
  replicas: 2
  selector:
    name: $NAME
    version: $NAME
  template:
    metadata:
      labels:
        name: $NAME
        version: $NAME
    spec:
      containers:
      - image: $PRIVATE_REPO:tag
        name: $NAME
        ports:
        - containerPort: 8000
      imagePullSecrets:
      - name: myregistrykey

@smarterclayton
Contributor

It works in the RC the same way - just make sure it's in the right place in the pod template.
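To illustrate the placement (a sketch; the names and registry URL are placeholders): imagePullSecrets belongs inside the pod template's spec, as a sibling of containers, not at the top level of the RC spec.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: example
spec:
  replicas: 2
  selector:
    name: example
  template:
    metadata:
      labels:
        name: example
    spec:
      containers:
      - name: app
        image: registry.example.com/private/image:tag
      imagePullSecrets:      # inside template.spec, alongside containers
      - name: myregistrykey
```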


@mattma

mattma commented Aug 12, 2015

@smarterclayton With my setting above, it did not work.

➜ kubectl get po
NAME             READY     STATUS                            RESTARTS   AGE
kube-dns-5ny1k   3/3       Running                           0          1h
xx-pdl0j         0/1       Error: image xxx:xxx not found    0          37m
xx-zj2rf         0/1       Error: image xxx:xxx not found    0          37m

Could you help me find the right place in the RC?

If I manually go onto the machine, I can use docker pull to pull from the Docker Hub private repo.

➜ kubectl describe secrets myregistrykey
Name:       myregistrykey
Namespace:  default
Labels:     <none>
Annotations:    <none>

Type:   kubernetes.io/dockercfg

Data
====
.dockercfg: 128 bytes

@smarterclayton
Contributor

It's possible it also needs to be added to the service account, @deads2k?


@mattma

mattma commented Aug 12, 2015

The docs are not clear about what to do to fix this use case.

On the other hand, the docs explain very clearly how to address the "Use a hosted private Docker registry" case. Is this a bug?

@deads2k
Contributor

deads2k commented Aug 12, 2015

It's possible it also needs to be added to the service account, @deads2k?

ImagePullSecrets in a podspec are not gated by the list of secrets on the service account. What you specify is what gets used. I don't think we have good logging around exactly which secret is attempted from the keyring, but that might be the next step.

@erictune
Member

@mattma I think it is possible that the pod template and RC aren't being created the way you think they are. Would you please follow the steps in https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/application-troubleshooting.md#my-pod-is-running-but-not-doing-what-i-told-it-to-do and see if that fixes it? If that still doesn't help, please post the exact RC yaml or json that you are trying to create (anonymize if necessary).

@mattma

mattma commented Aug 13, 2015

@erictune Thank you for the tips. I will give it a try.

@mattma

mattma commented Aug 13, 2015

Containers:
  ts:
    Image:      xxx/xxx
    State:      Waiting
      Reason:       Image: xxx/xxx is not ready on the node
    Ready:      False
    Restart Count:  0
Conditions:
  Type      Status
  Ready     False
Events:
  FirstSeen             LastSeen            Count   From            SubobjectPath   Reason      Message
  Thu, 13 Aug 2015 11:16:05 -0700   Thu, 13 Aug 2015 11:16:05 -0700 1   {scheduler }                scheduled   Successfully assigned xxx-xxx to 172.17.8.101
  Thu, 13 Aug 2015 11:16:05 -0700   Thu, 13 Aug 2015 11:17:16 -0700 8   {kubelet 172.17.8.101}          failedSync  Error syncing pod, skipping: secrets "myregistrykey" not found

My RC file is posted above in full. I am sure it is related to the secret. How do I add a service account?

@erictune
Member

@mattma adding a service account won't fix the problem, because you already have the imagePullSecrets field on your pod.

Two things that might be going on:

  • You might be using the new dockercfg format, which we don't support yet. See this issue for details: #12626. @deads2k, does #12626 affect imagePullSecrets, in terms of not supporting the new dockercfg format?

  • You might have bad data in your secret. Do this:

kubectl get secrets myregistrykey -o yaml | grep dockercfg: | cut -f 2 -d : | base64 -D > actual.dockercfg

and make sure that file matches what you are expecting.

@deads2k
Contributor

deads2k commented Aug 14, 2015

secrets "myregistrykey" not found means that the call to /api/v1/namespaces/<pod-namespace>/secrets/myregistrykey, returned a 404. Since you explicitly set the secret, the kubelet is currently coded to not even attempt the image pull. #12736 relaxes the behavior, but without that image pull secret, you probably won't progress.

I don't think #12626 applies in this case since the secret wasn't found at all. We don't attempt to parse until the keyring is created.

@deads2k
Contributor

deads2k commented Aug 14, 2015

For reference, I would expect an unmarshalling failure due to the new dockercfg format would fail here: https://github.com/kubernetes/kubernetes/blob/master/pkg/credentialprovider/keyring.go#L282, but the not found comes from here: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kubelet.go#L1290

@mattma

mattma commented Aug 16, 2015

@erictune @deads2k

I tore down the whole Kubernetes cluster, rebuilt it, and regenerated the base64 token from .dockercfg. It seems to be working this time around.

What possibly went wrong?

I was previously using coreos-alpha@773.1.0 (released a week ago); now I have upgraded to the new coreos-alpha@774.0.0 (just released). It may be a bug in the previous system, but I really doubt it. It could be my error too. Anyway, it works as the docs said. Thanks.

@drora

drora commented Aug 18, 2015

How is it working for you? No matter which approach I try, it fails to pull from my private repo:
Error: image privateRepo/someTag:latest not found.

  • Minions with .dockercfg didn't work; I tried all suggested permutations/locations.
  • Pre-fetching is out of the question.
  • imagePullSecrets: can anyone publish their working YAMLs?
    Below is my setup; can anybody spot something wrong?

my setup (aws ec2 + coreos alpha-v774.0.0):

secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: staging
data:
  .dockercfg: ${MY_BASE64}
type: kubernetes.io/dockercfg

dbRC.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: db
  namespace: staging
  labels:
    name: db
spec:
  replicas: 2
  selector:
    name: db
  template:
    metadata:
      labels:
        name: db
    spec:
      imagePullSecrets:
      - name: myregistrykey
      containers:
      - name: db
        image: privateRepo/someTag:latest

kubectl version

Client Version: version.Info{Major:"1", Minor:"0.1", GitVersion:"v1.0.1", GitCommit:"", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"0", Minor:"19", GitVersion:"v0.19.3", GitCommit:"3103c8ca0f24514bc39b6e2b7d909bbf46af8d11", GitTreeState:"clean"}

@deads2k
Contributor

deads2k commented Aug 18, 2015

can anyone publish his working yamls? below is my setup, can anybody spot something wrong?

Can you confirm that your base64 secret matches correctly using the command referenced here: #499 (comment) ? The imagePullSecret is loaded into a keyring that matches based on the URL of your pull spec, so that needs to match correctly as well.

@drora

drora commented Aug 19, 2015

Apparently it was a base64 issue. Thanks for the lead @deads2k.
When I compared the original file to the stored one as suggested, I noticed that another JSON object called "auths" was encapsulating my .dockercfg JSON structure at the top level.
As it turns out (I could be stating the obvious here, but what the hell, it could help somebody), base64 relies on local encoding that varies between different Linux distros (e.g. UTF-8, UTF-16, etc.).
Here's what fixed it for me:

  1. Remove all whitespace from your .dockercfg file (the result is a one-liner JSON).
  2. Use the same OS to encode your .dockercfg file as on your target kubelet host.

@iameli

iameli commented Sep 10, 2015

I asked a StackOverflow question with largely the same problem as @drora -- cross-referencing here because I used this thread to troubleshoot.

@streamnsight

Got this working when:

  1. I removed the 'auths' wrapping on the object in the config.json
  2. added 'https://' in front of the URL
  3. made it one line (not sure this is so critical, but it seems to cause issues (invalid JSON) if not on one line)

-> base64-encode it and make sure it is still one line when listing it as the entry for .dockercfg
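The steps above can be sketched as shell commands (a sketch assuming GNU coreutils on the encoding host; the registry URL, email, and the base64 auth string are placeholders):

```shell
# One-line old-style .dockercfg: the registry URL (with https://) at the
# top level, no "auths" wrapper. "dXNlcjpwYXNz" is just base64("user:pass").
cat > .dockercfg <<'EOF'
{"https://registry.example.com":{"auth":"dXNlcjpwYXNz","email":"me@example.com"}}
EOF

# Strip whitespace/newlines, then base64-encode without line wrapping (-w0)
# so the result stays on one line for the secret's .dockercfg entry.
tr -d ' \n\t' < .dockercfg | base64 -w0 > dockercfg.b64
```

The -w0 flag matters because GNU base64 wraps output at 76 columns by default, which would break the one-line requirement.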

@plamer

plamer commented Oct 2, 2015

@streamnsight thank you! I was going nuts with this until I saw your directions :)

@ntquyen

ntquyen commented Oct 18, 2015

@streamnsight Thanks a lot! Your solution saves my day!

@chesleybrown

Thank you @streamnsight. I think I had to remove all spaces from it as well.

@streamnsight

I pushed PR #17286 for the documentation over a month ago, but it still has not made it into the repo.
@erictune @brendandburns @smarterclayton
