
private registry not working with DockerHub #919

Closed
bacongobbler opened this issue Jul 27, 2016 · 7 comments · Fixed by #920

@bacongobbler
Member

Given a private image hosted on Docker Hub, @jdumars and I cannot get Workflow v2.2.0 to pull it down.

$ deis create bar
$ deis config:set PORT=8080
$ deis registry:set username=jsingerdumars password=********
$ deis pull jsingerdumars/*****

The private-registry secret is created; however, the controller logs this error:

INFO:scheduler:waiting for 1 pods in bar namespace to be in services (125 timeout)
INFO scaling RC bar-v6-cmd in Namespace bar from 1 to 0 replicas
INFO:scheduler:scaling RC bar-v6-cmd in Namespace bar from 1 to 0 replicas
INFO waiting for 1 pods in bar namespace to be terminated (30s timeout)
INFO:scheduler:waiting for 1 pods in bar namespace to be terminated (30s timeout)
INFO 1 pods in namespace bar are terminated
INFO:scheduler:1 pods in namespace bar are terminated
ERROR [bar]: bar-v6-cmd (app::deploy): Could not scale bar-v6-cmd to 1. Deleting and going back to old release
ERROR:api.models.app:[bar]: bar-v6-cmd (app::deploy): Could not scale bar-v6-cmd to 1. Deleting and going back to old release
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): 10.3.0.1
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): 10.3.0.1
ERROR:root:bar-v6-cmd (app::deploy): Could not scale bar-v6-cmd to 1. Deleting and going back to old release
Traceback (most recent call last):
  File "/app/scheduler/__init__.py", line 156, in deploy
    self._scale_rc(namespace, new_name, count)
  File "/app/scheduler/__init__.py", line 852, in _scale_rc
    self._wait_until_pods_are_ready(namespace, container, labels, desired)
  File "/app/scheduler/__init__.py", line 790, in _wait_until_pods_are_ready
    self._handle_pod_image_errors(pod, reason, message)
  File "/app/scheduler/__init__.py", line 1343, in _handle_pod_image_errors
    raise KubeException(message)
scheduler.KubeException: Error: image jsingerdumars/******** not found

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/api/models/app.py", line 445, in deploy
    **kwargs
  File "/app/scheduler/__init__.py", line 179, in deploy
    ) from e
scheduler.KubeException: Could not scale bar-v6-cmd to 1. Deleting and going back to old release

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/api/models/build.py", line 64, in create
    self.app.deploy(new_release)
  File "/app/api/models/app.py", line 457, in deploy
    raise ServiceUnavailable(err) from e
api.exceptions.ServiceUnavailable: bar-v6-cmd (app::deploy): Could not scale bar-v6-cmd to 1. Deleting and going back to old release

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/rest_framework/views.py", line 463, in dispatch
    response = handler(request, *args, **kwargs)
  File "/app/api/views.py", line 171, in create
    return super(AppResourceViewSet, self).create(request, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/rest_framework/mixins.py", line 21, in create
    self.perform_create(serializer)
  File "/app/api/viewsets.py", line 21, in perform_create
    self.post_save(obj)
  File "/app/api/views.py", line 246, in post_save
    self.release = build.create(self.request.user)
  File "/app/api/models/build.py", line 71, in create
    raise DeisException(str(e)) from e
api.exceptions.DeisException: bar-v6-cmd (app::deploy): Could not scale bar-v6-cmd to 1. Deleting and going back to old release
10.2.60.7 "POST /v2/apps/bar/builds/ HTTP/1.1" 400 110 "Deis Client v2.2.0"

However, the image does indeed exist: if we docker login locally, docker pull works just fine.
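
For reference, the local check was roughly the following (the image name is redacted above, so <private-image> here is a placeholder):

$ docker login -u jsingerdumars
$ docker pull jsingerdumars/<private-image>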

Quay seems to work: I tested with the e2e credentials in https://github.com/deis/workflow-e2e/blob/07509ad1c2f5e21ef4e4c9c9b98e0c42578af1d4/tests/registry_test.go#L102-L115 and that worked fine.

CC @jdumars

@helgi
Contributor

helgi commented Jul 27, 2016

I wonder if it has to do with the fact that there is no registry hostname on the image and Kubernetes gets confused.

Can you verify that the secret is written out properly and that the base64-encoded data contains everything you need?
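
Something along these lines should show whether the secret decodes to the expected auth entry (assuming the secret is named private-registry, as mentioned above):

$ kubectl --namespace=bar get secret private-registry -o yaml
$ echo '<base64 value from the data field>' | base64 --decode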

@jdumars

jdumars commented Jul 27, 2016

Is this what you need?
[screenshot: decoded secret data]

@helgi
Contributor

helgi commented Jul 27, 2016

@jdumars yeah, does that match what you have in your local .docker/config.json?

@jdumars

jdumars commented Jul 27, 2016

there isn't a local docker installation on this box -- is that a requirement?

@helgi
Contributor

helgi commented Jul 27, 2016

No, but Docker creates the same type of file/structure locally when you run docker login or similar. It's a good way to see whether things are being created correctly.
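
For example, docker login against Docker Hub writes roughly this structure to ~/.docker/config.json (values here are illustrative, not real credentials):

{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "<base64 of username:password>",
      "email": "<email>"
    }
  }
}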

@bacongobbler
Member Author

bacongobbler commented Jul 27, 2016

FYI, I noticed there is a new kubectl create secret docker-registry command, and when I created the secret manually with it, the resulting Docker configs differed: the secret we generate is of type kubernetes.io/dockerconfigjson, while the one created by kubectl create secret docker-registry is of type kubernetes.io/dockercfg. The hashes differed, as did the data keys (.dockerconfigjson vs .dockercfg, IIRC).

kubectl --namespace=bar create secret docker-registry my-secret --docker-username=jsingerdumars --docker-password="******" --docker-email="jsingerdumars@gmail.com"
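
A quick way to compare the two types (assuming private-registry is the secret Workflow created and my-secret is the one from the command above):

$ kubectl --namespace=bar get secret private-registry -o jsonpath='{.type}'
$ kubectl --namespace=bar get secret my-secret -o jsonpath='{.type}'

The first should print kubernetes.io/dockerconfigjson and the second kubernetes.io/dockercfg.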

@helgi
Contributor

helgi commented Jul 27, 2016

Yeah, but you can create both. .dockercfg is just the old config format for Docker
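
Roughly, the old ~/.dockercfg format is the same registry-to-auth map without the auths wrapper (again, illustrative values only):

{
  "https://index.docker.io/v1/": {
    "auth": "<base64 of username:password>",
    "email": "<email>"
  }
}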
