
Addons cannot be installed on v0.20.0 #243

giner opened this issue Aug 10, 2018 · 5 comments

@giner giner commented Aug 10, 2018

What happened:
Starting from release 0.20.0 addons cannot be installed

What you expected to happen:
bosh -d cfcr run-errand apply-specs should install addons

How to reproduce it (as minimally and precisely as possible):
Run bosh -d cfcr run-errand apply-specs on fresh deploy.

Anything else we need to know?:
Some log extracts:

Instance   apply-addons/fe1ac694-d99d-499a-801c-33eb64d3b8b7  
Exit Code  1  
Stdout     Deploying /var/vcap/jobs/apply-specs/specs/kube-dns.yml  
           service/kube-dns created  
           serviceaccount/kube-dns created  
           configmap/kube-dns-auth created  
           configmap/kube-dns created  
           deployment.extensions/kube-dns created  
           Waiting for deployment "kube-dns" rollout to finish: 0 of 1 updated replicas are available...  
           failed to start all system specs after 1200 with exit code 1  
Stderr     error: deployment "kube-dns" exceeded its progress deadline  

1 errand(s)

Errand 'apply-specs' completed with error (exit code 1)

Exit code 1

It is trying to download the "pause:3.1" image from the Internet instead of using the preloaded one.

  Warning  FailedCreatePodSandBox  4m (x128 over 1h)  kubelet,  Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "": Error response from daemon: Get net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Run this on all workers:

docker tag

Why is this happening?
Because of kubernetes/kubernetes#65920

How to fix?
Probably by stripping -amd64 here. Maybe there is a better way.
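The suggested fix and the per-worker workaround above boil down to the same idea, which can be sketched roughly as follows. The image names and the re-tag step are assumptions based on the -amd64 rename discussed in kubernetes/kubernetes#65920, not the exact CFCR change:

```shell
# Hedged sketch: re-tag the preloaded pause image so the name the kubelet
# requests (without the -amd64 suffix) resolves locally instead of being
# pulled from the Internet. Image names are assumptions based on
# kubernetes/kubernetes#65920.
src="k8s.gcr.io/pause-amd64:3.1"
dst=$(printf '%s' "$src" | sed 's/-amd64//')   # k8s.gcr.io/pause:3.1
echo "Tagging $src as $dst"
docker tag "$src" "$dst"
```

Running this on every worker (as suggested above) would let sandbox creation find the image locally without network access.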


@cf-gitbot cf-gitbot commented Aug 10, 2018

We have created an issue in Pivotal Tracker to manage this:

The labels on this GitHub issue will be updated when the story is started.


@alex-slynko alex-slynko commented Aug 14, 2018

Hi @giner

Thank you for reporting this issue. We have implemented the fix and expect to release it as part of CFCR 0.21.


@giner giner commented Aug 23, 2018

@alex-slynko, any hints on when 0.21 is going to be cut?


@alex-slynko alex-slynko commented Aug 23, 2018

We are trying to solve a very annoying issue that breaks upgrades on AWS. We will release as soon as we finish it.

In the meantime, you can check the progress in our tracker.


@alex-slynko alex-slynko commented Aug 30, 2018

Hi @giner
We have released 0.21, which fixes this issue. Additionally, we have improved our tests so that we catch issues like this earlier.
Thank you again for using CFCR and reporting this issue!

@cf-gitbot cf-gitbot removed the accepted label Aug 30, 2018