
garden plugins kubernetes cluster-init cannot find garden-docker-registry pod #1635

Closed
10ko opened this issue Feb 21, 2020 · 6 comments
Labels
bug priority:medium Medium priority issue or feature provider/k8s stale Label that's automatically set by stalebot. Stale issues get closed after 14 days of inactivity.

Comments

@10ko
Member

10ko commented Feb 21, 2020

Bug

Current Behavior

@mitchfriedman reports that, occasionally, running garden plugins kubernetes cluster-init fails with the following error message:

Error: Could not find running pod for Deployment/garden-docker-registry
    at Object.<anonymous> (/snapshot/project/garden-service/tmp/dist/build/src/plugins/kubernetes/container/exec.js:0)
    at Generator.next (<anonymous>)
    at fulfilled (/snapshot/project/garden-service/tmp/dist/build/src/plugins/kubernetes/container/exec.js:0)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)

Expected behavior

The command should complete successfully.

Reproducible example

The failure is intermittent, but running the command multiple times will eventually reproduce the error.

Workaround

Running the command again usually succeeds.

Suggested solution(s)

This appears to be caused by the way we check whether a resource is ready in the waitForResources function: waitForResources may return before the garden-docker-registry pods are actually running, causing the subsequent execInWorkload call to fail.
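A fix along these lines could poll the pod's status until it actually reports a Running phase before exec'ing into it, instead of relying on the readiness check alone. A minimal sketch of that idea (the names waitForRunningPod and getPodPhase are illustrative, not Garden's actual API):

```typescript
// Possible pod phases as reported by the Kubernetes API.
type PodPhase = "Pending" | "Running" | "Succeeded" | "Failed" | "Unknown"

// Poll a phase-lookup function until the pod is Running, or give up
// after timeoutMs. The caller supplies getPodPhase (e.g. backed by a
// Kubernetes API client); it may return null while no pod exists yet.
async function waitForRunningPod(
  getPodPhase: () => Promise<PodPhase | null>,
  timeoutMs = 60000,
  intervalMs = 1000
): Promise<void> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    const phase = await getPodPhase()
    if (phase === "Running") {
      return
    }
    // Pod not found or not ready yet; wait before polling again.
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
  throw new Error("Timed out waiting for pod to reach Running phase")
}
```

Calling something like this before execInWorkload would close the window where waitForResources has returned but the garden-docker-registry pod is not yet running.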

@stale

stale bot commented Apr 22, 2020

This issue has been automatically marked as stale because it hasn't had any activity in 60 days. It will be closed in 14 days if no further activity occurs (e.g. changing labels, comments, commits, etc.). Please feel free to tag a maintainer and ask them to remove the label if you think it doesn't apply. Thank you for submitting this issue and helping make Garden a better product!

@stale stale bot added the stale Label that's automatically set by stalebot. Stale issues get closed after 14 days of inactivity. label Apr 22, 2020
@eysi09
Collaborator

eysi09 commented Apr 22, 2020

Is this still an issue?

@stale stale bot removed the stale Label that's automatically set by stalebot. Stale issues get closed after 14 days of inactivity. label Apr 22, 2020
@mitchfriedman
Contributor

I haven't seen it recently, but I also haven't been frequently initializing a new cluster.

@eysi09
Collaborator

eysi09 commented Apr 23, 2020

Ok. Perhaps we let stalebot do its thing and re-open if we bump into this again.

@eysi09
Collaborator

eysi09 commented Jun 22, 2020

Bumping into this one again on our end.

@stale

stale bot commented Aug 21, 2020

This issue has been automatically marked as stale because it hasn't had any activity in 60 days. It will be closed in 14 days if no further activity occurs (e.g. changing labels, comments, commits, etc.). Please feel free to tag a maintainer and ask them to remove the label if you think it doesn't apply. Thank you for submitting this issue and helping make Garden a better product!

@stale stale bot added the stale Label that's automatically set by stalebot. Stale issues get closed after 14 days of inactivity. label Aug 21, 2020
@stale stale bot closed this as completed Sep 4, 2020
Development

No branches or pull requests

3 participants