Ignore unavailable deployments and handle kubeconfig secret rotation #41
Conversation
Force-pushed from f768dcc to 38437b4
@unmarshall -- As suggested by you, I've introduced a retry mechanism to handle the kubeconfig rotation scenario. I've also refined the logs overall and made them leaner for the happy path, moving some of them to verbosity level 5.
Added comments
Thanks Madhav, I agree we can make a total separation of concerns.
Force-pushed from 15f476a to 1d5b24b
Fixed test errors caused by missing arguments: https://github.com/gardener/dependency-watchdog/compare/15f476af3a05b03b7922b3866cf2a46df1687c51..1d5b24b617cefdf54bef6493eddbec22aecf9108
/lgtm
…in-rel-0.7.0 [rel-0.7.0] Automated cherry pick of #41: Ignore unavailable deployments and handle kubeconfig secret rotation
What this PR does / why we need it:
This PR skips the scaling operation for deployments that are not available in the cluster.
The logs are also refined to avoid noise and clutter.
It also fixes the bug introduced by the reloading of the shoot kubeconfig, as reported in #36.
Which issue(s) this PR fixes:
Fixes #40, #36
Special notes for your reviewer:
Currently this is a workaround using a hard-coded 2-second delay, chosen based on what was observed in the tests.
Without this delay the scaling operation still happens, but it is only honored in the next reconciliation cycle, adding an overall delay of about 30 seconds per deployment that requires scaling.
Release note: