Make kubectl Scale function for RS, Deployments, Job, etc watch-based #56071
/sig apps |
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
/remove-lifecycle stale |
Automatic merge from submit-queue (batch tested with PRs 60470, 59149, 56075, 60280, 60504). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Make Scale() for RC poll-based until #31345 is fixed

Fixes #56064 in the short term, until issue #31345 is fixed. We should eventually move RS, Job, Deployment, etc. all to watch-based (#56071).

/cc @wojtek-t - SGTY?

```release-note
NONE
```
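For context, the poll-based wait this PR switches RC scaling to works along these lines - a minimal sketch assuming a typed core/v1 client and the pre-context client-go signatures of that era; the function name and interval are illustrative, not the PR's actual code:

```go
package kubectl

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
)

// waitForReplicasPolled re-GETs the RC on an interval until its status has
// caught up with its spec, instead of watching for changes.
func waitForReplicasPolled(rcs corev1client.ReplicationControllerInterface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		rc, err := rcs.Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Done once the controller has observed the latest spec and the
		// actual replica count matches the desired count.
		return rc.Status.ObservedGeneration >= rc.Generation &&
			rc.Spec.Replicas != nil &&
			rc.Status.Replicas == *rc.Spec.Replicas, nil
	})
}
```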
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
/remove-lifecycle stale
|
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
/remove-lifecycle stale
/lifecycle frozen
|
/assign |
@wojtek-t this was causing performance regressions if I recall correctly - if so, I assume we want to target v1.12? I am going to make most of the Until-related fixes in v1.13 (to limit potential bugs), but I can take a second one for v1.12 (kubectl rollout status is the first one). |
This wasn't causing a regression, IIRC. What happened was that we were seeing flakes in our performance tests when RC scaling was watch-based (failing due to a bug with Until). So we switched RC scaling to poll-based to avoid those flakes (leaving open the long-term solution of fixing Until so we can go back to using it). |
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
/remove-lifecycle stale |
I've started removing the polling, but the scale subresource client lacks list/watch methods :(
kubernetes/staging/src/k8s.io/client-go/scale/interfaces.go
Lines 30 to 39 in dd4a8f9
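For reference, the interface at that permalink looks approximately like this (reproduced from memory of client-go around that commit, so doc comments may differ slightly) - note there is no List or Watch, only Get and Update:

```go
package scale

import (
	autoscalingapi "k8s.io/api/autoscaling/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// ScalesGetter can produce a ScaleInterface for a particular namespace.
type ScalesGetter interface {
	Scales(namespace string) ScaleInterface
}

// ScaleInterface can fetch and update scales for resources in a namespace
// which implement the scale subresource.
type ScaleInterface interface {
	// Get fetches the scale of the given scalable resource.
	Get(resource schema.GroupResource, name string) (*autoscalingapi.Scale, error)

	// Update updates the scale of the given scalable resource.
	Update(resource schema.GroupResource, scale *autoscalingapi.Scale) (*autoscalingapi.Scale, error)
}
```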
|
/cc @kubernetes/sig-autoscaling-pr-reviews ^ we are condemned to polling with such an interface.
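Purely as illustration, the kind of method the scale client would need to grow before the polling could go away might look like this hypothetical sketch (ScaleWatcher does not exist in client-go):

```go
package scale

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/watch"
)

// ScaleWatcher is hypothetical and does NOT exist in client-go; it sketches
// what the scale client above is missing for a watch-based waitForReplicas.
type ScaleWatcher interface {
	// Watch would stream changes to the scale subresource of resources
	// matching opts, analogous to the typed clients' Watch methods.
	Watch(resource schema.GroupResource, opts metav1.ListOptions) (watch.Interface, error)
}
```
|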
/unassign |
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
/remove-lifecycle stale |
Offshoot of #56064 (comment)
As part of the kubectl Scale() function, we can optionally waitForReplicas to reach the desired count.
Currently, we poll periodically to check that this happens (for RS, Jobs, etc.). We should move them to use Watch instead - like we already do for RCs (see the sketch after the permalink below):
kubernetes/pkg/kubectl/scale.go
Lines 233 to 246 in 98fb71e
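For context, the watch-based wait those lines implement is roughly the following - a paraphrase assuming a typed core/v1 client and apimachinery's watch.Until (the same Until helper whose bugs #31345 tracks), not the exact code at 98fb71e:

```go
package kubectl

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/watch"
	corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
)

// waitForReplicasWatched opens a single-object watch at the RC's current
// resourceVersion and blocks until status converges on spec, instead of
// re-GETting on a timer.
func waitForReplicasWatched(rcs corev1client.ReplicationControllerInterface, rc *corev1.ReplicationController, timeout time.Duration) error {
	opts := metav1.ListOptions{
		FieldSelector:   fields.OneTermEqualSelector("metadata.name", rc.Name).String(),
		ResourceVersion: rc.ResourceVersion,
	}
	w, err := rcs.Watch(opts)
	if err != nil {
		return err
	}
	_, err = watch.Until(timeout, w, func(event watch.Event) (bool, error) {
		switch event.Type {
		case watch.Added, watch.Modified:
			cur := event.Object.(*corev1.ReplicationController)
			return cur.Status.ObservedGeneration >= cur.Generation &&
				cur.Spec.Replicas != nil &&
				cur.Status.Replicas == *cur.Spec.Replicas, nil
		case watch.Deleted:
			return false, fmt.Errorf("replication controller %q was deleted", rc.Name)
		default:
			return false, nil
		}
	})
	return err
}
```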
/area kubectl
/kind cleanup