Eliminate latency phase from density test #1311

Open
wojtek-t opened this issue Jun 5, 2020 · 42 comments
Assignees
Labels
area/clusterloader area/slo lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@wojtek-t
Member

wojtek-t commented Jun 5, 2020

PodStartup from SaturationPhase significantly dropped recently:

Screenshot from 2020-06-05 13-58-57

This seems to be the diff between those two runs:
kubernetes/kubernetes@bded41a...1700acb

We should understand why that happened; if it is expected (and not a bug somewhere), it would potentially allow us to achieve our long-standing goal of removing the latency phase from the density test completely.

@mm4tt @mborsz - FYI

@wojtek-t
Member Author

wojtek-t commented Jun 5, 2020

Nothing interesting happened in either the test-infra or the perf-tests repo at that time...

@mm4tt
Contributor

mm4tt commented Jun 5, 2020

Interesting. Looks like we had a similar drop in the load test pod startup, but it got back to normal after a few runs and it doesn't coincide in time with this one at all:
http://perf-dash.k8s.io/#/?jobname=gce-5000Nodes&metriccategoryname=E2E&metricname=LoadPodStartup&Metric=pod_startup

Strange...

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2020
@wojtek-t
Member Author

wojtek-t commented Sep 3, 2020

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 2, 2020
@wojtek-t
Member Author

wojtek-t commented Dec 2, 2020

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 2, 2020
@mborsz
Member

mborsz commented Dec 8, 2020

I checked https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1335992827380764672/ where we see 57s latency for stateless pods only.

Sample slow pod:

{test-xrxpvb-40/small-deployment-63-677d88d774-xgj2d 1m5.155890502s}

Timeline:

  • Created at 18:05:11.015687
I1207 18:05:11.015687      11 event.go:291] "Event occurred" object="test-xrxpvb-40/small-deployment-63-677d88d774" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: small-deployment-63-677d88d774-xgj2d"
  • Scheduled at 18:06:10.760230
I1207 18:06:10.760230      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-40/small-deployment-63-677d88d774-xgj2d" node="gce-scale-cluster-minion-group-2-v779" evaluatedNodes=500 feasibleNodes=500
  • Running at 18:06:15.139943
kube-apiserver.log-20201207-1607369112.gz:I1207 18:06:15.139943      11 httplog.go:89] "HTTP" verb="PATCH" URI="/api/v1/namespaces/test-xrxpvb-40/pods/small-deployment-63-677d88d774-xgj2d/status" latency="7.568642ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/e1c617a" srcIP="10.40.7.99:35600" resp=200

So of the ~65s total, roughly 60s was spent waiting on kube-scheduler (creation at 18:05:11 vs. binding at 18:06:10), but the latency on the kubelet side was also higher than expected (~5s).

In the kube-scheduler logs I see a block of daemonset scheduling events starting right before small-deployment-63-677d88d774-xgj2d was created.

I1207 18:05:03.359461      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-79d6p" node="gce-scale-cluster-minion-group-4-pvnb" evaluatedNodes=5001 feasibleNodes=1
I1207 18:05:03.394203      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-vtjxw" node="gce-scale-cluster-minion-group-3-srr7" evaluatedNodes=5001 feasibleNodes=1
I1207 18:05:03.411793      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-hpcmd" node="gce-scale-cluster-minion-group-4-w7r7" evaluatedNodes=5001 feasibleNodes=1
I1207 18:05:03.425993      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-62xjz" node="gce-scale-cluster-minion-group-3-w142" evaluatedNodes=5001 feasibleNodes=1
I1207 18:05:03.432111      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-zvnxl" node="gce-scale-cluster-minion-group-3-61mt" evaluatedNodes=5001 feasibleNodes=1
I1207 18:05:03.446798      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-czxw7" node="gce-scale-cluster-minion-group-1-z28j" evaluatedNodes=5001 feasibleNodes=1
I1207 18:05:03.457188      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-xw4g5" node="gce-scale-cluster-minion-group-1-bfqp" evaluatedNodes=5001 feasibleNodes=1
I1207 18:05:03.484511      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-5cff4" node="gce-scale-cluster-minion-group-bj2t" evaluatedNodes=5001 feasibleNodes=1
(...)
I1207 18:06:04.470902      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-flv5s" node="gce-scale-cluster-minion-group-1-2r82" evaluatedNodes=5001 feasibleNodes=1
I1207 18:06:04.481599      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-zmvzg" node="gce-scale-cluster-minion-group-f2zv" evaluatedNodes=5001 feasibleNodes=1
I1207 18:06:04.493779      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-sczfs" node="gce-scale-cluster-minion-group-3-gd6s" evaluatedNodes=5001 feasibleNodes=1
I1207 18:06:04.505584      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-k5zkx" node="gce-scale-cluster-minion-group-3-3437" evaluatedNodes=5001 feasibleNodes=1
I1207 18:06:04.513842      11 scheduler.go:592] "Successfully bound pod to node" pod="test-xrxpvb-1/daemonset-0-bfq96" node="gce-scale-cluster-minion-group-1-0vjk" evaluatedNodes=5001 feasibleNodes=1

So between 18:05:03 and 18:06:04 kube-scheduler was scheduling only daemonsets.

So it looks like, because the replicaset-controller and the daemonset-controller have separate rate limiters for API calls, together they can generate pods faster (200 qps) than kube-scheduler is able to schedule them (100 qps), and they managed to build up an O(minute) backlog of work that slowed down the binding of "normal" pods.

@wojtek-t
Member Author

This is a great finding.
Given we already have P&F enabled, I'm wondering if we can consider bumping the scheduler QPS limits in our 5k-node job to 200 to accommodate those scenarios (without changing anything on the controller-manager side).
This would also allow us to validate that further.

@mborsz - once kubernetes/kubernetes#97798 is debugged and fixed, WDYT about this?
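
For illustration, a minimal sketch of what bumping the scheduler's client QPS could look like via its component config (the apiVersion, kubeconfig path, and burst value below are assumptions; the 5k-node job may instead set the equivalent --kube-api-qps / --kube-api-burst flags):

# Hedged sketch only - not the actual 5k-node job configuration.
apiVersion: kubescheduler.config.k8s.io/v1beta1  # version assumed for the release under test
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig  # path assumed
  qps: 200    # raised to match what the controllers can generate
  burst: 400  # burst value assumed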

@mm4tt
Contributor

mm4tt commented Jan 29, 2021

Another important bit of information here is that the daemonset in the load test has its own priorityClass (https://github.com/kubernetes/perf-tests/blob/6aa08f8817fd347b3ccf4d18d29260ce2f57a0a1/clusterloader2/testing/load/daemonset-priorityclass.yaml). This is why the daemonset pods are starving other pods during the scheduling phase.
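
(For context, the referenced daemonset-priorityclass.yaml is essentially a PriorityClass object along the lines of the sketch below; the exact name and value used in perf-tests may differ.)

# Hedged sketch of the referenced priority class; name and value are assumptions.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: daemonset-priority  # name assumed
value: 1000000  # value assumed; anything above the default of 0 puts these pods ahead of regular pods in the scheduling queue
globalDefault: false
description: Priority class used by the load test daemonset.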

I believe the main issue here is that one of the load test assumptions is that it should create pods with a given throughput (set via LOAD_TEST_THROUGHPUT). This assumption stopped being true when we introduced daemonsets. What's more, given that daemonsets have higher priority than any other pods in the tests, it's really hard to create/update them in parallel with other pods without running into this kind of issue.

I discussed this with Wojtek, and I believe we both agree that it might make sense to move the daemonset operations to a separate CL2 Step. Because steps are executed serially, this will stop creating/updating daemonsets in parallel with other pods. It might make the load test slightly slower, but it will definitely help with the issue that Maciek found.

AIs

  1. Move the daemonset creation phase to a separate step (a rough CL2 config sketch follows after this list). We'll also need to wait for the daemonset pods to be created before we start creating other objects. To summarize, the current flow is:

    1. Create objects (including daemonsets)
    2. Wait for pods (including daemonsets) to be running (code

    It should be changed to:

    1. Create daemonsets
    2. Wait for daemonset pods to be running
    3. Create objects (excluding daemonsets)
    4. Wait for pods to be running (excluding daemonsets)
  2. Do the same for the daemonset rolling upgrade phase -

    - namespaceRange:
        min: 1
        max: 1
      replicasPerNamespace: 1
      tuningSet: RandomizedScalingTimeLimited
      objectBundle:
      - basename: daemonset
        objectTemplatePath: daemonset.yaml
        templateFillMap:
          Image: k8s.gcr.io/pause:3.1
    - namespaceRange:
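
A rough sketch of what the separate daemonset steps could look like in CL2 config terms (the step names, label selector, timeout, and "action: gather" pairing below are assumptions rather than the actual perf-tests change):

# Hedged sketch; not the actual load test config.
- name: Creating DaemonSets
  phases:
  - namespaceRange:
      min: 1
      max: 1
    replicasPerNamespace: 1
    tuningSet: RandomizedScalingTimeLimited
    objectBundle:
    - basename: daemonset
      objectTemplatePath: daemonset.yaml
      templateFillMap:
        Image: k8s.gcr.io/pause:3.1
- name: Waiting for DaemonSets to be running
  measurements:
  - Identifier: WaitForRunningDaemonSets
    Method: WaitForControlledPodsRunning
    Params:
      action: gather  # assumes a matching "start" action registered earlier in the test
      apiVersion: apps/v1
      kind: DaemonSet
      labelSelector: group = load  # selector assumed
      operationTimeout: 15m
# ...followed by the existing steps that create and wait for the remaining objects.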

/good-first-issue
/help-wanted

@k8s-ci-robot
Contributor

@mm4tt:
This request has been marked as suitable for new contributors.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Jan 29, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2021
@wojtek-t
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2021
@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 28, 2021
@wojtek-t
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 28, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 26, 2021
@wojtek-t
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 26, 2021
@wojtek-t
Member Author

OK - so the above moved us a long way (in fact, we got down to 5s at the 100th percentile based on the last two runs of the 5k-node test).

That said, we're still high on the main "pod startup latency".

I took a quick look at the last run:
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1519723567455932416

I looked at O(10) pods with the largest startup_time and almost none of them had any other pods starting on the same node at the same time. So the problem no longer seems to be scheduling-related.

Picking one of the pods, I'm seeing the following in kubelet logs:

I0428 19:10:25.026495    1680 kubelet.go:2060] "SyncLoop ADD" source="api" pods=[test-4i97xb-45/medium-deployment-20-7857898896-bddht]
...
E0428 19:10:26.534520    1680 projected.go:192] Error preparing data for projected volume kube-api-access-b4wph for pod test-4i97xb-45/medium-deployment-20-7857898896-bddht: [failed to fetch token: serviceaccounts "default" is forbidden: User "system:node:gce-scale-cluster-minion-group-ph7w" cannot create resource "serviceaccounts/token" in API group "" in the namespace "test-4i97xb-45": no relationship found between node 'gce-scale-cluster-minion-group-ph7w' and this object, failed to sync configmap cache: timed out waiting for the condition]
...
E0428 19:10:28.025197    1680 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/87f89c3c-3837-4bca-a2d6-c5430cfc411a-configmap podName:87f89c3c-3837-4bca-a2d6-c5430cfc411a nodeName:}" failed. No retries permitted until 2022-04-28 19:10:29.02516469 +0000 UTC m=+7114.679135177 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap" (UniqueName: "kubernetes.io/configmap/87f89c3c-3837-4bca-a2d6-c5430cfc411a-configmap") pod "medium-deployment-20-7857898896-bddht" (UID: "87f89c3c-3837-4bca-a2d6-c5430cfc411a") : failed to sync configmap cache: timed out waiting for the condition
...
I0428 19:10:29.026588    1680 operation_generator.go:703] "MountVolume.SetUp succeeded for volume \"configmap\" (UniqueName: \"kubernetes.io/configmap/87f89c3c-3837-4bca-a2d6-c5430cfc411a-configmap\") pod \"medium-deployment-20-7857898896-bddht\" (UID: \"87f89c3c-3837-4bca-a2d6-c5430cfc411a\") " pod="test-4i97xb-45/medium-deployment-20-7857898896-bddht"

It took longer than expected later too, but that may partially be a consequence of some backoffs or something similar.

Looking into other pods, e.g. medium-deployment-2-c99c86f6b-ch699, shows pretty much the same situation.

Initially I was finding pods scheduled around the same time, but it's not limited to a single point in time.
As an example, for the pod below we're seeing the same pattern:

I0428 18:20:22.921344    1686 kubelet.go:2060] "SyncLoop ADD" source="api" pods=[test-4i97xb-2/small-deployment-62-6db75944b4-jz2gh]
...
E0428 18:20:24.194978    1686 projected.go:192] Error preparing data for projected volume kube-api-access-gbjrx for pod test-4i97xb-2/small-deployment-62-6db75944b4-jz2gh: [failed to fetch token: serviceaccounts "default" is forbidden: User "system:node:gce-scale-cluster-minion-group-ds35" cannot create resource "serviceaccounts/token" in API group "" in the namespace "test-4i97xb-2": no relationship found between node 'gce-scale-cluster-minion-group-ds35' and this object, failed to sync configmap cache: timed out waiting for the condition]
...
I0428 18:20:27.421608    1686 operation_generator.go:703] "MountVolume.SetUp succeeded for volume \"secret\" (UniqueName: \"kubernetes.io/secret/7739fba6-34f5-46be-9f6c-79c9f8477a5d-secret\") pod \"small-deployment-62-6db75944b4-jz2gh\" (UID: \"7739fba6-34f5-46be-9f6c-79c9f8477a5d\") " pod="test-4i97xb-2/small-deployment-62-6db75944b4-jz2gh"

So it seems that latency in the node-authorizer is the biggest bottleneck for pod startup. I will try to look a bit into it.

@wojtek-t
Member Author

wojtek-t commented May 2, 2022

With the PR decreasing the indexing threshold in the node-authorizer:
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/109067/pull-kubernetes-e2e-gce-scale-performance-manual/1521023813385457664

the run no longer seems to have those errors, but it didn't really move the needle...
It now seems to mostly boil down to slow nodes on the container-runtime side...

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 31, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 30, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Sep 29, 2022
@wojtek-t
Member Author

/remove-lifecycle rotten
/reopen

@k8s-ci-robot
Contributor

@wojtek-t: Reopened this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Nov 21, 2022
@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Nov 21, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 19, 2023
@wojtek-t
Member Author

/remove-lifecycle rotten

@wojtek-t
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 20, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 21, 2023
@wojtek-t
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 24, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 21, 2024
@wojtek-t
Member Author

/remove-lifecycle stale
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 31, 2024