
failed to delete targetGroup: timed out waiting for the condition #3037

Closed · rushikesh-outbound opened this issue Feb 8, 2023 · 15 comments
Labels: lifecycle/rotten, triage/needs-information

Comments

rushikesh-outbound commented Feb 8, 2023

Describe the bug
Sometimes the aws-load-balancer-controller gets stuck deleting the load balancers and target groups it has created.
I think this may be because it tries to delete the target group before deleting the load balancer or its listeners; in that case the target groups obviously cannot be deleted, because they are still associated with the listeners.

Most of the time it works correctly, but not always.

Steps to reproduce

Expected outcome
The load balancer, target groups, and other resources should be deleted on every attempt.

Environment

  • AWS Load Balancer controller version: v1.4.7
  • Kubernetes version: 1.24
  • Using EKS (yes/no), if so version? Yes

Additional context:
A snippet of the logs from aws-load-balancer-controller:

{"level":"info","ts":1675872425.1883543,"msg":"authorized securityGroup ingress","securityGroupID":"sg-00882a88190777992"}
{"level":"info","ts":1675872425.1884189,"msg":"registering targets","arn":"arn:aws:elasticloadbalancing:us-west-2:831870631916:targetgroup/k8s-ingress-nginxint-59a304fdc0/f649f341d58e986e","targets":[{"AvailabilityZone":null,"Id":"i-0e05b0dc02fb32906","Port":32079}]}
{"level":"info","ts":1675872425.3369126,"msg":"registered targets","arn":"arn:aws:elasticloadbalancing:us-west-2:831870631916:targetgroup/k8s-ingress-nginxint-59a304fdc0/f649f341d58e986e"}
{"level":"info","ts":1675872425.3375862,"msg":"authorizing securityGroup ingress","securityGroupID":"sg-00882a88190777992","permission":[{"FromPort":31821,"IpProtocol":"tcp","IpRanges":[{"CidrIp":"10.0.0.0/18","Description":"elbv2.k8s.aws/targetGroupBinding=shared"}],"Ipv6Ranges":null,"PrefixListIds":null,"ToPort":31821,"UserIdGroupPairs":null}]}
{"level":"info","ts":1675872425.6117675,"msg":"authorized securityGroup ingress","securityGroupID":"sg-00882a88190777992"}
{"level":"info","ts":1675872425.612174,"msg":"registering targets","arn":"arn:aws:elasticloadbalancing:us-west-2:831870631916:targetgroup/k8s-ingress-nginxint-97eb159968/30ffbd48c52e8c3c","targets":[{"AvailabilityZone":null,"Id":"i-0e05b0dc02fb32906","Port":31821}]}
{"level":"info","ts":1675872425.7602892,"msg":"registered targets","arn":"arn:aws:elasticloadbalancing:us-west-2:831870631916:targetgroup/k8s-ingress-nginxint-97eb159968/30ffbd48c52e8c3c"}
{"level":"error","ts":1675872426.5959582,"logger":"controller.service","msg":"Reconciler error","name":"nginx-external-ingress-nginx-controller","namespace":"ingress","error":"failed to delete targetGroup: timed out waiting for the condition"}
{"level":"info","ts":1675872426.6062481,"logger":"controllers.service","msg":"successfully built model","model":"{\"id\":\"ingress/nginx-external-ingress-nginx-controller\",\"resources\":{}}"}
{"level":"info","ts":1675872426.9867387,"logger":"controllers.service","msg":"deleting targetGroup","arn":"arn:aws:elasticloadbalancing:us-west-2:831870631916:targetgroup/k8s-ingress-nginxext-116feab49a/5dd7b7c2e08d3472"}
{"level":"error","ts":1675872447.190961,"logger":"controller.service","msg":"Reconciler error","name":"nginx-external-ingress-nginx-controller","namespace":"ingress","error":"failed to delete targetGroup: timed out waiting for the condition"}
{"level":"info","ts":1675872447.211665,"logger":"controllers.service","msg":"successfully built model","model":"{\"id\":\"ingress/nginx-external-ingress-nginx-controller\",\"resources\":{}}"}
{"level":"info","ts":1675872447.610661,"logger":"controllers.service","msg":"deleting targetGroup","arn":"arn:aws:elasticloadbalancing:us-west-2:831870631916:targetgroup/k8s-ingress-nginxext-116feab49a/5dd7b7c2e08d3472"}
{"level":"error","ts":1675872467.789285,"logger":"controller.service","msg":"Reconciler error","name":"nginx-external-ingress-nginx-controller","namespace":"ingress","error":"failed to delete targetGroup: timed out waiting for the condition"}
{"level":"info","ts":1675872467.830484,"logger":"controllers.service","msg":"successfully built model","model":"{\"id\":\"ingress/nginx-external-ingress-nginx-controller\",\"resources\":{}}"}
{"level":"info","ts":1675872468.125962,"logger":"controllers.service","msg":"deleting targetGroup","arn":"arn:aws:elasticloadbalancing:us-west-2:831870631916:targetgroup/k8s-ingress-nginxext-116feab49a/5dd7b7c2e08d3472"}
{"level":"error","ts":1675872488.34732,"logger":"controller.service","msg":"Reconciler error","name":"nginx-external-ingress-nginx-controller","namespace":"ingress","error":"failed to delete targetGroup: timed out waiting for the condition"}
{"level":"info","ts":1675872488.429214,"logger":"controllers.service","msg":"successfully built model","model":"{\"id\":\"ingress/nginx-external-ingress-nginx-controller\",\"resources\":{}}"}
{"level":"info","ts":1675872488.7165732,"logger":"controllers.service","msg":"deleting targetGroup","arn":"arn:aws:elasticloadbalancing:us-west-2:831870631916:targetgroup/k8s-ingress-nginxext-116feab49a/5dd7b7c2e08d3472"}
{"level":"error","ts":1675872508.961759,"logger":"controller.service","msg":"Reconciler error","name":"nginx-external-ingress-nginx-controller","namespace":"ingress","error":"failed to delete targetGroup: timed out waiting for the condition"}
{"level":"info","ts":1675872509.1228898,"logger":"controllers.service","msg":"successfully built model","model":"{\"id\":\"ingress/nginx-external-ingress-nginx-controller\",\"resources\":{}}"}
{"level":"info","ts":1675872509.4044993,"logger":"controllers.service","msg":"deleting targetGroup","arn":"arn:aws:elasticloadbalancing:us-west-2:831870631916:targetgroup/k8s-ingress-nginxext-116feab49a/5dd7b7c2e08d3472"}
{"level":"error","ts":1675872529.5944238,"logger":"controller.service","msg":"Reconciler error","name":"nginx-external-ingress-nginx-controller","namespace":"ingress","error":"failed to delete targetGroup: timed out waiting for the condition"}
{"level":"info","ts":1675872529.9146726,"logger":"controllers.service","msg":"successfully built model","model":"{\"id\":\"ingress/nginx-external-ingress-nginx-controller\",\"resources\":{}}"}
{"level":"info","ts":1675872530.214854,"logger":"controllers.service","msg":"deleting targetGroup","arn":"arn:aws:elasticloadbalancing:us-west-2:831870631916:targetgroup/k8s-ingress-nginxext-116feab49a/5dd7b7c2e08d3472"}
{"level":"error","ts":1675872550.4505224,"logger":"controller.service","msg":"Reconciler error","name":"nginx-external-ingress-nginx-controller","namespace":"ingress","error":"failed to delete targetGroup: timed out waiting for the condition"}
M00nF1sh (Collaborator) commented Feb 8, 2023

@rushikesh-outbound

From the logs, it seems you are deleting ingress/nginx-external-ingress-nginx-controller, and the model is correctly computed to have no resources: "resources":{}

The controller will first delete the LoadBalancer and then keep retrying the TargetGroup deletion. Could you check CloudTrail to see why the TargetGroup deletion failed? If it is because the LoadBalancer has not been deleted, could you also check the AWS tags on the LoadBalancer?

If the LoadBalancer using that TargetGroup is still around, one possible cause is that the AWS tags on the LoadBalancer were removed by some external process (or manually) other than the controller, so the controller won't delete the LoadBalancer (it only deletes a LoadBalancer that it considers its own, which it determines by checking the AWS tags).

If no LoadBalancer is using that TargetGroup, it could be an ELBv2 bug related to its eventual-consistency model.
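To automate the check described above (does the load balancer still exist, and does it still carry the controller's tags?), here is a minimal hedged sketch using aws-sdk-go v1, the same SDK family the controller uses. It is not part of the controller; the region and ARN are placeholders, and the tag keys mentioned in the comment are something to compare against your own resources.

// lbtagcheck.go - minimal sketch: dump the tags on a load balancer ARN so you can
// verify the controller's ownership tags (typically keys like elbv2.k8s.aws/cluster
// and service.k8s.aws/stack for Service load balancers - treat those as assumptions).
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := elbv2.New(sess)

	// Placeholder: the ARN of the load balancer that refuses to go away.
	lbArn := "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/net/example/0123456789abcdef"

	out, err := svc.DescribeTags(&elbv2.DescribeTagsInput{
		ResourceArns: []*string{aws.String(lbArn)},
	})
	if err != nil {
		log.Fatalf("describe tags: %v", err)
	}
	for _, desc := range out.TagDescriptions {
		for _, tag := range desc.Tags {
			fmt.Printf("%s = %s\n", aws.StringValue(tag.Key), aws.StringValue(tag.Value))
		}
	}
}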

M00nF1sh added the triage/needs-information label Feb 8, 2023
rushikesh-outbound (Author) commented:

Hi @M00nF1sh, thank you for your response. As requested, I checked the following:

  1. CloudTrail logs: the error says the target group cannot be deleted because it is still in use by a listener.
    (screenshot of the CloudTrail error, 2023-02-09)

  2. The load balancer still exists in the account and has all of its tags assigned.
    (screenshot of the load balancer tags, 2023-02-09)

I think the tags are correct, but can you confirm which specific tags the controller actually looks for?

Also, I think it is trying to delete the target group before deleting the LB listeners (not the load balancer).
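To confirm the CloudTrail finding that the target group is still in use by a listener, the following hedged aws-sdk-go sketch lists which listeners on a load balancer still forward to a given target group via their default actions. Both ARNs are placeholders; rule-level forward actions would need a separate DescribeRules call, which is omitted here.

// tgusage.go - sketch: find listeners whose default action still targets the
// target group that the controller keeps failing to delete.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := elbv2.New(sess)

	// Placeholders: substitute the real load balancer and target group ARNs.
	lbArn := "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/net/example/0123456789abcdef"
	tgArn := "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/example/0123456789abcdef"

	listeners, err := svc.DescribeListeners(&elbv2.DescribeListenersInput{
		LoadBalancerArn: aws.String(lbArn),
	})
	if err != nil {
		log.Fatalf("describe listeners: %v", err)
	}
	for _, l := range listeners.Listeners {
		for _, a := range l.DefaultActions {
			if aws.StringValue(a.TargetGroupArn) == tgArn {
				fmt.Printf("listener %s still forwards to the target group\n", aws.StringValue(l.ListenerArn))
			}
		}
	}
}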

hitsub2 commented Feb 16, 2023

Same issue here.

Describe the bug
I migrated from the ingress controller v1 to v2.4.4. After the migration, I tried to delete the old ingress (created by the v1 ingress controller). The logs show the following error:

{"level":"info","ts":1676524227.1348596,"logger":"controllers.ingress","msg":"successfully built model","model":"{\"id\":\"dongdgy/django-app\",\"resources\":{}}"}
{"level":"info","ts":1676524227.8399084,"logger":"controllers.ingress","msg":"deleting securityGroup","securityGroupID":"sg-049319ea8a45797f1"}
{"level":"error","ts":1676524348.9529638,"logger":"controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"django-app","namespace":"dongdgy","error":"failed to delete securityGroup: timed out waiting for the condition"}
{"level":"info","ts":1676525348.9535875,"logger":"controllers.ingress","msg":"successfully built model","model":"{\"id\":\"dongdgy/django-app\",\"resources\":{}}"}
{"level":"info","ts":1676525349.8079634,"logger":"controllers.ingress","msg":"deleting securityGroup","securityGroupID":"sg-049319ea8a45797f1"}
{"level":"error","ts":1676525471.042075,"logger":"controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"django-app","namespace":"dongdgy","error":"failed to delete securityGroup: timed out waiting for the condition"}
{"level":"info","ts":1676526471.0429358,"logger":"controllers.ingress","msg":"successfully built model","model":"{\"id\":\"dongdgy/django-app\",\"resources\":{}}"}
{"level":"info","ts":1676526471.7085032,"logger":"controllers.ingress","msg":"deleting securityGroup","securityGroupID":"sg-049319ea8a45797f1"}
{"level":"error","ts":1676526592.7582066,"logger":"controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"django-app","namespace":"dongdgy","error":"failed to delete securityGroup: timed out waiting for the condition"}
{"level":"info","ts":1676527592.7585614,"logger":"controllers.ingress","msg":"successfully built model","model":"{\"id\":\"dongdgy/django-app\",\"resources\":{}}"}
{"level":"info","ts":1676527593.424313,"logger":"controllers.ingress","msg":"deleting securityGroup","securityGroupID":"sg-049319ea8a45797f1"}
{"level":"error","ts":1676527714.5189643,"logger":"controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"django-app","namespace":"dongdgy","error":"failed to delete securityGroup: timed out waiting for the condition"}

The security group created for the ALB is referenced by the EKS control plane security group as shown below, so deleting the ingress fails because the security group has a dependent object:

(screenshot of the referencing security group rule)

Here are the CloudTrail logs:

{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROA2RRFIHV62HYHNH25V:1676517067916154489",
        "arn": "arn:aws:sts::123456:assumed-role/AmazonEKSLoadBalancerControllerRole-eks-1-20/1676517067916154489",
        "accountId": "123456",
        "accessKeyId": "ASIA2RRFIHV673SALUCG",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AROA2RRFIHV62HYHNH25V",
                "arn": "arn:aws:iam::123456:role/AmazonEKSLoadBalancerControllerRole-eks-1-20",
                "accountId": "123456",
                "userName": "AmazonEKSLoadBalancerControllerRole-eks-1-20"
            },
            "webIdFederationData": {
                "federatedProvider": "arn:aws:iam::123456:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/123456",
                "attributes": {}
            },
            "attributes": {
                "creationDate": "2023-02-16T03:11:08Z",
                "mfaAuthenticated": "false"
            }
        }
    },
    "eventTime": "2023-02-16T03:35:59Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "DeleteSecurityGroup",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "123456",
    "userAgent": "elbv2.k8s.aws/v2.4.4 aws-sdk-go/1.42.27 (go1.18.6; linux; amd64)",
    "errorCode": "Client.DependencyViolation",
    "errorMessage": "resource sg-049319ea8a45797f1 has a dependent object",
    "requestParameters": {
        "groupId": "sg-049319ea8a45797f1"
    },
    "responseElements": null,
    "requestID": "d2229571-6c6b-42bf-a93a-bbd939124a4f",
    "eventID": "6d7adc5b-5748-4d22-acee-864f1fdf2b11",
    "readOnly": false,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "123456",
    "eventCategory": "Management",
    "tlsDetails": {
        "tlsVersion": "TLSv1.2",
        "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
        "clientProvidedHostHeader": "ec2.us-east-1.amazonaws.com"
    }
}

After deleting the rule manually, the ingress can be deleted.
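For reference, the rule that had to be removed can be located by asking EC2 which security groups still carry an ingress rule referencing the ALB's security group. The sketch below is a hedged illustration of that lookup with aws-sdk-go; the group ID is a placeholder and nothing is revoked.

// sgrefs.go - sketch: list the security groups whose ingress rules still reference
// the ALB security group, i.e. the dependency behind the DependencyViolation error.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := ec2.New(sess)

	albSG := "sg-0123456789abcdef0" // placeholder: the ALB security group that cannot be deleted

	out, err := svc.DescribeSecurityGroups(&ec2.DescribeSecurityGroupsInput{
		Filters: []*ec2.Filter{{
			Name:   aws.String("ip-permission.group-id"),
			Values: []*string{aws.String(albSG)},
		}},
	})
	if err != nil {
		log.Fatalf("describe security groups: %v", err)
	}
	for _, g := range out.SecurityGroups {
		fmt.Printf("%s (%s) has an ingress rule referencing %s\n",
			aws.StringValue(g.GroupId), aws.StringValue(g.GroupName), albSG)
	}
}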

Environment

Kubernetes version: 1.20
AWS Load Balancer Controller version: 2.4.4

Is this issue already documented at https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/upgrade/migrate_v1_v2/ as follows?

When the security-groups annotation isn't used:

  • a managed SecurityGroup will be created and attached to the ALB. This SecurityGroup will be preserved.
  • an inbound rule will be added to your worker node securityGroups which allows traffic from the above managed SecurityGroup for the ALB.

The AWSALBIngressController didn't add any description for that inbound rule. The AWSLoadBalancerController will use elbv2.k8s.aws/targetGroupBinding=shared for that inbound rule, so you'll need to manually add the elbv2.k8s.aws/targetGroupBinding=shared description to that inbound rule so that the AWSLoadBalancerController can delete the rule when you delete your Ingress.

The following shell pipeline can be used to update the rules automatically. Replace $REGION and $SG_ID with your own values. After running it, change DryRun: true to DryRun: false to have it actually update your security group:
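The shell pipeline itself is not reproduced in this excerpt. As a rough stand-in, here is a hedged aws-sdk-go sketch of the same idea: stamping the elbv2.k8s.aws/targetGroupBinding=shared description onto the worker-node rule whose source is the ALB's managed security group. The group IDs, protocol, and port range are placeholders and must match the existing rule exactly; with DryRun set to true, a DryRunOperation error means the call would have succeeded.

// sgdesc.go - sketch: add the description the controller expects to an existing
// ingress rule that references the managed security group.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := ec2.New(sess)

	nodeSG := "sg-0123456789abcdef0"    // placeholder: worker node security group
	managedSG := "sg-0fedcba9876543210" // placeholder: ALB's managed security group

	_, err := svc.UpdateSecurityGroupRuleDescriptionsIngress(&ec2.UpdateSecurityGroupRuleDescriptionsIngressInput{
		DryRun:  aws.Bool(true), // flip to false once the dry run looks right
		GroupId: aws.String(nodeSG),
		IpPermissions: []*ec2.IpPermission{{
			IpProtocol: aws.String("tcp"),
			FromPort:   aws.Int64(0),     // placeholder: must match the existing rule's ports
			ToPort:     aws.Int64(65535), // placeholder
			UserIdGroupPairs: []*ec2.UserIdGroupPair{{
				GroupId:     aws.String(managedSG),
				Description: aws.String("elbv2.k8s.aws/targetGroupBinding=shared"),
			}},
		}},
	})
	if err != nil {
		// With DryRun=true, a "DryRunOperation" error means the request would have succeeded.
		log.Printf("result: %v", err)
	}
}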

hitsub2 commented Feb 16, 2023

After adding the description to the rule, the auto-created security group is deleted, but the ingress is still stuck.

Logs:

{"level":"info","ts":1676542266.5007129,"logger":"controllers.ingress","msg":"deleting loadBalancer","arn":"arn:aws:elasticloadbalancing:us-east-1:123456:loadbalancer/app/a7b3dc48-dongdgy-testapp-5620/3e4c4ea48a816a42"}
{"level":"info","ts":1676542266.5845156,"logger":"controllers.ingress","msg":"deleted loadBalancer","arn":"arn:aws:elasticloadbalancing:us-east-1:123456:loadbalancer/app/a7b3dc48-dongdgy-testapp-5620/3e4c4ea48a816a42"}
{"level":"info","ts":1676542266.5846033,"logger":"controllers.ingress","msg":"deleting securityGroup","securityGroupID":"sg-01b8891afec5c8cae"}
{"level":"info","ts":1676542283.356179,"logger":"controllers.ingress","msg":"deleted securityGroup","securityGroupID":"sg-01b8891afec5c8cae"}
{"level":"info","ts":1676542283.3562045,"logger":"controllers.ingress","msg":"successfully deployed model","ingressGroup":"dongdgy/test-app"}
{"level":"info","ts":1676542283.3562562,"logger":"backend-sg-provider","msg":"No ingress found, backend SG can be deleted","SG ID":"sg-03a3ef9a0e380bb70"}
{"level":"info","ts":1676542283.35627,"logger":"backend-sg-provider","msg":"No ingress found, backend SG can be deleted","SG ID":"sg-03a3ef9a0e380bb70"}

M00nF1sh (Collaborator) commented Feb 17, 2023

@rushikesh-outbound
The tags on the LoadBalancer point to external-ingress-nginx-controller instead of nginx-external-ingress-nginx-controller.
Did you use the service.beta.kubernetes.io/aws-load-balancer-name annotation on your Service?

If so, you are likely hitting this bug: you created the Service external-ingress-nginx-controller with the service.beta.kubernetes.io/aws-load-balancer-name annotation and later created a new Service nginx-external-ingress-nginx-controller with the same value for that annotation.

You should avoid creating two Services with the same service.beta.kubernetes.io/aws-load-balancer-name annotation value. We'll add a fix to validate against it.
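Until such validation exists in the controller, one way to spot the conflict is to scan the cluster for Services that share the same service.beta.kubernetes.io/aws-load-balancer-name value. Below is a hedged client-go sketch of that check; it reads the local kubeconfig and is not how the controller itself validates anything.

// dupelbname.go - sketch: report any aws-load-balancer-name annotation value that is
// set on more than one Service in the cluster.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

const nameAnnotation = "service.beta.kubernetes.io/aws-load-balancer-name"

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	svcs, err := clientset.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	seen := map[string][]string{} // annotation value -> Services (namespace/name) using it
	for _, s := range svcs.Items {
		if name, ok := s.Annotations[nameAnnotation]; ok {
			seen[name] = append(seen[name], s.Namespace+"/"+s.Name)
		}
	}
	for name, users := range seen {
		if len(users) > 1 {
			fmt.Printf("load balancer name %q is set on multiple Services: %v\n", name, users)
		}
	}
}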

M00nF1sh (Collaborator) commented:

@hitsub2
You are correct; you should manually add that rule description for the migration.
It seems your ingress reconciled successfully after you added the description (successfully deployed model). What do you mean by "still ingress stuck"?

hitsub2 commented Feb 17, 2023

The ingress still cannot be deleted and stays stuck. The load balancer controller output is as follows when trying to delete the ingress:

{"level":"info","ts":1676597388.777729,"logger":"backend-sg-provider","msg":"No ingress found, backend SG can be deleted","SG ID":"sg-03a3ef9a0e380bb70"}
{"level":"info","ts":1676597388.7777433,"logger":"backend-sg-provider","msg":"No ingress found, backend SG can be deleted","SG ID":"sg-03a3ef9a0e380bb70"}

{"level":"error","ts":1676597509.8300896,"logger":"controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"test-app","namespace":"dongdgy","error":"failed to delete securityGroup: timed out waiting for the condition"}

Here are the logs from CloudTrail:

{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROA2RRFIHV62HYHNH25V:1676595144596144559",
        "arn": "arn:aws:sts::123456:assumed-role/AmazonEKSLoadBalancerControllerRole-eks-1-20/1676595144596144559",
        "accountId": "123456",
        "accessKeyId": "ASIA2RRFIHV64E77JXPQ",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AROA2RRFIHV62HYHNH25V",
                "arn": "arn:aws:iam::123456:role/AmazonEKSLoadBalancerControllerRole-eks-1-20",
                "accountId": "123456",
                "userName": "AmazonEKSLoadBalancerControllerRole-eks-1-20"
            },
            "webIdFederationData": {
                "federatedProvider": "arn:aws:iam::123456:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/123456",
                "attributes": {}
            },
            "attributes": {
                "creationDate": "2023-02-17T00:52:24Z",
                "mfaAuthenticated": "false"
            }
        }
    },
    "eventTime": "2023-02-17T01:31:49Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "DeleteSecurityGroup",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "123456",
    "userAgent": "elbv2.k8s.aws/v2.4.4 aws-sdk-go/1.42.27 (go1.18.6; linux; amd64)",
    "errorCode": "Client.DependencyViolation",
    "errorMessage": "resource sg-03a3ef9a0e380bb70 has a dependent object",
    "requestParameters": {
        "groupId": "sg-03a3ef9a0e380bb70"
    },
    "responseElements": null,
    "requestID": "1fc5038c-dfc1-4c8e-8444-9115277226fa",
    "eventID": "48426b0d-113c-4615-9c75-ad7a2b2df94d",
    "readOnly": false,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "123456",
    "eventCategory": "Management",
    "tlsDetails": {
        "tlsVersion": "TLSv1.2",
        "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
        "clientProvidedHostHeader": "ec2.us-east-1.amazonaws.com"
    }
}

The security group is attached to two ENIs, and those ENIs are also bound to two other security groups.
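To see which network interfaces are still holding the security group (and therefore causing the DependencyViolation), a hedged aws-sdk-go sketch like the following can list them; the group ID is a placeholder.

// sgenis.go - sketch: list the network interfaces that still reference a security
// group, which is usually why DeleteSecurityGroup keeps failing.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := ec2.New(sess)

	sgID := "sg-0123456789abcdef0" // placeholder: the security group that won't delete

	out, err := svc.DescribeNetworkInterfaces(&ec2.DescribeNetworkInterfacesInput{
		Filters: []*ec2.Filter{{
			Name:   aws.String("group-id"),
			Values: []*string{aws.String(sgID)},
		}},
	})
	if err != nil {
		log.Fatalf("describe network interfaces: %v", err)
	}
	for _, eni := range out.NetworkInterfaces {
		fmt.Printf("%s status=%s description=%s\n",
			aws.StringValue(eni.NetworkInterfaceId),
			aws.StringValue(eni.Status),
			aws.StringValue(eni.Description))
	}
}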

rushikesh-outbound (Author) commented:

Hi @M00nF1sh, sorry for the long delay in response.

Yes, I used the service.beta.kubernetes.io/aws-load-balancer-name annotation.
Even though we use the service.beta.kubernetes.io/aws-load-balancer-name annotation on both Services, the values are different, so there should not be any conflict. Do you mean we should use this annotation only once per cluster?

However, we are planning to shift from nginx to ALB ingress controllers, so this may not be a blocker for us right now. Also, it happens only when destroying the resources.

k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Jul 9, 2023
lucasarrudawiliot commented:

Any update on this issue? We hit the same problem on an NLB.

lgb861213 commented:

I also encountered the same thing. The ingress was not completely deleted, then it was applied again, and the deletion ran again while the creation was still in progress. At that point the target group cannot be deleted normally. My EKS version is 1.23.

lgb861213 commented:

When I re-apply the ingress and the associated Service, and then manually run kubectl delete svc to delete the Service associated with the ingress, the target group can be deleted successfully.

k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 20, 2024
k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned Feb 19, 2024
k8s-ci-robot (Contributor) commented:

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage robot's /close not-planned command above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
