
Installation of AWS load balancer failed to deploy successfully #2956

Closed
ta1meng opened this issue Jan 4, 2023 · 21 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


ta1meng commented Jan 4, 2023

Describe the bug

I followed https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html to install the AWS Load Balancer Controller in our EKS cluster.

The final state should be a successfully deployed controller:

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           84s

However what I see is:

 ~/Downloads/ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   0/2     0            0           15d

When I describe the deployment, I see:

 ~/Downloads/ kubectl describe deployment -n kube-system aws-load-balancer-controller
Name:                   aws-load-balancer-controller
Namespace:              kube-system
CreationTimestamp:      Mon, 19 Dec 2022 14:19:09 -0800
Labels:                 app.kubernetes.io/instance=aws-load-balancer-controller
                        app.kubernetes.io/managed-by=Helm
                        app.kubernetes.io/name=aws-load-balancer-controller
                        app.kubernetes.io/version=v2.4.5
                        helm.sh/chart=aws-load-balancer-controller-1.4.6
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: aws-load-balancer-controller
                        meta.helm.sh/release-namespace: kube-system
Selector:               app.kubernetes.io/instance=aws-load-balancer-controller,app.kubernetes.io/name=aws-load-balancer-controller
Replicas:               2 desired | 0 updated | 0 total | 0 available | 2 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/instance=aws-load-balancer-controller
                    app.kubernetes.io/name=aws-load-balancer-controller
  Annotations:      prometheus.io/port: 8080
                    prometheus.io/scrape: true
  Service Account:  aws-load-balancer-controller
  Containers:
   aws-load-balancer-controller:
    Image:       602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.4.5
    Ports:       9443/TCP, 8080/TCP
    Host Ports:  0/TCP, 0/TCP
    Command:
      /controller
    Args:
      --cluster-name=eventplatform0
      --ingress-class=alb
    Liveness:     http-get http://:61779/healthz delay=30s timeout=10s period=10s #success=1 #failure=2
    Environment:  <none>
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
  Volumes:
   cert:
    Type:               Secret (a volume populated by a Secret)
    SecretName:         aws-load-balancer-tls
    Optional:           false
  Priority Class Name:  system-cluster-critical
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate
  Progressing      False   ProgressDeadlineExceeded
OldReplicaSets:    <none>
NewReplicaSet:     aws-load-balancer-controller-6d9d9fc86c (0/2 replicas created)
Events:            <none>

What does "MinimumReplicasUnavailable" mean?

Steps to reproduce

Follow the guide at https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html.

Expected outcome

 ~/Downloads/ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           84s

Environment

  • AWS Load Balancer controller version: seems to be v2.4.5.
 ~/Downloads/ helm list -a -A                                                             
NAME                        	NAMESPACE   	REVISION	UPDATED                             	STATUS  	CHART                             	APP VERSION
aws-load-balancer-controller	kube-system 	1       	2022-12-19 14:19:03.986723 -0800 PST	deployed	aws-load-balancer-controller-1.4.6	v2.4.5     
cert-manager                	kube-system 	1       	2022-07-14 16:08:11.149648 -0700 PDT	deployed	cert-manager-v1.5.4               	v1.5.4     
event-platform-pulsar-0     	pulsar      	15      	2022-12-06 18:00:28.480601 -0800 PST	deployed	sn-1.5.5-alpha.1                  	2.9        
pulsar-operator             	sn-operators	4       	2022-11-21 16:28:44.960682 -0800 PST	deployed	pulsar-operator-0.10.0            	0.9.4      
  • Kubernetes version: server v1.21.14-eks-fb459a0 (full kubectl version output below)
  • Using EKS (yes/no), if so version? Yes, v1.21.14-eks-fb459a0.
 ~/Downloads/ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.14-eks-fb459a0", GitCommit:"b07006b2e59857b13fe5057a956e86225f0e82b7", GitTreeState:"clean", BuildDate:"2022-10-24T20:32:54Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

Additional Context:


ta1meng commented Jan 4, 2023

@kishorj I've created a new issue following your advice in #1953.

Can you let me know whether you need more information from me to assess the issue?


ta1meng commented Jan 4, 2023

I found a related ticket: #1597

Because I am using an IAM role that has admin level access, it should not be necessary for me to add IAM permissions. My workflow is to get something working first using the admin role, then re-do the process using a developer role, and add IAM permissions where needed.

Following that ticket, I found the following event:

 ~/Downloads/ kubectl get events -n kube-system
LAST SEEN   TYPE      REASON         OBJECT                                               MESSAGE
9m48s       Warning   FailedCreate   replicaset/aws-load-balancer-controller-6d9d9fc86c   Error creating: pods "aws-load-balancer-controller-6d9d9fc86c-" is forbidden: error looking up service account kube-system/aws-load-balancer-controller: serviceaccount "aws-load-balancer-controller" not found

Given how recent the warning's timestamp is compared to when I did the deployment (15 days ago), it looks like older events are purged.
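That purging is expected: the Kubernetes API server only retains events for a TTL window (1 hour by default, via the kube-apiserver --event-ttl flag), so failures older than that disappear. A small sketch for catching recent ReplicaSet failures before they expire (namespace and filters are the ones used in this thread; this assumes kubectl access to the cluster):

```shell
# Sketch: list recent events for ReplicaSets in kube-system, newest last.
# Events older than the API server's --event-ttl (default 1h) are gone.
recent_replicaset_events() {
  kubectl get events -n kube-system \
    --field-selector involvedObject.kind=ReplicaSet \
    --sort-by=.lastTimestamp
}
```

Running this shortly after a rollout surfaces FailedCreate warnings like the one above while they are still retained.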


ta1meng commented Jan 4, 2023

Now I think I understand "MinimumReplicasUnavailable": it arose from the "FailedCreate" error. The pods the deployment needed could not be created, so the minimum replica count was never reached.

 ~/Downloads/ kubectl get pods -A
NAMESPACE      NAME                                                             READY   STATUS    RESTARTS   AGE
kube-system    aws-node-4xdxp                                                   1/1     Running   0          224d
kube-system    aws-node-kfrzb                                                   1/1     Running   0          224d
kube-system    aws-node-txssb                                                   1/1     Running   0          224d
kube-system    cert-manager-74f46787b6-sfxv2                                    1/1     Running   0          173d
kube-system    cert-manager-cainjector-748dc889c5-87ftr                         1/1     Running   1          173d
kube-system    cert-manager-webhook-7f668776cb-m4n9q                            1/1     Running   0          173d
kube-system    coredns-66cb55d4f4-4knn9                                         1/1     Running   0          224d
kube-system    coredns-66cb55d4f4-lwql9                                         1/1     Running   0          224d
kube-system    kube-proxy-5h5t9                                                 1/1     Running   0          224d
kube-system    kube-proxy-kdbvq                                                 1/1     Running   0          224d
kube-system    kube-proxy-vsdcm                                                 1/1     Running   0          224d
pulsar         event-platform-pulsar-0-sn-alert-manager-cd8597664-zqnhp         2/2     Running   0          131d
pulsar         event-platform-pulsar-0-sn-bookie-0                              1/1     Running   0          173d
pulsar         event-platform-pulsar-0-sn-bookie-1                              1/1     Running   0          173d
pulsar         event-platform-pulsar-0-sn-bookie-2                              1/1     Running   0          109d
pulsar         event-platform-pulsar-0-sn-broker-0                              1/1     Running   0          43d
pulsar         event-platform-pulsar-0-sn-grafana-0                             1/1     Running   0          131d
pulsar         event-platform-pulsar-0-sn-node-exporter-cx4pv                   1/1     Running   0          173d
pulsar         event-platform-pulsar-0-sn-node-exporter-gh44r                   1/1     Running   0          173d
pulsar         event-platform-pulsar-0-sn-node-exporter-hzxvv                   1/1     Running   0          173d
pulsar         event-platform-pulsar-0-sn-prometheus-0                          2/2     Running   0          173d
pulsar         event-platform-pulsar-0-sn-proxy-0                               1/1     Running   0          43d
pulsar         event-platform-pulsar-0-sn-recovery-0                            1/1     Running   0          43d
pulsar         event-platform-pulsar-0-sn-toolset-0                             1/1     Running   0          131d
pulsar         event-platform-pulsar-0-sn-zookeeper-0                           1/1     Running   0          173d
pulsar         event-platform-pulsar-0-sn-zookeeper-1                           1/1     Running   0          173d
pulsar         event-platform-pulsar-0-sn-zookeeper-2                           1/1     Running   0          173d
sn-operators   pulsar-operator-bookkeeper-controller-manager-6d45c67f49-8hwml   1/1     Running   0          43d
sn-operators   pulsar-operator-pulsar-controller-manager-5f8597dfc-dnk5x        1/1     Running   0          43d
sn-operators   pulsar-operator-zookeeper-controller-manager-68458cf-q6mrc       1/1     Running   0          43d


ta1meng commented Jan 4, 2023

I reviewed the installation instructions and see that I missed a section:

[screenshot omitted]

I had assumed the section was only about IAM permissions, which I didn't need to modify because I'm working with an admin role. But that section also covers the creation of the service account aws-load-balancer-controller.

I'm re-reading that section now and will attempt to follow it.


ta1meng commented Jan 4, 2023

I'm trying to figure out whether the AWS region we use is part of GovCloud. I've asked our TAM this question, but it would be helpful to have an answer here too.

Our EKS cluster is in the us-east-1 region.

Does that mean that when creating the IAM policy I should follow the GovCloud instructions? Specifically:

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy_us-gov.json

[screenshot omitted]


kishorj commented Jan 5, 2023

@ta1meng, the GovCloud permissions don't work in other regions. For the us-east-1 region, you'd need to refer to https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json


ta1meng commented Jan 6, 2023

Thank you @kishorj!

I've gotten farther now.

 ~/git/terraform/environments/systems/event-platform/event-platform-0/pulsar-helm/ [master*] kubectl get events -n kube-system
LAST SEEN   TYPE     REASON             OBJECT                                               MESSAGE
44m         Normal   Scheduled          pod/aws-load-balancer-controller-6d9d9fc86c-bfh2m    Successfully assigned kube-system/aws-load-balancer-controller-6d9d9fc86c-bfh2m to ip-172-16-167-192.ec2.internal
44m         Normal   Pulling            pod/aws-load-balancer-controller-6d9d9fc86c-bfh2m    Pulling image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.4.5"
44m         Normal   Pulled             pod/aws-load-balancer-controller-6d9d9fc86c-bfh2m    Successfully pulled image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.4.5" in 3.595406189s
44m         Normal   Created            pod/aws-load-balancer-controller-6d9d9fc86c-bfh2m    Created container aws-load-balancer-controller
44m         Normal   Started            pod/aws-load-balancer-controller-6d9d9fc86c-bfh2m    Started container aws-load-balancer-controller
44m         Normal   Scheduled          pod/aws-load-balancer-controller-6d9d9fc86c-dgbft    Successfully assigned kube-system/aws-load-balancer-controller-6d9d9fc86c-dgbft to ip-172-16-167-121.ec2.internal
44m         Normal   Pulling            pod/aws-load-balancer-controller-6d9d9fc86c-dgbft    Pulling image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.4.5"
44m         Normal   Pulled             pod/aws-load-balancer-controller-6d9d9fc86c-dgbft    Successfully pulled image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.4.5" in 3.726657922s
44m         Normal   Created            pod/aws-load-balancer-controller-6d9d9fc86c-dgbft    Created container aws-load-balancer-controller
44m         Normal   Started            pod/aws-load-balancer-controller-6d9d9fc86c-dgbft    Started container aws-load-balancer-controller
44m         Normal   SuccessfulCreate   replicaset/aws-load-balancer-controller-6d9d9fc86c   Created pod: aws-load-balancer-controller-6d9d9fc86c-dgbft
44m         Normal   SuccessfulCreate   replicaset/aws-load-balancer-controller-6d9d9fc86c   Created pod: aws-load-balancer-controller-6d9d9fc86c-bfh2m
44m         Normal   LeaderElection     configmap/aws-load-balancer-controller-leader        aws-load-balancer-controller-6d9d9fc86c-dgbft_40fe993a-b4ca-4912-a09d-04fea73ac240 became leader
 ~/Downloads/ kubectl get deployment -n kube-system aws-load-balancer-controller

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           17d

However the suggested change to the load balancer specification does not seem to work.

With external instance

[screenshots omitted]

With nlb instance

[screenshots omitted]

Are you able to tell, based on the warnings I'm seeing, whether I've successfully invoked aws-load-balancer-controller?


ta1meng commented Jan 6, 2023

I'm trying to roll back the above change but am having trouble.

I'm seeing a similar warning:

[screenshot omitted]

Also, did "nlb" become "nlb-ip" because aws-load-balancer-controller is now processing the load balancer specification?


ta1meng commented Jan 6, 2023

I've re-read the installation instructions, and I'm wondering if the above warning/error is occurring because I skipped this optional step?

[screenshot omitted]


kishorj commented Jan 6, 2023

@ta1meng, you don't need step 3 to successfully set up the controller. The warning event from your prior screenshots - Failed build model due to WebIdentityErr: failed to retrieve credentials ... - indicates issues with IRSA (IAM Roles for Service Accounts). Please verify:

  • the service account kube-system/aws-load-balancer-controller has the annotation eks.amazonaws.com/role-arn pointing to the iamserviceaccount role.
  • Your helm chart is installed with the values --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller
  • If you use IMDSv2 with hop count of 1, set region and vpcId during helm installation
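The first two checks above can be sketched as a small script. The service-account name, namespace, and release name are the defaults used in this thread; adjust them for your install (this assumes kubectl and helm access to the cluster):

```shell
# Sketch of the IRSA checks suggested above.
SA=aws-load-balancer-controller
NS=kube-system

check_irsa_annotation() {
  # Should print an IAM role ARN; empty output means IRSA is not wired up.
  kubectl get serviceaccount "$SA" -n "$NS" \
    -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
}

check_helm_values() {
  # Should show serviceAccount.create=false and serviceAccount.name set
  # to the pre-created service account.
  helm get values aws-load-balancer-controller -n "$NS"
}
```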


ta1meng commented Jan 6, 2023

 ~/git/terraform/environments/systems/event-platform/event-platform-0/pulsar-helm/ [master*] kubectl describe serviceaccounts aws-load-balancer-controller -n kube-system
Name:                aws-load-balancer-controller
Namespace:           kube-system
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::103595423143:role/AmazonEKSLoadBalancerControllerRole
Image pull secrets:  <none>
Mountable secrets:   aws-load-balancer-controller-token-pb6lr
Tokens:              aws-load-balancer-controller-token-pb6lr
Events:              <none>
 ~/git/terraform/environments/systems/event-platform/event-platform-0/pulsar-helm/ [master*] 

The service account kube-system/aws-load-balancer-controller has the annotation eks.amazonaws.com/role-arn pointing to the iamserviceaccount role AmazonEKSLoadBalancerControllerRole.

I do see the CloudFormation stack:

[screenshot omitted]

However I cannot find the IAM policy or IAM role that I thought I created yesterday. Yesterday's log:

 ~/Downloads/ aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json
    
{
    "Policy": {
        "PolicyName": "AWSLoadBalancerControllerIAMPolicy",
        "PolicyId": "ANPARQHWEWWT3UGEUDYMG",
        "Arn": "arn:aws:iam::103595423143:policy/AWSLoadBalancerControllerIAMPolicy",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2023-01-06T00:49:12+00:00",
        "UpdateDate": "2023-01-06T00:49:12+00:00"
    }
}
(END)

 ~/Downloads/ eksctl create iamserviceaccount \
  --cluster=eventplatform0 \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn=arn:aws:iam::103595423143:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve
2023-01-05 16:53:12 [ℹ]  1 iamserviceaccount (kube-system/aws-load-balancer-controller) was included (based on the include/exclude rules)
2023-01-05 16:53:12 [!]  serviceaccounts that exist in Kubernetes will be excluded, use --override-existing-serviceaccounts to override
2023-01-05 16:53:12 [ℹ]  1 task: { 
    2 sequential sub-tasks: { 
        create IAM role for serviceaccount "kube-system/aws-load-balancer-controller",
        create serviceaccount "kube-system/aws-load-balancer-controller",
    } }
2023-01-05 16:53:12 [ℹ]  building iamserviceaccount stack "eksctl-eventplatform0-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-05 16:53:12 [ℹ]  deploying stack "eksctl-eventplatform0-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-05 16:53:12 [ℹ]  waiting for CloudFormation stack "eksctl-eventplatform0-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-05 16:53:44 [ℹ]  waiting for CloudFormation stack "eksctl-eventplatform0-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2023-01-05 16:53:44 [ℹ]  created serviceaccount "kube-system/aws-load-balancer-controller"
 ~/Downloads/ 

The AWS account ID in the log appears correct. It's possible that I did create the policy and the IAM role, but that they were auto-deleted afterwards. I know that mechanism exists in our Production AWS account, but I wasn't sure whether it is also in place in our Systems (test) AWS account.

I'll attempt to recreate the IAM policy and IAM role later to see if they show up in the AWS console. If they do show up and are auto-deleted after a period of time, we'll have established the cause of the latest issue. At that point I'll consider whether continuing to get aws-load-balancer-controller working is time well spent: the manual effort of flipping "Preserve Client IP" from "On" to "Off" in the AWS console appears much smaller than the effort of codifying the same change through Terraform and Kubernetes annotations.

One question, if we created a brand new EKS cluster today, would it come with aws-load-balancer-controller pre-installed?

If not, are there plans to include aws-load-balancer-controller in new EKS clusters in the foreseeable future?


ta1meng commented Jan 13, 2023

I found sufficient evidence of the auto-deletion of IAM policies and IAM roles created outside our iam-configuration repo, and after our TAM connected me to David Qiu, he confirmed this as the cause.

From my perspective, aws-load-balancer-controller should be auto deployed to an EKS instance on its creation. I shouldn’t need to think about IAM policies or IAM roles in this case, and aws-load-balancer-controller should work in a similar way as its predecessor.

Regarding that perspective, David responded:

Unfortunately, while the aws-load-balancer-controller add-on is on the roadmap to be added into EKS, we do not have a timeline that we can share at the moment.

David and I have a meeting scheduled today where we will discuss what issues we should expect to run into if we don't install aws-load-balancer-controller. We'll also talk about our workflow which is incompatible with a part of the AWS installation guide for aws-load-balancer-controller, so that hopefully, when it is rolled out as part of some future version of EKS, our use case would be accommodated.


ta1meng commented Jan 20, 2023

David and I have exchanged emails. I have decided to not use aws-load-balancer-controller as it does not allow us to have a fully automated pipeline where we can create target groups for a load balancer with the right setting for "Preserve client IP". The emphasis is on "fully automated". Contributing to this decision is the fact that CCM does not currently have an end-of-support date.

Instead I will write a script using AWS CLI to complete the automation. That is, we'll use CCM to create a load balancer with the wrong setting for "Preserve client IP". We would then run a script that uses AWS CLI to correct the value of that setting.
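A minimal sketch of such a post-provisioning fix-up. The attribute key preserve_client_ip.enabled is the real ELBv2 target-group attribute, but the helper name is mine and the caller must supply the target group's ARN (the script for discovering it by tag or load balancer is left out):

```shell
# Hedged sketch: disable "Preserve client IP" on a CCM-created target group.
# The target group ARN must be supplied by the caller.
disable_preserve_client_ip() {
  local tg_arn="$1"
  aws elbv2 modify-target-group-attributes \
    --target-group-arn "$tg_arn" \
    --attributes Key=preserve_client_ip.enabled,Value=false
}
```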


ta1meng commented Jan 20, 2023

One comment. I tried uninstalling aws-load-balancer-controller by following https://catalog.workshops.aws/eks-immersionday/en-US/kubecost/cleanup but could not do so cleanly.

When I deleted the ingress service tied to the load balancer, it restarted but got stuck in an "Ensuring load balancer" state.

So in this case, I've decided to destroy our EKS cluster and recreate it, as our previous EKS instance was going to reach its end of support date in February.

I wonder if AWS has better uninstallation documentation for aws-load-balancer-controller, for others who decide to try it out but discover that it is incompatible with their AWS environment.


ta1meng commented Jan 20, 2023

One comment. I tried uninstalling aws-load-balancer-controller by following https://catalog.workshops.aws/eks-immersionday/en-US/kubecost/cleanup but could not do so cleanly.

Correction. The uninstallation seemed successful.

It's been so long since I tried to install aws-load-balancer-controller that I had forgotten that KCM/CCM does not support load balancers of type "external".

Once I reverted the load balancer annotations (back to type "nlb") in a brand-new EKS cluster, the load balancer was created successfully.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 21, 2023
@ricardorqr

One comment. I tried uninstalling aws-load-balancer-controller by following https://catalog.workshops.aws/eks-immersionday/en-US/kubecost/cleanup but could not do so cleanly.

Correction. The uninstallation seemed successful.

It's been so long since I had tried to install aws-load-balancer-controller that I had forgotten that KCM/CCM did not support load balancers of type "external"?

Because once I reverted the changes in the load balancer annotations (back to type "nlb"), in a brand new EKS instance, the load balancer got created successfully.

Have you fixed your problem?


ta1meng commented May 11, 2023

Have you fixed your problem?

I'm not sure which problem you are referring to, but I have summarized the outcome in #2956 (comment)

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Jul 10, 2023
@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
