Description
/kind bug
1. What kops version are you running? The command kops version will display
this information.
1.30.1
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
v1.30.6
3. What cloud provider are you using?
GCE
4. What commands did you run? What is the simplest way to reproduce this issue?
kops delete cluster --yes
5. What happened after the commands executed?
Kops tries to delete backend services and health checks it does not manage. These resources were created by our IaC, and kops delete fails because they are attached to another load balancer not associated with kops.
6. What did you expect to happen?
Kops should only delete the resources it created and manages, leaving the externally managed backend services and health checks untouched.
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2024-12-03T09:37:19Z"
  name: <redacted>
spec:
  api:
    loadBalancer:
      type: Internal
  authorization:
    rbac: {}
  certManager:
    enabled: true
  channel: stable
  cloudConfig: {}
  cloudProvider: gce
  clusterAutoscaler:
    enabled: false
  configBase: <redacted>
  dnsZone: <redacted>
  etcdClusters:
  - cpuRequest: 300m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: control-node1
      name: main1
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    - encryptedVolume: true
      instanceGroup: control-node2
      name: main2
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    - encryptedVolume: true
      instanceGroup: control-node3
      name: main3
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    manager:
      backupRetentionDays: 90
    memoryRequest: 500Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: control-node1
      name: events1
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    - encryptedVolume: true
      instanceGroup: control-node2
      name: events2
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    - encryptedVolume: true
      instanceGroup: control-node3
      name: events3
      volumeIops: 3000
      volumeThroughput: 125
      volumeType: pd-balanced
    manager:
      backupRetentionDays: 90
    memoryRequest: 200Mi
    name: events
  kubeControllerManager:
    nodeCIDRMaskSize: 27
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.30.6
  networkID: <redacted>
  networking:
    gce: {}
  nonMasqueradeCIDR: 10.165.128.0/17
  podCIDR: 10.165.128.0/17
  project: <redacted>
  serviceClusterIPRange: 10.254.201.0/24
  snapshotController:
    enabled: true
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - cidr: 10.219.160.0/19
    egress: External
    name: <redacted>
    region: us-central1
    type: Private
  topology:
    bastion: {}
    dns:
      type: None
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
Can't include due to sensitive information.
9. Anything else do we need to know?
This happens when a backend service has no backends attached.
The ownership check returns true because it never enters the for loop, causing the kops client to add a delete-resource task for a backend service it does not manage.
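The failure mode described above can be sketched as follows. This is a hypothetical reconstruction of the buggy pattern, not the actual kops source: the names Backend and matchesCluster are invented for illustration. The check scans a backend service's backends for a reference belonging to the cluster and returns false on the first foreign backend; with an empty backend list the loop body never runs, so control falls through to the final return true and the resource is wrongly claimed for deletion.

```go
package main

import (
	"fmt"
	"strings"
)

// Backend is a simplified stand-in for one backend entry of a GCE
// backend service (hypothetical type, for illustration only).
type Backend struct {
	Group string // URL of the instance group backing this entry
}

// matchesCluster sketches the buggy ownership check: it only rejects a
// backend service when it finds a backend that does NOT reference the
// cluster. If backends is empty, the loop never executes and the
// function falls through to "return true", so kops treats an
// externally managed, empty backend service as its own.
func matchesCluster(backends []Backend, clusterName string) bool {
	for _, b := range backends {
		if !strings.Contains(b.Group, clusterName) {
			return false // foreign backend: not owned by this cluster
		}
	}
	return true // BUG: also reached when there are no backends at all
}

func main() {
	// An empty backend service created outside kops is wrongly claimed.
	fmt.Println(matchesCluster(nil, "my-cluster"))

	// A backend service pointing at another load balancer's group is
	// correctly rejected.
	foreign := []Backend{{Group: "projects/p/zones/z/instanceGroups/other-lb"}}
	fmt.Println(matchesCluster(foreign, "my-cluster"))
}
```

A guard such as `if len(backends) == 0 { return false }` (or a positive ownership signal like a name prefix or label check) would avoid claiming empty, unmanaged backend services.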