
CI: K8sPolicyTestExtended Validate toEntities KubeAPIServer Allows connection to KubeAPIServer #25263

Closed
maintainer-s-little-helper bot opened this issue May 4, 2023 · 4 comments
Labels
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.

Comments

@maintainer-s-little-helper

Test Name

K8sPolicyTestExtended Validate toEntities KubeAPIServer Allows connection to KubeAPIServer

Failure Output

FAIL: Timed out after 240.001s.

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:453
Timed out after 240.001s.
Timeout while waiting for Cilium to become ready
Expected
    <*errors.errorString | 0xc0014e1790>: {
        s: "only 0 of 2 desired pods are ready",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/assertion_helpers.go:115

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
⚠️  Found "2023-05-04T13:16:59.228452167Z level=error msg=\"Unable to contact k8s api-server\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": dial tcp 10.0.2.15:6443: connect: connection refused\" ipAddr=\"https://10.0.2.15:6443\" subsys=k8s-client" in logs 1 times
⚠️  Found "2023-05-04T13:16:59.228454886Z level=error msg=\"Start hook failed\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": dial tcp 10.0.2.15:6443: connect: connection refused\" function=\"client.(*compositeClientset).onStart\" subsys=hive" in logs 1 times
⚠️  Found "2023-05-04T13:16:59.267186305Z level=error msg=\"Unable to contact k8s api-server\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": dial tcp 10.0.2.15:6443: connect: connection refused\" ipAddr=\"https://10.0.2.15:6443\" subsys=k8s-client" in logs 1 times
⚠️  Found "2023-05-04T13:16:59.267194754Z level=error msg=\"Start hook failed\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": dial tcp 10.0.2.15:6443: connect: connection refused\" function=\"client.(*compositeClientset).onStart\" subsys=hive" in logs 1 times
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 4
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to contact k8s api-server
Start hook failed
Cilium pods: [cilium-5tr2d cilium-xkj9z]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
grafana-67ff49cd99-mcnrr     false     false
prometheus-8c7df94b4-wj67q   false     false


Standard Error

13:13:35 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTestExtended
13:13:35 STEP: Ensuring the namespace kube-system exists
13:13:35 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:13:35 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:13:36 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTestExtended Validate toEntities KubeAPIServer
13:13:36 STEP: Redeploying Cilium with tunnel disabled and KPR enabled
13:13:36 STEP: Installing Cilium
13:13:37 STEP: Waiting for Cilium to become ready
FAIL: Timed out after 240.001s.
Timeout while waiting for Cilium to become ready
Expected
    <*errors.errorString | 0xc0014e1790>: {
        s: "only 0 of 2 desired pods are ready",
    }
to be nil
13:17:37 STEP: Running JustAfterEach block for EntireTestsuite K8sPolicyTestExtended
FAIL: Found 4 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-05-04T13:16:59.228452167Z level=error msg="Unable to contact k8s api-server" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": dial tcp 10.0.2.15:6443: connect: connection refused" ipAddr="https://10.0.2.15:6443" subsys=k8s-client
2023-05-04T13:16:59.228454886Z level=error msg="Start hook failed" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": dial tcp 10.0.2.15:6443: connect: connection refused" function="client.(*compositeClientset).onStart" subsys=hive
2023-05-04T13:16:59.267186305Z level=error msg="Unable to contact k8s api-server" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": dial tcp 10.0.2.15:6443: connect: connection refused" ipAddr="https://10.0.2.15:6443" subsys=k8s-client
2023-05-04T13:16:59.267194754Z level=error msg="Start hook failed" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": dial tcp 10.0.2.15:6443: connect: connection refused" function="client.(*compositeClientset).onStart" subsys=hive
===================== TEST FAILED =====================
13:17:37 STEP: Running AfterFailed block for EntireTestsuite K8sPolicyTestExtended
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS              RESTARTS      AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-67ff49cd99-mcnrr           1/1     Running             0             11m    10.0.0.180      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-8c7df94b4-wj67q         1/1     Running             0             11m    10.0.0.224      k8s1   <none>           <none>
	 kube-system         cilium-5tr2d                       0/1     Init:0/6            3 (48s ago)   4m4s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-6db5d9b4cf-brg9m   0/1     Running             3 (42s ago)   4m4s   192.168.56.13   k8s3   <none>           <none>
	 kube-system         cilium-operator-6db5d9b4cf-v89zj   0/1     Running             3 (42s ago)   4m4s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-xkj9z                       0/1     Init:0/6            3 (48s ago)   4m4s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         coredns-6d97d5ddb-jzkj7            0/1     ContainerCreating   0             4m6s   <none>          k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running             0             18m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running             0             18m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running             0             18m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running             0             18m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-6mk6g                 1/1     Running             0             11m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-hkxxd                 1/1     Running             0             11m    192.168.56.13   k8s3   <none>           <none>
	 kube-system         log-gatherer-q4jcg                 1/1     Running             0             11m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-fhkc5               1/1     Running             0             12m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-kfj8k               1/1     Running             0             12m    192.168.56.13   k8s3   <none>           <none>
	 kube-system         registry-adder-vt586               1/1     Running             0             12m    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-5tr2d cilium-xkj9z]
cmd: kubectl exec -n kube-system cilium-5tr2d -c cilium-agent -- cilium service list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

cmd: kubectl exec -n kube-system cilium-5tr2d -c cilium-agent -- cilium endpoint list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

cmd: kubectl exec -n kube-system cilium-xkj9z -c cilium-agent -- cilium service list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

cmd: kubectl exec -n kube-system cilium-xkj9z -c cilium-agent -- cilium endpoint list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

===================== Exiting AfterFailed =====================
13:17:42 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTestExtended Validate toEntities KubeAPIServer
13:17:42 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTestExtended
13:17:42 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|c01fc254_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Allows_connection_to_KubeAPIServer.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2067/artifact/0e79ade9_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Denies_connection_to_KubeAPIServer.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2067/artifact/866ffcb1_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Still_allows_connection_to_KubeAPIServer_with_a_duplicate_policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2067/artifact/c01fc254_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Allows_connection_to_KubeAPIServer.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2067/artifact/test_results_Cilium-PR-K8s-1.26-kernel-net-next_2067_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/2067/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

maintainer-s-little-helper bot added the ci/flake label May 4, 2023
@maintainer-s-little-helper
Author

PR #25853 hit this flake with 92.47% similarity:


Test Name

K8sPolicyTestExtended Validate toEntities KubeAPIServer Allows connection to KubeAPIServer

Failure Output

FAIL: Timed out after 240.001s.

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:453
Timed out after 240.001s.
Timeout while waiting for Cilium to become ready
Expected
    <*errors.errorString | 0xc0011425e0>: {
        s: "only 0 of 2 desired pods are ready",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/k8s/assertion_helpers.go:115

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
⚠️  Found "2023-06-02T08:22:42.453076391Z level=error msg=\"Unable to contact k8s api-server\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": x509: certificate is valid for 10.96.0.1, 192.168.56.11, not 10.0.2.15\" ipAddr=\"https://10.0.2.15:6443\" subsys=k8s-client" in logs 1 times
⚠️  Found "2023-06-02T08:22:42.453154980Z level=error msg=\"Start hook failed\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": x509: certificate is valid for 10.96.0.1, 192.168.56.11, not 10.0.2.15\" function=\"client.(*compositeClientset).onStart\" subsys=hive" in logs 1 times
⚠️  Found "2023-06-02T08:22:37.862146815Z level=error msg=\"Unable to contact k8s api-server\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": dial tcp 10.0.2.15:6443: connect: connection refused\" ipAddr=\"https://10.0.2.15:6443\" subsys=k8s-client" in logs 1 times
⚠️  Found "2023-06-02T08:22:37.862173627Z level=error msg=\"Start hook failed\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": dial tcp 10.0.2.15:6443: connect: connection refused\" function=\"client.(*compositeClientset).onStart\" subsys=hive" in logs 1 times
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 4
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to contact k8s api-server
Start hook failed
Cilium pods: [cilium-qgdkd cilium-zrd87]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod   Ingress   Egress


Standard Error

08:19:15 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTestExtended
08:19:15 STEP: Ensuring the namespace kube-system exists
08:19:15 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
08:19:15 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
08:19:15 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTestExtended Validate toEntities KubeAPIServer
08:19:15 STEP: Redeploying Cilium with tunnel disabled and KPR enabled
08:19:15 STEP: Installing Cilium
08:19:17 STEP: Waiting for Cilium to become ready
FAIL: Timed out after 240.001s.
Timeout while waiting for Cilium to become ready
Expected
    <*errors.errorString | 0xc0011425e0>: {
        s: "only 0 of 2 desired pods are ready",
    }
to be nil
08:23:17 STEP: Running JustAfterEach block for EntireTestsuite K8sPolicyTestExtended
FAIL: Found 4 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-06-02T08:22:42.453076391Z level=error msg="Unable to contact k8s api-server" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": x509: certificate is valid for 10.96.0.1, 192.168.56.11, not 10.0.2.15" ipAddr="https://10.0.2.15:6443" subsys=k8s-client
2023-06-02T08:22:42.453154980Z level=error msg="Start hook failed" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": x509: certificate is valid for 10.96.0.1, 192.168.56.11, not 10.0.2.15" function="client.(*compositeClientset).onStart" subsys=hive
2023-06-02T08:22:37.862146815Z level=error msg="Unable to contact k8s api-server" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": dial tcp 10.0.2.15:6443: connect: connection refused" ipAddr="https://10.0.2.15:6443" subsys=k8s-client
2023-06-02T08:22:37.862173627Z level=error msg="Start hook failed" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": dial tcp 10.0.2.15:6443: connect: connection refused" function="client.(*compositeClientset).onStart" subsys=hive
===================== TEST FAILED =====================
08:23:17 STEP: Running AfterFailed block for EntireTestsuite K8sPolicyTestExtended
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS     RESTARTS      AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-84476dcf4b-ttmf9           0/1     Running    0             14m     10.0.0.227      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-7dbb447479-b7wrs        1/1     Running    0             14m     10.0.0.98       k8s1   <none>           <none>
	 kube-system         cilium-operator-6867cc5747-f42k8   1/1     Running    3 (43s ago)   4m3s    192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-6867cc5747-vcsvh   1/1     Running    3 (38s ago)   4m3s    192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-qgdkd                       0/1     Init:0/6   3 (45s ago)   4m4s    192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-zrd87                       0/1     Init:0/6   3 (49s ago)   4m3s    192.168.56.12   k8s2   <none>           <none>
	 kube-system         coredns-6b775575b5-jvp8j           0/1     Running    0             4m28s   10.0.0.7        k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running    0             19m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running    0             19m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running    0             19m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-24lnd                   1/1     Running    0             18m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-xkmnh                   1/1     Running    0             15m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running    0             19m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-b8lg2                 1/1     Running    0             14m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-wk675                 1/1     Running    0             14m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-hhhdx               1/1     Running    0             15m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-nx9lq               1/1     Running    0             15m     192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-qgdkd cilium-zrd87]
cmd: kubectl exec -n kube-system cilium-qgdkd -c cilium-agent -- cilium service list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

cmd: kubectl exec -n kube-system cilium-qgdkd -c cilium-agent -- cilium endpoint list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

cmd: kubectl exec -n kube-system cilium-zrd87 -c cilium-agent -- cilium service list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

cmd: kubectl exec -n kube-system cilium-zrd87 -c cilium-agent -- cilium endpoint list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

===================== Exiting AfterFailed =====================
08:23:21 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTestExtended Validate toEntities KubeAPIServer
08:23:21 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTestExtended
08:23:21 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|5407be37_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Allows_connection_to_KubeAPIServer.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//343/artifact/1ea4535b_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Denies_connection_to_KubeAPIServer.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//343/artifact/5407be37_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Allows_connection_to_KubeAPIServer.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//343/artifact/c33850c1_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Still_allows_connection_to_KubeAPIServer_with_a_duplicate_policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//343/artifact/test_results_Cilium-PR-K8s-1.24-kernel-5.4_343_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4/343/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@nbusseneau
Member

This was hit in #27030 as well, but on K8sDatapathConfig Host firewall With VXLAN.

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-5.4/65/testReport/junit/Suite-k8s-1/21/K8sDatapathConfig_Host_firewall_With_VXLAN/

Artifacts: b9cac1e2_K8sDatapathConfig_Host_firewall_With_VXLAN.zip

Excerpt of Cilium agent log:

2023-07-24T17:58:31.164794502Z level=info msg="Establishing connection to apiserver" host="https://10.0.2.15:6443" subsys=k8s
2023-07-24T17:58:31.168556794Z level=error msg="Unable to contact k8s api-server" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": x509: certificate is valid for 10.96.0.1, 192.168.56.11, not 10.0.2.15" ipAddr="https://10.0.2.15:6443" subsys=k8s
2023-07-24T17:58:31.168573352Z level=fatal msg="Unable to initialize Kubernetes subsystem" error="unable to create k8s client: unable to create k8s client: Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": x509: certificate is valid for 10.96.0.1, 192.168.56.11, not 10.0.2.15" subsys=daemon
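For anyone triaging the x509 variant: one way to confirm which SANs the kube-apiserver serving certificate actually covers (a diagnostic sketch, not part of the CI output; the address 10.0.2.15:6443 is taken from the log excerpt above):

# Hypothetical check: print the Subject Alternative Names presented by the apiserver.
echo | openssl s_client -connect 10.0.2.15:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If 10.0.2.15 does not appear in the output, the "certificate is valid for 10.96.0.1, 192.168.56.11, not 10.0.2.15" error above is expected for any client that reaches the apiserver through that address.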

@github-actions

This issue has been automatically marked as stale because it has not
had recent activity. It will be closed if no further activity occurs.

github-actions bot added the stale label Sep 24, 2023
@github-actions

github-actions bot commented Oct 8, 2023

This issue has not seen any activity since it was marked stale.
Closing.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Oct 8, 2023