CI: K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications #30802

Open
maintainer-s-little-helper bot opened this issue Feb 16, 2024 · 54 comments
Labels
ci/flake: This is a known failure that occurs in the tree. Please investigate me!

Comments

@maintainer-s-little-helper

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision 
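Triage note: the "cannot get the revision" message means the harness could not read the agent's policy revision after applying the manifest. Below is a minimal sketch of querying that revision by hand, assuming the agent CLI's `cilium policy get -o json` output exposes a top-level `revision` field; the pod name is taken from the failure above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Agent pod name taken from the failure output above.
	pod := "cilium-tqqd5"

	// Ask the agent for its current policy state, the same operation
	// the test framework performs after applying a policy manifest.
	out, err := exec.Command(
		"kubectl", "exec", "-n", "kube-system", pod,
		"-c", "cilium-agent", "--",
		"cilium", "policy", "get", "-o", "json",
	).Output()
	if err != nil {
		log.Fatalf("cannot get the revision: %v", err)
	}

	// Assumption: the JSON payload carries the revision in a
	// top-level "revision" field.
	var resp struct {
		Revision int64 `json:"revision"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		log.Fatalf("unexpected policy output: %v", err)
	}
	fmt.Printf("policy revision on %s: %d\n", pod, resp.Revision)
}
```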

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc00012b3e0>: {
        s: "Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718
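Triage note: the "Expected ... to be nil" block above is standard Gomega rendering of a failed nil assertion. The following is a self-contained sketch of that idiom; the package, test, and helper names are placeholders, not the harness's actual code in datapath_configuration.go.

```go
package k8stest

import (
	"errors"
	"testing"

	. "github.com/onsi/gomega"
)

// applyPolicy stands in for the harness helper that applies
// l3-policy-demo.yaml and waits on the agents' policy revisions;
// here it just returns the error seen in the report.
func applyPolicy(manifest string) error {
	return errors.New("Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision ")
}

func TestMonitorAggregationPolicyApply(t *testing.T) {
	g := NewWithT(t)
	err := applyPolicy("l3-policy-demo.yaml")
	// On failure, Gomega renders the error value and the expectation,
	// producing the "Expected <*errors.errorString> ... to be nil" block.
	g.Expect(err).To(BeNil(), "Error creating resource %s", "l3-policy-demo.yaml")
}
```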

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Cilium pods: [cilium-4xgn6 cilium-tqqd5]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testds-vgmvq                  false     false
grafana-bd774d7bd-kjkvf       false     false
prometheus-598dddcc7c-6bzpk   false     false
coredns-86d4d67667-sj828      false     false
test-k8s2-7b4f7b4586-9w9mx    false     false
testclient-2h6gk              false     false
Cilium agent 'cilium-4xgn6': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-tqqd5': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

01:42:40 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
01:42:40 STEP: Ensuring the namespace kube-system exists
01:42:40 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
01:42:40 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
01:42:40 STEP: Installing Cilium
01:42:41 STEP: Waiting for Cilium to become ready
01:43:22 STEP: Restarting unmanaged pods coredns-86d4d67667-mgh2j in namespace kube-system
01:43:22 STEP: Validating if Kubernetes DNS is deployed
01:43:22 STEP: Checking if deployment is ready
01:43:22 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
01:43:22 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
01:43:22 STEP: Waiting for Kubernetes DNS to become operational
01:43:22 STEP: Checking if deployment is ready
01:43:22 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:43:23 STEP: Checking if deployment is ready
01:43:23 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:43:24 STEP: Checking if deployment is ready
01:43:24 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:43:25 STEP: Checking if deployment is ready
01:43:25 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:43:26 STEP: Checking if deployment is ready
01:43:26 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:43:27 STEP: Checking if deployment is ready
01:43:27 STEP: Checking if kube-dns service is plumbed correctly
01:43:27 STEP: Checking if pods have identity
01:43:27 STEP: Checking if DNS can resolve
01:43:28 STEP: Validating Cilium Installation
01:43:28 STEP: Performing Cilium controllers preflight check
01:43:28 STEP: Performing Cilium health check
01:43:28 STEP: Performing Cilium status preflight check
01:43:28 STEP: Checking whether host EP regenerated
01:43:29 STEP: Performing Cilium service preflight check
01:43:29 STEP: Performing K8s service preflight check
01:43:31 STEP: Waiting for cilium-operator to be ready
01:43:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
01:43:33 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
01:43:35 STEP: Making sure all endpoints are in ready state
01:43:37 STEP: Launching cilium monitor on "cilium-tqqd5"
01:43:37 STEP: Creating namespace 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito
01:43:37 STEP: Deploying demo_ds.yaml in namespace 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito
01:43:38 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc00012b3e0>: {
        s: "Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-02-16T01:43:48Z====
01:43:48 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
01:43:48 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-7b4f7b4586-9w9mx         2/2     Running             0          12s     10.0.1.131      k8s2   <none>           <none>
	 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-2h6gk                   1/1     Running             0          12s     10.0.1.67       k8s2   <none>           <none>
	 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-94g29                   0/1     ContainerCreating   0          13s     <none>          k8s1   <none>           <none>
	 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-f4497                       0/2     ContainerCreating   0          13s     <none>          k8s1   <none>           <none>
	 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-vgmvq                       2/2     Running             0          13s     10.0.1.78       k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-bd774d7bd-kjkvf            0/1     Running             0          70s     10.0.0.126      k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-598dddcc7c-6bzpk        1/1     Running             0          70s     10.0.0.79       k8s1   <none>           <none>
	 kube-system                                                       cilium-4xgn6                       1/1     Running             0          69s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-554c5fd95c-5hbj9   1/1     Running             0          69s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-554c5fd95c-t8xgc   1/1     Running             0          69s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-tqqd5                       1/1     Running             0          69s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-86d4d67667-sj828           1/1     Running             0          28s     10.0.0.206      k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-4w2lm                   1/1     Running             0          4m40s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-v6jm5                   1/1     Running             0          2m16s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-xn8sf                 1/1     Running             0          88s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-xxv2m                 1/1     Running             0          88s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-fqbqs               1/1     Running             0          2m7s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-jjlrp               1/1     Running             0          2m7s    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-4xgn6 cilium-tqqd5]
cmd: kubectl exec -n kube-system cilium-4xgn6 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.23 (v1.23.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-1a4105b4)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.4, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 203/65535 (0.31%), Flows/s: 4.71   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-16T01:43:30Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-4xgn6 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 266        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1af   10.0.1.206   ready   
	 1263       Disabled           Disabled          4275       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::172   10.0.1.67    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 2049       Disabled           Disabled          10241      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::195   10.0.1.131   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                                                              
	 2202       Disabled           Disabled          12176      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1d2   10.0.1.78    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3174       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            reserved:host                                                                                                                                                     
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-tqqd5 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.23 (v1.23.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-1a4105b4)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.134, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 267/65535 (0.41%), Flows/s: 6.41   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-16T01:43:31Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-tqqd5 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 389        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::e9   10.0.0.61    ready   
	 574        Disabled           Disabled          5642       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::28   10.0.0.206   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 870        Disabled           Disabled          12176      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::29   10.0.0.60    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1120       Disabled           Disabled          3167       k8s:app=grafana                                                                                                                  fd02::57   10.0.0.126   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1215       Disabled           Disabled          4275       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::40   10.0.0.131   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1736       Disabled           Disabled          17255      k8s:app=prometheus                                                                                                               fd02::e8   10.0.0.79    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2079       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                               
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
01:44:31 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
01:44:31 STEP: Deleting deployment demo_ds.yaml
01:44:32 STEP: Deleting namespace 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito
01:44:47 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|d7a3eae6_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19//660/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19//660/artifact/d7a3eae6_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19//660/artifact/test_results_Cilium-PR-K8s-1.23-kernel-4.19_660_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19/660/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

maintainer-s-little-helper bot added the ci/flake label on Feb 16, 2024
@maintainer-s-little-helper
Author

PR #30801 hit this flake with 96.12% similarity:

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-r8fv7 policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-r8fv7 policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0002de8a0>: {
        s: "Cannot retrieve cilium pod cilium-r8fv7 policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Cilium pods: [cilium-6tgtq cilium-r8fv7]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-t684w              false     false
testds-qp8cv                  false     false
grafana-bd774d7bd-4p6st       false     false
prometheus-598dddcc7c-6dvdj   false     false
coredns-86d4d67667-46c8t      false     false
Cilium agent 'cilium-6tgtq': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-r8fv7': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

04:58:01 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
04:58:01 STEP: Ensuring the namespace kube-system exists
04:58:01 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
04:58:02 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
04:58:02 STEP: Installing Cilium
04:58:02 STEP: Waiting for Cilium to become ready
04:58:47 STEP: Restarting unmanaged pods coredns-86d4d67667-2kv5l in namespace kube-system
04:58:47 STEP: Validating if Kubernetes DNS is deployed
04:58:47 STEP: Checking if deployment is ready
04:58:47 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
04:58:47 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
04:58:48 STEP: Waiting for Kubernetes DNS to become operational
04:58:48 STEP: Checking if deployment is ready
04:58:48 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
04:58:49 STEP: Checking if deployment is ready
04:58:49 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
04:58:50 STEP: Checking if deployment is ready
04:58:50 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
04:58:51 STEP: Checking if deployment is ready
04:58:51 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
04:58:52 STEP: Checking if deployment is ready
04:58:52 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
04:58:53 STEP: Checking if deployment is ready
04:58:53 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
04:58:54 STEP: Checking if deployment is ready
04:58:54 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
04:58:55 STEP: Checking if deployment is ready
04:58:55 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
04:58:56 STEP: Checking if deployment is ready
04:58:56 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
04:58:57 STEP: Checking if deployment is ready
04:58:57 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
04:58:58 STEP: Checking if deployment is ready
04:58:58 STEP: Checking if kube-dns service is plumbed correctly
04:58:58 STEP: Checking if pods have identity
04:58:58 STEP: Checking if DNS can resolve
04:58:58 STEP: Validating Cilium Installation
04:58:58 STEP: Performing Cilium controllers preflight check
04:58:58 STEP: Performing Cilium health check
04:58:58 STEP: Performing Cilium status preflight check
04:58:58 STEP: Checking whether host EP regenerated
04:58:59 STEP: Performing Cilium service preflight check
04:58:59 STEP: Performing K8s service preflight check
04:59:01 STEP: Waiting for cilium-operator to be ready
04:59:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
04:59:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
04:59:01 STEP: Making sure all endpoints are in ready state
04:59:08 STEP: Launching cilium monitor on "cilium-6tgtq"
04:59:08 STEP: Creating namespace 202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito
04:59:08 STEP: Deploying demo_ds.yaml in namespace 202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito
04:59:09 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-r8fv7 policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0002de8a0>: {
        s: "Cannot retrieve cilium pod cilium-r8fv7 policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-02-16T04:59:20Z====
04:59:20 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
04:59:23 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-7b4f7b4586-tdxh4         0/2     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-nn4f9                   0/1     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-t684w                   1/1     Running             0          16s     10.0.0.71       k8s1   <none>           <none>
	 202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-g4sh6                       0/2     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-qp8cv                       2/2     Running             0          16s     10.0.0.48       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-bd774d7bd-4p6st            0/1     ContainerCreating   0          84s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-598dddcc7c-6dvdj        1/1     Running             0          84s     10.0.1.227      k8s2   <none>           <none>
	 kube-system                                                       cilium-6tgtq                       1/1     Running             0          83s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-554c5fd95c-k2tn5   1/1     Running             0          83s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-554c5fd95c-srgdn   1/1     Running             0          83s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-r8fv7                       1/1     Running             0          83s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-86d4d67667-46c8t           1/1     Running             0          37s     10.0.1.126      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m31s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m31s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m31s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-74nmr                   1/1     Running             0          4m58s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-wq7qk                   1/1     Running             0          2m27s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m31s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-bh45c                 1/1     Running             0          100s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-tjlxd                 1/1     Running             0          100s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-7f4xx               1/1     Running             0          2m19s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-dcngn               1/1     Running             0          2m19s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-6tgtq cilium-r8fv7]
cmd: kubectl exec -n kube-system cilium-6tgtq -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.23 (v1.23.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-1a4105b4)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.0.18, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 149/65535 (0.23%), Flows/s: 2.10   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-16T04:58:59Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-6tgtq -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 1037       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                               
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 1825       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::20   10.0.0.234   ready   
	 2352       Disabled           Disabled          22788      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::5d   10.0.0.48    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 3212       Disabled           Disabled          41896      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::af   10.0.0.71    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-r8fv7 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.23 (v1.23.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-1a4105b4)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 8/254 allocated from 10.0.1.0/24, IPv6: 8/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       40/40 healthy
	 Proxy Status:            OK, ip 10.0.1.2, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 245/65535 (0.37%), Flows/s: 3.88   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-16T04:59:01Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-r8fv7 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 469        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1f6   10.0.1.214   ready   
	 631        Disabled           Disabled          22788      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::14c   10.0.1.98    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 1172       Disabled           Disabled          10905      k8s:app=grafana                                                                                                                  fd02::16a   10.0.1.6     ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                  
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                 
	 1197       Disabled           Disabled          6751       k8s:app=prometheus                                                                                                               fd02::153   10.0.1.227   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                  
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                            
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                 
	 1716       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            reserved:host                                                                                                                                                     
	 1754       Disabled           Disabled          41896      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1a5   10.0.1.222   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 3116       Disabled           Disabled          24675      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::1be   10.0.1.126   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                       
	                                                            k8s:k8s-app=kube-dns                                                                                                                                              
	 4044       Disabled           Disabled          2677       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::173   10.0.1.182   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                                                              
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
05:00:04 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
05:00:04 STEP: Deleting deployment demo_ds.yaml
05:00:05 STEP: Deleting namespace 202402160459k8sdatapathconfigmonitoraggregationchecksthatmonito
05:00:20 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|133c39a6_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19//661/artifact/133c39a6_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19//661/artifact/24d99677_K8sDatapathConfig_Host_firewall_Check_connectivity_with_IPv6_disabled.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19//661/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19//661/artifact/test_results_Cilium-PR-K8s-1.23-kernel-4.19_661_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19/661/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper
Author

PR #30812 hit this flake with 93.38% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-56pz8 policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-56pz8 policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000448c50>: {
        s: "Cannot retrieve cilium pod cilium-56pz8 policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to get node resource
Waiting for k8s node information
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-56pz8 cilium-c4r67]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-sc4cg              false     false
testds-wh6w4                  false     false
grafana-698dc95f6c-5j522      false     false
prometheus-669755c8c5-5gfzv   false     false
coredns-85fbf8f7dd-blhz7      false     false
Cilium agent 'cilium-56pz8': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-c4r67': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

00:06:19 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
00:06:19 STEP: Ensuring the namespace kube-system exists
00:06:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
00:06:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
00:06:19 STEP: Installing Cilium
00:06:20 STEP: Waiting for Cilium to become ready
00:07:05 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-t69br in namespace kube-system
00:07:05 STEP: Validating if Kubernetes DNS is deployed
00:07:05 STEP: Checking if deployment is ready
00:07:05 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
00:07:05 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
00:07:05 STEP: Waiting for Kubernetes DNS to become operational
00:07:05 STEP: Checking if deployment is ready
00:07:05 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
00:07:06 STEP: Checking if deployment is ready
00:07:06 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
00:07:07 STEP: Checking if deployment is ready
00:07:07 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
00:07:08 STEP: Checking if deployment is ready
00:07:08 STEP: Checking if kube-dns service is plumbed correctly
00:07:08 STEP: Checking if pods have identity
00:07:08 STEP: Checking if DNS can resolve
00:07:09 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
00:07:09 STEP: Checking if deployment is ready
00:07:09 STEP: Checking if kube-dns service is plumbed correctly
00:07:09 STEP: Checking if pods have identity
00:07:09 STEP: Checking if DNS can resolve
00:07:10 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
00:07:10 STEP: Checking if deployment is ready
00:07:10 STEP: Checking if kube-dns service is plumbed correctly
00:07:10 STEP: Checking if pods have identity
00:07:10 STEP: Checking if DNS can resolve
00:07:11 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
00:07:11 STEP: Checking if deployment is ready
00:07:11 STEP: Checking if kube-dns service is plumbed correctly
00:07:11 STEP: Checking if pods have identity
00:07:11 STEP: Checking if DNS can resolve
00:07:12 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
00:07:12 STEP: Checking if deployment is ready
00:07:12 STEP: Checking if kube-dns service is plumbed correctly
00:07:12 STEP: Checking if pods have identity
00:07:12 STEP: Checking if DNS can resolve
00:07:13 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
00:07:13 STEP: Checking if deployment is ready
00:07:13 STEP: Checking if kube-dns service is plumbed correctly
00:07:13 STEP: Checking if pods have identity
00:07:13 STEP: Checking if DNS can resolve
00:07:14 STEP: Validating Cilium Installation
00:07:14 STEP: Performing Cilium controllers preflight check
00:07:14 STEP: Performing Cilium status preflight check
00:07:14 STEP: Checking whether host EP regenerated
00:07:14 STEP: Performing Cilium health check
00:07:15 STEP: Performing Cilium service preflight check
00:07:15 STEP: Performing K8s service preflight check
00:07:16 STEP: Waiting for cilium-operator to be ready
00:07:16 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
00:07:16 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
00:07:16 STEP: Making sure all endpoints are in ready state
00:07:18 STEP: Launching cilium monitor on "cilium-c4r67"
00:07:18 STEP: Creating namespace 202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito
00:07:18 STEP: Deploying demo_ds.yaml in namespace 202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito
00:07:19 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-56pz8 policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000448c50>: {
        s: "Cannot retrieve cilium pod cilium-56pz8 policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-02-17T00:07:29Z====
00:07:29 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
00:07:37 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-jcfpr          0/2     ContainerCreating   0          20s     <none>          k8s2   <none>           <none>
	 202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-pfbqm                   0/1     ContainerCreating   0          20s     <none>          k8s2   <none>           <none>
	 202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-sc4cg                   1/1     Running             0          20s     10.0.1.125      k8s1   <none>           <none>
	 202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-sbrkt                       0/2     ContainerCreating   0          20s     <none>          k8s2   <none>           <none>
	 202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-wh6w4                       2/2     Running             0          20s     10.0.1.182      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-5j522           0/1     Running             0          80s     10.0.0.223      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-5gfzv        0/1     ContainerCreating   0          80s     <none>          k8s2   <none>           <none>
	 kube-system                                                       cilium-56pz8                       1/1     Running             0          79s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-c4r67                       1/1     Running             0          79s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-6849565478-2gtm6   1/1     Running             0          79s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-6849565478-6nscz   1/1     Running             0          79s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-blhz7           1/1     Running             0          34s     10.0.0.134      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m37s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m37s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m37s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-6krhn                   1/1     Running             0          2m16s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-gw85z                   1/1     Running             0          5m18s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m37s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-4jr97                 1/1     Running             0          96s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-xdvzp                 1/1     Running             0          96s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-6p6jq               1/1     Running             0          2m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-bxfxz               1/1     Running             0          2m13s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-56pz8 cilium-c4r67]
cmd: kubectl exec -n kube-system cilium-56pz8 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-22b6f13e)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.88, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 210/65535 (0.32%), Flows/s: 3.05   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-17T00:07:15Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-56pz8 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 163        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::4c   10.0.0.152   ready   
	 236        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 271        Disabled           Disabled          543        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::78   10.0.0.134   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 415        Disabled           Disabled          22942      k8s:app=prometheus                                                                                                               fd02::9    10.0.0.57    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 639        Disabled           Disabled          25532      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::d3   10.0.0.33    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 698        Disabled           Disabled          1671       k8s:app=grafana                                                                                                                  fd02::5f   10.0.0.223   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1934       Disabled           Disabled          11503      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::b9   10.0.0.13    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 2265       Disabled           Disabled          1916       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::bd   10.0.0.175   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-c4r67 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-22b6f13e)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.4, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 173/65535 (0.26%), Flows/s: 2.86   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-17T00:07:16Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-c4r67 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 268        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 3219       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::15a   10.0.1.105   ready   
	 3241       Disabled           Disabled          25532      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1c6   10.0.1.182   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3963       Disabled           Disabled          11503      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::15d   10.0.1.125   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
00:08:18 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
00:08:18 STEP: Deleting deployment demo_ds.yaml
00:08:19 STEP: Deleting namespace 202402170007k8sdatapathconfigmonitoraggregationchecksthatmonito
00:08:34 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|bd3e5889_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//394/artifact/70bf38d0_K8sDatapathConfig_Host_firewall_Check_connectivity_with_IPv6_disabled.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//394/artifact/bd3e5889_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//394/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//394/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_394_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/394/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
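The standard error above shows the suite polling once per second ("Checking if deployment is ready" / "Kubernetes DNS is not ready yet: ...") until DNS comes up. A minimal sketch of that kind of deadline-bounded retry loop follows; waitUntil is a generic stand-in, not the suite's actual WaitforPods implementation:

// Hedged sketch of the one-second readiness polling visible in the logs
// above. waitUntil retries a check until it succeeds or a deadline passes,
// mirroring the harness's behaviour in shape only.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitUntil(timeout, interval time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("condition not met before deadline: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitUntil(5*time.Second, 1*time.Second, func() error {
		// Placeholder check: succeeds after ~3s to demonstrate the retries.
		if time.Since(start) < 3*time.Second {
			return errors.New("only 0 of 1 replicas are available")
		}
		return nil
	})
	fmt.Println("wait result:", err)
}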

@maintainer-s-little-helper (Author)

PR #30812 hit this flake with 93.64% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-6kjzs policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-6kjzs policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000f2c1f0>: {
        s: "Cannot retrieve cilium pod cilium-6kjzs policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718
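The stack trace above points at test/k8s/datapath_configuration.go:718 and has the shape of a Ginkgo/Gomega "Expected ... to be nil" failure. A hedged reconstruction of that call site — the helper names come from the suite's public test helpers, but this is an illustration, not the verbatim source:

package k8sTest

import (
	. "github.com/onsi/gomega"

	"github.com/cilium/cilium/test/helpers"
)

// applyL3Policy sketches the failing step: CiliumPolicyAction applies the
// manifest and then polls every Cilium agent for a bumped policy revision.
// The flake fires when that poll cannot read the revision from one agent,
// yielding the "cannot get the revision" error quoted above. This must run
// inside a Ginkgo It() block for the Gomega assertion to be registered.
func applyL3Policy(kubectl *helpers.Kubectl, namespace, policyPath string) {
	_, err := kubectl.CiliumPolicyAction(namespace, policyPath,
		helpers.KubectlApply, helpers.HelperTimeout)
	Expect(err).To(BeNil(), "Error creating resource %s", policyPath)
}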

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-6kjzs cilium-n9p2s]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-669755c8c5-89wjh   false     false
coredns-85fbf8f7dd-kfwr5      false     false
testclient-7qck4              false     false
testds-z6pgq                  false     false
grafana-698dc95f6c-5x42w      false     false
Cilium agent 'cilium-6kjzs': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-n9p2s': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

08:38:44 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
08:38:44 STEP: Ensuring the namespace kube-system exists
08:38:44 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
08:38:44 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
08:38:45 STEP: Installing Cilium
08:38:45 STEP: Waiting for Cilium to become ready
08:39:28 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-8hcmd in namespace kube-system
08:39:28 STEP: Validating if Kubernetes DNS is deployed
08:39:28 STEP: Checking if deployment is ready
08:39:28 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
08:39:28 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
08:39:28 STEP: Waiting for Kubernetes DNS to become operational
08:39:28 STEP: Checking if deployment is ready
08:39:28 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:39:29 STEP: Checking if deployment is ready
08:39:29 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:39:30 STEP: Checking if deployment is ready
08:39:30 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:39:31 STEP: Checking if deployment is ready
08:39:31 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:39:32 STEP: Checking if deployment is ready
08:39:32 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:39:33 STEP: Checking if deployment is ready
08:39:33 STEP: Checking if kube-dns service is plumbed correctly
08:39:33 STEP: Checking if pods have identity
08:39:33 STEP: Checking if DNS can resolve
08:39:34 STEP: Validating Cilium Installation
08:39:34 STEP: Performing Cilium controllers preflight check
08:39:34 STEP: Performing Cilium health check
08:39:34 STEP: Performing Cilium status preflight check
08:39:34 STEP: Checking whether host EP regenerated
08:39:35 STEP: Performing Cilium service preflight check
08:39:35 STEP: Performing K8s service preflight check
08:39:36 STEP: Waiting for cilium-operator to be ready
08:39:36 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
08:39:36 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
08:39:36 STEP: Making sure all endpoints are in ready state
08:39:43 STEP: Launching cilium monitor on "cilium-n9p2s"
08:39:43 STEP: Creating namespace 202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito
08:39:43 STEP: Deploying demo_ds.yaml in namespace 202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito
08:39:44 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-6kjzs policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000f2c1f0>: {
        s: "Cannot retrieve cilium pod cilium-6kjzs policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-02-17T08:39:54Z====
08:39:54 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
08:39:55 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-288p6          0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-7qck4                   1/1     Running             0          13s     10.0.1.76       k8s1   <none>           <none>
	 202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-xllxr                   0/1     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-lfbxj                       0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-z6pgq                       2/2     Running             0          13s     10.0.1.90       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-5x42w           0/1     Running             0          73s     10.0.0.46       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-89wjh        1/1     Running             0          73s     10.0.0.21       k8s2   <none>           <none>
	 kube-system                                                       cilium-6kjzs                       1/1     Running             0          72s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-n9p2s                       1/1     Running             0          72s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-6849565478-k5k5s   1/1     Running             0          72s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-6849565478-ppqnp   1/1     Running             0          72s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-kfwr5           1/1     Running             0          29s     10.0.0.144      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m3s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m3s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m10s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-hcgnq                   1/1     Running             0          2m10s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-r5k46                   1/1     Running             0          4m44s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m3s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-kxwk2                 1/1     Running             0          90s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-lwrmx                 1/1     Running             0          90s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-f2dcb               1/1     Running             0          2m7s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-tzggk               1/1     Running             0          2m7s    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-6kjzs cilium-n9p2s]
cmd: kubectl exec -n kube-system cilium-6kjzs -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-22b6f13e)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.121, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 207/65535 (0.32%), Flows/s: 4.28   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-17T08:39:35Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-6kjzs -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 686        Disabled           Disabled          11911      k8s:app=grafana                                                                                                                  fd02::36   10.0.0.46    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 817        Disabled           Disabled          11327      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::86   10.0.0.204   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 987        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::bd   10.0.0.63    ready   
	 1572       Disabled           Disabled          18451      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::75   10.0.0.235   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 1962       Disabled           Disabled          7661       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::7a   10.0.0.144   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 2766       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 3513       Disabled           Disabled          13218      k8s:app=prometheus                                                                                                               fd02::61   10.0.0.21    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 3609       Disabled           Disabled          16511      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::4a   10.0.0.10    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-n9p2s -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-22b6f13e)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.111, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 145/65535 (0.22%), Flows/s: 2.34   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-17T08:39:36Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-n9p2s -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 1874       Disabled           Disabled          16511      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::161   10.0.1.90   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 2341       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                               
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 2923       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::12a   10.0.1.87   ready   
	 3078       Disabled           Disabled          11327      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1a0   10.0.1.76   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
08:40:37 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
08:40:37 STEP: Deleting deployment demo_ds.yaml
08:40:38 STEP: Deleting namespace 202402170839k8sdatapathconfigmonitoraggregationchecksthatmonito
08:40:52 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|7125ae22_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//395/artifact/7125ae22_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//395/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//395/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_395_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/395/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
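The next occurrence below differs slightly: the Cilium validation step also catches a failing agent controller (ipcache-inject-labels) before the policy step runs. To pull failing controllers out of an agent directly, something like the following should work; the JSON field names are an assumption based on the agent's status API model, and `cilium status --all-controllers` would print the same information as a table:

// Hedged sketch: list failing cilium-agent controllers via the agent's
// JSON status output. Assumptions: `cilium status -o json` is supported,
// controllers live under a "controllers" key, and failure details use the
// kebab-case keys shown below. Pod name is from the report that follows.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type controller struct {
	Name   string `json:"name"`
	Status struct {
		ConsecutiveFailures int64  `json:"consecutive-failure-count"`
		LastFailureMsg      string `json:"last-failure-msg"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("kubectl", "exec", "-n", "kube-system",
		"cilium-9d5wl", "-c", "cilium-agent", "--",
		"cilium", "status", "-o", "json").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	var status struct {
		Controllers []controller `json:"controllers"`
	}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, c := range status.Controllers {
		if c.Status.ConsecutiveFailures > 0 {
			fmt.Printf("%s: %s\n", c.Name, c.Status.LastFailureMsg)
		}
	}
}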

@maintainer-s-little-helper (Author)

PR #30957 hit this flake with 92.21% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-qc22j policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-qc22j policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc00091e480>: {
        s: "Cannot retrieve cilium pod cilium-qc22j policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 1
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Network status error received, restarting client connections
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Waiting for k8s node information
Unable to get node resource
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-9d5wl cilium-qc22j]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-lbd5w              false     false
testds-t5bgd                  false     false
grafana-698dc95f6c-q57xt      false     false
prometheus-669755c8c5-qbl8q   false     false
coredns-85fbf8f7dd-c8lbz      false     false
Cilium agent 'cilium-9d5wl': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-qc22j': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
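
Both agents report healthy here even though the policy apply failed, which points at a timing window rather than a broken agent. A quick way to re-probe each agent by hand is the brief status mode; a sketch with the pod names from this run (--brief prints a terse OK/failure, not the full summary lines above):

  # One-line health probe per agent pod.
  for pod in cilium-9d5wl cilium-qc22j; do
    kubectl exec -n kube-system "$pod" -c cilium-agent -- cilium status --brief
  done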


Standard Error

10:06:17 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
10:06:17 STEP: Ensuring the namespace kube-system exists
10:06:17 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
10:06:17 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
10:06:17 STEP: Installing Cilium
10:06:18 STEP: Waiting for Cilium to become ready
10:06:59 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-86fz5 in namespace kube-system
10:06:59 STEP: Validating if Kubernetes DNS is deployed
10:06:59 STEP: Checking if deployment is ready
10:06:59 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
10:06:59 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
10:06:59 STEP: Waiting for Kubernetes DNS to become operational
10:06:59 STEP: Checking if deployment is ready
10:06:59 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:07:00 STEP: Checking if deployment is ready
10:07:00 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:07:01 STEP: Checking if deployment is ready
10:07:01 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:07:02 STEP: Checking if deployment is ready
10:07:02 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:07:03 STEP: Checking if deployment is ready
10:07:03 STEP: Checking if kube-dns service is plumbed correctly
10:07:03 STEP: Checking if pods have identity
10:07:03 STEP: Checking if DNS can resolve
10:07:04 STEP: Validating Cilium Installation
10:07:04 STEP: Performing Cilium controllers preflight check
10:07:04 STEP: Performing Cilium health check
10:07:04 STEP: Checking whether host EP regenerated
10:07:04 STEP: Performing Cilium status preflight check
10:07:05 STEP: Performing Cilium service preflight check
10:07:05 STEP: Performing K8s service preflight check
10:07:06 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-9d5wl': controller ipcache-inject-labels is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Disabled   
	 Host firewall:          Disabled
	 CNI Chaining:           none
	 CNI Config file:        CNI configuration file management disabled
	 Cilium:                 Ok   1.13.12 (v1.13.12-5984e8c1)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 IPAM:                   IPv4: 2/254 allocated from 10.0.1.0/24, IPv6: 2/254 allocated from fd02::100/120
	 IPv6 BIG TCP:           Disabled
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      16/17 healthy
	   Name                                  Last success   Last error   Count   Message
	   cilium-health-ep                      5s ago         never        0       no error                     
	   dns-garbage-collector-job             9s ago         never        0       no error                     
	   endpoint-1750-regeneration-recovery   never          never        0       no error                     
	   endpoint-391-regeneration-recovery    never          never        0       no error                     
	   endpoint-gc                           9s ago         never        0       no error                     
	   ipcache-inject-labels                 never          7s ago       7       k8s cache not fully synced   
	   k8s-heartbeat                         9s ago         never        0       no error                     
	   link-cache                            6s ago         never        0       no error                     
	   metricsmap-bpf-prom-sync              4s ago         never        0       no error                     
	   resolve-identity-1750                 6s ago         never        0       no error                     
	   resolve-identity-391                  5s ago         never        0       no error                     
	   sync-endpoints-and-host-ips           6s ago         never        0       no error                     
	   sync-lb-maps-with-k8s-services        6s ago         never        0       no error                     
	   sync-policymap-1750                   5s ago         never        0       no error                     
	   sync-to-k8s-ciliumendpoint (1750)     6s ago         never        0       no error                     
	   sync-to-k8s-ciliumendpoint (391)      5s ago         never        0       no error                     
	   template-dir-watcher                  never          never        0       no error                     
	 Proxy Status:            OK, ip 10.0.1.60, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 42/65535 (0.06%), Flows/s: 6.88   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          0/2 reachable   (2024-02-26T10:07:00Z)
	   Name                   IP              Node        Endpoints
	   k8s1 (localhost)       192.168.56.11   reachable   unreachable
	   k8s2                   192.168.56.12   reachable   unreachable
	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 

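The dump above shows why the preflight keeps retrying: ipcache-inject-labels has never succeeded ("k8s cache not fully synced"), leaving 16/17 controllers healthy. To list every controller with its last success and last error while this settles, a sketch against the same pod:

  # --all-controllers includes healthy controllers as well, which makes it
  # easy to confirm that only ipcache-inject-labels is lagging.
  kubectl exec -n kube-system cilium-9d5wl -c cilium-agent -- cilium status --all-controllers
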
10:07:06 STEP: Performing Cilium controllers preflight check
10:07:06 STEP: Performing Cilium health check
10:07:06 STEP: Checking whether host EP regenerated
10:07:06 STEP: Performing Cilium status preflight check
10:07:07 STEP: Performing Cilium service preflight check
10:07:07 STEP: Performing K8s service preflight check
10:07:09 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-9d5wl': controller ipcache-inject-labels is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Disabled   
	 Host firewall:          Disabled
	 CNI Chaining:           none
	 CNI Config file:        CNI configuration file management disabled
	 Cilium:                 Ok   1.13.12 (v1.13.12-5984e8c1)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 IPAM:                   IPv4: 2/254 allocated from 10.0.1.0/24, IPv6: 2/254 allocated from fd02::100/120
	 IPv6 BIG TCP:           Disabled
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      16/17 healthy
	   Name                                  Last success   Last error   Count   Message
	   cilium-health-ep                      7s ago         never        0       no error                     
	   dns-garbage-collector-job             11s ago        never        0       no error                     
	   endpoint-1750-regeneration-recovery   never          never        0       no error                     
	   endpoint-391-regeneration-recovery    never          never        0       no error                     
	   endpoint-gc                           11s ago        never        0       no error                     
	   ipcache-inject-labels                 never          10s ago      7       k8s cache not fully synced   
	   k8s-heartbeat                         11s ago        never        0       no error                     
	   link-cache                            8s ago         never        0       no error                     
	   metricsmap-bpf-prom-sync              6s ago         never        0       no error                     
	   resolve-identity-1750                 9s ago         never        0       no error                     
	   resolve-identity-391                  7s ago         never        0       no error                     
	   sync-endpoints-and-host-ips           9s ago         never        0       no error                     
	   sync-lb-maps-with-k8s-services        9s ago         never        0       no error                     
	   sync-policymap-1750                   7s ago         never        0       no error                     
	   sync-to-k8s-ciliumendpoint (1750)     9s ago         never        0       no error                     
	   sync-to-k8s-ciliumendpoint (391)      7s ago         never        0       no error                     
	   template-dir-watcher                  never          never        0       no error                     
	 Proxy Status:            OK, ip 10.0.1.60, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 42/65535 (0.06%), Flows/s: 6.88   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-26T10:07:05Z)
	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 

10:07:09 STEP: Performing Cilium status preflight check
10:07:09 STEP: Performing Cilium health check
10:07:09 STEP: Performing Cilium controllers preflight check
10:07:09 STEP: Checking whether host EP regenerated
10:07:10 STEP: Performing Cilium service preflight check
10:07:10 STEP: Performing K8s service preflight check
10:07:11 STEP: Waiting for cilium-operator to be ready
10:07:11 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
10:07:11 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
10:07:11 STEP: Making sure all endpoints are in ready state
10:07:18 STEP: Launching cilium monitor on "cilium-9d5wl"
10:07:18 STEP: Creating namespace 202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito
10:07:18 STEP: Deploying demo_ds.yaml in namespace 202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito
10:07:19 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-qc22j policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc00091e480>: {
        s: "Cannot retrieve cilium pod cilium-qc22j policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-02-26T10:07:30Z====
10:07:30 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
10:07:30 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                              READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-rd9k9         0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-lbd5w                  1/1     Running             0          13s     10.0.1.210      k8s1   <none>           <none>
	 202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-lfcgm                  0/1     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-jtz6q                      0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-t5bgd                      2/2     Running             0          13s     10.0.1.235      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-q57xt          0/1     Running             0          75s     10.0.0.38       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-qbl8q       1/1     Running             0          75s     10.0.0.33       k8s2   <none>           <none>
	 kube-system                                                       cilium-9d5wl                      1/1     Running             0          74s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5766f7c94-28qvx   1/1     Running             0          74s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-5766f7c94-kz96l   1/1     Running             0          74s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-qc22j                      1/1     Running             0          74s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-c8lbz          1/1     Running             0          33s     10.0.0.67       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                         1/1     Running             0          5m16s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1               1/1     Running             0          5m16s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1      1/1     Running             0          5m16s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-5bprq                  1/1     Running             0          4m54s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-lbh2z                  1/1     Running             0          2m14s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1               1/1     Running             0          5m16s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-k9dsx                1/1     Running             0          92s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-tqpkx                1/1     Running             0          92s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-q8rfb              1/1     Running             0          2m12s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-qwfkt              1/1     Running             0          2m12s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-9d5wl cilium-qc22j]
cmd: kubectl exec -n kube-system cilium-9d5wl -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-5984e8c1)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.60, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 184/65535 (0.28%), Flows/s: 4.01   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-26T10:07:10Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-9d5wl -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 391        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1a4   10.0.1.241   ready   
	 1750       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 3092       Disabled           Disabled          22880      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::181   10.0.1.235   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3589       Disabled           Disabled          51870      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1b9   10.0.1.210   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 
Stderr:
 	 

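The listing above only shows coarse per-endpoint enforcement flags. To inspect a single endpoint's realized state, including the policy revision it has actually applied, a sketch using an endpoint ID from the listing (3589 is the testDSClient endpoint on k8s1):

  # Dump one endpoint as JSON; the realized policy section carries the
  # revision that this endpoint has applied.
  kubectl exec -n kube-system cilium-9d5wl -c cilium-agent -- cilium endpoint get 3589
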
cmd: kubectl exec -n kube-system cilium-qc22j -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-5984e8c1)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.198, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 270/65535 (0.41%), Flows/s: 5.18   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-26T10:07:11Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-qc22j -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 168        Disabled           Disabled          15783      k8s:app=grafana                                                                                                                  fd02::3c   10.0.0.38    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 827        Disabled           Disabled          51870      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::3e   10.0.0.164   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 969        Disabled           Disabled          64275      k8s:app=prometheus                                                                                                               fd02::2c   10.0.0.33    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1305       Disabled           Disabled          4487       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::e6   10.0.0.193   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 1866       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 2211       Disabled           Disabled          22880      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::33   10.0.0.240   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 2644       Disabled           Disabled          57702      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::a5   10.0.0.67    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 3026       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::68   10.0.0.116   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
10:08:11 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
10:08:11 STEP: Deleting deployment demo_ds.yaml
10:08:12 STEP: Deleting namespace 202402261007k8sdatapathconfigmonitoraggregationchecksthatmonito
10:08:27 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|9d9fd9a4_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//400/artifact/9d9fd9a4_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//400/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//400/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_400_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/400/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper (Author)

PR #31049 hit this flake with 91.34% similarity:

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-h4gdk policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-h4gdk policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000c2bea0>: {
        s: "Cannot retrieve cilium pod cilium-h4gdk policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-4sngf cilium-h4gdk]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-mpkkx              false     false
testds-wgjfs                  false     false
grafana-698dc95f6c-dmn5d      false     false
prometheus-669755c8c5-vwwlv   false     false
coredns-85fbf8f7dd-dzblp      false     false
Cilium agent 'cilium-4sngf': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-h4gdk': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

15:25:12 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
15:25:12 STEP: Ensuring the namespace kube-system exists
15:25:12 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
15:25:12 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
15:25:12 STEP: Installing Cilium
15:25:12 STEP: Waiting for Cilium to become ready
15:25:51 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-nxck2 in namespace kube-system
15:25:51 STEP: Validating if Kubernetes DNS is deployed
15:25:51 STEP: Checking if deployment is ready
15:25:51 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
15:25:51 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
15:25:52 STEP: Waiting for Kubernetes DNS to become operational
15:25:52 STEP: Checking if deployment is ready
15:25:52 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:25:53 STEP: Checking if deployment is ready
15:25:53 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:25:54 STEP: Checking if deployment is ready
15:25:54 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:25:55 STEP: Checking if deployment is ready
15:25:55 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:25:56 STEP: Checking if deployment is ready
15:25:56 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:25:57 STEP: Checking if deployment is ready
15:25:57 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:25:58 STEP: Checking if deployment is ready
15:25:58 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:25:59 STEP: Checking if deployment is ready
15:25:59 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:26:00 STEP: Checking if deployment is ready
15:26:00 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:26:01 STEP: Checking if deployment is ready
15:26:01 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:26:02 STEP: Checking if deployment is ready
15:26:02 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:26:03 STEP: Checking if deployment is ready
15:26:03 STEP: Checking if kube-dns service is plumbed correctly
15:26:03 STEP: Checking if pods have identity
15:26:03 STEP: Checking if DNS can resolve
15:26:03 STEP: Validating Cilium Installation
15:26:03 STEP: Performing Cilium controllers preflight check
15:26:03 STEP: Performing Cilium health check
15:26:03 STEP: Performing Cilium status preflight check
15:26:03 STEP: Checking whether host EP regenerated
15:26:04 STEP: Performing Cilium service preflight check
15:26:04 STEP: Performing K8s service preflight check
15:26:06 STEP: Waiting for cilium-operator to be ready
15:26:06 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
15:26:06 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
15:26:06 STEP: Making sure all endpoints are in ready state
15:26:07 STEP: Launching cilium monitor on "cilium-4sngf"
15:26:07 STEP: Creating namespace 202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito
15:26:07 STEP: Deploying demo_ds.yaml in namespace 202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito
15:26:08 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-h4gdk policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000c2bea0>: {
        s: "Cannot retrieve cilium pod cilium-h4gdk policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-02-29T15:26:19Z====
15:26:19 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
15:26:21 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-blc6r          0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-dtw8w                   0/1     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-mpkkx                   1/1     Running             0          15s     10.0.1.222      k8s1   <none>           <none>
	 202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-dn9f4                       0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-wgjfs                       2/2     Running             0          15s     10.0.1.121      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-dmn5d           0/1     Running             0          71s     10.0.0.64       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-vwwlv        0/1     ContainerCreating   0          71s     <none>          k8s2   <none>           <none>
	 kube-system                                                       cilium-4sngf                       1/1     Running             0          71s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-h4gdk                       1/1     Running             0          70s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-7bc85849f4-hvhdg   1/1     Running             0          70s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-7bc85849f4-n7tv2   1/1     Running             0          70s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-dzblp           1/1     Running             0          31s     10.0.0.85       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m6s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m6s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m6s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-szmf4                   1/1     Running             0          4m47s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-v825b                   1/1     Running             0          2m7s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m6s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-gkwgr                 1/1     Running             0          87s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-rnvfb                 1/1     Running             0          87s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-fc8sd               1/1     Running             0          2m5s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-p6z4g               1/1     Running             0          2m5s    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-4sngf cilium-h4gdk]
cmd: kubectl exec -n kube-system cilium-4sngf -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-22e12a78)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.120, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 174/65535 (0.27%), Flows/s: 3.71   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-29T15:26:05Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-4sngf -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 900        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 1281       Disabled           Disabled          26722      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::18c   10.0.1.222   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 2189       Disabled           Disabled          57227      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::10a   10.0.1.121   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3246       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1cc   10.0.1.58    ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-h4gdk -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-22e12a78)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 8/254 allocated from 10.0.0.0/24, IPv6: 8/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       40/40 healthy
	 Proxy Status:            OK, ip 10.0.0.179, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 222/65535 (0.34%), Flows/s: 3.58   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-29T15:26:06Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-h4gdk -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 429        Disabled           Disabled          12577      k8s:app=prometheus                                                                                                               fd02::4e   10.0.0.75    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 784        Disabled           Disabled          26722      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::c4   10.0.0.222   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1117       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::12   10.0.0.76    ready   
	 1545       Disabled           Disabled          57227      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::e    10.0.0.37    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1651       Disabled           Disabled          36719      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::40   10.0.0.57    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 1825       Disabled           Disabled          33576      k8s:app=grafana                                                                                                                  fd02::17   10.0.0.64    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2109       Disabled           Disabled          12162      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::16   10.0.0.85    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 2786       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
15:27:04 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
15:27:04 STEP: Deleting deployment demo_ds.yaml
15:27:05 STEP: Deleting namespace 202402291526k8sdatapathconfigmonitoraggregationchecksthatmonito
15:27:19 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|38f09da9_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//407/artifact/38f09da9_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//407/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//407/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_407_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/407/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
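A quick manual triage step for this failure mode (a sketch; the pod name cilium-tqqd5 comes from the report above and will differ per run) is to query the agent for its policy revision directly, since that is the call the test helper gives up on:

    kubectl exec -n kube-system cilium-tqqd5 -c cilium-agent -- cilium policy get

On a healthy agent the output should end with a Revision: line. If the command stalls or no revision is printed, the agent is most likely still waiting for its Kubernetes caches to sync, which matches the ipcache-inject-labels warnings captured in the occurrences below.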

@maintainer-s-little-helper

PR #31223 hit this flake with 92.21% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-qp7bx policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-qp7bx policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000903750>: {
        s: "Cannot retrieve cilium pod cilium-qp7bx policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-c6zcs cilium-qp7bx]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testds-jlvw5                  false     false
grafana-698dc95f6c-fr7dw      false     false
prometheus-669755c8c5-kxcbh   false     false
coredns-85fbf8f7dd-s9ql4      false     false
testclient-6stfs              false     false
Cilium agent 'cilium-c6zcs': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-qp7bx': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

Click to show.
14:41:19 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
14:41:19 STEP: Ensuring the namespace kube-system exists
14:41:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
14:41:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
14:41:19 STEP: Installing Cilium
14:41:20 STEP: Waiting for Cilium to become ready
14:42:00 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-fwng2 in namespace kube-system
14:42:00 STEP: Validating if Kubernetes DNS is deployed
14:42:00 STEP: Checking if deployment is ready
14:42:00 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
14:42:00 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
14:42:01 STEP: Waiting for Kubernetes DNS to become operational
14:42:01 STEP: Checking if deployment is ready
14:42:01 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:42:02 STEP: Checking if deployment is ready
14:42:02 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:42:03 STEP: Checking if deployment is ready
14:42:03 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:42:04 STEP: Checking if deployment is ready
14:42:04 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:42:05 STEP: Checking if deployment is ready
14:42:05 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:42:06 STEP: Checking if deployment is ready
14:42:06 STEP: Checking if kube-dns service is plumbed correctly
14:42:06 STEP: Checking if DNS can resolve
14:42:06 STEP: Checking if pods have identity
14:42:06 STEP: Validating Cilium Installation
14:42:06 STEP: Performing Cilium health check
14:42:06 STEP: Performing Cilium controllers preflight check
14:42:06 STEP: Checking whether host EP regenerated
14:42:06 STEP: Performing Cilium status preflight check
14:42:07 STEP: Performing Cilium service preflight check
14:42:07 STEP: Performing K8s service preflight check
14:42:08 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-c6zcs': controller ipcache-inject-labels is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Disabled   
	 Host firewall:          Disabled
	 CNI Chaining:           none
	 CNI Config file:        CNI configuration file management disabled
	 Cilium:                 Ok   1.13.12 (v1.13.12-e1b4e307)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 IPAM:                   IPv4: 2/254 allocated from 10.0.1.0/24, IPv6: 2/254 allocated from fd02::100/120
	 IPv6 BIG TCP:           Disabled
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      16/17 healthy
	   Name                                 Last success   Last error   Count   Message
	   cilium-health-ep                     6s ago         never        0       no error                     
	   dns-garbage-collector-job            10s ago        never        0       no error                     
	   endpoint-557-regeneration-recovery   never          never        0       no error                     
	   endpoint-75-regeneration-recovery    never          never        0       no error                     
	   endpoint-gc                          10s ago        never        0       no error                     
	   ipcache-inject-labels                never          9s ago       8       k8s cache not fully synced   
	   k8s-heartbeat                        10s ago        never        0       no error                     
	   link-cache                           7s ago         never        0       no error                     
	   metricsmap-bpf-prom-sync             5s ago         never        0       no error                     
	   resolve-identity-557                 7s ago         never        0       no error                     
	   resolve-identity-75                  6s ago         never        0       no error                     
	   sync-endpoints-and-host-ips          7s ago         never        0       no error                     
	   sync-lb-maps-with-k8s-services       7s ago         never        0       no error                     
	   sync-policymap-557                   6s ago         never        0       no error                     
	   sync-to-k8s-ciliumendpoint (557)     7s ago         never        0       no error                     
	   sync-to-k8s-ciliumendpoint (75)      6s ago         never        0       no error                     
	   template-dir-watcher                 never          never        0       no error                     
	 Proxy Status:            OK, ip 10.0.1.103, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 37/65535 (0.06%), Flows/s: 6.91   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:                Warning   cilium-health daemon unreachable
	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 

14:42:08 STEP: Performing Cilium controllers preflight check
14:42:08 STEP: Performing Cilium health check
14:42:08 STEP: Checking whether host EP regenerated
14:42:08 STEP: Performing Cilium status preflight check
14:42:09 STEP: Performing Cilium service preflight check
14:42:09 STEP: Performing K8s service preflight check
14:42:11 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-c6zcs': controller ipcache-inject-labels is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Disabled   
	 Host firewall:          Disabled
	 CNI Chaining:           none
	 CNI Config file:        CNI configuration file management disabled
	 Cilium:                 Ok   1.13.12 (v1.13.12-e1b4e307)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 IPAM:                   IPv4: 2/254 allocated from 10.0.1.0/24, IPv6: 2/254 allocated from fd02::100/120
	 IPv6 BIG TCP:           Disabled
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      16/17 healthy
	   Name                                 Last success   Last error   Count   Message
	   cilium-health-ep                     8s ago         never        0       no error                     
	   dns-garbage-collector-job            12s ago        never        0       no error                     
	   endpoint-557-regeneration-recovery   never          never        0       no error                     
	   endpoint-75-regeneration-recovery    never          never        0       no error                     
	   endpoint-gc                          12s ago        never        0       no error                     
	   ipcache-inject-labels                never          10s ago      8       k8s cache not fully synced   
	   k8s-heartbeat                        12s ago        never        0       no error                     
	   link-cache                           9s ago         never        0       no error                     
	   metricsmap-bpf-prom-sync             7s ago         never        0       no error                     
	   resolve-identity-557                 9s ago         never        0       no error                     
	   resolve-identity-75                  8s ago         never        0       no error                     
	   sync-endpoints-and-host-ips          9s ago         never        0       no error                     
	   sync-lb-maps-with-k8s-services       9s ago         never        0       no error                     
	   sync-policymap-557                   8s ago         never        0       no error                     
	   sync-to-k8s-ciliumendpoint (557)     9s ago         never        0       no error                     
	   sync-to-k8s-ciliumendpoint (75)      8s ago         never        0       no error                     
	   template-dir-watcher                 never          never        0       no error                     
	 Proxy Status:            OK, ip 10.0.1.103, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 37/65535 (0.06%), Flows/s: 6.91   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          0/2 reachable   (2024-03-07T14:42:01Z)
	   Name                   IP              Node        Endpoints
	   k8s1 (localhost)       192.168.56.11   reachable   unreachable
	   k8s2                   192.168.56.12   reachable   unreachable
	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 

14:42:11 STEP: Performing Cilium status preflight check
14:42:11 STEP: Performing Cilium controllers preflight check
14:42:11 STEP: Checking whether host EP regenerated
14:42:11 STEP: Performing Cilium health check
14:42:12 STEP: Performing Cilium service preflight check
14:42:12 STEP: Performing K8s service preflight check
14:42:13 STEP: Waiting for cilium-operator to be ready
14:42:13 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
14:42:13 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
14:42:13 STEP: Making sure all endpoints are in ready state
14:42:14 STEP: Launching cilium monitor on "cilium-c6zcs"
14:42:14 STEP: Creating namespace 202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito
14:42:14 STEP: Deploying demo_ds.yaml in namespace 202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito
14:42:16 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-qp7bx policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000903750>: {
        s: "Cannot retrieve cilium pod cilium-qp7bx policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-03-07T14:42:26Z====
14:42:26 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
14:42:27 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-nbwnz          0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-6stfs                   1/1     Running             0          13s     10.0.1.229      k8s1   <none>           <none>
	 202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-h24xq                   0/1     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-hwrrg                       0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-jlvw5                       2/2     Running             0          13s     10.0.1.155      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-fr7dw           0/1     ContainerCreating   0          69s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-kxcbh        0/1     ContainerCreating   0          69s     <none>          k8s2   <none>           <none>
	 kube-system                                                       cilium-c6zcs                       1/1     Running             0          68s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-554dbcbc9d-6hx5m   1/1     Running             0          68s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-554dbcbc9d-c9fbn   1/1     Running             0          68s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-qp7bx                       1/1     Running             0          68s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-s9ql4           1/1     Running             0          27s     10.0.0.232      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          4m42s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          4m42s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          4m42s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-kx67r                   1/1     Running             0          4m36s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-r57xq                   1/1     Running             0          2m6s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          4m42s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-4mfdh                 1/1     Running             0          86s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-gfscr                 1/1     Running             0          86s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-jsh5r               1/1     Running             0          2m3s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-wlz5f               1/1     Running             0          2m3s    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-c6zcs cilium-qp7bx]
cmd: kubectl exec -n kube-system cilium-c6zcs -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-e1b4e307)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.103, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 181/65535 (0.28%), Flows/s: 4.73   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-07T14:42:12Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-c6zcs -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 75         Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1b1   10.0.1.59    ready   
	 557        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 632        Disabled           Disabled          9563       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::197   10.0.1.155   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3801       Disabled           Disabled          34069      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::146   10.0.1.229   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-qp7bx -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-e1b4e307)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.2, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 221/65535 (0.34%), Flows/s: 4.55   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-07T14:42:13Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-qp7bx -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 2          Disabled           Disabled          36070      k8s:app=grafana                                                                                                                  fd02::b2   10.0.0.9     ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 232        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 537        Disabled           Disabled          34069      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::45   10.0.0.54    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 734        Disabled           Disabled          9563       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::40   10.0.0.117   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1958       Disabled           Disabled          39472      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::f5   10.0.0.53    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 2230       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::9a   10.0.0.29    ready   
	 2249       Disabled           Disabled          25900      k8s:app=prometheus                                                                                                               fd02::8c   10.0.0.248   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2730       Disabled           Disabled          39981      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::1a   10.0.0.232   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
14:43:08 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
14:43:08 STEP: Deleting deployment demo_ds.yaml
14:43:09 STEP: Deleting namespace 202403071442k8sdatapathconfigmonitoraggregationchecksthatmonito
14:43:23 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|7138d664_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//417/artifact/7138d664_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//417/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//417/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_417_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/417/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
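The standard error above suggests the validation step can race the agent: ipcache-inject-labels was still failing with "k8s cache not fully synced" only seconds before the policy was applied. A minimal pre-flight guard (a shell sketch under assumptions, not the framework's actual Go helper; POD stands in for the failing agent pod) would poll the controller table until that error clears before applying policy:

    # Block until the ipcache-inject-labels controller stops reporting
    # "k8s cache not fully synced" on the given agent pod (sketch).
    while kubectl exec -n kube-system "$POD" -c cilium-agent -- \
          cilium status --all-controllers | grep -q 'k8s cache not fully synced'; do
      sleep 2
    done

In practice the equivalent fix would belong in the test framework's Cilium preflight checks rather than in each test body.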

@maintainer-s-little-helper

PR #31223 hit this flake with 90.94% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-zztdb policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-zztdb policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000fe4490>: {
        s: "Cannot retrieve cilium pod cilium-zztdb policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to get node resource
Waiting for k8s node information
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-wqwf6 cilium-zztdb]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testds-49jlq                  false     false
grafana-698dc95f6c-wtt8x      false     false
prometheus-669755c8c5-th879   false     false
coredns-85fbf8f7dd-vxb8q      false     false
testclient-5vh5p              false     false
Cilium agent 'cilium-wqwf6': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-zztdb': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

Click to show.
18:56:32 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
18:56:32 STEP: Ensuring the namespace kube-system exists
18:56:32 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
18:56:32 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
18:56:32 STEP: Installing Cilium
18:56:33 STEP: Waiting for Cilium to become ready
18:57:19 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-t59t6 in namespace kube-system
18:57:19 STEP: Validating if Kubernetes DNS is deployed
18:57:19 STEP: Checking if deployment is ready
18:57:19 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
18:57:19 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
18:57:20 STEP: Waiting for Kubernetes DNS to become operational
18:57:20 STEP: Checking if deployment is ready
18:57:20 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:57:21 STEP: Checking if deployment is ready
18:57:21 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:57:22 STEP: Checking if deployment is ready
18:57:22 STEP: Checking if kube-dns service is plumbed correctly
18:57:22 STEP: Checking if pods have identity
18:57:22 STEP: Checking if DNS can resolve
18:57:22 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
18:57:23 STEP: Checking if deployment is ready
18:57:23 STEP: Checking if kube-dns service is plumbed correctly
18:57:23 STEP: Checking if pods have identity
18:57:23 STEP: Checking if DNS can resolve
18:57:23 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
18:57:24 STEP: Checking if deployment is ready
18:57:24 STEP: Checking if kube-dns service is plumbed correctly
18:57:24 STEP: Checking if pods have identity
18:57:24 STEP: Checking if DNS can resolve
18:57:24 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
18:57:25 STEP: Checking if deployment is ready
18:57:25 STEP: Checking if kube-dns service is plumbed correctly
18:57:25 STEP: Checking if pods have identity
18:57:25 STEP: Checking if DNS can resolve
18:57:25 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
18:57:26 STEP: Checking if deployment is ready
18:57:26 STEP: Checking if kube-dns service is plumbed correctly
18:57:26 STEP: Checking if pods have identity
18:57:26 STEP: Checking if DNS can resolve
18:57:26 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
18:57:27 STEP: Checking if deployment is ready
18:57:27 STEP: Checking if kube-dns service is plumbed correctly
18:57:27 STEP: Checking if DNS can resolve
18:57:27 STEP: Checking if pods have identity
18:57:27 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
18:57:28 STEP: Checking if deployment is ready
18:57:28 STEP: Checking if kube-dns service is plumbed correctly
18:57:28 STEP: Checking if pods have identity
18:57:28 STEP: Checking if DNS can resolve
18:57:29 STEP: Validating Cilium Installation
18:57:29 STEP: Performing Cilium controllers preflight check
18:57:29 STEP: Performing Cilium health check
18:57:29 STEP: Checking whether host EP regenerated
18:57:29 STEP: Performing Cilium status preflight check
18:57:30 STEP: Performing Cilium service preflight check
18:57:30 STEP: Performing K8s service preflight check
18:57:31 STEP: Waiting for cilium-operator to be ready
18:57:31 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
18:57:31 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
18:57:31 STEP: Making sure all endpoints are in ready state
18:57:32 STEP: Launching cilium monitor on "cilium-wqwf6"
18:57:32 STEP: Creating namespace 202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito
18:57:33 STEP: Deploying demo_ds.yaml in namespace 202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito
18:57:34 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-zztdb policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000fe4490>: {
        s: "Cannot retrieve cilium pod cilium-zztdb policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-03-07T18:57:44Z====
18:57:44 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
18:57:52 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-gslhd          0/2     ContainerCreating   0          21s     <none>          k8s2   <none>           <none>
	 202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-5vh5p                   1/1     Running             0          21s     10.0.1.238      k8s1   <none>           <none>
	 202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-kdf5z                   0/1     ContainerCreating   0          21s     <none>          k8s2   <none>           <none>
	 202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-49jlq                       2/2     Running             0          21s     10.0.1.14       k8s1   <none>           <none>
	 202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-94ml5                       0/2     ContainerCreating   0          21s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-wtt8x           0/1     Running             0          82s     10.0.0.119      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-th879        0/1     ContainerCreating   0          82s     <none>          k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-554dbcbc9d-n72vz   1/1     Running             0          81s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-554dbcbc9d-vq9sz   1/1     Running             0          81s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-wqwf6                       1/1     Running             0          81s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-zztdb                       1/1     Running             0          81s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-vxb8q           1/1     Running             0          34s     10.0.0.4        k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m37s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m37s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m37s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-lsfcj                   1/1     Running             0          5m28s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-q7fng                   1/1     Running             0          2m17s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m37s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-j6j27                 1/1     Running             0          99s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-jkkv9                 1/1     Running             0          99s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-nt2r4               1/1     Running             0          2m15s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-x6j74               1/1     Running             0          2m15s   192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-wqwf6 cilium-zztdb]
cmd: kubectl exec -n kube-system cilium-wqwf6 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-e1b4e307)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.3, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 180/65535 (0.27%), Flows/s: 3.23   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-07T18:57:30Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-wqwf6 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 656        Disabled           Disabled          4214       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::14a   10.0.1.14    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 1992       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 2584       Disabled           Disabled          4436       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::166   10.0.1.238   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 3237       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1c2   10.0.1.91    ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zztdb -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-e1b4e307)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.62, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 236/65535 (0.36%), Flows/s: 3.53   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-07T18:57:31Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zztdb -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 113        Disabled           Disabled          4214       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::2a   10.0.0.31    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 143        Disabled           Disabled          12389      k8s:app=prometheus                                                                                                               fd02::a    10.0.0.148   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 450        Disabled           Disabled          6796       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::cf   10.0.0.69    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 729        Disabled           Disabled          4436       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::8c   10.0.0.92    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 961        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 982        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::2d   10.0.0.169   ready   
	 2703       Disabled           Disabled          4663       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::ea   10.0.0.4     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 4079       Disabled           Disabled          37750      k8s:app=grafana                                                                                                                  fd02::50   10.0.0.119   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
18:58:34 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
18:58:34 STEP: Deleting deployment demo_ds.yaml
18:58:34 STEP: Deleting namespace 202403071857k8sdatapathconfigmonitoraggregationchecksthatmonito
18:58:50 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|83d1eef9_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//419/artifact/7389b316_K8sDatapathConfig_Host_firewall_Check_connectivity_with_IPv6_disabled.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//419/artifact/83d1eef9_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//419/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//419/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_419_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/419/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
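Triage note: the failing assertion at test/k8s/datapath_configuration.go:718 comes from the helper that polls the agent for its policy revision before applying l3-policy-demo.yaml. A sketch of the same check done by hand against a live agent (the pod name is a placeholder, and the exact revision query the test helper issues may differ across branches):

    kubectl exec -n kube-system <cilium-pod> -c cilium-agent -- cilium policy get

`cilium policy get` prints the loaded policy along with its current revision. If the agent is still initializing right after "Installing Cilium" (endpoints not yet regenerated), the revision query can fail transiently, which would match the "cannot get the revision" error above.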

@maintainer-s-little-helper

PR #31295 hit this flake with 93.79% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-8j7sb policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-8j7sb policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0009ded80>: {
        s: "Cannot retrieve cilium pod cilium-8j7sb policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-8j7sb cilium-vscv7]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-b9qrb              false     false
testds-h7r78                  false     false
grafana-698dc95f6c-kkcqg      false     false
prometheus-669755c8c5-p75jw   false     false
coredns-85fbf8f7dd-mc6cw      false     false
Cilium agent 'cilium-8j7sb': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-vscv7': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

Click to show.
18:38:51 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
18:38:51 STEP: Ensuring the namespace kube-system exists
18:38:51 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
18:38:51 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
18:38:52 STEP: Installing Cilium
18:38:53 STEP: Waiting for Cilium to become ready
18:39:37 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-nrlc2 in namespace kube-system
18:39:37 STEP: Validating if Kubernetes DNS is deployed
18:39:37 STEP: Checking if deployment is ready
18:39:37 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
18:39:37 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
18:39:38 STEP: Waiting for Kubernetes DNS to become operational
18:39:38 STEP: Checking if deployment is ready
18:39:38 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:39:39 STEP: Checking if deployment is ready
18:39:39 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:39:40 STEP: Checking if deployment is ready
18:39:40 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:39:41 STEP: Checking if deployment is ready
18:39:41 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:39:42 STEP: Checking if deployment is ready
18:39:42 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:39:43 STEP: Checking if deployment is ready
18:39:43 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:39:44 STEP: Checking if deployment is ready
18:39:44 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:39:45 STEP: Checking if deployment is ready
18:39:45 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:39:46 STEP: Checking if deployment is ready
18:39:46 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
18:39:47 STEP: Checking if deployment is ready
18:39:47 STEP: Checking if kube-dns service is plumbed correctly
18:39:47 STEP: Checking if pods have identity
18:39:47 STEP: Checking if DNS can resolve
18:39:47 STEP: Validating Cilium Installation
18:39:47 STEP: Performing Cilium controllers preflight check
18:39:47 STEP: Performing Cilium health check
18:39:47 STEP: Checking whether host EP regenerated
18:39:47 STEP: Performing Cilium status preflight check
18:39:48 STEP: Performing Cilium service preflight check
18:39:48 STEP: Performing K8s service preflight check
18:39:50 STEP: Waiting for cilium-operator to be ready
18:39:50 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
18:39:50 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
18:39:50 STEP: Making sure all endpoints are in ready state
18:39:57 STEP: Launching cilium monitor on "cilium-vscv7"
18:39:57 STEP: Creating namespace 202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito
18:39:57 STEP: Deploying demo_ds.yaml in namespace 202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito
18:39:58 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-8j7sb policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0009ded80>: {
        s: "Cannot retrieve cilium pod cilium-8j7sb policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-03-11T18:40:08Z====
18:40:08 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
18:40:10 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                              READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-j5jlw         0/2     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-6b4t8                  0/1     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-b9qrb                  1/1     Running             0          14s     10.0.0.80       k8s1   <none>           <none>
	 202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-h7r78                      2/2     Running             0          14s     10.0.0.170      k8s1   <none>           <none>
	 202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-ppsn6                      0/2     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-kkcqg          0/1     ContainerCreating   0          81s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-p75jw       1/1     Running             0          81s     10.0.1.217      k8s2   <none>           <none>
	 kube-system                                                       cilium-8j7sb                      1/1     Running             0          79s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-cdd6bfbc4-d4wcj   1/1     Running             0          79s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-cdd6bfbc4-j8thb   1/1     Running             0          79s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-vscv7                      1/1     Running             0          79s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-mc6cw          1/1     Running             0          34s     10.0.1.234      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                         1/1     Running             0          5m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1               1/1     Running             0          5m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1      1/1     Running             0          5m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-5fmd6                  1/1     Running             0          5m7s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-67qqj                  1/1     Running             0          2m22s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1               1/1     Running             0          5m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-k65jt                1/1     Running             0          99s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-xb5bn                1/1     Running             0          99s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-2s4fp              1/1     Running             0          2m20s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-7bgxt              1/1     Running             0          2m20s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-8j7sb cilium-vscv7]
cmd: kubectl exec -n kube-system cilium-8j7sb -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-9968144b)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.120, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 213/65535 (0.33%), Flows/s: 3.81   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-11T18:39:48Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-8j7sb -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 39         Disabled           Disabled          63076      k8s:app=grafana                                                                                                                  fd02::101   10.0.1.109   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                  
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                 
	 951        Disabled           Disabled          42649      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1b1   10.0.1.49    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                                                              
	 966        Disabled           Disabled          24128      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::12b   10.0.1.147   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 1020       Disabled           Disabled          19078      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::19a   10.0.1.64    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 1378       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            reserved:host                                                                                                                                                     
	 1567       Disabled           Disabled          28617      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::1fc   10.0.1.234   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                       
	                                                            k8s:k8s-app=kube-dns                                                                                                                                              
	 2947       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::191   10.0.1.225   ready   
	 3427       Disabled           Disabled          20262      k8s:app=prometheus                                                                                                               fd02::1ea   10.0.1.217   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                  
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                            
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                 
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-vscv7 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-9968144b)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.0.4, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 133/65535 (0.20%), Flows/s: 1.95   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-11T18:39:50Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-vscv7 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 778        Disabled           Disabled          24128      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::56   10.0.0.80    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1197       Disabled           Disabled          19078      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::ba   10.0.0.170   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1852       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                               
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 3513       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::12   10.0.0.130   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
18:40:52 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
18:40:52 STEP: Deleting deployment demo_ds.yaml
18:40:53 STEP: Deleting namespace 202403111839k8sdatapathconfigmonitoraggregationchecksthatmonito
18:41:08 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|5134b127_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//428/artifact/5134b127_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//428/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//428/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_428_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/428/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
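For the duplicate check, one way to scan for an existing flake issue with the same test name (a sketch using the GitHub CLI; the search terms are assumptions based on this issue's title and its ci/flake label):

    gh issue list --repo cilium/cilium --label ci/flake --search "MonitorAggregation restricts notifications"

Any hit whose failure output matches can then be referenced with a "Duplicate of #<issue-number>" comment as described above.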

@maintainer-s-little-helper

PR #31340 hit this flake with 94.10% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-jwmbz policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-jwmbz policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0005faf00>: {
        s: "Cannot retrieve cilium pod cilium-jwmbz policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Cilium pods: [cilium-jp9vw cilium-jwmbz]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-bhqmf              false     false
testds-9rbdq                  false     false
grafana-698dc95f6c-6xdz7      false     false
prometheus-669755c8c5-n47kg   false     false
coredns-85fbf8f7dd-c8lrf      false     false
Cilium agent 'cilium-jp9vw': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-jwmbz': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

Click to show.
11:02:56 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
11:02:56 STEP: Ensuring the namespace kube-system exists
11:02:56 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
11:02:56 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
11:02:56 STEP: Installing Cilium
11:02:57 STEP: Waiting for Cilium to become ready
11:03:40 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-vfkgc in namespace kube-system
11:03:40 STEP: Validating if Kubernetes DNS is deployed
11:03:40 STEP: Checking if deployment is ready
11:03:40 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
11:03:40 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
11:03:40 STEP: Waiting for Kubernetes DNS to become operational
11:03:40 STEP: Checking if deployment is ready
11:03:40 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:03:41 STEP: Checking if deployment is ready
11:03:41 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:03:42 STEP: Checking if deployment is ready
11:03:42 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:03:43 STEP: Checking if deployment is ready
11:03:43 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:03:44 STEP: Checking if deployment is ready
11:03:44 STEP: Checking if pods have identity
11:03:44 STEP: Checking if kube-dns service is plumbed correctly
11:03:44 STEP: Checking if DNS can resolve
11:03:45 STEP: Validating Cilium Installation
11:03:45 STEP: Performing Cilium controllers preflight check
11:03:45 STEP: Performing Cilium health check
11:03:45 STEP: Performing Cilium status preflight check
11:03:45 STEP: Checking whether host EP regenerated
11:03:46 STEP: Performing Cilium service preflight check
11:03:46 STEP: Performing K8s service preflight check
11:03:47 STEP: Waiting for cilium-operator to be ready
11:03:47 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
11:03:47 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
11:03:47 STEP: Making sure all endpoints are in ready state
11:03:53 STEP: Launching cilium monitor on "cilium-jp9vw"
11:03:53 STEP: Creating namespace 202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito
11:03:53 STEP: Deploying demo_ds.yaml in namespace 202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito
11:03:54 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-jwmbz policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0005faf00>: {
        s: "Cannot retrieve cilium pod cilium-jwmbz policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-03-12T11:04:05Z====
11:04:05 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
11:04:07 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-6zq6h          0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-856s4                   0/1     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-bhqmf                   1/1     Running             0          15s     10.0.1.32       k8s1   <none>           <none>
	 202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-9rbdq                       2/2     Running             0          15s     10.0.1.216      k8s1   <none>           <none>
	 202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-ltb6z                       0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-6xdz7           0/1     Running             0          73s     10.0.0.111      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-n47kg        1/1     Running             0          73s     10.0.0.159      k8s2   <none>           <none>
	 kube-system                                                       cilium-jp9vw                       1/1     Running             0          72s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-jwmbz                       1/1     Running             0          72s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-5b68bd6d48-658rw   1/1     Running             0          72s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5b68bd6d48-sj7cw   1/1     Running             0          72s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-c8lrf           1/1     Running             0          29s     10.0.0.186      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-7ldn5                   1/1     Running             0          2m14s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-n72dx                   1/1     Running             0          4m43s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-df6k8                 1/1     Running             0          91s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-xb7xd                 1/1     Running             0          91s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-6cb59               1/1     Running             0          2m11s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-m5m6f               1/1     Running             0          2m11s   192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-jp9vw cilium-jwmbz]
cmd: kubectl exec -n kube-system cilium-jp9vw -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-9d80f05b)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.196, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 168/65535 (0.26%), Flows/s: 3.37   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-12T11:03:46Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-jp9vw -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 181        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 228        Disabled           Disabled          39313      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1db   10.0.1.216   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 633        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::17d   10.0.1.203   ready   
	 1369       Disabled           Disabled          7566       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::188   10.0.1.32    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-jwmbz -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-9d80f05b)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.125, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 247/65535 (0.38%), Flows/s: 5.28   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-12T11:03:47Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-jwmbz -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 140        Disabled           Disabled          7566       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::85   10.0.0.187   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 505        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 678        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::47   10.0.0.212   ready   
	 1367       Disabled           Disabled          46350      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::15   10.0.0.186   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 2351       Disabled           Disabled          39313      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::a3   10.0.0.42    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 3433       Disabled           Disabled          13172      k8s:app=prometheus                                                                                                               fd02::ed   10.0.0.159   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 3560       Disabled           Disabled          10939      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::e2   10.0.0.93    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 4030       Disabled           Disabled          19689      k8s:app=grafana                                                                                                                  fd02::a2   10.0.0.111   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 
Stderr:
 	 

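Note that every endpoint above has ingress and egress enforcement disabled, i.e. no policy from this test is in effect on the agent. To narrow the listing to just the test workloads, a sketch along these lines should work (assumes jq is available where you run it, and the JSON field layout of recent Cilium releases):

# Print the IDs of endpoints carrying a k8s:zgroup label (the test pods).
kubectl exec -n kube-system cilium-jwmbz -c cilium-agent -- cilium endpoint list -o json \
  | jq '.[] | select(.status.identity.labels[]? | startswith("k8s:zgroup")) | .id'
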
===================== Exiting AfterFailed =====================
11:04:50 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
11:04:50 STEP: Deleting deployment demo_ds.yaml
11:04:51 STEP: Deleting namespace 202403121103k8sdatapathconfigmonitoraggregationchecksthatmonito
11:05:06 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|793f1d97_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//431/artifact/09ff54f4_K8sDatapathConfig_Host_firewall_Check_connectivity_with_IPv6_disabled.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//431/artifact/793f1d97_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//431/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//431/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_431_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/431/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
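
For local triage, this check boils down to counting trace notifications for a single TCP flow while monitor aggregation is enabled: aggregation should summarize the flow into a small, fixed number of events (the occurrence below times out waiting for 2 ingress and 3 egress TCP notifications). A rough sketch, assuming the agent pod name from the logs above and that timeout(1) is available in the agent image:

# Stream trace events for ~60s while driving one curl between the test pods,
# then count ingress-side notifications ("to-endpoint"); egress-side events
# appear as "to-stack"/"to-network" depending on the datapath configuration.
kubectl exec -n kube-system cilium-jwmbz -c cilium-agent -- \
  timeout 60 cilium monitor --type trace | grep -c 'to-endpoint'
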

@maintainer-s-little-helper

PR #31315 hit this flake with 97.55% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output


Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Timed out after 240.001s.
Monitor log did not contain 2 ingress and 3 egress TCP notifications
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40540 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34064 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:925984/183975 0x000e21200002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Service upserted: {"id":8,"frontend-address":{"ip":"10.97.143.154","port":80},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"testds-service","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":9,"frontend-address":{"ip":"10.97.143.154","port":69},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"testds-service","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":10,"frontend-address":{"ip":"10.108.53.136","port":10069},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-nodeport","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":11,"frontend-address":{"ip":"10.108.53.136","port":10080},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-nodeport","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":12,"frontend-address":{"ip":"10.103.103.69","port":10069},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-affinity","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":13,"frontend-address":{"ip":"10.103.103.69","port":10080},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-affinity","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40541 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34063 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926035/183975 0x000e21530002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Service upserted: {"id":14,"frontend-address":{"ip":"10.104.81.40","port":10080},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Local","name":"test-nodeport-local","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":15,"frontend-address":{"ip":"10.104.81.40","port":10069},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Local","name":"test-nodeport-local","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":16,"frontend-address":{"ip":"10.108.97.8","port":10080},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Local","name":"test-nodeport-local-k8s2","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":17,"frontend-address":{"ip":"10.108.97.8","port":10069},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Local","name":"test-nodeport-local-k8s2","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":18,"frontend-address":{"ip":"10.105.98.17","port":10080},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-nodeport-k8s2","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":19,"frontend-address":{"ip":"10.105.98.17","port":10069},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-nodeport-k8s2","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":20,"frontend-address":{"ip":"10.101.210.206","port":80},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-lb","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":21,"frontend-address":{"ip":"10.104.15.141","port":80},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Local","name":"test-lb-local-k8s2","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":22,"frontend-address":{"ip":"10.100.107.187","port":20080},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-external-ips","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":23,"frontend-address":{"ip":"10.100.107.187","port":20069},"backend-addresses":[],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-external-ips","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40542 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34062 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926086/183975 0x000e21860002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40543 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34061 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926137/183975 0x000e21b90002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Policy updated: {"labels":["k8s:io.cilium.k8s.policy.derived-from=CiliumNetworkPolicy","k8s:io.cilium.k8s.policy.name=l3-policy-demo","k8s:io.cilium.k8s.policy.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","k8s:io.cilium.k8s.policy.uid=999f3898-8109-45a7-98ea-2cb3912f47ae"],"revision":2,"rule_count":1}
>> Endpoint created: {"id":1401,"pod-name":"testds-pk847","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Endpoint regenerated: {"id":1085,"labels":["reserved:health"]}
>> Endpoint regenerated: {"id":2905,"labels":["k8s:cilium.io/ci-node=k8s1","reserved:host","k8s:node-role.kubernetes.io/master"]}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40544 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34060 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926188/183975 0x000e21ec0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> IPCache entry upserted: {"cidr":"10.0.0.216/32","id":16842,"host-ip":"192.168.56.12","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testds-8b87z"}
>> IPCache entry upserted: {"cidr":"fd02::21/128","id":16842,"host-ip":"192.168.56.12","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testds-8b87z"}
>> IPCache entry upserted: {"cidr":"10.0.1.142/32","id":16842,"host-ip":"192.168.56.11","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testds-pk847"}
>> IPCache entry upserted: {"cidr":"fd02::1ad/128","id":16842,"host-ip":"192.168.56.11","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testds-pk847"}
>> Endpoint created: {"id":713,"pod-name":"testclient-2-zp4hq","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> IPCache entry upserted: {"cidr":"10.0.1.52/32","id":17387,"host-ip":"192.168.56.11","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testclient-2-zp4hq"}
>> IPCache entry upserted: {"cidr":"fd02::13c/128","id":17387,"host-ip":"192.168.56.11","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testclient-2-zp4hq"}
>> IPCache entry upserted: {"cidr":"10.0.0.155/32","id":56400,"host-ip":"192.168.56.12","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"test-k8s2-5ffdc78d54-khxf4"}
>> IPCache entry upserted: {"cidr":"fd02::f8/128","id":56400,"host-ip":"192.168.56.12","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"test-k8s2-5ffdc78d54-khxf4"}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40545 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34059 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926240/183975 0x000e22200002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40546 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34058 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926291/183975 0x000e22530002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> IPCache entry upserted: {"cidr":"10.0.0.11/32","id":17387,"host-ip":"192.168.56.12","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testclient-2-69nwh"}
>> IPCache entry upserted: {"cidr":"fd02::fe/128","id":17387,"host-ip":"192.168.56.12","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testclient-2-69nwh"}
>> IPCache entry upserted: {"cidr":"10.0.0.34/32","id":10064,"host-ip":"192.168.56.12","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testclient-m5j5l"}
>> IPCache entry upserted: {"cidr":"fd02::8e/128","id":10064,"host-ip":"192.168.56.12","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testclient-m5j5l"}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40547 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34057 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926345/183975 0x000e22890002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Endpoint created: {"id":22,"pod-name":"testclient-kwhmx","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> IPCache entry upserted: {"cidr":"10.0.1.36/32","id":10064,"host-ip":"192.168.56.11","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testclient-kwhmx"}
>> IPCache entry upserted: {"cidr":"fd02::1bb/128","id":10064,"host-ip":"192.168.56.11","encrypt-key":0,"namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","pod-name":"testclient-kwhmx"}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40548 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34056 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926396/183975 0x000e22bc0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Endpoint regenerated: {"id":1401,"labels":["k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.cilium.k8s.policy.cluster=default","k8s:zgroup=testDS","k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"]}
>> Endpoint regenerated: {"id":1085,"labels":["reserved:health"]}
>> Endpoint regenerated: {"id":2905,"labels":["k8s:cilium.io/ci-node=k8s1","reserved:host","k8s:node-role.kubernetes.io/master"]}
>> Endpoint regenerated: {"id":1401,"labels":["k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.cilium.k8s.policy.cluster=default","k8s:zgroup=testDS","k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"]}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40549 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34055 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926447/183975 0x000e22ef0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..102..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=33:33:00:00:00:16 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..54..] Version=6 TrafficClass=0 FlowLabel=0 Length=56 NextHeader=IPv6HopByHop HopLimit=1 SrcIP=fe80::a827:ebff:fe9f:392d DstIP=ff02::16 HopByHop={ Contents=[..8..] Payload=[..54..] NextHeader=ICMPv6 HeaderLength=0 ActualLength=8 Options=[{OptionType=5 OptionLength=2 ActualLength=4 OptionData=[0, 0] OptionAlignment=[0 0]}, {OptionType=1 OptionLength=0 ActualLength=2 OptionData=[] OptionAlignment=[0 0]}]}}
ICMPv6	{Contents=[143, 0, 103, 129] Payload=[..50..] TypeCode=143(0) Checksum=26497 TypeBytes=[]}
  Packet has been truncated
  Failed to decode layer: No decoder for layer type MLDv2MulticastListenerReport
CPU 02: MARK 0x0 FROM 1401 DROP: 110 bytes, reason Invalid source ip, identity 16842->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::a827:ebff:fe9f:392d DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 225, 68] Payload=[..12..] TypeCode=RouterSolicitation Checksum=57668 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 02: MARK 0x0 FROM 1401 DROP: 70 bytes, reason Invalid source ip, identity 16842->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40550 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34054 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926498/183975 0x000e23220002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Endpoint regenerated: {"id":713,"labels":["k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.cilium.k8s.policy.cluster=default","k8s:zgroup=testDSClient2","k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"]}
>> Endpoint regenerated: {"id":713,"labels":["k8s:zgroup=testDSClient2","k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.cilium.k8s.policy.cluster=default"]}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..102..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=33:33:00:00:00:16 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..54..] Version=6 TrafficClass=0 FlowLabel=0 Length=56 NextHeader=IPv6HopByHop HopLimit=1 SrcIP=fe80::a827:ebff:fe9f:392d DstIP=ff02::16 HopByHop={ Contents=[..8..] Payload=[..54..] NextHeader=ICMPv6 HeaderLength=0 ActualLength=8 Options=[{OptionType=5 OptionLength=2 ActualLength=4 OptionData=[0, 0] OptionAlignment=[0 0]}, {OptionType=1 OptionLength=0 ActualLength=2 OptionData=[] OptionAlignment=[0 0]}]}}
ICMPv6	{Contents=[143, 0, 103, 129] Payload=[..50..] TypeCode=143(0) Checksum=26497 TypeBytes=[]}
  Packet has been truncated
  Failed to decode layer: No decoder for layer type MLDv2MulticastListenerReport
CPU 02: MARK 0x0 FROM 1401 DROP: 110 bytes, reason Invalid source ip, identity 16842->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40551 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34053 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926549/183975 0x000e23550002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..102..] SrcMAC=6a:8b:24:20:b7:e9 DstMAC=33:33:00:00:00:16 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..54..] Version=6 TrafficClass=0 FlowLabel=0 Length=56 NextHeader=IPv6HopByHop HopLimit=1 SrcIP=fe80::688b:24ff:fe20:b7e9 DstIP=ff02::16 HopByHop={ Contents=[..8..] Payload=[..54..] NextHeader=ICMPv6 HeaderLength=0 ActualLength=8 Options=[{OptionType=5 OptionLength=2 ActualLength=4 OptionData=[0, 0] OptionAlignment=[0 0]}, {OptionType=1 OptionLength=0 ActualLength=2 OptionData=[] OptionAlignment=[0 0]}]}}
ICMPv6	{Contents=[143, 0, 114, 20] Payload=[..50..] TypeCode=143(0) Checksum=29204 TypeBytes=[]}
  Packet has been truncated
  Failed to decode layer: No decoder for layer type MLDv2MulticastListenerReport
CPU 02: MARK 0x0 FROM 713 DROP: 110 bytes, reason Invalid source ip, identity 17387->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=6a:8b:24:20:b7:e9 DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::688b:24ff:fe20:b7e9 DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 242, 3] Payload=[..12..] TypeCode=RouterSolicitation Checksum=61955 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 02: MARK 0x0 FROM 713 DROP: 70 bytes, reason Invalid source ip, identity 17387->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=1899 Flags=DF FragOffset=0 TTL=62 Protocol=TCP Checksum=7900 SrcIP=10.0.0.122 DstIP=10.0.1.252 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=36508 DstPort=3000(hbci) Seq=2832039105 Ack=0 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=false URG=false ECE=false CWR=false NS=false Window=29200 Checksum=50137 Urgent=0 Options=[..5..] Padding=[]}
CPU 01: MARK 0x0 FROM 2905 DROP: 74 bytes, reason Stale or unroutable IP, identity remote-node->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40552 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34052 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926600/183975 0x000e23880002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Endpoint regenerated: {"id":22,"labels":["k8s:zgroup=testDSClient","k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.cilium.k8s.policy.cluster=default"]}
>> Endpoint regenerated: {"id":22,"labels":["k8s:io.cilium.k8s.policy.cluster=default","k8s:zgroup=testDSClient","k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","k8s:io.cilium.k8s.policy.serviceaccount=default"]}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40553 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34051 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926651/183975 0x000e23bb0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..102..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=33:33:00:00:00:16 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..54..] Version=6 TrafficClass=0 FlowLabel=0 Length=56 NextHeader=IPv6HopByHop HopLimit=1 SrcIP=fe80::d81f:6ff:fef2:75a9 DstIP=ff02::16 HopByHop={ Contents=[..8..] Payload=[..54..] NextHeader=ICMPv6 HeaderLength=0 ActualLength=8 Options=[{OptionType=5 OptionLength=2 ActualLength=4 OptionData=[0, 0] OptionAlignment=[0 0]}, {OptionType=1 OptionLength=0 ActualLength=2 OptionData=[] OptionAlignment=[0 0]}]}}
ICMPv6	{Contents=[143, 0, 162, 221] Payload=[..50..] TypeCode=143(0) Checksum=41693 TypeBytes=[]}
  Packet has been truncated
  Failed to decode layer: No decoder for layer type MLDv2MulticastListenerReport
CPU 01: MARK 0x0 FROM 22 DROP: 110 bytes, reason Invalid source ip, identity 10064->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::d81f:6ff:fef2:75a9 DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 209, 183] Payload=[..12..] TypeCode=RouterSolicitation Checksum=53687 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 01: MARK 0x0 FROM 22 DROP: 70 bytes, reason Invalid source ip, identity 10064->unknown
>> Endpoint regenerated: {"id":713,"labels":["k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.cilium.k8s.policy.cluster=default","k8s:zgroup=testDSClient2","k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"]}
>> Endpoint regenerated: {"id":22,"labels":["k8s:io.cilium.k8s.policy.cluster=default","k8s:zgroup=testDSClient","k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","k8s:io.cilium.k8s.policy.serviceaccount=default"]}
>> Endpoint regenerated: {"id":2905,"labels":["reserved:host","k8s:node-role.kubernetes.io/master","k8s:cilium.io/ci-node=k8s1"]}
>> Endpoint regenerated: {"id":1085,"labels":["reserved:health"]}
>> Endpoint regenerated: {"id":1401,"labels":["k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.cilium.k8s.policy.cluster=default","k8s:zgroup=testDS"]}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..102..] SrcMAC=6a:8b:24:20:b7:e9 DstMAC=33:33:00:00:00:16 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..54..] Version=6 TrafficClass=0 FlowLabel=0 Length=56 NextHeader=IPv6HopByHop HopLimit=1 SrcIP=fe80::688b:24ff:fe20:b7e9 DstIP=ff02::16 HopByHop={ Contents=[..8..] Payload=[..54..] NextHeader=ICMPv6 HeaderLength=0 ActualLength=8 Options=[{OptionType=5 OptionLength=2 ActualLength=4 OptionData=[0, 0] OptionAlignment=[0 0]}, {OptionType=1 OptionLength=0 ActualLength=2 OptionData=[] OptionAlignment=[0 0]}]}}
ICMPv6	{Contents=[143, 0, 114, 20] Payload=[..50..] TypeCode=143(0) Checksum=29204 TypeBytes=[]}
  Packet has been truncated
  Failed to decode layer: No decoder for layer type MLDv2MulticastListenerReport
CPU 02: MARK 0x0 FROM 713 DROP: 110 bytes, reason Invalid source ip, identity 17387->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40554 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34050 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926702/183975 0x000e23ee0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40555 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34049 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926753/183975 0x000e24210002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
Policy verdict log: flow 0xd8d43c39 local EP ID 1401, remote ID host, proto 6, ingress, action allow, match L3-Only, 10.0.1.236:54570 -> 10.0.1.142:80 tcp SYN
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=be:0d:14:9c:a5:86 DstMAC=aa:27:eb:9f:39:2d EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=28311 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=46507 SrcIP=10.0.1.236 DstIP=10.0.1.142 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=54570 DstPort=80(http) Seq=368472233 Ack=0 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=false URG=false ECE=false CWR=false NS=false Window=29200 Checksum=6056 Urgent=0 Options=[..5..] Padding=[]}
CPU 01: MARK 0xd8d43c39 FROM 1401 to-endpoint: 74 bytes (74 captured), state new, interface 220, , identity host->16842, orig-ip 10.0.1.236, to endpoint 1401
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=be:0d:14:9c:a5:86 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=0 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=9283 SrcIP=10.0.1.142 DstIP=10.0.1.236 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=80(http) DstPort=54570 Seq=1642574776 Ack=368472234 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=28960 Checksum=6056 Urgent=0 Options=[..5..] Padding=[]}
CPU 01: MARK 0x348968e3 FROM 1401 to-stack: 74 bytes (74 captured), state reply, , identity 16842->host, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=be:0d:14:9c:a5:86 DstMAC=aa:27:eb:9f:39:2d EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=28315 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=46511 SrcIP=10.0.1.236 DstIP=10.0.1.142 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=54570 DstPort=80(http) Seq=368472344 Ack=1642575382 DataOffset=8 FIN=true SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=238 Checksum=6048 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926792/926792 0x000e2448000e2448)] Padding=[]}
CPU 01: MARK 0xd8d43c39 FROM 1401 to-endpoint: 66 bytes (66 captured), state established, interface 220, , identity host->16842, orig-ip 10.0.1.236, to endpoint 1401
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=be:0d:14:9c:a5:86 DstMAC=aa:27:eb:9f:39:2d EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=28316 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=46510 SrcIP=10.0.1.236 DstIP=10.0.1.142 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=54570 DstPort=80(http) Seq=368472345 Ack=1642575383 DataOffset=8 FIN=false SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=238 Checksum=6048 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926792/926792 0x000e2448000e2448)] Padding=[]}
CPU 01: MARK 0xd8d43c39 FROM 1401 to-endpoint: 66 bytes (66 captured), state established, interface 220, , identity host->16842, orig-ip 10.0.1.236, to endpoint 1401
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=be:0d:14:9c:a5:86 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=6419 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=2872 SrcIP=10.0.1.142 DstIP=10.0.1.236 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=80(http) DstPort=54570 Seq=1642575382 Ack=368472344 DataOffset=8 FIN=true SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=227 Checksum=6048 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926792/926792 0x000e2448000e2448)] Padding=[]}
CPU 02: MARK 0x348968e3 FROM 1401 to-stack: 66 bytes (66 captured), state reply, , identity 16842->host, orig-ip 0.0.0.0
>> Service upserted: {"id":20,"frontend-address":{"ip":"10.101.210.206","port":80},"backend-addresses":[{"ip":"10.0.1.142","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-lb","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":8,"frontend-address":{"ip":"10.97.143.154","port":80},"backend-addresses":[{"ip":"10.0.1.142","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"testds-service","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":9,"frontend-address":{"ip":"10.97.143.154","port":69},"backend-addresses":[{"ip":"10.0.1.142","port":69}],"type":"ClusterIP","traffic-policy":"Cluster","name":"testds-service","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":14,"frontend-address":{"ip":"10.104.81.40","port":10080},"backend-addresses":[{"ip":"10.0.1.142","port":80}],"type":"ClusterIP","traffic-policy":"Local","name":"test-nodeport-local","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":15,"frontend-address":{"ip":"10.104.81.40","port":10069},"backend-addresses":[{"ip":"10.0.1.142","port":69}],"type":"ClusterIP","traffic-policy":"Local","name":"test-nodeport-local","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":10,"frontend-address":{"ip":"10.108.53.136","port":10069},"backend-addresses":[{"ip":"10.0.1.142","port":69}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-nodeport","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":11,"frontend-address":{"ip":"10.108.53.136","port":10080},"backend-addresses":[{"ip":"10.0.1.142","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-nodeport","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":13,"frontend-address":{"ip":"10.103.103.69","port":10080},"backend-addresses":[{"ip":"10.0.1.142","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-affinity","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":12,"frontend-address":{"ip":"10.103.103.69","port":10069},"backend-addresses":[{"ip":"10.0.1.142","port":69}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-affinity","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":22,"frontend-address":{"ip":"10.100.107.187","port":20080},"backend-addresses":[{"ip":"10.0.1.142","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-external-ips","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":23,"frontend-address":{"ip":"10.100.107.187","port":20069},"backend-addresses":[{"ip":"10.0.1.142","port":69}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-external-ips","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40556 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34048 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926804/183975 0x000e24540002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40557 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34047 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926855/183975 0x000e24870002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40558 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34046 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926907/183975 0x000e24bb0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=ba:30:b7:b1:bc:93 DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::b830:b7ff:feb1:bc93 DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 34, 66] Payload=[..12..] TypeCode=RouterSolicitation Checksum=8770 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 01: MARK 0x0 FROM 1085 DROP: 70 bytes, reason Invalid source ip, identity health->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..102..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=33:33:00:00:00:16 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..54..] Version=6 TrafficClass=0 FlowLabel=0 Length=56 NextHeader=IPv6HopByHop HopLimit=1 SrcIP=fe80::d81f:6ff:fef2:75a9 DstIP=ff02::16 HopByHop={ Contents=[..8..] Payload=[..54..] NextHeader=ICMPv6 HeaderLength=0 ActualLength=8 Options=[{OptionType=5 OptionLength=2 ActualLength=4 OptionData=[0, 0] OptionAlignment=[0 0]}, {OptionType=1 OptionLength=0 ActualLength=2 OptionData=[] OptionAlignment=[0 0]}]}}
ICMPv6	{Contents=[143, 0, 162, 221] Payload=[..50..] TypeCode=143(0) Checksum=41693 TypeBytes=[]}
  Packet has been truncated
  Failed to decode layer: No decoder for layer type MLDv2MulticastListenerReport
CPU 01: MARK 0x0 FROM 22 DROP: 110 bytes, reason Invalid source ip, identity 10064->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40559 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34045 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:926958/183975 0x000e24ee0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40560 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34044 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927009/183975 0x000e25210002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40561 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34043 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927060/183975 0x000e25540002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40562 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34042 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927111/183975 0x000e25870002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40563 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34041 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927163/183975 0x000e25bb0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40564 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34040 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927214/183975 0x000e25ee0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40565 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34039 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927267/183975 0x000e26230002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40566 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34038 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927318/183975 0x000e26560002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40567 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34037 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927369/183975 0x000e26890002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40568 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34036 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927420/183975 0x000e26bc0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40569 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34035 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927471/183975 0x000e26ef0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::a827:ebff:fe9f:392d DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 225, 68] Payload=[..12..] TypeCode=RouterSolicitation Checksum=57668 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 02: MARK 0x0 FROM 1401 DROP: 70 bytes, reason Invalid source ip, identity 16842->unknown
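The per-layer lines above are gopacket layer strings printed by the monitor's dissector, and the "Failed to decode layer" note shows up whenever a payload has no registered decoder (ICMPv6 router solicitations here). A minimal sketch of the same decode-and-print cycle, assuming only the upstream github.com/google/gopacket API rather than Cilium's exact formatter; the MAC and IP values are borrowed from the records above, everything else is illustrative:

```go
// Sketch only: serialize a small frame, decode it back, and print one
// line per layer the way the dump above does. Upstream gopacket API,
// not Cilium's dissector.
package main

import (
	"fmt"
	"net"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers"
)

func main() {
	// Build an Ethernet/IPv4/ICMPv4 echo request in memory.
	eth := &layers.Ethernet{
		SrcMAC:       net.HardwareAddr{0xde, 0xe6, 0x20, 0xba, 0xff, 0x3b},
		DstMAC:       net.HardwareAddr{0x06, 0xd1, 0xde, 0x4a, 0x9e, 0x58},
		EthernetType: layers.EthernetTypeIPv4,
	}
	ip := &layers.IPv4{
		Version:  4,
		IHL:      5,
		TTL:      64,
		Protocol: layers.IPProtocolICMPv4,
		SrcIP:    net.IP{10, 0, 1, 36},
		DstIP:    net.IP{10, 0, 0, 216},
	}
	icmp := &layers.ICMPv4{
		TypeCode: layers.CreateICMPv4TypeCode(layers.ICMPv4TypeEchoRequest, 0),
		Id:       7,
		Seq:      1,
	}
	buf := gopacket.NewSerializeBuffer()
	opts := gopacket.SerializeOptions{FixLengths: true, ComputeChecksums: true}
	if err := gopacket.SerializeLayers(buf, opts, eth, ip, icmp, gopacket.Payload("ping")); err != nil {
		panic(err)
	}

	// Decode it back and print each layer; a trailing error layer is
	// what produces the "Failed to decode layer" note in the dump.
	pkt := gopacket.NewPacket(buf.Bytes(), layers.LayerTypeEthernet, gopacket.Default)
	for _, l := range pkt.Layers() {
		fmt.Println(gopacket.LayerString(l))
	}
	if el := pkt.ErrorLayer(); el != nil {
		fmt.Println("  Failed to decode layer:", el.Error())
	}
}
```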
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40570 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34034 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927522/183975 0x000e27220002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40571 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34033 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927573/183975 0x000e27550002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40572 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34032 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927624/183975 0x000e27880002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40573 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34031 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927676/183975 0x000e27bc0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=6a:8b:24:20:b7:e9 DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::688b:24ff:fe20:b7e9 DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 242, 3] Payload=[..12..] TypeCode=RouterSolicitation Checksum=61955 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 02: MARK 0x0 FROM 713 DROP: 70 bytes, reason Invalid source ip, identity 17387->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40574 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34030 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927727/183975 0x000e27ef0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Service upserted: {"id":19,"frontend-address":{"ip":"10.105.98.17","port":10069},"backend-addresses":[{"ip":"10.0.0.155","port":69}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-nodeport-k8s2","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":18,"frontend-address":{"ip":"10.105.98.17","port":10080},"backend-addresses":[{"ip":"10.0.0.155","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-nodeport-k8s2","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":16,"frontend-address":{"ip":"10.108.97.8","port":10080},"backend-addresses":[{"ip":"10.0.0.155","port":80}],"type":"ClusterIP","traffic-policy":"Local","name":"test-nodeport-local-k8s2","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":17,"frontend-address":{"ip":"10.108.97.8","port":10069},"backend-addresses":[{"ip":"10.0.0.155","port":69}],"type":"ClusterIP","traffic-policy":"Local","name":"test-nodeport-local-k8s2","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":21,"frontend-address":{"ip":"10.104.15.141","port":80},"backend-addresses":[{"ip":"10.0.0.155","port":80}],"type":"ClusterIP","traffic-policy":"Local","name":"test-lb-local-k8s2","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::d81f:6ff:fef2:75a9 DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 209, 183] Payload=[..12..] TypeCode=RouterSolicitation Checksum=53687 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 01: MARK 0x0 FROM 22 DROP: 70 bytes, reason Invalid source ip, identity 10064->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40575 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34029 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927778/183975 0x000e28220002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40576 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34028 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927829/183975 0x000e28550002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40577 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34027 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927881/183975 0x000e28890002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40578 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34026 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927932/183975 0x000e28bc0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40579 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34025 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:927983/183975 0x000e28ef0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40580 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34024 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928034/183975 0x000e29220002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40581 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34023 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928085/183975 0x000e29550002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40582 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34022 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928136/183975 0x000e29880002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40583 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34021 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928188/183975 0x000e29bc0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40584 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34020 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928239/183975 0x000e29ef0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40585 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34019 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928291/183975 0x000e2a230002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40586 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34018 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928342/183975 0x000e2a560002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40587 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34017 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928393/183975 0x000e2a890002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40588 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34016 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928446/183975 0x000e2abe0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40589 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34015 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928497/183975 0x000e2af10002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40590 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34014 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928548/183975 0x000e2b240002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40591 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34013 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928599/183975 0x000e2b570002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40592 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34012 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928651/183975 0x000e2b8b0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40593 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34011 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928702/183975 0x000e2bbe0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40594 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34010 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928753/183975 0x000e2bf10002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40595 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34009 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928804/183975 0x000e2c240002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40596 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34008 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928855/183975 0x000e2c570002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
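Nearly every record in this stretch is the same retransmitted FIN (SrcPort 34704, IPv4 Id incrementing by one per attempt) dropped with reason "Stale or unroutable IP". A throwaway tally over a saved dump makes that skew visible at a glance; this is an assumed triage helper, not part of the CI harness:

```go
// Assumed triage helper: feed a saved monitor dump on stdin and it
// counts DROP records per drop reason.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, " DROP: ") {
			continue
		}
		// Drop records read "... DROP: N bytes, reason <text>, identity ...".
		reason := "unknown"
		if i := strings.Index(line, "reason "); i >= 0 {
			reason = line[i+len("reason "):]
			if j := strings.Index(reason, ", identity"); j >= 0 {
				reason = reason[:j]
			}
		}
		counts[reason]++
	}
	for reason, n := range counts {
		fmt.Printf("%6d  %s\n", n, reason)
	}
}
```

Run as `go run tally.go < monitor.log`; on this dump it would show the "Stale or unroutable IP" count dwarfing the handful of "Invalid source ip" router-solicitation drops.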
>> Service upserted: {"id":8,"frontend-address":{"ip":"10.97.143.154","port":80},"backend-addresses":[{"ip":"10.0.1.142","port":80},{"ip":"10.0.0.216","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"testds-service","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":9,"frontend-address":{"ip":"10.97.143.154","port":69},"backend-addresses":[{"ip":"10.0.0.216","port":69},{"ip":"10.0.1.142","port":69}],"type":"ClusterIP","traffic-policy":"Cluster","name":"testds-service","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":12,"frontend-address":{"ip":"10.103.103.69","port":10069},"backend-addresses":[{"ip":"10.0.0.216","port":69},{"ip":"10.0.1.142","port":69}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-affinity","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":13,"frontend-address":{"ip":"10.103.103.69","port":10080},"backend-addresses":[{"ip":"10.0.0.216","port":80},{"ip":"10.0.1.142","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-affinity","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":20,"frontend-address":{"ip":"10.101.210.206","port":80},"backend-addresses":[{"ip":"10.0.0.216","port":80},{"ip":"10.0.1.142","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-lb","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":11,"frontend-address":{"ip":"10.108.53.136","port":10080},"backend-addresses":[{"ip":"10.0.0.216","port":80},{"ip":"10.0.1.142","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-nodeport","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":10,"frontend-address":{"ip":"10.108.53.136","port":10069},"backend-addresses":[{"ip":"10.0.1.142","port":69},{"ip":"10.0.0.216","port":69}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-nodeport","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":22,"frontend-address":{"ip":"10.100.107.187","port":20080},"backend-addresses":[{"ip":"10.0.0.216","port":80},{"ip":"10.0.1.142","port":80}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-external-ips","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":23,"frontend-address":{"ip":"10.100.107.187","port":20069},"backend-addresses":[{"ip":"10.0.0.216","port":69},{"ip":"10.0.1.142","port":69}],"type":"ClusterIP","traffic-policy":"Cluster","name":"test-external-ips","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":14,"frontend-address":{"ip":"10.104.81.40","port":10080},"backend-addresses":[{"ip":"10.0.1.142","port":80},{"ip":"10.0.0.216","port":80}],"type":"ClusterIP","traffic-policy":"Local","name":"test-nodeport-local","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
>> Service upserted: {"id":15,"frontend-address":{"ip":"10.104.81.40","port":10069},"backend-addresses":[{"ip":"10.0.1.142","port":69},{"ip":"10.0.0.216","port":69}],"type":"ClusterIP","traffic-policy":"Local","name":"test-nodeport-local","namespace":"202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito"}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40597 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34007 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928906/183975 0x000e2c8a0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40598 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34006 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:928957/183975 0x000e2cbd0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40599 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34005 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929009/183975 0x000e2cf10002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..78..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..32..] Version=6 TrafficClass=0 FlowLabel=912495 Length=32 NextHeader=TCP HopLimit=64 SrcIP=fd04::11 DstIP=fd02::c3 HopByHop=nil}
TCP	{Contents=[..32..] Payload=[] SrcPort=58242 DstPort=4240 Seq=3472188071 Ack=111487819 DataOffset=8 FIN=false SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=218 Checksum=64257 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929047/865931 0x000e2d17000d368b)] Padding=[]}
CPU 01: MARK 0xfd666fec FROM 2905 to-overlay: 86 bytes (86 captured), state unknown, interface cilium_vxlan, , identity remote-node->unknown, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=22119 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=52840 SrcIP=10.0.1.236 DstIP=10.0.0.9 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=44368 DstPort=4240 Seq=2952212622 Ack=3678574257 DataOffset=8 FIN=false SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=5659 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929047/865932 0x000e2d17000d368c)] Padding=[]}
CPU 01: MARK 0xe6e6f0b6 FROM 2905 to-overlay: 66 bytes (66 captured), state unknown, interface cilium_vxlan, , identity remote-node->unknown, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40600 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34004 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929060/183975 0x000e2d240002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=20347 Flags=DF FragOffset=0 TTL=62 Protocol=TCP Checksum=54987 SrcIP=10.0.0.122 DstIP=10.0.1.252 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=36512 DstPort=3000(hbci) Seq=1728722644 Ack=0 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=false URG=false ECE=false CWR=false NS=false Window=29200 Checksum=16829 Urgent=0 Options=[..5..] Padding=[]}
CPU 01: MARK 0x0 FROM 2905 DROP: 74 bytes, reason Stale or unroutable IP, identity remote-node->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40601 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34003 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929111/183975 0x000e2d570002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..86..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=2a:48:66:ef:6e:fa EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..64..] Version=4 IHL=5 TOS=0 Length=84 Id=52739 Flags=DF FragOffset=0 TTL=64 Protocol=ICMPv4 Checksum=22186 SrcIP=10.0.1.36 DstIP=10.0.0.216 Options=[] Padding=[]}
ICMPv4	{Contents=[..8..] Payload=[..56..] TypeCode=EchoRequest Checksum=12062 Id=7 Seq=1}
  Failed to decode layer: No decoder for layer type Payload
CPU 01: MARK 0x0 FROM 22 to-overlay: 98 bytes (98 captured), state new, interface cilium_vxlan, , identity 10064->unknown, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..86..] SrcMAC=2a:48:66:ef:6e:fa DstMAC=da:1f:06:f2:75:a9 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..64..] Version=4 IHL=5 TOS=0 Length=84 Id=53235 Flags= FragOffset=0 TTL=63 Protocol=ICMPv4 Checksum=38330 SrcIP=10.0.0.216 DstIP=10.0.1.36 Options=[] Padding=[]}
ICMPv4	{Contents=[..8..] Payload=[..56..] TypeCode=EchoReply Checksum=14110 Id=7 Seq=1}
  Failed to decode layer: No decoder for layer type Payload
CPU 01: MARK 0x0 FROM 22 to-endpoint: 98 bytes (98 captured), state reply, interface 224, , identity 16842->10064, orig-ip 10.0.0.216, to endpoint 22
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40602 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34002 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929162/183975 0x000e2d8a0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40603 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34001 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929214/183975 0x000e2dbe0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40604 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34000 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929265/183975 0x000e2df10002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
Policy verdict log: flow 0xfd7eaf3c local EP ID 1401, remote ID host, proto 6, ingress, action allow, match L3-Only, 10.0.1.236:54574 -> 10.0.1.142:80 tcp SYN
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=be:0d:14:9c:a5:86 DstMAC=aa:27:eb:9f:39:2d EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=19749 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=55069 SrcIP=10.0.1.236 DstIP=10.0.1.142 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=54574 DstPort=80(http) Seq=2510353933 Ack=0 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=false URG=false ECE=false CWR=false NS=false Window=29200 Checksum=6056 Urgent=0 Options=[..5..] Padding=[]}
CPU 01: MARK 0xfd7eaf3c FROM 1401 to-endpoint: 74 bytes (74 captured), state new, interface 220, , identity host->16842, orig-ip 10.0.1.236, to endpoint 1401
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=be:0d:14:9c:a5:86 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=0 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=9283 SrcIP=10.0.1.142 DstIP=10.0.1.236 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=80(http) DstPort=54574 Seq=3633570756 Ack=2510353934 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=28960 Checksum=6056 Urgent=0 Options=[..5..] Padding=[]}
CPU 01: MARK 0xf604377f FROM 1401 to-stack: 74 bytes (74 captured), state reply, , identity 16842->host, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=be:0d:14:9c:a5:86 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=59702 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=15124 SrcIP=10.0.1.142 DstIP=10.0.1.236 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=80(http) DstPort=54574 Seq=3633571362 Ack=2510354044 DataOffset=8 FIN=true SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=227 Checksum=6048 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929293/929293 0x000e2e0d000e2e0d)] Padding=[]}
CPU 01: MARK 0xf604377f FROM 1401 to-stack: 66 bytes (66 captured), state reply, , identity 16842->host, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=be:0d:14:9c:a5:86 DstMAC=aa:27:eb:9f:39:2d EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=19753 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=55073 SrcIP=10.0.1.236 DstIP=10.0.1.142 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=54574 DstPort=80(http) Seq=2510354044 Ack=3633571363 DataOffset=8 FIN=true SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=238 Checksum=6048 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929293/929293 0x000e2e0d000e2e0d)] Padding=[]}
CPU 02: MARK 0xfd7eaf3c FROM 1401 to-endpoint: 66 bytes (66 captured), state established, interface 220, , identity host->16842, orig-ip 10.0.1.236, to endpoint 1401
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40605 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33999 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929317/183975 0x000e2e250002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40606 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33998 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929368/183975 0x000e2e580002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::a827:ebff:fe9f:392d DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 225, 68] Payload=[..12..] TypeCode=RouterSolicitation Checksum=57668 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 02: MARK 0x0 FROM 1401 DROP: 70 bytes, reason Invalid source ip, identity 16842->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40607 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33997 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929421/183975 0x000e2e8d0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40608 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33996 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929472/183975 0x000e2ec00002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40609 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33995 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929523/183975 0x000e2ef30002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40610 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33994 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929574/183975 0x000e2f260002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40611 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33993 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929625/183975 0x000e2f590002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40612 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33992 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929676/183975 0x000e2f8c0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40613 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33991 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929727/183975 0x000e2fbf0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40614 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33990 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929778/183975 0x000e2ff20002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=6a:8b:24:20:b7:e9 DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::688b:24ff:fe20:b7e9 DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 242, 3] Payload=[..12..] TypeCode=RouterSolicitation Checksum=61955 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 02: MARK 0x0 FROM 713 DROP: 70 bytes, reason Invalid source ip, identity 17387->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40615 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33989 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929829/183975 0x000e30250002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::d81f:6ff:fef2:75a9 DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 209, 183] Payload=[..12..] TypeCode=RouterSolicitation Checksum=53687 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 01: MARK 0x0 FROM 22 DROP: 70 bytes, reason Invalid source ip, identity 10064->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40616 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33988 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929880/183975 0x000e30580002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40617 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33987 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929931/183975 0x000e308b0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40618 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33986 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929982/183975 0x000e30be0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40619 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33985 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930033/183975 0x000e30f10002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40620 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33984 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930084/183975 0x000e31240002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40621 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33983 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930135/183975 0x000e31570002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=2a:48:66:ef:6e:fa EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=44025 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=30919 SrcIP=10.0.1.36 DstIP=10.0.0.216 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=36446 DstPort=80(http) Seq=811193800 Ack=0 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=false URG=false ECE=false CWR=false NS=false Window=29200 Checksum=5674 Urgent=0 Options=[..5..] Padding=[]}
CPU 02: MARK 0x7a6a26d7 FROM 22 to-overlay: 74 bytes (74 captured), state new, interface cilium_vxlan, , identity 10064->unknown, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=2a:48:66:ef:6e:fa DstMAC=da:1f:06:f2:75:a9 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=0 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=9665 SrcIP=10.0.0.216 DstIP=10.0.1.36 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=80(http) DstPort=36446 Seq=3654996678 Ack=811193801 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=28960 Checksum=1054 Urgent=0 Options=[..5..] Padding=[]}
CPU 01: MARK 0x0 FROM 22 to-endpoint: 74 bytes (74 captured), state reply, interface lxc9bdfad7be41d, , identity 16842->10064, orig-ip 10.0.0.216, to endpoint 22
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=2a:48:66:ef:6e:fa EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=44029 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=30923 SrcIP=10.0.1.36 DstIP=10.0.0.216 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=36446 DstPort=80(http) Seq=811193875 Ack=3654997253 DataOffset=8 FIN=true SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=238 Checksum=5666 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930176/870825 0x000e3180000d49a9)] Padding=[]}
CPU 02: MARK 0x7a6a26d7 FROM 22 to-overlay: 66 bytes (66 captured), state established, interface cilium_vxlan, , identity 10064->unknown, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=2a:48:66:ef:6e:fa DstMAC=da:1f:06:f2:75:a9 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=4163 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=5510 SrcIP=10.0.0.216 DstIP=10.0.1.36 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=80(http) DstPort=36446 Seq=3654997253 Ack=811193876 DataOffset=8 FIN=true SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=227 Checksum=41116 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:870825/930176 0x000d49a9000e3180)] Padding=[]}
CPU 01: MARK 0x0 FROM 22 to-endpoint: 66 bytes (66 captured), state reply, interface lxc9bdfad7be41d, , identity 16842->10064, orig-ip 10.0.0.216, to endpoint 22
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=2a:48:66:ef:6e:fa EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=44030 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=30922 SrcIP=10.0.1.36 DstIP=10.0.0.216 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=36446 DstPort=80(http) Seq=811193876 Ack=3654997254 DataOffset=8 FIN=false SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=238 Checksum=5666 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930176/870825 0x000e3180000d49a9)] Padding=[]}
CPU 01: MARK 0x7a6a26d7 FROM 22 to-overlay: 66 bytes (66 captured), state established, interface cilium_vxlan, , identity 10064->unknown, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40622 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33982 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930186/183975 0x000e318a0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40623 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33981 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930237/183975 0x000e31bd0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Policy deleted: {"labels":["k8s:io.cilium.k8s.policy.derived-from=CiliumNetworkPolicy","k8s:io.cilium.k8s.policy.name=l3-policy-demo","k8s:io.cilium.k8s.policy.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","k8s:io.cilium.k8s.policy.uid=999f3898-8109-45a7-98ea-2cb3912f47ae"],"revision":3,"rule_count":1}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40624 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33980 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930288/183975 0x000e31f00002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40625 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33979 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930339/183975 0x000e32230002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Endpoint regenerated: {"id":1401,"labels":["k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.cilium.k8s.policy.cluster=default","k8s:zgroup=testDS"]}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40626 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33978 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930390/183975 0x000e32560002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40627 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33977 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930441/183975 0x000e32890002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40628 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33976 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930492/183975 0x000e32bc0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40629 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33975 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930544/183975 0x000e32f00002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
Listening for events on 3 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
level=info msg="Initializing dissection cache..." subsys=monitor

Expected
    <bool>: false
to be true
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:109
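For readers unfamiliar with the test framework: "Expected / <bool>: false / to be true" is the stock failure message of a Gomega BeTrue assertion, so the failing check at datapath_configuration.go:109 is a boolean that never became true (the final FIN notification was not located, per the messages further below). A minimal stand-alone sketch that reproduces the message shape (hypothetical code, not the actual test):

package main

import (
	"fmt"

	"github.com/onsi/gomega"
)

func main() {
	// Stand-alone Gomega instance whose fail handler just prints the message.
	g := gomega.NewGomega(func(message string, _ ...int) {
		fmt.Println(message)
	})

	foundFINNotification := false // stands in for the monitor-log scan result
	g.Expect(foundFINNotification).To(gomega.BeTrue())
	// Prints:
	// Expected
	//     <bool>: false
	// to be true
}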

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Key allocation attempt failed
Cilium pods: [cilium-gk44r cilium-j8wln]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
testds-8b87z                 false     false
testds-pk847                 false     false
coredns-7c74c644b-b46bw      false     false
test-k8s2-5ffdc78d54-khxf4   false     false
testclient-2-69nwh           false     false
testclient-2-zp4hq           false     false
testclient-kwhmx             false     false
testclient-m5j5l             false     false
Cilium agent 'cilium-gk44r': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 31 Failed 0
Cilium agent 'cilium-j8wln': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 40 Failed 0


Standard Error

12:20:47 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
12:20:48 STEP: Ensuring the namespace kube-system exists
12:20:48 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
12:20:48 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
12:20:48 STEP: Installing Cilium
12:20:48 STEP: Waiting for Cilium to become ready
12:21:17 STEP: Validating if Kubernetes DNS is deployed
12:21:17 STEP: Checking if deployment is ready
12:21:17 STEP: Checking if kube-dns service is plumbed correctly
12:21:17 STEP: Checking if pods have identity
12:21:17 STEP: Checking if DNS can resolve
12:21:17 STEP: Kubernetes DNS is up and operational
12:21:17 STEP: Validating Cilium Installation
12:21:17 STEP: Performing Cilium controllers preflight check
12:21:17 STEP: Performing Cilium health check
12:21:17 STEP: Performing Cilium status preflight check
12:21:17 STEP: Checking whether host EP regenerated
12:21:18 STEP: Performing Cilium service preflight check
12:21:18 STEP: Performing K8s service preflight check
12:21:20 STEP: Waiting for cilium-operator to be ready
12:21:20 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
12:21:20 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
12:21:20 STEP: Making sure all endpoints are in ready state
12:21:21 STEP: Launching cilium monitor on "cilium-gk44r"
12:21:21 STEP: Creating namespace 202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito
12:21:21 STEP: Deploying demo_ds.yaml in namespace 202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito
12:21:22 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
12:21:25 STEP: Waiting for 4m0s for 7 pods of deployment demo_ds.yaml to become ready
12:21:25 STEP: WaitforNPods(namespace="202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="")
12:21:34 STEP: WaitforNPods(namespace="202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="") => <nil>
12:21:34 STEP: Checking pod connectivity between nodes
12:21:34 STEP: WaitforPods(namespace="202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="-l zgroup=testDSClient")
12:21:34 STEP: WaitforPods(namespace="202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="-l zgroup=testDSClient") => <nil>
12:21:34 STEP: WaitforPods(namespace="202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="-l zgroup=testDS")
12:21:34 STEP: WaitforPods(namespace="202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="-l zgroup=testDS") => <nil>
12:21:40 STEP: Checking that ICMP notifications in egress direction were observed
12:21:40 STEP: Checking that ICMP notifications in ingress direction were observed
12:21:40 STEP: Checking the set of TCP notifications received matches expectations
12:21:40 STEP: Looking for TCP notifications using the ephemeral port "34704"
Could not locate final FIN notification in monitor log: egressTCPMatches (each entry is the text of one egress TCP FIN notification; almost every entry is a repeat of the first line below):
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true  (repeated many times)
TCP	{Contents=[..32..] Payload=[] SrcPort=54570 DstPort=80(http) Seq=368472344 Ack=1642575382 DataOffset=8 FIN=true
TCP	{Contents=[..32..] Payload=[] SrcPort=54574 DstPort=80(http) Seq=2510354044 Ack=3633571363 DataOffset=8 FIN=true
TCP	{Contents=[..32..] Payload=[] SrcPort=36446 DstPort=80(http) Seq=811193875 Ack=3654997253 DataOffset=8 FIN=true
12:21:41 STEP: Looking for TCP notifications using the ephemeral port "34704"
Could not locate final FIN notification in monitor log: egressTCPMatches [[84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 
101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 53 52 53 55 48 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 51 54 56 52 55 50 51 52 52 32 65 99 107 61 49 54 52 50 53 55 53 51 56 50 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 
46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 107 61 51 51 52 56 52 48 50 55 57 51 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101] [84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114 99 80 111 114 116 
61 51 52 55 48 52 32 68 115 116 80 111 114 116 61 56 48 40 104 116 116 112 41 32 83 101 113 61 49 50 49 56 53 55 54 50 49 50 32 65 99 
...[truncated 62326577 chars]...
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40603 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34001 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929214/183975 0x000e2dbe0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40604 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=34000 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929265/183975 0x000e2df10002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
Policy verdict log: flow 0xfd7eaf3c local EP ID 1401, remote ID host, proto 6, ingress, action allow, match L3-Only, 10.0.1.236:54574 -> 10.0.1.142:80 tcp SYN
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=be:0d:14:9c:a5:86 DstMAC=aa:27:eb:9f:39:2d EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=19749 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=55069 SrcIP=10.0.1.236 DstIP=10.0.1.142 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=54574 DstPort=80(http) Seq=2510353933 Ack=0 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=false URG=false ECE=false CWR=false NS=false Window=29200 Checksum=6056 Urgent=0 Options=[..5..] Padding=[]}
CPU 01: MARK 0xfd7eaf3c FROM 1401 to-endpoint: 74 bytes (74 captured), state new, interface 220, , identity host->16842, orig-ip 10.0.1.236, to endpoint 1401
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=be:0d:14:9c:a5:86 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=0 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=9283 SrcIP=10.0.1.142 DstIP=10.0.1.236 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=80(http) DstPort=54574 Seq=3633570756 Ack=2510353934 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=28960 Checksum=6056 Urgent=0 Options=[..5..] Padding=[]}
CPU 01: MARK 0xf604377f FROM 1401 to-stack: 74 bytes (74 captured), state reply, , identity 16842->host, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=be:0d:14:9c:a5:86 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=59702 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=15124 SrcIP=10.0.1.142 DstIP=10.0.1.236 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=80(http) DstPort=54574 Seq=3633571362 Ack=2510354044 DataOffset=8 FIN=true SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=227 Checksum=6048 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929293/929293 0x000e2e0d000e2e0d)] Padding=[]}
CPU 01: MARK 0xf604377f FROM 1401 to-stack: 66 bytes (66 captured), state reply, , identity 16842->host, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=be:0d:14:9c:a5:86 DstMAC=aa:27:eb:9f:39:2d EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=19753 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=55073 SrcIP=10.0.1.236 DstIP=10.0.1.142 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=54574 DstPort=80(http) Seq=2510354044 Ack=3633571363 DataOffset=8 FIN=true SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=238 Checksum=6048 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929293/929293 0x000e2e0d000e2e0d)] Padding=[]}
CPU 02: MARK 0xfd7eaf3c FROM 1401 to-endpoint: 66 bytes (66 captured), state established, interface 220, , identity host->16842, orig-ip 10.0.1.236, to endpoint 1401
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40605 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33999 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929317/183975 0x000e2e250002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40606 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33998 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929368/183975 0x000e2e580002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=aa:27:eb:9f:39:2d DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::a827:ebff:fe9f:392d DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 225, 68] Payload=[..12..] TypeCode=RouterSolicitation Checksum=57668 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 02: MARK 0x0 FROM 1401 DROP: 70 bytes, reason Invalid source ip, identity 16842->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40607 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33997 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929421/183975 0x000e2e8d0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40608 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33996 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929472/183975 0x000e2ec00002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40609 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33995 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929523/183975 0x000e2ef30002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40610 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33994 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929574/183975 0x000e2f260002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40611 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33993 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929625/183975 0x000e2f590002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40612 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33992 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929676/183975 0x000e2f8c0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40613 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33991 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929727/183975 0x000e2fbf0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40614 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33990 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929778/183975 0x000e2ff20002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=6a:8b:24:20:b7:e9 DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::688b:24ff:fe20:b7e9 DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 242, 3] Payload=[..12..] TypeCode=RouterSolicitation Checksum=61955 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 02: MARK 0x0 FROM 713 DROP: 70 bytes, reason Invalid source ip, identity 17387->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40615 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33989 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929829/183975 0x000e30250002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=33:33:00:00:00:02 EthernetType=IPv6 Length=0}
IPv6	{Contents=[..40..] Payload=[..16..] Version=6 TrafficClass=0 FlowLabel=0 Length=16 NextHeader=ICMPv6 HopLimit=255 SrcIP=fe80::d81f:6ff:fef2:75a9 DstIP=ff02::2 HopByHop=nil}
ICMPv6	{Contents=[133, 0, 209, 183] Payload=[..12..] TypeCode=RouterSolicitation Checksum=53687 TypeBytes=[]}
  Failed to decode layer: No decoder for layer type ICMPv6RouterSolicitation
CPU 01: MARK 0x0 FROM 22 DROP: 70 bytes, reason Invalid source ip, identity 10064->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40616 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33988 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929880/183975 0x000e30580002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40617 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33987 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929931/183975 0x000e308b0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40618 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33986 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:929982/183975 0x000e30be0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40619 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33985 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930033/183975 0x000e30f10002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40620 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33984 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930084/183975 0x000e31240002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40621 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33983 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930135/183975 0x000e31570002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=2a:48:66:ef:6e:fa EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=44025 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=30919 SrcIP=10.0.1.36 DstIP=10.0.0.216 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=36446 DstPort=80(http) Seq=811193800 Ack=0 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=false URG=false ECE=false CWR=false NS=false Window=29200 Checksum=5674 Urgent=0 Options=[..5..] Padding=[]}
CPU 02: MARK 0x7a6a26d7 FROM 22 to-overlay: 74 bytes (74 captured), state new, interface cilium_vxlan, , identity 10064->unknown, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..62..] SrcMAC=2a:48:66:ef:6e:fa DstMAC=da:1f:06:f2:75:a9 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=0 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=9665 SrcIP=10.0.0.216 DstIP=10.0.1.36 Options=[] Padding=[]}
TCP	{Contents=[..40..] Payload=[] SrcPort=80(http) DstPort=36446 Seq=3654996678 Ack=811193801 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=28960 Checksum=1054 Urgent=0 Options=[..5..] Padding=[]}
CPU 01: MARK 0x0 FROM 22 to-endpoint: 74 bytes (74 captured), state reply, interface lxc9bdfad7be41d, , identity 16842->10064, orig-ip 10.0.0.216, to endpoint 22
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=2a:48:66:ef:6e:fa EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=44029 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=30923 SrcIP=10.0.1.36 DstIP=10.0.0.216 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=36446 DstPort=80(http) Seq=811193875 Ack=3654997253 DataOffset=8 FIN=true SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=238 Checksum=5666 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930176/870825 0x000e3180000d49a9)] Padding=[]}
CPU 02: MARK 0x7a6a26d7 FROM 22 to-overlay: 66 bytes (66 captured), state established, interface cilium_vxlan, , identity 10064->unknown, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=2a:48:66:ef:6e:fa DstMAC=da:1f:06:f2:75:a9 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=4163 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=5510 SrcIP=10.0.0.216 DstIP=10.0.1.36 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=80(http) DstPort=36446 Seq=3654997253 Ack=811193876 DataOffset=8 FIN=true SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=227 Checksum=41116 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:870825/930176 0x000d49a9000e3180)] Padding=[]}
CPU 01: MARK 0x0 FROM 22 to-endpoint: 66 bytes (66 captured), state reply, interface lxc9bdfad7be41d, , identity 16842->10064, orig-ip 10.0.0.216, to endpoint 22
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=da:1f:06:f2:75:a9 DstMAC=2a:48:66:ef:6e:fa EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=44030 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=30922 SrcIP=10.0.1.36 DstIP=10.0.0.216 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=36446 DstPort=80(http) Seq=811193876 Ack=3654997254 DataOffset=8 FIN=false SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=238 Checksum=5666 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930176/870825 0x000e3180000d49a9)] Padding=[]}
CPU 01: MARK 0x7a6a26d7 FROM 22 to-overlay: 66 bytes (66 captured), state established, interface cilium_vxlan, , identity 10064->unknown, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40622 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33982 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930186/183975 0x000e318a0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40623 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33981 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930237/183975 0x000e31bd0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Policy deleted: {"labels":["k8s:io.cilium.k8s.policy.derived-from=CiliumNetworkPolicy","k8s:io.cilium.k8s.policy.name=l3-policy-demo","k8s:io.cilium.k8s.policy.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","k8s:io.cilium.k8s.policy.uid=999f3898-8109-45a7-98ea-2cb3912f47ae"],"revision":3,"rule_count":1}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40624 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33980 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930288/183975 0x000e31f00002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40625 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33979 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930339/183975 0x000e32230002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
>> Endpoint regenerated: {"id":1401,"labels":["k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito","k8s:io.cilium.k8s.policy.serviceaccount=default","k8s:io.cilium.k8s.policy.cluster=default","k8s:zgroup=testDS"]}
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40626 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33978 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930390/183975 0x000e32560002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40627 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33977 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930441/183975 0x000e32890002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40628 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33976 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930492/183975 0x000e32bc0002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
------------------------------------------------------------------------------
Ethernet	{Contents=[..14..] Payload=[..54..] SrcMAC=de:e6:20:ba:ff:3b DstMAC=06:d1:de:4a:9e:58 EthernetType=IPv4 Length=0}
IPv4	{Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=40629 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33975 SrcIP=10.0.1.236 DstIP=10.0.1.108 Options=[] Padding=[]}
TCP	{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=229 Checksum=6014 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:930544/183975 0x000e32f00002cea7)] Padding=[]}
CPU 01: MARK 0xe0fa371 FROM 2905 DROP: 66 bytes, reason Stale or unroutable IP, identity 11695->unknown
Listening for events on 3 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
level=info msg="Initializing dissection cache..." subsys=monitor
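
For context on the failure above: the test tails `cilium monitor` output like the dump shown here and looks for a TCP notification carrying FIN=true on the client's ephemeral port (34704 in this run). Below is a minimal sketch of that kind of scan; the helper and variable names are hypothetical and this is not the code from test/k8s/datapath_configuration.go.

```go
// Sketch: scan a captured `cilium monitor` dump for a TCP
// notification with FIN=true on a given ephemeral source port.
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// hasFinalFIN reports whether the monitor log contains a TCP
// notification line for the given source port with FIN=true.
func hasFinalFIN(monitorLog, ephemeralPort string) bool {
	// Matches lines such as:
	// TCP {Contents=[..32..] ... SrcPort=34704 DstPort=80(http) ... FIN=true ...}
	re := regexp.MustCompile(`SrcPort=` + ephemeralPort + `\b.*FIN=true`)
	scanner := bufio.NewScanner(strings.NewReader(monitorLog))
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "TCP") && re.MatchString(line) {
			return true
		}
	}
	return false
}

func main() {
	log := "TCP\t{Contents=[..32..] Payload=[] SrcPort=34704 DstPort=80(http) Seq=1218576212 Ack=3348402793 DataOffset=8 FIN=true}"
	fmt.Println(hasFinalFIN(log, "34704")) // true
}
```

In this flake the matches clearly do contain FIN=true entries for port 34704, so the failure is likely in which notification the test considers "final" (or in log truncation/ordering), not in the aggregation behavior itself.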

Expected
    <bool>: false
to be true
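
The `Expected <bool>: false / to be true` output above is gomega's rendering of a boolean assertion. A hypothetical reconstruction of the assertion's shape (names assumed; the real check lives in test/k8s/datapath_configuration.go):

```go
// File: monitor_test.go — sketch of the failing assertion's shape.
package monitor

import (
	"strings"
	"testing"

	. "github.com/onsi/gomega"
)

func TestMonitorContainsFinalFIN(t *testing.T) {
	g := NewWithT(t)
	// monitorLog stands in for the captured `cilium monitor` dump.
	monitorLog := "TCP\t{Contents=[..32..] SrcPort=34704 DstPort=80(http) FIN=true}"
	found := strings.Contains(monitorLog, "FIN=true")
	g.Expect(found).To(BeTrue(), "Could not locate final FIN notification in monitor log")
}
```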
=== Test Finished at 2024-03-13T12:25:40Z====
12:25:40 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
12:25:40 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-5ffdc78d54-khxf4         2/2     Running   0          4m21s   10.0.0.155      k8s2   <none>           <none>
	 202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-2-69nwh                 1/1     Running   0          4m21s   10.0.0.11       k8s2   <none>           <none>
	 202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-2-zp4hq                 1/1     Running   0          4m21s   10.0.1.52       k8s1   <none>           <none>
	 202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-kwhmx                   1/1     Running   0          4m21s   10.0.1.36       k8s1   <none>           <none>
	 202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-m5j5l                   1/1     Running   0          4m21s   10.0.0.34       k8s2   <none>           <none>
	 202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-8b87z                       2/2     Running   0          4m21s   10.0.0.216      k8s2   <none>           <none>
	 202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-pk847                       2/2     Running   0          4m21s   10.0.1.142      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-l92ck            0/1     Running   0          64m     10.0.1.252      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-rmp27        1/1     Running   0          64m     10.0.1.82       k8s2   <none>           <none>
	 kube-system                                                       cilium-gk44r                       1/1     Running   0          4m54s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-j8wln                       1/1     Running   0          4m54s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-78fffd46c4-27x98   1/1     Running   0          4m54s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-78fffd46c4-jzr8z   1/1     Running   0          4m54s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-7c74c644b-b46bw            1/1     Running   0          54m     10.0.0.169      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          68m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          68m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   4          68m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-2ktlk                   1/1     Running   0          65m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-t69ft                   1/1     Running   0          66m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   3          68m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-6qxs2                 1/1     Running   0          65m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-mczmr                 1/1     Running   0          65m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-8bt9s               1/1     Running   0          65m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-xmr82               1/1     Running   0          65m     192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-gk44r cilium-j8wln]
cmd: kubectl exec -n kube-system cilium-gk44r -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.19 (v1.19.16) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 Cilium:                  Ok   1.12.19 (v1.12.19-968f7a86)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       31/31 healthy
	 Proxy Status:            OK, ip 10.0.1.236, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 1886/65535 (2.88%), Flows/s: 6.25   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-13T12:25:04Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-gk44r -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 22         Disabled           Disabled          10064      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1bb   10.0.1.36    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 713        Disabled           Disabled          17387      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::13c   10.0.1.52    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito                                    
	                                                            k8s:zgroup=testDSClient2                                                                                                           
	 1085       Disabled           Disabled          4          reserved:health                                                                                   fd02::181   10.0.1.44    ready   
	 1401       Disabled           Disabled          16842      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1ad   10.0.1.142   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 2905       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                 ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                                 
	                                                            reserved:host                                                                                                                      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-j8wln -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.19 (v1.19.16) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 Cilium:                  Ok   1.12.19 (v1.12.19-968f7a86)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       40/40 healthy
	 Proxy Status:            OK, ip 10.0.0.122, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 4947/65535 (7.55%), Flows/s: 16.82   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-13T12:25:04Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-j8wln -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                            
	 417        Disabled           Disabled          4          reserved:health                                                                                   fd02::c3   10.0.0.9     ready   
	 891        Disabled           Disabled          16842      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::21   10.0.0.216   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=testDS                                                                                                                 
	 968        Disabled           Disabled          56400      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::f8   10.0.0.155   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                              
	 1126       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                ready   
	                                                            reserved:host                                                                                                                     
	 1736       Disabled           Disabled          13568      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::9f   10.0.0.169   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                       
	                                                            k8s:k8s-app=kube-dns                                                                                                              
	 2048       Disabled           Disabled          17387      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::fe   10.0.0.11    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=testDSClient2                                                                                                          
	 3509       Disabled           Disabled          10064      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::8e   10.0.0.34    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=testDSClient                                                                                                           
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
12:26:20 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
12:26:20 STEP: Deleting deployment demo_ds.yaml
12:26:21 STEP: Deleting namespace 202403131221k8sdatapathconfigmonitoraggregationchecksthatmonito
12:26:36 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|a61b0b69_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//385/artifact/a61b0b69_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//385/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//385/artifact/test_results_Cilium-PR-K8s-1.19-kernel-4.9_385_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9/385/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
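Triage note: the failure tracked here is raised by the test helper while it polls an agent for its current policy revision before applying l3-policy-demo.yaml (the stacktraces point at test/k8s/datapath_configuration.go:718). A minimal manual check is sketched below; `<cilium-pod>` is a placeholder for the agent pod named in a given hit, and it assumes jq is available on the workstation and that the agent CLI's `cilium policy get -o json` output carries a revision field, as the test helper expects:

$ kubectl exec -n kube-system <cilium-pod> -c cilium-agent -- cilium policy get -o json | jq '.revision'

A healthy agent prints an integer; an empty or error result matches the "cannot get the revision " message in these reports.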

@maintainer-s-little-helper

PR #31476 hit this flake with 91.65% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-kdd6s policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-kdd6s policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000a01980>: {
        s: "Cannot retrieve cilium pod cilium-kdd6s policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 1
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Network status error received, restarting client connections
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to get node resource
Waiting for k8s node information
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-kdd6s cilium-mh998]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
grafana-698dc95f6c-9m7s7      false     false
prometheus-669755c8c5-qfw67   false     false
coredns-85fbf8f7dd-hg6xg      false     false
testclient-89pgm              false     false
testds-xkmpz                  false     false
Cilium agent 'cilium-kdd6s': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-mh998': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

Click to show.
13:27:09 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
13:27:09 STEP: Ensuring the namespace kube-system exists
13:27:09 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:27:09 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:27:09 STEP: Installing Cilium
13:27:10 STEP: Waiting for Cilium to become ready
13:27:55 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-g9szq in namespace kube-system
13:27:55 STEP: Validating if Kubernetes DNS is deployed
13:27:55 STEP: Checking if deployment is ready
13:27:55 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
13:27:55 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
13:27:55 STEP: Waiting for Kubernetes DNS to become operational
13:27:55 STEP: Checking if deployment is ready
13:27:55 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:27:56 STEP: Checking if deployment is ready
13:27:56 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:27:57 STEP: Checking if deployment is ready
13:27:57 STEP: Checking if kube-dns service is plumbed correctly
13:27:57 STEP: Checking if pods have identity
13:27:57 STEP: Checking if DNS can resolve
13:27:58 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
13:27:58 STEP: Checking if deployment is ready
13:27:58 STEP: Checking if kube-dns service is plumbed correctly
13:27:58 STEP: Checking if pods have identity
13:27:58 STEP: Checking if DNS can resolve
13:27:59 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
13:27:59 STEP: Checking if deployment is ready
13:27:59 STEP: Checking if kube-dns service is plumbed correctly
13:27:59 STEP: Checking if pods have identity
13:27:59 STEP: Checking if DNS can resolve
13:28:00 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
13:28:00 STEP: Checking if deployment is ready
13:28:00 STEP: Checking if kube-dns service is plumbed correctly
13:28:00 STEP: Checking if pods have identity
13:28:00 STEP: Checking if DNS can resolve
13:28:01 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
13:28:01 STEP: Checking if deployment is ready
13:28:01 STEP: Checking if kube-dns service is plumbed correctly
13:28:01 STEP: Checking if DNS can resolve
13:28:01 STEP: Checking if pods have identity
13:28:02 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
13:28:02 STEP: Checking if deployment is ready
13:28:02 STEP: Checking if kube-dns service is plumbed correctly
13:28:02 STEP: Checking if pods have identity
13:28:02 STEP: Checking if DNS can resolve
13:28:03 STEP: Validating Cilium Installation
13:28:03 STEP: Performing Cilium controllers preflight check
13:28:03 STEP: Performing Cilium status preflight check
13:28:03 STEP: Performing Cilium health check
13:28:03 STEP: Checking whether host EP regenerated
13:28:04 STEP: Performing Cilium service preflight check
13:28:04 STEP: Performing K8s service preflight check
13:28:05 STEP: Waiting for cilium-operator to be ready
13:28:05 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:28:05 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:28:05 STEP: Making sure all endpoints are in ready state
13:28:15 STEP: Launching cilium monitor on "cilium-mh998"
13:28:15 STEP: Creating namespace 202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito
13:28:15 STEP: Deploying demo_ds.yaml in namespace 202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito
13:28:16 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-kdd6s policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000a01980>: {
        s: "Cannot retrieve cilium pod cilium-kdd6s policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-03-19T13:28:27Z====
13:28:27 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
13:28:32 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                              READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-sdqh8         0/2     ContainerCreating   0          18s     <none>          k8s2   <none>           <none>
	 202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-5568k                  0/1     ContainerCreating   0          18s     <none>          k8s2   <none>           <none>
	 202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-89pgm                  1/1     Running             0          18s     10.0.1.168      k8s1   <none>           <none>
	 202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-wg8rv                      0/2     ContainerCreating   0          18s     <none>          k8s2   <none>           <none>
	 202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-xkmpz                      2/2     Running             0          18s     10.0.1.151      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-9m7s7          0/1     Running             0          85s     10.0.0.31       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-qfw67       1/1     Running             0          85s     10.0.0.131      k8s2   <none>           <none>
	 kube-system                                                       cilium-kdd6s                      1/1     Running             0          84s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-mh998                      1/1     Running             0          84s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-8d7595f7d-g8m2s   1/1     Running             0          84s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-8d7595f7d-wxshz   1/1     Running             0          84s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-hg6xg          1/1     Running             0          39s     10.0.0.175      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                         1/1     Running             0          6m48s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1               1/1     Running             0          6m48s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1      1/1     Running             0          6m48s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-4dxxc                  1/1     Running             0          2m22s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-tzc5t                  1/1     Running             0          6m43s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1               1/1     Running             0          6m48s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-kv6rw                1/1     Running             0          102s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-w95zw                1/1     Running             0          102s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-5kls6              1/1     Running             0          2m20s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-tmn9m              1/1     Running             0          2m20s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-kdd6s cilium-mh998]
cmd: kubectl exec -n kube-system cilium-kdd6s -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.13 (v1.13.13-a31dc33d)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.50, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 279/65535 (0.43%), Flows/s: 4.07   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-19T13:28:04Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-kdd6s -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 189        Disabled           Disabled          33418      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::28   10.0.0.175   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 298        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::c4   10.0.0.221   ready   
	 303        Disabled           Disabled          33621      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::60   10.0.0.83    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 703        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 796        Disabled           Disabled          36849      k8s:app=grafana                                                                                                                  fd02::86   10.0.0.31    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1959       Disabled           Disabled          5493       k8s:app=prometheus                                                                                                               fd02::9    10.0.0.131   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2346       Disabled           Disabled          56530      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::9f   10.0.0.166   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 3703       Disabled           Disabled          15189      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::f6   10.0.0.171   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-mh998 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.13 (v1.13.13-a31dc33d)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.114, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 179/65535 (0.27%), Flows/s: 2.78   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-19T13:28:05Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-mh998 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 32         Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1a2   10.0.1.147   ready   
	 2621       Disabled           Disabled          33621      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::173   10.0.1.168   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 2961       Disabled           Disabled          56530      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::12d   10.0.1.151   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3295       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:29:15 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
13:29:15 STEP: Deleting deployment demo_ds.yaml
13:29:16 STEP: Deleting namespace 202403191328k8sdatapathconfigmonitoraggregationchecksthatmonito
13:29:31 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|719b5819_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//440/artifact/719b5819_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//440/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//440/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_440_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/440/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
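Worth noting from the timeline above: "Making sure all endpoints are in ready state" passes at 13:28:05, yet the revision query fails roughly ten seconds later, and the kubectl output still shows the k8s2 pods in ContainerCreating. That points at the agent API being briefly unresponsive right after a fresh install rather than at the policy itself. A rough sketch for watching both agents while reproducing (pod names taken from this hit; `cilium status --brief` should exit non-zero when the agent API does not answer):

for pod in cilium-kdd6s cilium-mh998; do
  kubectl exec -n kube-system "$pod" -c cilium-agent -- cilium status --brief \
    || echo "$pod: agent API not responding"
done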

@maintainer-s-little-helper

PR #31476 hit this flake with 94.10% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-w8bxn policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-w8bxn policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0004fdf80>: {
        s: "Cannot retrieve cilium pod cilium-w8bxn policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Cilium pods: [cilium-rdbwr cilium-w8bxn]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-bmjrr              false     false
testds-49pvg                  false     false
grafana-698dc95f6c-nkz67      false     false
prometheus-669755c8c5-44qrj   false     false
coredns-85fbf8f7dd-kms9w      false     false
Cilium agent 'cilium-rdbwr': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-w8bxn': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 36 Failed 0


Standard Error

Click to show.
17:00:23 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
17:00:23 STEP: Ensuring the namespace kube-system exists
17:00:23 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
17:00:23 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
17:00:23 STEP: Installing Cilium
17:00:24 STEP: Waiting for Cilium to become ready
17:01:09 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-xwppj in namespace kube-system
17:01:09 STEP: Validating if Kubernetes DNS is deployed
17:01:09 STEP: Checking if deployment is ready
17:01:09 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
17:01:09 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
17:01:09 STEP: Waiting for Kubernetes DNS to become operational
17:01:09 STEP: Checking if deployment is ready
17:01:09 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
17:01:10 STEP: Checking if deployment is ready
17:01:10 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
17:01:11 STEP: Checking if deployment is ready
17:01:11 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
17:01:12 STEP: Checking if deployment is ready
17:01:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
17:01:13 STEP: Checking if deployment is ready
17:01:13 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
17:01:14 STEP: Checking if deployment is ready
17:01:14 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
17:01:15 STEP: Checking if deployment is ready
17:01:15 STEP: Checking if pods have identity
17:01:15 STEP: Checking if kube-dns service is plumbed correctly
17:01:15 STEP: Checking if DNS can resolve
17:01:16 STEP: Validating Cilium Installation
17:01:16 STEP: Performing Cilium controllers preflight check
17:01:16 STEP: Performing Cilium health check
17:01:16 STEP: Performing Cilium status preflight check
17:01:16 STEP: Checking whether host EP regenerated
17:01:17 STEP: Performing Cilium service preflight check
17:01:17 STEP: Performing K8s service preflight check
17:01:18 STEP: Waiting for cilium-operator to be ready
17:01:18 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
17:01:18 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
17:01:18 STEP: Making sure all endpoints are in ready state
17:01:26 STEP: Launching cilium monitor on "cilium-rdbwr"
17:01:26 STEP: Creating namespace 202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito
17:01:26 STEP: Deploying demo_ds.yaml in namespace 202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito
17:01:27 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-w8bxn policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0004fdf80>: {
        s: "Cannot retrieve cilium pod cilium-w8bxn policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-03-19T17:01:38Z====
17:01:38 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
17:01:40 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                              READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-chn4h         0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-bmjrr                  1/1     Running             0          15s     10.0.1.102      k8s1   <none>           <none>
	 202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-dgmgw                  0/1     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-49pvg                      2/2     Running             0          15s     10.0.1.63       k8s1   <none>           <none>
	 202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-q2zpj                      0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-nkz67          0/1     Running             0          79s     10.0.0.31       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-44qrj       1/1     Running             0          79s     10.0.0.112      k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-8d7595f7d-7gls5   1/1     Running             0          78s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-8d7595f7d-fz5c8   1/1     Running             0          78s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-rdbwr                      1/1     Running             0          78s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-w8bxn                      1/1     Running             0          78s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-kms9w          1/1     Running             0          33s     10.0.0.123      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                         1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1               1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1      1/1     Running             1          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-8ccz6                  1/1     Running             0          3m26s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-h45l2                  1/1     Running             0          2m16s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1               1/1     Running             1          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-5mlgx                1/1     Running             0          95s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-tzq4l                1/1     Running             0          95s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-lwbrx              1/1     Running             0          2m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-zjssk              1/1     Running             0          2m13s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-rdbwr cilium-w8bxn]
cmd: kubectl exec -n kube-system cilium-rdbwr -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.13 (v1.13.13-a31dc33d)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.170, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 136/65535 (0.21%), Flows/s: 2.24   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-19T17:01:17Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-rdbwr -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 80         Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 1037       Disabled           Disabled          3974       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::119   10.0.1.63    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 2580       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::184   10.0.1.145   ready   
	 2697       Disabled           Disabled          62943      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1d9   10.0.1.102   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-w8bxn -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.13 (v1.13.13-a31dc33d)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 8/254 allocated from 10.0.0.0/24, IPv6: 8/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       36/36 healthy
	 Proxy Status:            OK, ip 10.0.0.215, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 224/65535 (0.34%), Flows/s: 3.86   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-19T17:01:18Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-w8bxn -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 30         Disabled           Disabled          35114      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::b5   10.0.0.123   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 299        Disabled           Disabled          50415      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::16   10.0.0.10    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 324        Disabled           Disabled          2585       k8s:app=prometheus                                                                                                               fd02::91   10.0.0.112   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1557       Disabled           Disabled          16648      k8s:app=grafana                                                                                                                  fd02::cd   10.0.0.31    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2140       Disabled           Disabled          3974       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::aa   10.0.0.39    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 2713       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::c8   10.0.0.22    ready   
	 3386       Disabled           Disabled          62943      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::f3   10.0.0.225   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 3899       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
17:02:24 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
17:02:24 STEP: Deleting deployment demo_ds.yaml
17:02:24 STEP: Deleting namespace 202403191701k8sdatapathconfigmonitoraggregationchecksthatmonito
17:02:40 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|e175602b_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//442/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//442/artifact/e175602b_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//442/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_442_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/442/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper (Author)

PR #31476 hit this flake with 92.36% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-clzth policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-clzth policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000b01450>: {
        s: "Cannot retrieve cilium pod cilium-clzth policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718
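
For anyone triaging this by hand: the failing step is the test harness polling the agent for its policy revision immediately after applying l3-policy-demo.yaml. A minimal sketch of the equivalent manual check, assuming the pod name from this run (cilium-clzth) and that the agent CLI is available inside the cilium-agent container; the `cilium policy wait` invocation and the revision argument are illustrative, not taken from this report:

# print the current policy state, including the revision the harness tries to read
kubectl exec -n kube-system cilium-clzth -c cilium-agent -- cilium policy get
# block until the agent has regenerated up to a given revision (e.g. 2)
kubectl exec -n kube-system cilium-clzth -c cilium-agent -- cilium policy wait 2

If the first command returns an empty or missing revision while `cilium status` still reports Ok, that reproduces the empty-string error above ("cannot get the revision ").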

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Cilium pods: [cilium-clzth cilium-r2t56]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-dj2kl              false     false
testds-f4rs7                  false     false
grafana-7ddfc74b5b-spdps      false     false
prometheus-669755c8c5-h6scs   false     false
coredns-bb76b858c-fndvw       false     false
Cilium agent 'cilium-clzth': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-r2t56': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

19:38:37 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
19:38:37 STEP: Ensuring the namespace kube-system exists
19:38:37 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
19:38:37 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
19:38:37 STEP: Installing Cilium
19:38:38 STEP: Waiting for Cilium to become ready
19:39:20 STEP: Restarting unmanaged pods coredns-bb76b858c-mbbdq in namespace kube-system
19:39:20 STEP: Validating if Kubernetes DNS is deployed
19:39:20 STEP: Checking if deployment is ready
19:39:20 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
19:39:20 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
19:39:21 STEP: Waiting for Kubernetes DNS to become operational
19:39:21 STEP: Checking if deployment is ready
19:39:21 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
19:39:22 STEP: Checking if deployment is ready
19:39:22 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
19:39:23 STEP: Checking if deployment is ready
19:39:23 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
19:39:24 STEP: Checking if deployment is ready
19:39:24 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
19:39:25 STEP: Checking if deployment is ready
19:39:25 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
19:39:26 STEP: Checking if deployment is ready
19:39:26 STEP: Checking if kube-dns service is plumbed correctly
19:39:26 STEP: Checking if pods have identity
19:39:26 STEP: Checking if DNS can resolve
19:39:26 STEP: Validating Cilium Installation
19:39:26 STEP: Performing Cilium controllers preflight check
19:39:26 STEP: Performing Cilium status preflight check
19:39:26 STEP: Performing Cilium health check
19:39:26 STEP: Checking whether host EP regenerated
19:39:27 STEP: Performing Cilium service preflight check
19:39:27 STEP: Performing K8s service preflight check
19:39:29 STEP: Waiting for cilium-operator to be ready
19:39:29 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
19:39:29 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
19:39:29 STEP: Making sure all endpoints are in ready state
19:39:30 STEP: Launching cilium monitor on "cilium-r2t56"
19:39:30 STEP: Creating namespace 202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito
19:39:30 STEP: Deploying demo_ds.yaml in namespace 202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito
19:39:31 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-clzth policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000b01450>: {
        s: "Cannot retrieve cilium pod cilium-clzth policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-03-19T19:39:41Z====
19:39:41 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
19:39:42 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                              READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-5ffdc78d54-pn8mv        0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-dj2kl                  1/1     Running             0          13s     10.0.1.86       k8s1   <none>           <none>
	 202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-fwvp5                  0/1     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-f4rs7                      2/2     Running             0          13s     10.0.1.249      k8s1   <none>           <none>
	 202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-h4tvq                      0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-7ddfc74b5b-spdps          0/1     Running             0          67s     10.0.0.149      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-h6scs       0/1     ContainerCreating   0          67s     <none>          k8s2   <none>           <none>
	 kube-system                                                       cilium-clzth                      1/1     Running             0          66s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-dbdbcf99f-2rf8w   1/1     Running             0          66s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-dbdbcf99f-vpzdw   1/1     Running             0          66s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-r2t56                      1/1     Running             0          66s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-bb76b858c-fndvw           1/1     Running             0          23s     10.0.1.77       k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                         1/1     Running             0          4m56s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1               1/1     Running             0          4m56s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1      1/1     Running             0          4m56s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-82rbb                  1/1     Running             0          2m5s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-tltd7                  1/1     Running             0          4m38s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1               1/1     Running             0          4m56s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-8hgx2                1/1     Running             0          84s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-cbrxd                1/1     Running             0          84s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-j7g2d              1/1     Running             0          2m3s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-pvv7j              1/1     Running             0          2m3s    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-clzth cilium-r2t56]
cmd: kubectl exec -n kube-system cilium-clzth -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.20 (v1.20.15) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.13 (v1.13.13-a31dc33d)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.0.148, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 174/65535 (0.27%), Flows/s: 3.65   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-19T19:39:27Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-clzth -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                            
	 195        Disabled           Disabled          38574      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::f8   10.0.0.43    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=testDS                                                                                                                 
	 257        Disabled           Disabled          20243      k8s:app=prometheus                                                                                fd02::fc   10.0.0.1     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                            
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 675        Disabled           Disabled          63411      k8s:app=grafana                                                                                   fd02::84   10.0.0.149   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 1181       Disabled           Disabled          2243       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::f3   10.0.0.67    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=testDSClient                                                                                                           
	 1482       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                ready   
	                                                            reserved:host                                                                                                                     
	 1841       Disabled           Disabled          28236      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1b   10.0.0.10    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                              
	 3270       Disabled           Disabled          4          reserved:health                                                                                   fd02::50   10.0.0.42    ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-r2t56 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.20 (v1.20.15) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.13 (v1.13.13-a31dc33d)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.202, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 186/65535 (0.28%), Flows/s: 4.49   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-19T19:39:29Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-r2t56 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 323        Disabled           Disabled          4          reserved:health                                                                                   fd02::1d0   10.0.1.148   ready   
	 915        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                 ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                          
	                                                            k8s:node-role.kubernetes.io/master                                                                                                 
	                                                            reserved:host                                                                                                                      
	 2121       Disabled           Disabled          38574      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::133   10.0.1.249   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 2669       Disabled           Disabled          17455      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::192   10.0.1.77    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                                               
	 3027       Disabled           Disabled          2243       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::17c   10.0.1.86    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
19:40:23 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
19:40:23 STEP: Deleting deployment demo_ds.yaml
19:40:24 STEP: Deleting namespace 202403191939k8sdatapathconfigmonitoraggregationchecksthatmonito
19:40:39 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|7d348862_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//420/artifact/7d348862_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//420/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//420/artifact/test_results_Cilium-PR-K8s-1.20-kernel-4.19_420_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19/420/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper (Author)

PR #31498 hit this flake with 91.34% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-2dtrx policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-2dtrx policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000871730>: {
        s: "Cannot retrieve cilium pod cilium-2dtrx policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-2dtrx cilium-vpzfw]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-rqppq              false     false
testds-8tlk2                  false     false
grafana-698dc95f6c-fd7q6      false     false
prometheus-669755c8c5-5tbgx   false     false
coredns-85fbf8f7dd-9k668      false     false
Cilium agent 'cilium-2dtrx': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0
Cilium agent 'cilium-vpzfw': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

08:26:11 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
08:26:11 STEP: Ensuring the namespace kube-system exists
08:26:11 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
08:26:11 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
08:26:11 STEP: Installing Cilium
08:26:12 STEP: Waiting for Cilium to become ready
08:26:54 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-zh8rj in namespace kube-system
08:26:54 STEP: Validating if Kubernetes DNS is deployed
08:26:54 STEP: Checking if deployment is ready
08:26:54 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
08:26:54 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
08:26:55 STEP: Waiting for Kubernetes DNS to become operational
08:26:55 STEP: Checking if deployment is ready
08:26:55 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:26:56 STEP: Checking if deployment is ready
08:26:56 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:26:57 STEP: Checking if deployment is ready
08:26:57 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:26:58 STEP: Checking if deployment is ready
08:26:58 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:26:59 STEP: Checking if deployment is ready
08:26:59 STEP: Checking if kube-dns service is plumbed correctly
08:26:59 STEP: Checking if DNS can resolve
08:26:59 STEP: Checking if pods have identity
08:26:59 STEP: Validating Cilium Installation
08:26:59 STEP: Performing Cilium controllers preflight check
08:26:59 STEP: Performing Cilium health check
08:26:59 STEP: Checking whether host EP regenerated
08:26:59 STEP: Performing Cilium status preflight check
08:27:00 STEP: Performing Cilium service preflight check
08:27:00 STEP: Performing K8s service preflight check
08:27:02 STEP: Waiting for cilium-operator to be ready
08:27:02 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
08:27:02 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
08:27:02 STEP: Making sure all endpoints are in ready state
08:27:08 STEP: Launching cilium monitor on "cilium-vpzfw"
08:27:08 STEP: Creating namespace 202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito
08:27:08 STEP: Deploying demo_ds.yaml in namespace 202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito
08:27:09 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-2dtrx policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc000871730>: {
        s: "Cannot retrieve cilium pod cilium-2dtrx policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-03-20T08:27:19Z====
08:27:19 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
08:27:20 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-25pbv          0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-rqppq                   1/1     Running             0          13s     10.0.1.163      k8s1   <none>           <none>
	 202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-tvdvr                   0/1     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-8tlk2                       2/2     Running             0          13s     10.0.1.190      k8s1   <none>           <none>
	 202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-gpv88                       0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-fd7q6           0/1     Running             0          71s     10.0.0.85       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-5tbgx        1/1     Running             0          71s     10.0.0.94       k8s2   <none>           <none>
	 kube-system                                                       cilium-2dtrx                       1/1     Running             0          70s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-5cbd9d7865-g2fh7   1/1     Running             0          70s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5cbd9d7865-gnb5f   1/1     Running             0          70s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-vpzfw                       1/1     Running             0          70s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-9k668           1/1     Running             0          27s     10.0.0.32       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          4m54s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          4m54s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          4m54s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-2ljxp                   1/1     Running             0          2m7s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-rcg89                   1/1     Running             0          4m33s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m1s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-6wgl7                 1/1     Running             0          87s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-smzrr                 1/1     Running             0          87s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-l4pqf               1/1     Running             0          2m4s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-vl98j               1/1     Running             0          2m4s    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-2dtrx cilium-vpzfw]
cmd: kubectl exec -n kube-system cilium-2dtrx -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.13 (v1.13.13-6f396456)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       33/33 healthy
	 Proxy Status:            OK, ip 10.0.0.159, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 224/65535 (0.34%), Flows/s: 4.54   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-20T08:27:01Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-2dtrx -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 487        Disabled           Disabled          2853       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::82   10.0.0.32    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 1143       Disabled           Disabled          60827      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::e8   10.0.0.205   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1523       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 1539       Disabled           Disabled          27333      k8s:app=prometheus                                                                                                               fd02::64   10.0.0.94    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2101       Disabled           Disabled          26857      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::e0   10.0.0.220   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 3241       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::ef   10.0.0.157   ready   
	 3383       Disabled           Disabled          10797      k8s:app=grafana                                                                                                                  fd02::e2   10.0.0.85    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 3443       Disabled           Disabled          37245      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::97   10.0.0.109   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-vpzfw -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.13 (v1.13.13-6f396456)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.43, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 135/65535 (0.21%), Flows/s: 2.93   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-20T08:27:02Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-vpzfw -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 1187       Disabled           Disabled          60827      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::141   10.0.1.163   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 2438       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 2739       Disabled           Disabled          37245      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::175   10.0.1.190   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 4030       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::123   10.0.1.9     ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
08:28:04 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
08:28:04 STEP: Deleting deployment demo_ds.yaml
08:28:04 STEP: Deleting namespace 202403200827k8sdatapathconfigmonitoraggregationchecksthatmonito
08:28:20 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|b895c48f_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//444/artifact/b895c48f_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//444/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//444/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_444_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/444/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper (Author)

PR #31587 hit this flake with 92.51% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-rzwtb policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-rzwtb policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0003e6a80>: {
        s: "Cannot retrieve cilium pod cilium-rzwtb policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-c745q cilium-rzwtb]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testds-ptmgw                  false     false
grafana-698dc95f6c-wzs7l      false     false
prometheus-669755c8c5-n47xb   false     false
coredns-85fbf8f7dd-p952g      false     false
testclient-sqfd6              false     false
Cilium agent 'cilium-c745q': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-rzwtb': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

10:34:15 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
10:34:15 STEP: Ensuring the namespace kube-system exists
10:34:15 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
10:34:15 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
10:34:15 STEP: Installing Cilium
10:34:16 STEP: Waiting for Cilium to become ready
10:35:03 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-bsw6j in namespace kube-system
10:35:03 STEP: Validating if Kubernetes DNS is deployed
10:35:03 STEP: Checking if deployment is ready
10:35:03 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
10:35:03 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
10:35:03 STEP: Waiting for Kubernetes DNS to become operational
10:35:03 STEP: Checking if deployment is ready
10:35:03 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:35:04 STEP: Checking if deployment is ready
10:35:04 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:35:05 STEP: Checking if deployment is ready
10:35:05 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:35:06 STEP: Checking if deployment is ready
10:35:06 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:35:07 STEP: Checking if deployment is ready
10:35:07 STEP: Checking if kube-dns service is plumbed correctly
10:35:07 STEP: Checking if pods have identity
10:35:07 STEP: Checking if DNS can resolve
10:35:08 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
10:35:08 STEP: Checking if deployment is ready
10:35:08 STEP: Checking if kube-dns service is plumbed correctly
10:35:08 STEP: Checking if pods have identity
10:35:08 STEP: Checking if DNS can resolve
10:35:09 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
10:35:09 STEP: Checking if deployment is ready
10:35:09 STEP: Checking if kube-dns service is plumbed correctly
10:35:09 STEP: Checking if DNS can resolve
10:35:09 STEP: Checking if pods have identity
10:35:10 STEP: Validating Cilium Installation
10:35:10 STEP: Performing Cilium controllers preflight check
10:35:10 STEP: Performing Cilium status preflight check
10:35:10 STEP: Performing Cilium health check
10:35:10 STEP: Checking whether host EP regenerated
10:35:18 STEP: Performing Cilium service preflight check
10:35:18 STEP: Performing K8s service preflight check
10:35:19 STEP: Waiting for cilium-operator to be ready
10:35:19 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
10:35:19 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
10:35:19 STEP: Making sure all endpoints are in ready state
10:35:20 STEP: Launching cilium monitor on "cilium-c745q"
10:35:20 STEP: Creating namespace 202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito
10:35:20 STEP: Deploying demo_ds.yaml in namespace 202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito
10:35:21 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-rzwtb policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0003e6a80>: {
        s: "Cannot retrieve cilium pod cilium-rzwtb policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-03-25T10:35:32Z====
10:35:32 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
10:35:34 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-c762s          0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-lvg6w                   0/1     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-sqfd6                   1/1     Running             0          15s     10.0.1.152      k8s1   <none>           <none>
	 202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-p846d                       0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-ptmgw                       2/2     Running             0          15s     10.0.1.84       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-wzs7l           0/1     ContainerCreating   0          81s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-n47xb        1/1     Running             0          81s     10.0.0.249      k8s2   <none>           <none>
	 kube-system                                                       cilium-c745q                       1/1     Running             0          80s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-54b675776f-c5572   1/1     Running             0          80s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-54b675776f-srm7s   1/1     Running             0          80s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-rzwtb                       1/1     Running             0          80s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-p952g           1/1     Running             0          33s     10.0.0.65       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m33s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m33s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m33s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-2gfn4                   1/1     Running             0          5m6s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-n2fv7                   1/1     Running             0          2m17s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m33s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-297s9                 1/1     Running             0          98s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-q49pz                 1/1     Running             0          98s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-fqzb7               1/1     Running             0          2m15s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-xljgl               1/1     Running             0          2m15s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

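Triage note: several of the test namespace's pods (plus grafana) were still ContainerCreating on k8s2 when AfterFailed ran, which suggests the agent on k8s2 (cilium-rzwtb) was still plumbing endpoints. A quick sketch for digging into a stuck pod, with the namespace and pod name copied from the table above:

# Sketch: inspect a pod stuck in ContainerCreating, then recent events.
ns=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito
kubectl -n "$ns" describe pod testclient-lvg6w | tail -n 20
kubectl -n "$ns" get events --sort-by=.lastTimestamp | tail -n 20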
Fetching command output from pods [cilium-c745q cilium-rzwtb]
cmd: kubectl exec -n kube-system cilium-c745q -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.13 (v1.13.13-6c7adea2)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.203, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 175/65535 (0.27%), Flows/s: 3.03   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-25T10:35:11Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-c745q -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 162        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::19b   10.0.1.141   ready   
	 943        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 1535       Disabled           Disabled          32734      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1b6   10.0.1.84    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3014       Disabled           Disabled          29582      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::116   10.0.1.152   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-rzwtb -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.13 (v1.13.13-6c7adea2)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.218, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 248/65535 (0.38%), Flows/s: 4.94   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-03-25T10:35:19Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-rzwtb -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 23         Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 30         Disabled           Disabled          29582      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::ad   10.0.0.112   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 282        Disabled           Disabled          16508      k8s:app=grafana                                                                                                                  fd02::6a   10.0.0.240   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 512        Disabled           Disabled          16012      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::3e   10.0.0.65    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 888        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::d5   10.0.0.145   ready   
	 1131       Disabled           Disabled          29092      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::b8   10.0.0.66    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 1190       Disabled           Disabled          14753      k8s:app=prometheus                                                                                                               fd02::93   10.0.0.249   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1587       Disabled           Disabled          32734      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::10   10.0.0.117   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
10:36:17 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
10:36:17 STEP: Deleting deployment demo_ds.yaml
10:36:18 STEP: Deleting namespace 202403251035k8sdatapathconfigmonitoraggregationchecksthatmonito
10:36:33 STEP: Running AfterEach for block EntireTestsuite
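On the "Kubernetes DNS is not ready yet: CiliumEndpoint does not exist" loop in the log above: it roughly reduces to checks that can be run by hand. A sketch, with an illustrative probe pod name and image:

# Sketch of the DNS-readiness checks the harness loops on.
kubectl -n kube-system rollout status deployment/coredns --timeout=60s
kubectl -n kube-system get ciliumendpoints.cilium.io   # CEP appears once the agent adopts the pod
kubectl -n kube-system run dns-probe --rm -i --restart=Never \
  --image=busybox:1.36 -- nslookup kubernetes.default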

[[ATTACHMENT|d2322bd4_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//452/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//452/artifact/d2322bd4_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//452/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_452_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/452/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper

PR #31772 hit this flake with 91.80% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-hqt88 policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-hqt88 policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0009e8da0>: {
        s: "Cannot retrieve cilium pod cilium-hqt88 policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-cc98d cilium-hqt88]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-gfszz              false     false
testds-tctl4                  false     false
grafana-698dc95f6c-hwcgn      false     false
prometheus-669755c8c5-f2bzn   false     false
coredns-85fbf8f7dd-g5q5x      false     false
Cilium agent 'cilium-cc98d': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-hqt88': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

23:29:35 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
23:29:35 STEP: Ensuring the namespace kube-system exists
23:29:35 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
23:29:35 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
23:29:35 STEP: Installing Cilium
23:29:36 STEP: Waiting for Cilium to become ready
23:30:20 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-nkcwp in namespace kube-system
23:30:21 STEP: Validating if Kubernetes DNS is deployed
23:30:21 STEP: Checking if deployment is ready
23:30:21 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
23:30:21 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
23:30:21 STEP: Waiting for Kubernetes DNS to become operational
23:30:21 STEP: Checking if deployment is ready
23:30:21 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
23:30:22 STEP: Checking if deployment is ready
23:30:22 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
23:30:23 STEP: Checking if deployment is ready
23:30:23 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
23:30:24 STEP: Checking if deployment is ready
23:30:24 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
23:30:25 STEP: Checking if deployment is ready
23:30:25 STEP: Checking if kube-dns service is plumbed correctly
23:30:25 STEP: Checking if pods have identity
23:30:25 STEP: Checking if DNS can resolve
23:30:26 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
23:30:26 STEP: Checking if deployment is ready
23:30:26 STEP: Checking if kube-dns service is plumbed correctly
23:30:26 STEP: Checking if pods have identity
23:30:26 STEP: Checking if DNS can resolve
23:30:26 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
23:30:27 STEP: Checking if deployment is ready
23:30:27 STEP: Checking if kube-dns service is plumbed correctly
23:30:27 STEP: Checking if pods have identity
23:30:27 STEP: Checking if DNS can resolve
23:30:27 STEP: Validating Cilium Installation
23:30:27 STEP: Performing Cilium controllers preflight check
23:30:27 STEP: Performing Cilium health check
23:30:27 STEP: Performing Cilium status preflight check
23:30:27 STEP: Checking whether host EP regenerated
23:30:36 STEP: Performing Cilium service preflight check
23:30:36 STEP: Performing K8s service preflight check
23:30:36 STEP: Waiting for cilium-operator to be ready
23:30:36 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
23:30:36 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
23:30:36 STEP: Making sure all endpoints are in ready state
23:30:38 STEP: Launching cilium monitor on "cilium-cc98d"
23:30:38 STEP: Creating namespace 202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito
23:30:38 STEP: Deploying demo_ds.yaml in namespace 202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito
23:30:39 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-hqt88 policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc0009e8da0>: {
        s: "Cannot retrieve cilium pod cilium-hqt88 policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-04-04T23:30:49Z====
23:30:49 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
23:30:51 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-nd6ks          0/2     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-4znxg                   0/1     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-gfszz                   1/1     Running             0          14s     10.0.1.96       k8s1   <none>           <none>
	 202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-tctl4                       2/2     Running             0          14s     10.0.1.56       k8s1   <none>           <none>
	 202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-xscnw                       0/2     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-hwcgn           0/1     Running             0          78s     10.0.0.174      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-f2bzn        1/1     Running             0          78s     10.0.0.191      k8s2   <none>           <none>
	 kube-system                                                       cilium-cc98d                       1/1     Running             0          77s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-hqt88                       1/1     Running             0          77s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-545894d9bd-fbzpc   1/1     Running             0          77s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-545894d9bd-hmlnz   1/1     Running             0          77s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-g5q5x           1/1     Running             0          32s     10.0.0.235      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m34s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m34s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m34s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-mq2kc                   1/1     Running             0          5m8s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-z7jqb                   1/1     Running             0          2m16s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m34s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-kxqzq                 1/1     Running             0          96s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-prjfb                 1/1     Running             0          96s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-nb2kx               1/1     Running             0          2m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-trllm               1/1     Running             0          2m13s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-cc98d cilium-hqt88]
cmd: kubectl exec -n kube-system cilium-cc98d -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.14 (v1.13.14-ff026658)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.88, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 151/65535 (0.23%), Flows/s: 2.12   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-04-04T23:30:29Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-cc98d -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 846        Disabled           Disabled          13914      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::149   10.0.1.96   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1090       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                               
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 1221       Disabled           Disabled          30559      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1ec   10.0.1.56   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 2021       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::13a   10.0.1.24   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-hqt88 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.14 (v1.13.14-ff026658)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.211, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 237/65535 (0.36%), Flows/s: 4.09   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-04-04T23:30:36Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-hqt88 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 602        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1d   10.0.0.28    ready   
	 735        Disabled           Disabled          49854      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::8d   10.0.0.235   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 899        Disabled           Disabled          33657      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::61   10.0.0.163   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 1118       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 1478       Disabled           Disabled          30195      k8s:app=grafana                                                                                                                  fd02::34   10.0.0.174   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1677       Disabled           Disabled          6772       k8s:app=prometheus                                                                                                               fd02::66   10.0.0.191   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2467       Disabled           Disabled          30559      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::b9   10.0.0.226   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 3804       Disabled           Disabled          13914      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::ed   10.0.0.208   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
23:31:33 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
23:31:33 STEP: Deleting deployment demo_ds.yaml
23:31:33 STEP: Deleting namespace 202404042330k8sdatapathconfigmonitoraggregationchecksthatmonito
23:31:48 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|0dbf3025_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//471/artifact/0dbf3025_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//471/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//471/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_471_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/471/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper

PR #32887 hit this flake with 90.95% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-qq2bs policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-qq2bs policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0005a8680>: {
        msg: "Cannot retrieve cilium pod cilium-qq2bs policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc00090d350>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to get node resource
Waiting for k8s node information
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-g4j8v cilium-qq2bs]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
grafana-698dc95f6c-jxtxs      false     false
prometheus-669755c8c5-jxwx6   false     false
coredns-85fbf8f7dd-6shb8      false     false
testclient-6hr6d              false     false
testds-5jzrq                  false     false
Cilium agent 'cilium-g4j8v': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-qq2bs': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
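The error/warning counters in these reports are plain scans of the agent logs; to reproduce them against a live agent, a sketch (pod name taken from this run; the msg= extraction assumes the default logrus text format):

# Sketch: count warnings and list the top recurring messages.
kubectl -n kube-system logs cilium-qq2bs -c cilium-agent | grep -c 'level=warning'
kubectl -n kube-system logs cilium-qq2bs -c cilium-agent \
  | grep 'level=warning' \
  | sed 's/.*msg="\([^"]*\)".*/\1/' \
  | sort | uniq -c | sort -rn | head -n 3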


Standard Error

16:24:06 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
16:24:06 STEP: Ensuring the namespace kube-system exists
16:24:06 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
16:24:06 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
16:24:06 STEP: Installing Cilium
16:24:07 STEP: Waiting for Cilium to become ready
16:24:51 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-8hmwh in namespace kube-system
16:24:51 STEP: Validating if Kubernetes DNS is deployed
16:24:51 STEP: Checking if deployment is ready
16:24:51 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
16:24:51 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
16:24:52 STEP: Waiting for Kubernetes DNS to become operational
16:24:52 STEP: Checking if deployment is ready
16:24:52 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:24:53 STEP: Checking if deployment is ready
16:24:53 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:24:54 STEP: Checking if deployment is ready
16:24:54 STEP: Checking if kube-dns service is plumbed correctly
16:24:54 STEP: Checking if pods have identity
16:24:54 STEP: Checking if DNS can resolve
16:24:54 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
16:24:55 STEP: Checking if deployment is ready
16:24:55 STEP: Checking if kube-dns service is plumbed correctly
16:24:55 STEP: Checking if pods have identity
16:24:55 STEP: Checking if DNS can resolve
16:24:55 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
16:24:56 STEP: Checking if deployment is ready
16:24:56 STEP: Checking if kube-dns service is plumbed correctly
16:24:56 STEP: Checking if pods have identity
16:24:56 STEP: Checking if DNS can resolve
16:24:56 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
16:24:57 STEP: Checking if deployment is ready
16:24:57 STEP: Checking if kube-dns service is plumbed correctly
16:24:57 STEP: Checking if pods have identity
16:24:57 STEP: Checking if DNS can resolve
16:24:57 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
16:24:58 STEP: Checking if deployment is ready
16:24:58 STEP: Checking if kube-dns service is plumbed correctly
16:24:58 STEP: Checking if pods have identity
16:24:58 STEP: Checking if DNS can resolve
16:24:58 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
16:24:59 STEP: Checking if deployment is ready
16:24:59 STEP: Checking if kube-dns service is plumbed correctly
16:24:59 STEP: Checking if pods have identity
16:24:59 STEP: Checking if DNS can resolve
16:24:59 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
16:25:00 STEP: Checking if deployment is ready
16:25:00 STEP: Checking if kube-dns service is plumbed correctly
16:25:00 STEP: Checking if pods have identity
16:25:00 STEP: Checking if DNS can resolve
16:25:00 STEP: Validating Cilium Installation
16:25:00 STEP: Performing Cilium controllers preflight check
16:25:00 STEP: Performing Cilium status preflight check
16:25:00 STEP: Performing Cilium health check
16:25:00 STEP: Checking whether host EP regenerated
16:25:01 STEP: Performing Cilium service preflight check
16:25:01 STEP: Performing K8s service preflight check
16:25:03 STEP: Waiting for cilium-operator to be ready
16:25:03 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
16:25:03 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
16:25:03 STEP: Making sure all endpoints are in ready state
16:25:04 STEP: Launching cilium monitor on "cilium-g4j8v"
16:25:04 STEP: Creating namespace 202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito
16:25:04 STEP: Deploying demo_ds.yaml in namespace 202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito
16:25:05 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-qq2bs policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0005a8680>: {
        msg: "Cannot retrieve cilium pod cilium-qq2bs policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc00090d350>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-06-05T16:25:16Z====
16:25:16 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
16:25:20 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                              READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-2fb2s         0/2     ContainerCreating   0          17s     <none>          k8s2   <none>           <none>
	 202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-6hr6d                  1/1     Running             0          17s     10.0.1.105      k8s1   <none>           <none>
	 202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-gh85s                  0/1     ContainerCreating   0          17s     <none>          k8s2   <none>           <none>
	 202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-5jzrq                      2/2     Running             0          17s     10.0.1.50       k8s1   <none>           <none>
	 202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-nkv8l                      0/2     ContainerCreating   0          17s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-jxtxs          0/1     Running             0          76s     10.0.0.139      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-jxwx6       0/1     ContainerCreating   0          76s     <none>          k8s2   <none>           <none>
	 kube-system                                                       cilium-g4j8v                      1/1     Running             0          75s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-94d5fd5d9-24glw   1/1     Running             0          75s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-94d5fd5d9-7m5cr   1/1     Running             0          75s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-qq2bs                      1/1     Running             0          75s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-6shb8          1/1     Running             0          30s     10.0.0.123      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                         1/1     Running             0          5m55s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1               1/1     Running             0          5m55s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1      1/1     Running             0          5m55s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-st2xr                  1/1     Running             0          2m13s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-twmxv                  1/1     Running             0          5m32s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1               1/1     Running             0          5m55s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-rhvtr                1/1     Running             0          92s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-v8jl6                1/1     Running             0          92s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-2wxrf              1/1     Running             0          2m9s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-xj9bg              1/1     Running             0          2m9s    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-g4j8v cilium-qq2bs]
cmd: kubectl exec -n kube-system cilium-g4j8v -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.16 (v1.13.16-0bd4b482)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.182, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 164/65535 (0.25%), Flows/s: 3.37   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-05T16:25:02Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-g4j8v -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 239        Disabled           Disabled          2392       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1dd   10.0.1.105   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 336        Disabled           Disabled          26263      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1a4   10.0.1.50    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 515        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 878        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::119   10.0.1.178   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-qq2bs -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.16 (v1.13.16-0bd4b482)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.19, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 208/65535 (0.32%), Flows/s: 3.43   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-05T16:25:03Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-qq2bs -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 2          Disabled           Disabled          2392       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::f5   10.0.0.30    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1697       Disabled           Disabled          26263      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::49   10.0.0.226   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1698       Disabled           Disabled          824        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::f0   10.0.0.244   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 2163       Disabled           Disabled          49247      k8s:app=grafana                                                                                                                  fd02::ac   10.0.0.139   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2494       Disabled           Disabled          39668      k8s:app=prometheus                                                                                                               fd02::48   10.0.0.171   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 3501       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 3700       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::b1   10.0.0.92    ready   
	 3890       Disabled           Disabled          43464      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::8d   10.0.0.123   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
16:26:01 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
16:26:01 STEP: Deleting deployment demo_ds.yaml
16:26:02 STEP: Deleting namespace 202406051625k8sdatapathconfigmonitoraggregationchecksthatmonito
16:26:17 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|0c2e5bff_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//570/artifact/0c2e5bff_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//570/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//570/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_570_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/570/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper

PR #32926 hit this flake with 90.53% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-47tzb policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-47tzb policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0000bf800>: {
        msg: "Cannot retrieve cilium pod cilium-47tzb policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc00096c550>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Cilium pods: [cilium-47tzb cilium-jklfd]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-5l8vr              false     false
testds-phx9g                  false     false
grafana-698dc95f6c-w4rg4      false     false
prometheus-669755c8c5-848q5   false     false
coredns-85fbf8f7dd-x2wlj      false     false
test-k8s2-794579c97-xt2dd     false     false
Cilium agent 'cilium-47tzb': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-jklfd': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

13:14:47 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
13:14:47 STEP: Ensuring the namespace kube-system exists
13:14:47 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:14:47 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:14:47 STEP: Installing Cilium
13:14:48 STEP: Waiting for Cilium to become ready
13:15:35 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-q2dz8 in namespace kube-system
13:15:35 STEP: Validating if Kubernetes DNS is deployed
13:15:35 STEP: Checking if deployment is ready
13:15:35 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
13:15:35 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
13:15:35 STEP: Waiting for Kubernetes DNS to become operational
13:15:35 STEP: Checking if deployment is ready
13:15:35 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:15:36 STEP: Checking if deployment is ready
13:15:36 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:15:37 STEP: Checking if deployment is ready
13:15:37 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:15:38 STEP: Checking if deployment is ready
13:15:38 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:15:39 STEP: Checking if deployment is ready
13:15:39 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:15:40 STEP: Checking if deployment is ready
13:15:40 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:15:41 STEP: Checking if deployment is ready
13:15:41 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:15:42 STEP: Checking if deployment is ready
13:15:42 STEP: Checking if kube-dns service is plumbed correctly
13:15:42 STEP: Checking if pods have identity
13:15:42 STEP: Checking if DNS can resolve
13:15:43 STEP: Validating Cilium Installation
13:15:43 STEP: Performing Cilium controllers preflight check
13:15:43 STEP: Performing Cilium status preflight check
13:15:43 STEP: Performing Cilium health check
13:15:43 STEP: Checking whether host EP regenerated
13:15:44 STEP: Performing Cilium service preflight check
13:15:44 STEP: Performing K8s service preflight check
13:15:45 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-jklfd': controller ipcache-inject-labels is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Disabled   
	 Host firewall:          Disabled
	 CNI Chaining:           none
	 CNI Config file:        CNI configuration file management disabled
	 Cilium:                 Ok   1.13.16 (v1.13.16-24c37b4e)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 2/254 allocated from 10.0.1.0/24, IPv6: 2/254 allocated from fd02::100/120
	 IPv6 BIG TCP:           Disabled
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      18/18 healthy
	   Name                                 Last success   Last error   Count   Message
	   cilium-health-ep                     8s ago         never        0       no error   
	   dns-garbage-collector-job            13s ago        never        0       no error   
	   endpoint-782-regeneration-recovery   never          never        0       no error   
	   endpoint-936-regeneration-recovery   never          never        0       no error   
	   endpoint-gc                          13s ago        never        0       no error   
	   ipcache-inject-labels                3s ago         11s ago      0       no error   
	   k8s-heartbeat                        13s ago        never        0       no error   
	   link-cache                           9s ago         never        0       no error   
	   metricsmap-bpf-prom-sync             3s ago         never        0       no error   
	   resolve-identity-782                 10s ago        never        0       no error   
	   resolve-identity-936                 8s ago         never        0       no error   
	   sync-endpoints-and-host-ips          10s ago        never        0       no error   
	   sync-lb-maps-with-k8s-services       10s ago        never        0       no error   
	   sync-policymap-782                   8s ago         never        0       no error   
	   sync-policymap-936                   4s ago         never        0       no error   
	   sync-to-k8s-ciliumendpoint (782)     0s ago         never        0       no error   
	   sync-to-k8s-ciliumendpoint (936)     8s ago         never        0       no error   
	   template-dir-watcher                 never          never        0       no error   
	 Proxy Status:            OK, ip 10.0.1.247, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 84/65535 (0.13%), Flows/s: 6.45   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          0/2 reachable   (2024-06-06T13:15:35Z)
	   Name                   IP              Node        Endpoints
	   k8s2 (localhost)       192.168.56.12   reachable   unreachable
	   k8s1                   192.168.56.11   reachable   unreachable
	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 

13:15:45 STEP: Performing Cilium status preflight check
13:15:45 STEP: Performing Cilium health check
13:15:45 STEP: Performing Cilium controllers preflight check
13:15:45 STEP: Checking whether host EP regenerated
13:15:53 STEP: Performing Cilium service preflight check
13:15:53 STEP: Performing K8s service preflight check
13:15:54 STEP: Waiting for cilium-operator to be ready
13:15:54 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:15:54 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:15:54 STEP: Making sure all endpoints are in ready state
13:15:55 STEP: Launching cilium monitor on "cilium-47tzb"
13:15:55 STEP: Creating namespace 202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito
13:15:56 STEP: Deploying demo_ds.yaml in namespace 202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito
13:15:57 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-47tzb policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0000bf800>: {
        msg: "Cannot retrieve cilium pod cilium-47tzb policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc00096c550>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-06-06T13:16:07Z====
13:16:07 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
13:16:07 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-xt2dd          2/2     Running             0          13s     10.0.1.114      k8s2   <none>           <none>
	 202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-5l8vr                   1/1     Running             0          13s     10.0.1.196      k8s2   <none>           <none>
	 202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-hr2j6                   0/1     ContainerCreating   0          13s     <none>          k8s1   <none>           <none>
	 202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-59nsd                       0/2     ContainerCreating   0          13s     <none>          k8s1   <none>           <none>
	 202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-phx9g                       2/2     Running             0          13s     10.0.1.92       k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-w4rg4           0/1     Running             0          82s     10.0.0.72       k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-848q5        1/1     Running             0          82s     10.0.0.168      k8s1   <none>           <none>
	 kube-system                                                       cilium-47tzb                       1/1     Running             0          81s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-jklfd                       1/1     Running             0          81s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-5dcdfc7c55-k5ktp   1/1     Running             0          81s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5dcdfc7c55-zrx2p   1/1     Running             0          81s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-x2wlj           1/1     Running             0          34s     10.0.0.148      k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          6m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          6m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          6m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-447mp                   1/1     Running             0          5m44s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-xf98t                   1/1     Running             0          2m20s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          6m13s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-2hbck                 1/1     Running             0          99s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-czwfr                 1/1     Running             0          99s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-hgzxr               1/1     Running             0          2m17s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-v67hf               1/1     Running             0          2m17s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-47tzb cilium-jklfd]
cmd: kubectl exec -n kube-system cilium-47tzb -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.16 (v1.13.16-24c37b4e)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.84, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 251/65535 (0.38%), Flows/s: 4.75   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-06T13:15:53Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-47tzb -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 68         Disabled           Disabled          40168      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::87   10.0.0.103   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 114        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::b9   10.0.0.78    ready   
	 129        Disabled           Disabled          13041      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::69   10.0.0.148   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 573        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                               
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 1463       Disabled           Disabled          8756       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::4d   10.0.0.163   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 2741       Disabled           Disabled          26778      k8s:app=prometheus                                                                                                               fd02::2    10.0.0.168   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2858       Disabled           Disabled          50584      k8s:app=grafana                                                                                                                  fd02::5b   10.0.0.72    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-jklfd -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.16 (v1.13.16-24c37b4e)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.247, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 195/65535 (0.30%), Flows/s: 3.21   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-06T13:15:54Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-jklfd -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 782        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            reserved:host                                                                                                                                                     
	 936        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1a5   10.0.1.138   ready   
	 1798       Disabled           Disabled          40168      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::12b   10.0.1.92    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 2344       Disabled           Disabled          8756       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::12d   10.0.1.196   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 2940       Disabled           Disabled          23189      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::10a   10.0.1.114   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                                                              
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:16:49 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
13:16:49 STEP: Deleting deployment demo_ds.yaml
13:16:50 STEP: Deleting namespace 202406061315k8sdatapathconfigmonitoraggregationchecksthatmonito
13:17:04 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|67f4fedc_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//580/artifact/67f4fedc_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//580/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//580/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_580_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/580/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
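One detail worth noting from the run above: during "Validating Cilium Installation", the preflight check transiently reported controller ipcache-inject-labels as failing on cilium-jklfd before recovering, and the policy-revision query failed shortly afterwards; whether the two are related is not established here. When triaging from a live cluster, dumping the full controller table is a quick way to spot a stalled controller (a sketch; the pod name is from this run):

    # --all-controllers prints every controller with last-success/last-error
    # timestamps and failure counts, as in the preflight output above:
    kubectl -n kube-system exec cilium-jklfd -c cilium-agent -- \
      cilium status --all-controllers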

@maintainer-s-little-helper

PR #32926 hit this flake with 90.67% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-fsbvd policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-fsbvd policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc002098560>: {
        msg: "Cannot retrieve cilium pod cilium-fsbvd policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc0002ebff0>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Cilium pods: [cilium-9xgt8 cilium-fsbvd]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-wqkwt              false     false
testds-dmgqq                  false     false
grafana-7ddfc74b5b-dt8qd      false     false
prometheus-669755c8c5-vcl48   false     false
coredns-bb76b858c-9txjm       false     false
Cilium agent 'cilium-9xgt8': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-fsbvd': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

Click to show.
13:11:46 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
13:11:46 STEP: Ensuring the namespace kube-system exists
13:11:46 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:11:47 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:11:47 STEP: Installing Cilium
13:11:48 STEP: Waiting for Cilium to become ready
13:12:52 STEP: Restarting unmanaged pods coredns-bb76b858c-8p4r7 in namespace kube-system
13:12:58 STEP: Validating if Kubernetes DNS is deployed
13:12:58 STEP: Checking if deployment is ready
13:12:58 STEP: Checking if kube-dns service is plumbed correctly
13:12:58 STEP: Checking if pods have identity
13:12:58 STEP: Checking if DNS can resolve
13:12:58 STEP: Kubernetes DNS is up and operational
13:12:58 STEP: Validating Cilium Installation
13:12:58 STEP: Performing Cilium controllers preflight check
13:12:58 STEP: Performing Cilium status preflight check
13:12:58 STEP: Performing Cilium health check
13:12:58 STEP: Checking whether host EP regenerated
13:13:00 STEP: Performing Cilium service preflight check
13:13:00 STEP: Performing K8s service preflight check
13:13:05 STEP: Waiting for cilium-operator to be ready
13:13:05 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:13:05 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:13:05 STEP: Making sure all endpoints are in ready state
13:13:06 STEP: Launching cilium monitor on "cilium-9xgt8"
13:13:06 STEP: Creating namespace 202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito
13:13:06 STEP: Deploying demo_ds.yaml in namespace 202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito
13:13:07 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-fsbvd policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc002098560>: {
        msg: "Cannot retrieve cilium pod cilium-fsbvd policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc0002ebff0>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-06-06T13:13:18Z====
13:13:18 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
13:13:21 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-5ffdc78d54-tgjbw         0/2     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-bmlbw                   0/1     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-wqkwt                   1/1     Running             0          16s     10.0.1.133      k8s1   <none>           <none>
	 202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-dmgqq                       2/2     Running             0          16s     10.0.1.38       k8s1   <none>           <none>
	 202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-j7svb                       0/2     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-7ddfc74b5b-dt8qd           0/1     Running             0          97s     10.0.0.144      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-vcl48        1/1     Running             0          97s     10.0.0.47       k8s2   <none>           <none>
	 kube-system                                                       cilium-9xgt8                       1/1     Running             0          96s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-fsbvd                       1/1     Running             0          96s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-7f5b4f4d8d-9kfkt   1/1     Running             0          95s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-7f5b4f4d8d-ntr9w   1/1     Running             0          96s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-bb76b858c-9txjm            1/1     Running             0          31s     10.0.1.117      k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          6m3s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          6m3s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          6m3s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-n9r5n                   1/1     Running             0          2m35s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-vpxds                   1/1     Running             0          5m43s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          6m3s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-97zsv                 1/1     Running             0          114s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-fpq8v                 1/1     Running             0          114s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-gt4nj               1/1     Running             0          2m32s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-n794m               1/1     Running             0          2m32s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-9xgt8 cilium-fsbvd]
cmd: kubectl exec -n kube-system cilium-9xgt8 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.20 (v1.20.15) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.16 (v1.13.16-24c37b4e)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.37, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 216/65535 (0.33%), Flows/s: 2.82   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-06T13:13:00Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-9xgt8 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 448        Disabled           Disabled          17286      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::185   10.0.1.133   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 934        Disabled           Disabled          4692       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1e7   10.0.1.117   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                                               
	 1130       Disabled           Disabled          4          reserved:health                                                                                   fd02::192   10.0.1.89    ready   
	 2204       Disabled           Disabled          59691      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1d9   10.0.1.38    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 3856       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                 ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                          
	                                                            k8s:node-role.kubernetes.io/master                                                                                                 
	                                                            reserved:host                                                                                                                      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-fsbvd -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.20 (v1.20.15) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.16 (v1.13.16-24c37b4e)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.0.7, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 221/65535 (0.34%), Flows/s: 2.88   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-06T13:13:05Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-fsbvd -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                            
	 263        Disabled           Disabled          59691      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::ed   10.0.0.131   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=testDS                                                                                                                 
	 619        Disabled           Disabled          2091       k8s:app=grafana                                                                                   fd02::5e   10.0.0.144   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 1814       Disabled           Disabled          17286      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::92   10.0.0.236   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=testDSClient                                                                                                           
	 1875       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                ready   
	                                                            reserved:host                                                                                                                     
	 1973       Disabled           Disabled          26269      k8s:app=prometheus                                                                                fd02::df   10.0.0.47    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                            
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 2069       Disabled           Disabled          4          reserved:health                                                                                   fd02::cd   10.0.0.240   ready   
	 3270       Disabled           Disabled          34459      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::ee   10.0.0.170   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                              
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:14:02 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
13:14:02 STEP: Deleting deployment demo_ds.yaml
13:14:03 STEP: Deleting namespace 202406061313k8sdatapathconfigmonitoraggregationchecksthatmonito
13:14:18 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|bc7816cb_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//539/artifact/7a15f95c_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//539/artifact/bc7816cb_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//539/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//539/artifact/test_results_Cilium-PR-K8s-1.20-kernel-4.19_539_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19/539/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
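
Since the revision query races agent readiness, one mitigation sketch (not the harness's actual helper; the loop bounds and jq parsing are illustrative assumptions) is to poll until the agent reports a revision rather than failing on the first empty response:

POD=cilium-fsbvd   # agent pod from the failure output above; substitute yours
# Poll for up to ~2 minutes; each iteration asks the agent for its
# policy state and extracts the revision field.
for i in $(seq 1 60); do
  rev=$(kubectl -n kube-system exec "$POD" -c cilium-agent -- \
        cilium policy get -o json 2>/dev/null | jq -r '.revision')
  [ -n "$rev" ] && [ "$rev" != "null" ] && break
  sleep 2
done
echo "policy revision: ${rev:-unavailable}"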

@maintainer-s-little-helper

PR #32966 hit this flake with 90.53% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-dmg6t policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-dmg6t policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0006425e0>: {
        msg: "Cannot retrieve cilium pod cilium-dmg6t policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000400a50>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 6
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 4 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Unable to update CiliumNode resource, will retry
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-dmg6t cilium-gcx2b]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-rpq4k              false     false
testds-cd9ff                  false     false
grafana-698dc95f6c-2drt5      false     false
prometheus-669755c8c5-bxzt5   false     false
coredns-85fbf8f7dd-jn6qb      false     false
Cilium agent 'cilium-dmg6t': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-gcx2b': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

Click to show.
21:44:00 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
21:44:00 STEP: Ensuring the namespace kube-system exists
21:44:00 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
21:44:00 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
21:44:00 STEP: Installing Cilium
21:44:01 STEP: Waiting for Cilium to become ready
21:44:46 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-nl6f4 in namespace kube-system
21:44:46 STEP: Validating if Kubernetes DNS is deployed
21:44:46 STEP: Checking if deployment is ready
21:44:46 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
21:44:46 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
21:44:47 STEP: Waiting for Kubernetes DNS to become operational
21:44:47 STEP: Checking if deployment is ready
21:44:47 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
21:44:48 STEP: Checking if deployment is ready
21:44:48 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
21:44:49 STEP: Checking if deployment is ready
21:44:49 STEP: Checking if kube-dns service is plumbed correctly
21:44:49 STEP: Checking if pods have identity
21:44:49 STEP: Checking if DNS can resolve
21:44:49 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
21:44:50 STEP: Checking if deployment is ready
21:44:50 STEP: Checking if kube-dns service is plumbed correctly
21:44:50 STEP: Checking if pods have identity
21:44:50 STEP: Checking if DNS can resolve
21:44:50 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
21:44:51 STEP: Checking if deployment is ready
21:44:51 STEP: Checking if kube-dns service is plumbed correctly
21:44:51 STEP: Checking if pods have identity
21:44:51 STEP: Checking if DNS can resolve
21:44:51 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
21:44:52 STEP: Checking if deployment is ready
21:44:52 STEP: Checking if kube-dns service is plumbed correctly
21:44:52 STEP: Checking if pods have identity
21:44:52 STEP: Checking if DNS can resolve
21:44:52 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
21:44:53 STEP: Checking if deployment is ready
21:44:53 STEP: Checking if kube-dns service is plumbed correctly
21:44:53 STEP: Checking if pods have identity
21:44:53 STEP: Checking if DNS can resolve
21:44:53 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
21:44:54 STEP: Checking if deployment is ready
21:44:54 STEP: Checking if kube-dns service is plumbed correctly
21:44:54 STEP: Checking if pods have identity
21:44:54 STEP: Checking if DNS can resolve
21:44:54 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
21:44:55 STEP: Checking if deployment is ready
21:44:55 STEP: Checking if kube-dns service is plumbed correctly
21:44:55 STEP: Checking if pods have identity
21:44:55 STEP: Checking if DNS can resolve
21:44:55 STEP: Validating Cilium Installation
21:44:55 STEP: Performing Cilium controllers preflight check
21:44:55 STEP: Performing Cilium status preflight check
21:44:55 STEP: Performing Cilium health check
21:44:55 STEP: Checking whether host EP regenerated
21:44:56 STEP: Performing Cilium service preflight check
21:44:56 STEP: Performing K8s service preflight check
21:44:58 STEP: Waiting for cilium-operator to be ready
21:44:58 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
21:44:58 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
21:44:58 STEP: Making sure all endpoints are in ready state
21:45:05 STEP: Launching cilium monitor on "cilium-gcx2b"
21:45:05 STEP: Creating namespace 202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito
21:45:05 STEP: Deploying demo_ds.yaml in namespace 202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito
21:45:06 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-dmg6t policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0006425e0>: {
        msg: "Cannot retrieve cilium pod cilium-dmg6t policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000400a50>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-06-07T21:45:16Z====
21:45:16 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
21:45:17 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-hl4sw          0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-qq8xl                   0/1     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-rpq4k                   1/1     Running             0          13s     10.0.1.85       k8s1   <none>           <none>
	 202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-cd9ff                       2/2     Running             0          13s     10.0.1.202      k8s1   <none>           <none>
	 202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-nq9np                       0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-2drt5           0/1     Running             0          79s     10.0.0.146      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-bxzt5        1/1     Running             0          79s     10.0.0.237      k8s2   <none>           <none>
	 kube-system                                                       cilium-dmg6t                       1/1     Running             0          78s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-gcx2b                       1/1     Running             0          78s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-7997bc5fd7-4l2fx   1/1     Running             0          78s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-7997bc5fd7-s7489   1/1     Running             0          78s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-jn6qb           1/1     Running             0          32s     10.0.0.14       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m34s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m34s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m34s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-l6kgk                   1/1     Running             0          2m15s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-v5k9m                   1/1     Running             0          5m8s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m34s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-d5jds                 1/1     Running             0          96s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-m6g8d                 1/1     Running             0          96s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-dvzl6               1/1     Running             0          2m12s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-kchfq               1/1     Running             0          2m12s   192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-dmg6t cilium-gcx2b]
cmd: kubectl exec -n kube-system cilium-dmg6t -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.16 (v1.13.16-e869c112)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       33/33 healthy
	 Proxy Status:            OK, ip 10.0.0.161, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 284/65535 (0.43%), Flows/s: 4.58   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-07T21:44:56Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-dmg6t -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 69         Disabled           Disabled          10205      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::6b   10.0.0.108   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 162        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 174        Disabled           Disabled          7115       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::b1   10.0.0.164   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 571        Disabled           Disabled          45247      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::50   10.0.0.14    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 594        Disabled           Disabled          2516       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::e5   10.0.0.176   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 757        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::be   10.0.0.241   ready   
	 1401       Disabled           Disabled          27729      k8s:app=prometheus                                                                                                               fd02::1b   10.0.0.237   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1699       Disabled           Disabled          62390      k8s:app=grafana                                                                                                                  fd02::27   10.0.0.146   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-gcx2b -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.16 (v1.13.16-e869c112)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.91, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 193/65535 (0.29%), Flows/s: 3.38   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-07T21:44:58Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-gcx2b -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 14         Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 159        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1fb   10.0.1.68    ready   
	 817        Disabled           Disabled          2516       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1f2   10.0.1.85    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 1682       Disabled           Disabled          7115       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1cc   10.0.1.202   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
21:45:58 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
21:45:58 STEP: Deleting deployment demo_ds.yaml
21:45:59 STEP: Deleting namespace 202406072145k8sdatapathconfigmonitoraggregationchecksthatmonito
21:46:14 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|df10e890_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//592/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//592/artifact/df10e890_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//592/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_592_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/592/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
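
A pattern common to these dumps: the policy is applied while pods on k8s2 are still in ContainerCreating and one agent is still converging (26 controllers versus 30 on its peer), so the revision query lands on an agent that is not fully ready. A hedged pre-check before applying policy (the k8s-app=cilium label matches the standard Cilium DaemonSet; adjust for your deployment):

# Block until every agent pod is Ready, then confirm the agent itself
# reports OK before applying any CiliumNetworkPolicy.
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=cilium --timeout=120s
kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium status --brief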

@maintainer-s-little-helper

PR #33112 hit this flake with 92.33% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-8wzk5 policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-8wzk5 policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000988ac0>: {
        msg: "Cannot retrieve cilium pod cilium-8wzk5 policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc0005fc6a0>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-8wzk5 cilium-rlr8x]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-669755c8c5-wc4pr   false     false
coredns-85fbf8f7dd-49mw7      false     false
testclient-q8kh5              false     false
testds-ch6g7                  false     false
grafana-698dc95f6c-kjmfw      false     false
Cilium agent 'cilium-8wzk5': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-rlr8x': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

Click to show.
08:00:24 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
08:00:24 STEP: Ensuring the namespace kube-system exists
08:00:24 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
08:00:24 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
08:00:24 STEP: Installing Cilium
08:00:25 STEP: Waiting for Cilium to become ready
08:01:08 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-44sjx in namespace kube-system
08:01:08 STEP: Validating if Kubernetes DNS is deployed
08:01:08 STEP: Checking if deployment is ready
08:01:08 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
08:01:08 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
08:01:08 STEP: Waiting for Kubernetes DNS to become operational
08:01:08 STEP: Checking if deployment is ready
08:01:08 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:01:09 STEP: Checking if deployment is ready
08:01:09 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:01:10 STEP: Checking if deployment is ready
08:01:10 STEP: Checking if kube-dns service is plumbed correctly
08:01:10 STEP: Checking if pods have identity
08:01:10 STEP: Checking if DNS can resolve
08:01:11 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
08:01:11 STEP: Checking if deployment is ready
08:01:11 STEP: Checking if kube-dns service is plumbed correctly
08:01:11 STEP: Checking if pods have identity
08:01:11 STEP: Checking if DNS can resolve
08:01:12 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
08:01:12 STEP: Checking if deployment is ready
08:01:12 STEP: Checking if kube-dns service is plumbed correctly
08:01:12 STEP: Checking if pods have identity
08:01:12 STEP: Checking if DNS can resolve
08:01:13 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
08:01:13 STEP: Checking if deployment is ready
08:01:13 STEP: Checking if kube-dns service is plumbed correctly
08:01:13 STEP: Checking if pods have identity
08:01:13 STEP: Checking if DNS can resolve
08:01:14 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
08:01:14 STEP: Checking if deployment is ready
08:01:14 STEP: Checking if kube-dns service is plumbed correctly
08:01:14 STEP: Checking if DNS can resolve
08:01:14 STEP: Checking if pods have identity
08:01:15 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
08:01:15 STEP: Checking if deployment is ready
08:01:15 STEP: Checking if kube-dns service is plumbed correctly
08:01:15 STEP: Checking if pods have identity
08:01:15 STEP: Checking if DNS can resolve
08:01:16 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
08:01:16 STEP: Checking if deployment is ready
08:01:16 STEP: Checking if kube-dns service is plumbed correctly
08:01:16 STEP: Checking if pods have identity
08:01:16 STEP: Checking if DNS can resolve
08:01:17 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
08:01:17 STEP: Checking if deployment is ready
08:01:17 STEP: Checking if kube-dns service is plumbed correctly
08:01:17 STEP: Checking if DNS can resolve
08:01:17 STEP: Checking if pods have identity
08:01:18 STEP: Validating Cilium Installation
08:01:18 STEP: Performing Cilium controllers preflight check
08:01:18 STEP: Performing Cilium health check
08:01:18 STEP: Performing Cilium status preflight check
08:01:18 STEP: Checking whether host EP regenerated
08:01:19 STEP: Performing Cilium service preflight check
08:01:19 STEP: Performing K8s service preflight check
08:01:20 STEP: Waiting for cilium-operator to be ready
08:01:20 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
08:01:21 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
08:01:21 STEP: Making sure all endpoints are in ready state
08:01:26 STEP: Launching cilium monitor on "cilium-rlr8x"
08:01:26 STEP: Creating namespace 202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito
08:01:26 STEP: Deploying demo_ds.yaml in namespace 202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito
08:01:27 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-8wzk5 policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000988ac0>: {
        msg: "Cannot retrieve cilium pod cilium-8wzk5 policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc0005fc6a0>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-06-13T08:01:38Z====
08:01:38 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
08:01:41 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-42qbh          0/2     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-5hktr                   0/1     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-q8kh5                   1/1     Running             0          16s     10.0.1.35       k8s1   <none>           <none>
	 202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-ch6g7                       2/2     Running             0          16s     10.0.1.117      k8s1   <none>           <none>
	 202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-f5xvx                       0/2     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-kjmfw           0/1     Running             0          79s     10.0.0.205      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-wc4pr        1/1     Running             0          79s     10.0.0.244      k8s2   <none>           <none>
	 kube-system                                                       cilium-8wzk5                       1/1     Running             0          78s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-79988965bc-57qb6   1/1     Running             0          78s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-79988965bc-xzv55   1/1     Running             0          78s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-rlr8x                       1/1     Running             0          78s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-49mw7           1/1     Running             0          35s     10.0.0.204      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          6m7s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          6m7s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          6m7s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-bgr4k                   1/1     Running             0          2m16s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-ckcx6                   1/1     Running             0          5m40s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          6m7s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-fc9bn                 1/1     Running             0          96s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-rhzcz                 1/1     Running             0          96s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-gjfbs               1/1     Running             0          2m14s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-tv768               1/1     Running             0          2m14s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-8wzk5 cilium-rlr8x]
cmd: kubectl exec -n kube-system cilium-8wzk5 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-0b108ad7)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 8/254 allocated from 10.0.0.0/24, IPv6: 8/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       42/42 healthy
	 Proxy Status:            OK, ip 10.0.0.143, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 292/65535 (0.45%), Flows/s: 4.58   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-13T08:01:19Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-8wzk5 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 468        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 653        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::4f   10.0.0.44    ready   
	 832        Disabled           Disabled          395        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::91   10.0.0.236   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1343       Disabled           Disabled          27485      k8s:app=prometheus                                                                                                               fd02::eb   10.0.0.244   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2470       Disabled           Disabled          5138       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::b6   10.0.0.204   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 2630       Disabled           Disabled          34180      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::4e   10.0.0.19    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 3104       Disabled           Disabled          49765      k8s:app=grafana                                                                                                                  fd02::e2   10.0.0.205   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 3704       Disabled           Disabled          47198      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::4a   10.0.0.2     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-rlr8x -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-0b108ad7)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.151, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 164/65535 (0.25%), Flows/s: 2.74   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-13T08:01:20Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-rlr8x -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 873        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::182   10.0.1.184   ready   
	 932        Disabled           Disabled          395        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::13a   10.0.1.35    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 1970       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 2296       Disabled           Disabled          47198      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::137   10.0.1.117   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
08:02:22 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
08:02:22 STEP: Deleting deployment demo_ds.yaml
08:02:23 STEP: Deleting namespace 202406130801k8sdatapathconfigmonitoraggregationchecksthatmonito
08:02:38 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|39679d7e_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//601/artifact/39679d7e_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//601/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//601/artifact/df082fe1_K8sDatapathConfig_Host_firewall_Check_connectivity_with_IPv6_disabled.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//601/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_601_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/601/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
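For anyone triaging this: the `Expected <*fmt.wrapError> ... to be nil` dumps above are ordinary Go error wrapping, where `fmt.Errorf` with the `%w` verb wraps an inner `*errors.errorString` sentinel. Below is a minimal, hypothetical sketch of that pattern; the names `policyRevision` and `errNoRevision` are illustrative only, not the actual cilium test helper:

```go
// Minimal sketch (hypothetical, not the cilium test code): illustrates the
// *fmt.wrapError / *errors.errorString pair seen in the Gomega dumps above.
package main

import (
	"errors"
	"fmt"
)

// Inner sentinel; the trailing space matches the dumped string exactly.
var errNoRevision = errors.New("cannot get the revision ")

// policyRevision is an illustrative stand-in for a helper that asks a cilium
// agent pod for its policy revision and wraps any failure with %w.
func policyRevision(pod string) (int, error) {
	return 0, fmt.Errorf("Cannot retrieve cilium pod %s policy revision: %w", pod, errNoRevision)
}

func main() {
	_, err := policyRevision("cilium-8wzk5")
	fmt.Println(err)                           // the msg field of the *fmt.wrapError above
	fmt.Println(errors.Is(err, errNoRevision)) // true: %w preserves the inner cause
}
```

If the flake reproduces, the revision the helper failed to read can be checked by hand with `kubectl exec -n kube-system <cilium-pod> -c cilium-agent -- cilium policy get`, which should print the agent's current policy revision; an empty or error response would correspond to the `cannot get the revision` failure above.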

@maintainer-s-little-helper

PR #33193 hit this flake with 91.62% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-xkfg4 policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-xkfg4 policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0010b2020>: {
        msg: "Cannot retrieve cilium pod cilium-xkfg4 policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc00012ab20>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to get node resource
Waiting for k8s node information
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-llht5 cilium-xkfg4]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-669755c8c5-zzpdk   false     false
coredns-85fbf8f7dd-rdtw6      false     false
testclient-jghls              false     false
testds-cgt57                  false     false
grafana-698dc95f6c-r62s2      false     false
Cilium agent 'cilium-llht5': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-xkfg4': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 40 Failed 0


Standard Error

10:02:21 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
10:02:21 STEP: Ensuring the namespace kube-system exists
10:02:21 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
10:02:21 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
10:02:21 STEP: Installing Cilium
10:02:22 STEP: Waiting for Cilium to become ready
10:03:09 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-8msb2 in namespace kube-system
10:03:09 STEP: Validating if Kubernetes DNS is deployed
10:03:09 STEP: Checking if deployment is ready
10:03:09 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
10:03:09 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
10:03:09 STEP: Waiting for Kubernetes DNS to become operational
10:03:09 STEP: Checking if deployment is ready
10:03:10 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:03:10 STEP: Checking if deployment is ready
10:03:11 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:03:11 STEP: Checking if deployment is ready
10:03:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:03:12 STEP: Checking if deployment is ready
10:03:13 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:03:13 STEP: Checking if deployment is ready
10:03:14 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:03:14 STEP: Checking if deployment is ready
10:03:15 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:03:15 STEP: Checking if deployment is ready
10:03:16 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:03:16 STEP: Checking if deployment is ready
10:03:17 STEP: Checking if kube-dns service is plumbed correctly
10:03:17 STEP: Checking if DNS can resolve
10:03:17 STEP: Checking if pods have identity
10:03:17 STEP: Validating Cilium Installation
10:03:17 STEP: Performing Cilium controllers preflight check
10:03:17 STEP: Performing Cilium status preflight check
10:03:17 STEP: Performing Cilium health check
10:03:17 STEP: Checking whether host EP regenerated
10:03:18 STEP: Performing Cilium service preflight check
10:03:18 STEP: Performing K8s service preflight check
10:03:19 STEP: Waiting for cilium-operator to be ready
10:03:19 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
10:03:20 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
10:03:20 STEP: Making sure all endpoints are in ready state
10:03:26 STEP: Launching cilium monitor on "cilium-llht5"
10:03:26 STEP: Creating namespace 202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito
10:03:26 STEP: Deploying demo_ds.yaml in namespace 202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito
10:03:28 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-xkfg4 policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0010b2020>: {
        msg: "Cannot retrieve cilium pod cilium-xkfg4 policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc00012ab20>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-06-17T10:03:38Z====
10:03:38 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
10:03:41 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-k5zhp          0/2     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-68jzd                   0/1     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-jghls                   1/1     Running             0          16s     10.0.1.91       k8s1   <none>           <none>
	 202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-4d4zg                       0/2     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-cgt57                       2/2     Running             0          16s     10.0.1.195      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-r62s2           0/1     ContainerCreating   0          82s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-zzpdk        1/1     Running             0          82s     10.0.0.22       k8s2   <none>           <none>
	 kube-system                                                       cilium-llht5                       1/1     Running             0          81s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-85b9bdfc5b-dwddw   1/1     Running             0          81s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-85b9bdfc5b-s8jmb   1/1     Running             0          81s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-xkfg4                       1/1     Running             0          81s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-rdtw6           1/1     Running             0          34s     10.0.0.136      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m34s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-5x5v7                   1/1     Running             0          5m18s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-mzb48                   1/1     Running             0          2m26s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-ksgpz                 1/1     Running             0          99s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-pfc6k                 1/1     Running             0          99s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-hlzfg               1/1     Running             0          2m17s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-l7mqh               1/1     Running             0          2m17s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-llht5 cilium-xkfg4]
cmd: kubectl exec -n kube-system cilium-llht5 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.22 (v1.22.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-b56377a4)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.250, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 159/65535 (0.24%), Flows/s: 2.75   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-17T10:03:18Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-llht5 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 566        Disabled           Disabled          3245       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::168   10.0.1.195   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 2243       Disabled           Disabled          17407      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::19d   10.0.1.91    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 2911       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::190   10.0.1.130   ready   
	 3544       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-xkfg4 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.22 (v1.22.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-b56377a4)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 8/254 allocated from 10.0.0.0/24, IPv6: 8/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       40/40 healthy
	 Proxy Status:            OK, ip 10.0.0.139, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 262/65535 (0.40%), Flows/s: 3.98   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-17T10:03:19Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-xkfg4 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 121        Disabled           Disabled          44482      k8s:app=grafana                                                                                                                  fd02::44   10.0.0.227   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 364        Disabled           Disabled          20404      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::ab   10.0.0.136   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 570        Disabled           Disabled          27091      k8s:app=prometheus                                                                                                               fd02::4a   10.0.0.22    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 728        Disabled           Disabled          17407      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::76   10.0.0.221   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1230       Disabled           Disabled          3245       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::f    10.0.0.84    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1808       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 1871       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::c1   10.0.0.60    ready   
	 3969       Disabled           Disabled          37294      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::19   10.0.0.167   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
10:04:24 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
10:04:24 STEP: Deleting deployment demo_ds.yaml
10:04:25 STEP: Deleting namespace 202406171003k8sdatapathconfigmonitoraggregationchecksthatmonito
10:04:40 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|875e66eb_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//635/artifact/875e66eb_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//635/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//635/artifact/test_results_Cilium-PR-K8s-1.22-kernel-4.19_635_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19/635/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper

PR #33192 hit this flake with 90.62% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-m9hlf policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-m9hlf policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000710300>: {
        msg: "Cannot retrieve cilium pod cilium-m9hlf policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000366c20>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-m9hlf cilium-t57pl]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-669755c8c5-wtwwj   false     false
coredns-85fbf8f7dd-tkl2h      false     false
testclient-625n8              false     false
testds-h554s                  false     false
grafana-698dc95f6c-98mrg      false     false
Cilium agent 'cilium-m9hlf': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0
Cilium agent 'cilium-t57pl': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

10:29:15 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
10:29:15 STEP: Ensuring the namespace kube-system exists
10:29:15 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
10:29:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
10:29:16 STEP: Installing Cilium
10:29:16 STEP: Waiting for Cilium to become ready
10:30:03 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-kxvm5 in namespace kube-system
10:30:03 STEP: Validating if Kubernetes DNS is deployed
10:30:03 STEP: Checking if deployment is ready
10:30:03 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
10:30:03 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
10:30:03 STEP: Waiting for Kubernetes DNS to become operational
10:30:03 STEP: Checking if deployment is ready
10:30:03 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:04 STEP: Checking if deployment is ready
10:30:04 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:05 STEP: Checking if deployment is ready
10:30:05 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:06 STEP: Checking if deployment is ready
10:30:06 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:07 STEP: Checking if deployment is ready
10:30:07 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:08 STEP: Checking if deployment is ready
10:30:08 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:09 STEP: Checking if deployment is ready
10:30:09 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:10 STEP: Checking if deployment is ready
10:30:10 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:11 STEP: Checking if deployment is ready
10:30:11 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:12 STEP: Checking if deployment is ready
10:30:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:13 STEP: Checking if deployment is ready
10:30:13 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:14 STEP: Checking if deployment is ready
10:30:14 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:15 STEP: Checking if deployment is ready
10:30:15 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:30:16 STEP: Checking if deployment is ready
10:30:16 STEP: Checking if kube-dns service is plumbed correctly
10:30:16 STEP: Checking if pods have identity
10:30:16 STEP: Checking if DNS can resolve
10:30:17 STEP: Validating Cilium Installation
10:30:17 STEP: Performing Cilium controllers preflight check
10:30:17 STEP: Performing Cilium health check
10:30:17 STEP: Checking whether host EP regenerated
10:30:17 STEP: Performing Cilium status preflight check
10:30:18 STEP: Performing Cilium service preflight check
10:30:18 STEP: Performing K8s service preflight check
10:30:19 STEP: Waiting for cilium-operator to be ready
10:30:19 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
10:30:19 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
10:30:19 STEP: Making sure all endpoints are in ready state
10:30:25 STEP: Launching cilium monitor on "cilium-t57pl"
10:30:25 STEP: Creating namespace 202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito
10:30:25 STEP: Deploying demo_ds.yaml in namespace 202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito
10:30:26 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-m9hlf policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000710300>: {
        msg: "Cannot retrieve cilium pod cilium-m9hlf policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000366c20>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-06-17T10:30:36Z ===
10:30:36 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
10:30:38 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-4mmgn          0/2     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-5clx4                   0/1     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-625n8                   1/1     Running             0          14s     10.0.1.73       k8s1   <none>           <none>
	 202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-h554s                       2/2     Running             0          14s     10.0.1.186      k8s1   <none>           <none>
	 202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-qmk4z                       0/2     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-98mrg           0/1     Running             0          85s     10.0.0.64       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-wtwwj        1/1     Running             0          85s     10.0.0.150      k8s2   <none>           <none>
	 kube-system                                                       cilium-m9hlf                       1/1     Running             0          84s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-5b6df4f764-9ph29   1/1     Running             0          84s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5b6df4f764-pqwgz   1/1     Running             0          84s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-t57pl                       1/1     Running             0          84s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-tkl2h           1/1     Running             0          37s     10.0.0.70       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-42mj8                   1/1     Running             0          2m23s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-mj9wp                   1/1     Running             0          5m15s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-4rmsp                 1/1     Running             0          103s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-98pjw                 1/1     Running             0          103s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-j4772               1/1     Running             0          2m21s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-lj2bv               1/1     Running             0          2m21s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-m9hlf cilium-t57pl]
cmd: kubectl exec -n kube-system cilium-m9hlf -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-0856d2ee)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       33/33 healthy
	 Proxy Status:            OK, ip 10.0.0.227, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 236/65535 (0.36%), Flows/s: 3.83   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-17T10:30:18Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-m9hlf -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 92         Disabled           Disabled          16351      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::bd   10.0.0.102   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 420        Disabled           Disabled          26171      k8s:app=prometheus                                                                                                               fd02::65   10.0.0.150   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 437        Disabled           Disabled          26826      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::2d   10.0.0.233   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 707        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::8c   10.0.0.196   ready   
	 1490       Disabled           Disabled          22209      k8s:app=grafana                                                                                                                  fd02::a4   10.0.0.64    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1866       Disabled           Disabled          5707       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::aa   10.0.0.251   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 2205       Disabled           Disabled          2557       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::82   10.0.0.70    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 3109       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-t57pl -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-0856d2ee)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.171, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 158/65535 (0.24%), Flows/s: 2.69   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-17T10:30:19Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-t57pl -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 216        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::129   10.0.1.87    ready   
	 1097       Disabled           Disabled          16351      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1ae   10.0.1.73    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 2217       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 2477       Disabled           Disabled          5707       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1f2   10.0.1.186   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
10:31:19 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
10:31:19 STEP: Deleting deployment demo_ds.yaml
10:31:20 STEP: Deleting namespace 202406171030k8sdatapathconfigmonitoraggregationchecksthatmonito
10:31:35 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|0cd1827d_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//604/artifact/0cd1827d_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//604/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//604/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_604_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/604/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
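
Triage note: the step that fails here is the harness reading back each agent's policy revision right after the CNP is applied; the bare "cannot get the revision " leaf error means no revision could be parsed out of the agent's reply. Below is a minimal standalone sketch of that check, not the actual helper behind test/k8s/datapath_configuration.go:718 — the pod name, the retry count, and the assumption that `cilium policy get` ends its output with a `Revision: <n>` line are all illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// policyRevision asks one agent pod for its policy revision by scanning
// `cilium policy get` output for a trailing "Revision: <n>" line.
func policyRevision(pod string) (string, error) {
	out, err := exec.Command("kubectl", "exec", "-n", "kube-system", pod,
		"-c", "cilium-agent", "--", "cilium", "policy", "get").Output()
	if err != nil {
		return "", fmt.Errorf("Cannot retrieve cilium pod %s policy revision: %w", pod, err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if rev, ok := strings.CutPrefix(strings.TrimSpace(line), "Revision:"); ok {
			return strings.TrimSpace(rev), nil
		}
	}
	// Mirrors the leaf error seen in this flake: output arrived, revision didn't.
	return "", errors.New("cannot get the revision")
}

func main() {
	for i := 0; i < 5; i++ {
		rev, err := policyRevision("cilium-m9hlf") // pod name from this run; adjust to yours
		if err == nil {
			fmt.Println("revision:", rev)
			return
		}
		fmt.Println("retrying:", err)
		time.Sleep(2 * time.Second)
	}
}
```
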

@maintainer-s-little-helper

PR #33189 hit this flake with 90.53% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-nc52s policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-nc52s policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0006b4660>: {
        msg: "Cannot retrieve cilium pod cilium-nc52s policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000c3a270>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Waiting for k8s node information
Unable to get node resource
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-m5jbl cilium-nc52s]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-l5lzq              false     false
testds-4gldh                  false     false
grafana-698dc95f6c-q46cm      false     false
prometheus-669755c8c5-j7lnt   false     false
coredns-85fbf8f7dd-5qbxt      false     false
Cilium agent 'cilium-m5jbl': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-nc52s': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

Click to show.
12:28:50 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
12:28:50 STEP: Ensuring the namespace kube-system exists
12:28:50 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
12:28:50 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
12:28:50 STEP: Installing Cilium
12:28:50 STEP: Waiting for Cilium to become ready
12:29:34 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-jlbmm in namespace kube-system
12:29:34 STEP: Validating if Kubernetes DNS is deployed
12:29:34 STEP: Checking if deployment is ready
12:29:34 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
12:29:34 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
12:29:34 STEP: Waiting for Kubernetes DNS to become operational
12:29:34 STEP: Checking if deployment is ready
12:29:34 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
12:29:35 STEP: Checking if deployment is ready
12:29:35 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
12:29:36 STEP: Checking if deployment is ready
12:29:36 STEP: Checking if kube-dns service is plumbed correctly
12:29:36 STEP: Checking if pods have identity
12:29:36 STEP: Checking if DNS can resolve
12:29:37 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
12:29:37 STEP: Checking if deployment is ready
12:29:37 STEP: Checking if kube-dns service is plumbed correctly
12:29:37 STEP: Checking if pods have identity
12:29:37 STEP: Checking if DNS can resolve
12:29:38 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
12:29:38 STEP: Checking if deployment is ready
12:29:38 STEP: Checking if kube-dns service is plumbed correctly
12:29:38 STEP: Checking if pods have identity
12:29:38 STEP: Checking if DNS can resolve
12:29:39 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
12:29:39 STEP: Checking if deployment is ready
12:29:39 STEP: Checking if kube-dns service is plumbed correctly
12:29:39 STEP: Checking if pods have identity
12:29:39 STEP: Checking if DNS can resolve
12:29:40 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
12:29:40 STEP: Checking if deployment is ready
12:29:40 STEP: Checking if kube-dns service is plumbed correctly
12:29:40 STEP: Checking if DNS can resolve
12:29:40 STEP: Checking if pods have identity
12:29:41 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
12:29:41 STEP: Checking if deployment is ready
12:29:41 STEP: Checking if kube-dns service is plumbed correctly
12:29:41 STEP: Checking if pods have identity
12:29:41 STEP: Checking if DNS can resolve
12:29:42 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
12:29:42 STEP: Checking if deployment is ready
12:29:42 STEP: Checking if kube-dns service is plumbed correctly
12:29:42 STEP: Checking if pods have identity
12:29:42 STEP: Checking if DNS can resolve
12:29:43 STEP: Validating Cilium Installation
12:29:43 STEP: Performing Cilium controllers preflight check
12:29:43 STEP: Checking whether host EP regenerated
12:29:43 STEP: Performing Cilium status preflight check
12:29:43 STEP: Performing Cilium health check
12:29:44 STEP: Performing Cilium service preflight check
12:29:44 STEP: Performing K8s service preflight check
12:29:45 STEP: Waiting for cilium-operator to be ready
12:29:45 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
12:29:45 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
12:29:45 STEP: Making sure all endpoints are in ready state
12:29:52 STEP: Launching cilium monitor on "cilium-m5jbl"
12:29:52 STEP: Creating namespace 202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito
12:29:52 STEP: Deploying demo_ds.yaml in namespace 202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito
12:29:54 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-nc52s policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0006b4660>: {
        msg: "Cannot retrieve cilium pod cilium-nc52s policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000c3a270>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-06-17T12:30:04Z ===
12:30:04 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
12:30:05 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-x29rh          0/2     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-kcwjq                   0/1     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-l5lzq                   1/1     Running             0          14s     10.0.1.100      k8s1   <none>           <none>
	 202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-4gldh                       2/2     Running             0          14s     10.0.1.123      k8s1   <none>           <none>
	 202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-fswhh                       0/2     ContainerCreating   0          14s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-q46cm           0/1     Running             0          77s     10.0.0.10       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-j7lnt        1/1     Running             0          77s     10.0.0.146      k8s2   <none>           <none>
	 kube-system                                                       cilium-m5jbl                       1/1     Running             0          77s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-nc52s                       1/1     Running             0          77s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-85f7c7c8cf-8rdtg   1/1     Running             0          77s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-85f7c7c8cf-ph7kv   1/1     Running             0          77s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-5qbxt           1/1     Running             0          33s     10.0.0.59       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m31s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m31s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m31s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-5zpkc                   1/1     Running             0          5m8s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-cww66                   1/1     Running             0          2m16s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m31s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-9frzs                 1/1     Running             0          94s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-zt97s                 1/1     Running             0          94s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-dphp2               1/1     Running             0          2m13s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-pbpg2               1/1     Running             0          2m13s   192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-m5jbl cilium-nc52s]
cmd: kubectl exec -n kube-system cilium-m5jbl -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-a448f466)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.34, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 172/65535 (0.26%), Flows/s: 2.69   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-17T12:29:44Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-m5jbl -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 319        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 743        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::177   10.0.1.210   ready   
	 2157       Disabled           Disabled          36353      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::17e   10.0.1.123   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3121       Disabled           Disabled          17817      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::14d   10.0.1.100   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-nc52s -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-a448f466)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.11, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 264/65535 (0.40%), Flows/s: 4.21   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-17T12:29:45Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-nc52s -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 1011       Disabled           Disabled          47576      k8s:app=grafana                                                                                                                  fd02::59   10.0.0.10    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1015       Disabled           Disabled          17817      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::6d   10.0.0.159   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1311       Disabled           Disabled          57949      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::ce   10.0.0.116   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 1356       Disabled           Disabled          36353      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::af   10.0.0.13    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1788       Disabled           Disabled          51735      k8s:app=prometheus                                                                                                               fd02::3e   10.0.0.146   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2556       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::3d   10.0.0.221   ready   
	 2770       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 2982       Disabled           Disabled          15210      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::a1   10.0.0.59    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
12:30:48 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
12:30:48 STEP: Deleting deployment demo_ds.yaml
12:30:49 STEP: Deleting namespace 202406171229k8sdatapathconfigmonitoraggregationchecksthatmonito
12:31:04 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|1c57bae5_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//605/artifact/1c57bae5_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//605/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//605/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_605_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/605/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
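
On the `Expected <*fmt.wrapError ...>` dumps above: that nesting is simply how Gomega renders an error built with `fmt.Errorf` and the `%w` verb, with the leaf `*errors.errorString` being the original "cannot get the revision " failure. A minimal sketch reproducing the same shape (pod name illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	// Leaf error matches the dump's inner string (note the trailing space).
	inner := errors.New("cannot get the revision ")
	wrapped := fmt.Errorf("Cannot retrieve cilium pod %s policy revision: %w", "cilium-nc52s", inner)

	fmt.Printf("%#v\n", wrapped)                 // &fmt.wrapError{msg:"Cannot retrieve ...", err:(*errors.errorString)(...)}
	fmt.Println(errors.Unwrap(wrapped) == inner) // true: %w preserves the cause chain
}
```
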

@maintainer-s-little-helper

PR #33253 hit this flake with 90.90% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-kwmkj policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-kwmkj policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000518520>: {
        msg: "Cannot retrieve cilium pod cilium-kwmkj policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000253600>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to get node resource
Waiting for k8s node information
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-86szm cilium-kwmkj]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-5bjl2              false     false
testds-v7kn5                  false     false
grafana-698dc95f6c-vkqql      false     false
prometheus-669755c8c5-cpqwc   false     false
coredns-85fbf8f7dd-qsxng      false     false
Cilium agent 'cilium-86szm': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-kwmkj': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

Click to show.
10:45:33 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
10:45:33 STEP: Ensuring the namespace kube-system exists
10:45:33 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
10:45:33 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
10:45:33 STEP: Installing Cilium
10:45:34 STEP: Waiting for Cilium to become ready
10:46:22 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-gtgzz in namespace kube-system
10:46:23 STEP: Validating if Kubernetes DNS is deployed
10:46:23 STEP: Checking if deployment is ready
10:46:23 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
10:46:23 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
10:46:23 STEP: Waiting for Kubernetes DNS to become operational
10:46:23 STEP: Checking if deployment is ready
10:46:23 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:46:24 STEP: Checking if deployment is ready
10:46:24 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:46:25 STEP: Checking if deployment is ready
10:46:25 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:46:26 STEP: Checking if deployment is ready
10:46:26 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:46:27 STEP: Checking if deployment is ready
10:46:27 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:46:28 STEP: Checking if deployment is ready
10:46:28 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:46:29 STEP: Checking if deployment is ready
10:46:29 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:46:30 STEP: Checking if deployment is ready
10:46:30 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:46:31 STEP: Checking if deployment is ready
10:46:31 STEP: Checking if pods have identity
10:46:31 STEP: Checking if kube-dns service is plumbed correctly
10:46:31 STEP: Checking if DNS can resolve
10:46:31 STEP: Validating Cilium Installation
10:46:31 STEP: Performing Cilium controllers preflight check
10:46:31 STEP: Performing Cilium status preflight check
10:46:31 STEP: Performing Cilium health check
10:46:31 STEP: Checking whether host EP regenerated
10:46:36 STEP: Performing Cilium service preflight check
10:46:36 STEP: Performing K8s service preflight check
10:46:37 STEP: Waiting for cilium-operator to be ready
10:46:37 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
10:46:37 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
10:46:37 STEP: Making sure all endpoints are in ready state
10:46:38 STEP: Launching cilium monitor on "cilium-86szm"
10:46:38 STEP: Creating namespace 202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito
10:46:38 STEP: Deploying demo_ds.yaml in namespace 202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito
10:46:39 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-kwmkj policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000518520>: {
        msg: "Cannot retrieve cilium pod cilium-kwmkj policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000253600>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-06-24T10:46:50Z ===
10:46:50 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
10:46:52 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-g5k4j          0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-5bjl2                   1/1     Running             0          15s     10.0.1.124      k8s1   <none>           <none>
	 202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-8nd9x                   0/1     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-gdrg6                       0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-v7kn5                       2/2     Running             0          15s     10.0.1.68       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-vkqql           0/1     Running             0          81s     10.0.0.33       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-cpqwc        1/1     Running             0          81s     10.0.0.91       k8s2   <none>           <none>
	 kube-system                                                       cilium-86szm                       1/1     Running             0          80s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-kwmkj                       1/1     Running             0          80s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-576b44795d-2zv9j   1/1     Running             0          80s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-576b44795d-hwmq4   1/1     Running             0          80s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-qsxng           1/1     Running             0          31s     10.0.0.89       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m50s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m50s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m50s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-s7szc                   1/1     Running             0          2m17s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-tcshl                   1/1     Running             0          5m36s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m50s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-99c5r                 1/1     Running             0          98s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-tmb2f                 1/1     Running             0          98s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-ldmgz               1/1     Running             0          2m15s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-mn7q2               1/1     Running             0          2m15s   192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-86szm cilium-kwmkj]
cmd: kubectl exec -n kube-system cilium-86szm -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-2c5d85e6)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.241, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 195/65535 (0.30%), Flows/s: 4.31   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-24T10:46:33Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-86szm -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 260        Disabled           Disabled          29753      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1d8   10.0.1.124   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 381        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1ad   10.0.1.69    ready   
	 1010       Disabled           Disabled          56565      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1c3   10.0.1.68    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 2659       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-kwmkj -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-2c5d85e6)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.248, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 283/65535 (0.43%), Flows/s: 5.30   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-24T10:46:37Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-kwmkj -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 26         Disabled           Disabled          22471      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::19   10.0.0.84    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 88         Disabled           Disabled          56565      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::c2   10.0.0.83    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 119        Disabled           Disabled          11062      k8s:app=grafana                                                                                                                  fd02::a9   10.0.0.33    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 143        Disabled           Disabled          52553      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::11   10.0.0.89    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 176        Disabled           Disabled          29753      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::c4   10.0.0.6     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 347        Disabled           Disabled          42467      k8s:app=prometheus                                                                                                               fd02::4d   10.0.0.91    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1860       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 1963       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::b9   10.0.0.144   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
10:47:33 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
10:47:33 STEP: Deleting deployment demo_ds.yaml
10:47:34 STEP: Deleting namespace 202406241046k8sdatapathconfigmonitoraggregationchecksthatmonito
10:47:49 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|a849399c_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//613/artifact/a849399c_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//613/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//613/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_613_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/613/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
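For anyone triaging this from the Gomega dumps above: the assertion fails on a `*fmt.wrapError` whose cause is a plain `*errors.errorString`, i.e. the test helper wraps the underlying "cannot get the revision " error with `%w`. A minimal Go sketch of that error shape (illustrative only, not Cilium's actual helper; `policyRevision` and the hard-coded pod name are made up for the example):

```go
package main

import (
	"errors"
	"fmt"
)

// errRevision stands in for the underlying "cannot get the revision " error.
var errRevision = errors.New("cannot get the revision ")

// policyRevision models only the failure path seen in the dumps above.
func policyRevision(pod string) (int, error) {
	return 0, fmt.Errorf("Cannot retrieve cilium pod %s policy revision: %w", pod, errRevision)
}

func main() {
	_, err := policyRevision("cilium-kwmkj")
	fmt.Println(err)                         // matches the msg field in the Gomega dump
	fmt.Println(errors.Is(err, errRevision)) // true: %w preserves the cause
}
```

Because the cause survives the `%w` wrap, `errors.Is` against the sentinel is a reliable way to group these failures when scripting over CI logs.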

@maintainer-s-little-helper

PR #33253 hit this flake with 91.90% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-vhmrc policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-vhmrc policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000729040>: {
        msg: "Cannot retrieve cilium pod cilium-vhmrc policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc0010c6050>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 1
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Network status error received, restarting client connections
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to get node resource
Waiting for k8s node information
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-8c869 cilium-vhmrc]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-hbjfm              false     false
testds-w5hl7                  false     false
grafana-7ddfc74b5b-bz68m      false     false
prometheus-669755c8c5-vdqjl   false     false
coredns-bb76b858c-5dddm       false     false
Cilium agent 'cilium-8c869': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-vhmrc': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

Click to show.
10:43:51 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
10:43:51 STEP: Ensuring the namespace kube-system exists
10:43:51 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
10:43:51 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
10:43:51 STEP: Installing Cilium
10:43:51 STEP: Waiting for Cilium to become ready
10:44:47 STEP: Restarting unmanaged pods coredns-bb76b858c-gv8w6 in namespace kube-system
10:44:51 STEP: Validating if Kubernetes DNS is deployed
10:44:51 STEP: Checking if deployment is ready
10:44:51 STEP: Checking if kube-dns service is plumbed correctly
10:44:51 STEP: Checking if pods have identity
10:44:51 STEP: Checking if DNS can resolve
10:44:52 STEP: Kubernetes DNS is up and operational
10:44:52 STEP: Validating Cilium Installation
10:44:52 STEP: Performing Cilium controllers preflight check
10:44:52 STEP: Performing Cilium health check
10:44:52 STEP: Performing Cilium status preflight check
10:44:52 STEP: Checking whether host EP regenerated
10:44:53 STEP: Performing Cilium service preflight check
10:44:53 STEP: Performing K8s service preflight check
10:44:57 STEP: Waiting for cilium-operator to be ready
10:44:57 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
10:44:57 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
10:44:57 STEP: Making sure all endpoints are in ready state
10:44:59 STEP: Launching cilium monitor on "cilium-8c869"
10:44:59 STEP: Creating namespace 202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito
10:44:59 STEP: Deploying demo_ds.yaml in namespace 202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito
10:45:00 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-vhmrc policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000729040>: {
        msg: "Cannot retrieve cilium pod cilium-vhmrc policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc0010c6050>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-06-24T10:45:10Z====
10:45:10 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
10:45:11 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-5ffdc78d54-w74gc         0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-hbjfm                   1/1     Running             0          13s     10.0.1.110      k8s1   <none>           <none>
	 202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-vcxjt                   0/1     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-pksg6                       0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-w5hl7                       2/2     Running             0          13s     10.0.1.127      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-7ddfc74b5b-bz68m           0/1     ContainerCreating   0          83s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-vdqjl        1/1     Running             0          82s     10.0.0.170      k8s2   <none>           <none>
	 kube-system                                                       cilium-8c869                       1/1     Running             0          82s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-55dfbb7b95-h5r6x   1/1     Running             1          81s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-55dfbb7b95-qptgq   1/1     Running             0          81s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-vhmrc                       1/1     Running             0          82s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-bb76b858c-5dddm            1/1     Running             0          26s     10.0.1.239      k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m32s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m32s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m32s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-hwpcj                   1/1     Running             0          5m10s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-nt8zg                   1/1     Running             0          2m19s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m32s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-bb4lx                 1/1     Running             0          100s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-qb9m2                 1/1     Running             0          100s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-dx8mt               1/1     Running             0          2m17s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-z666n               1/1     Running             0          2m17s   192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-8c869 cilium-vhmrc]
cmd: kubectl exec -n kube-system cilium-8c869 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.20 (v1.20.15) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-2c5d85e6)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.14, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 232/65535 (0.35%), Flows/s: 4.21   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-24T10:44:53Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-8c869 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 126        Disabled           Disabled          60390      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1db   10.0.1.127   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 578        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                 ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                          
	                                                            k8s:node-role.kubernetes.io/master                                                                                                 
	                                                            reserved:host                                                                                                                      
	 1215       Disabled           Disabled          31197      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1fc   10.0.1.239   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                                               
	 1239       Disabled           Disabled          4          reserved:health                                                                                   fd02::175   10.0.1.250   ready   
	 1467       Disabled           Disabled          24036      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1af   10.0.1.110   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-vhmrc -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.20 (v1.20.15) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-2c5d85e6)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.0.181, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 228/65535 (0.35%), Flows/s: 3.61   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-06-24T10:44:57Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-vhmrc -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                            
	 259        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                ready   
	                                                            reserved:host                                                                                                                     
	 827        Disabled           Disabled          60390      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::8d   10.0.0.109   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=testDS                                                                                                                 
	 1139       Disabled           Disabled          4          reserved:health                                                                                   fd02::3c   10.0.0.187   ready   
	 2966       Disabled           Disabled          11501      k8s:app=grafana                                                                                   fd02::13   10.0.0.84    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 3199       Disabled           Disabled          31993      k8s:app=prometheus                                                                                fd02::95   10.0.0.170   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                            
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 3282       Disabled           Disabled          30997      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::31   10.0.0.11    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                              
	 3463       Disabled           Disabled          24036      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::f6   10.0.0.188   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito                                   
	                                                            k8s:zgroup=testDSClient                                                                                                           
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
10:45:54 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
10:45:54 STEP: Deleting deployment demo_ds.yaml
10:45:55 STEP: Deleting namespace 202406241044k8sdatapathconfigmonitoraggregationchecksthatmonito
10:46:10 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|421d435c_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//570/artifact/421d435c_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//570/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//570/artifact/test_results_Cilium-PR-K8s-1.20-kernel-4.19_570_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19/570/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
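The trailing space in "cannot get the revision " hints that an empty revision string was interpolated into the error, i.e. the agent responded but had no revision to report before the helper gave up. A hypothetical poll-and-timeout sketch of that failure path (an assumed pattern, not the real helper; `getRevision` stands in for the kubectl exec into the agent pod):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// getRevision is a stand-in for exec'ing into the agent pod; here it models
// an agent that is up but not yet serving a policy revision.
func getRevision(pod string) (string, error) {
	return "", nil
}

// waitForRevision polls until a numeric revision appears or the deadline passes.
func waitForRevision(pod string, timeout time.Duration) (int, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := getRevision(pod)
		if err == nil {
			if rev, convErr := strconv.Atoi(strings.TrimSpace(out)); convErr == nil {
				return rev, nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	// An empty interpolated value reproduces the trailing space in the logs.
	return 0, fmt.Errorf("cannot get the revision %s", "")
}

func main() {
	if _, err := waitForRevision("cilium-vhmrc", 2*time.Second); err != nil {
		fmt.Println(fmt.Errorf("Cannot retrieve cilium pod %s policy revision: %w", "cilium-vhmrc", err))
	}
}
```

Under this reading the flake is a startup race: the agent passes the readiness preflight while its policy API is still warming up, so the helper's poll window expires with an empty revision.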

@maintainer-s-little-helper

PR #33535 hit this flake with 90.76% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-j4l9h policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-j4l9h policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000f16200>: {
        msg: "Cannot retrieve cilium pod cilium-j4l9h policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc0016945d0>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Unable to update ipcache map entry on pod update
Cilium pods: [cilium-j4l9h cilium-kz684]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
coredns-85fbf8f7dd-7tq8s      false     false
testclient-rkvvq              false     false
testds-p8pc8                  false     false
grafana-698dc95f6c-m68sn      false     false
prometheus-669755c8c5-2mrkm   false     false
Cilium agent 'cilium-j4l9h': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-kz684': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

Click to show.
16:08:27 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
16:08:27 STEP: Ensuring the namespace kube-system exists
16:08:28 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
16:08:28 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
16:08:28 STEP: Installing Cilium
16:08:28 STEP: Waiting for Cilium to become ready
16:09:17 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-rccqn in namespace kube-system
16:09:17 STEP: Validating if Kubernetes DNS is deployed
16:09:17 STEP: Checking if deployment is ready
16:09:17 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
16:09:17 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
16:09:17 STEP: Waiting for Kubernetes DNS to become operational
16:09:17 STEP: Checking if deployment is ready
16:09:17 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:09:18 STEP: Checking if deployment is ready
16:09:18 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:09:19 STEP: Checking if deployment is ready
16:09:19 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:09:20 STEP: Checking if deployment is ready
16:09:20 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:09:21 STEP: Checking if deployment is ready
16:09:21 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:09:22 STEP: Checking if deployment is ready
16:09:22 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
16:09:23 STEP: Checking if deployment is ready
16:09:23 STEP: Checking if kube-dns service is plumbed correctly
16:09:23 STEP: Checking if DNS can resolve
16:09:23 STEP: Checking if pods have identity
16:09:24 STEP: Validating Cilium Installation
16:09:24 STEP: Performing Cilium controllers preflight check
16:09:24 STEP: Performing Cilium status preflight check
16:09:24 STEP: Performing Cilium health check
16:09:24 STEP: Checking whether host EP regenerated
16:09:25 STEP: Performing Cilium service preflight check
16:09:25 STEP: Performing K8s service preflight check
16:09:26 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-kz684': controller ipcache-inject-labels is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Disabled   
	 Host firewall:          Disabled
	 CNI Chaining:           none
	 CNI Config file:        CNI configuration file management disabled
	 Cilium:                 Ok   1.13.17 (v1.13.17-aee60eff)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 2/254 allocated from 10.0.1.0/24, IPv6: 2/254 allocated from fd02::100/120
	 IPv6 BIG TCP:           Disabled
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      18/18 healthy
	   Name                                 Last success   Last error   Count   Message
	   cilium-health-ep                     8s ago         never        0       no error   
	   dns-garbage-collector-job            12s ago        never        0       no error   
	   endpoint-187-regeneration-recovery   never          never        0       no error   
	   endpoint-355-regeneration-recovery   never          never        0       no error   
	   endpoint-gc                          12s ago        never        0       no error   
	   ipcache-inject-labels                4s ago         11s ago      0       no error   
	   k8s-heartbeat                        12s ago        never        0       no error   
	   link-cache                           9s ago         never        0       no error   
	   metricsmap-bpf-prom-sync             2s ago         never        0       no error   
	   resolve-identity-187                 10s ago        never        0       no error   
	   resolve-identity-355                 8s ago         never        0       no error   
	   sync-endpoints-and-host-ips          10s ago        never        0       no error   
	   sync-lb-maps-with-k8s-services       10s ago        never        0       no error   
	   sync-policymap-187                   8s ago         never        0       no error   
	   sync-policymap-355                   4s ago         never        0       no error   
	   sync-to-k8s-ciliumendpoint (187)     0s ago         never        0       no error   
	   sync-to-k8s-ciliumendpoint (355)     8s ago         never        0       no error   
	   template-dir-watcher                 never          never        0       no error   
	 Proxy Status:            OK, ip 10.0.1.157, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 91/65535 (0.14%), Flows/s: 7.55   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          0/2 reachable   (2024-07-02T16:09:17Z)
	   Name                   IP              Node        Endpoints
	   k8s1 (localhost)       192.168.56.11   reachable   unreachable
	   k8s2                   192.168.56.12   reachable   unreachable
	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 

16:09:26 STEP: Performing Cilium controllers preflight check
16:09:26 STEP: Performing Cilium status preflight check
16:09:26 STEP: Performing Cilium health check
16:09:26 STEP: Checking whether host EP regenerated
16:09:30 STEP: Performing Cilium service preflight check
16:09:30 STEP: Performing K8s service preflight check
16:09:32 STEP: Waiting for cilium-operator to be ready
16:09:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
16:09:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
16:09:32 STEP: Making sure all endpoints are in ready state
16:09:33 STEP: Launching cilium monitor on "cilium-kz684"
16:09:33 STEP: Creating namespace 202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito
16:09:33 STEP: Deploying demo_ds.yaml in namespace 202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito
16:09:35 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-j4l9h policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000f16200>: {
        msg: "Cannot retrieve cilium pod cilium-j4l9h policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc0016945d0>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-07-02T16:09:45Z====
16:09:45 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
16:09:45 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-hkr4j          0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-rkvvq                   1/1     Running             0          13s     10.0.1.141      k8s1   <none>           <none>
	 202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-v6dkv                   0/1     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-7s22k                       0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-p8pc8                       2/2     Running             0          13s     10.0.1.203      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-m68sn           0/1     Running             0          80s     10.0.0.191      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-2mrkm        1/1     Running             0          80s     10.0.0.238      k8s2   <none>           <none>
	 kube-system                                                       cilium-j4l9h                       1/1     Running             0          79s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-kz684                       1/1     Running             0          79s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-6ff79c886b-fgg8h   1/1     Running             0          79s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-6ff79c886b-gxsww   1/1     Running             0          79s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-7tq8s           1/1     Running             0          30s     10.0.0.131      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m54s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m54s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m54s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-rbfrs                   1/1     Running             0          5m30s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-vgzlb                   1/1     Running             0          2m15s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m54s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-5gf7n                 1/1     Running             0          96s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-qf4jc                 1/1     Running             0          96s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-4xws5               1/1     Running             0          2m13s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-dxhzm               1/1     Running             0          2m13s   192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-j4l9h cilium-kz684]
cmd: kubectl exec -n kube-system cilium-j4l9h -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-aee60eff)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 8/254 allocated from 10.0.0.0/24, IPv6: 8/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       40/40 healthy
	 Proxy Status:            OK, ip 10.0.0.251, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 286/65535 (0.44%), Flows/s: 5.32   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-02T16:09:31Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-j4l9h -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 104        Disabled           Disabled          37091      k8s:app=prometheus                                                                                                               fd02::a5   10.0.0.238   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 131        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::7b   10.0.0.211   ready   
	 332        Disabled           Disabled          15525      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::67   10.0.0.123   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 455        Disabled           Disabled          19404      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::6d   10.0.0.80    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 749        Disabled           Disabled          24637      k8s:app=grafana                                                                                                                  fd02::da   10.0.0.191   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1401       Disabled           Disabled          15769      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::b2   10.0.0.131   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 1825       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 2126       Disabled           Disabled          48591      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::7d   10.0.0.94    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-kz684 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-aee60eff)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.157, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 188/65535 (0.29%), Flows/s: 4.21   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-02T16:09:32Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-kz684 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 102        Disabled           Disabled          19404      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::179   10.0.1.141   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 187        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 226        Disabled           Disabled          15525      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1a3   10.0.1.203   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 355        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::143   10.0.1.158   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
16:10:27 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
16:10:27 STEP: Deleting deployment demo_ds.yaml
16:10:27 STEP: Deleting namespace 202407021609k8sdatapathconfigmonitoraggregationchecksthatmonito
16:10:43 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|6cc12a4c_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//626/artifact/6cc12a4c_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//626/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//626/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_626_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/626/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
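
For reference, the error above comes from the harness's policy-revision probe: after applying the manifest, the test queries each cilium agent pod for its policy repository revision and fails when the agent returns nothing (the trailing space in "cannot get the revision " is where the empty output was interpolated). A minimal shell equivalent of that probe, using the pod name from this run (the jq filter is an assumption about how the helper parses the agent's JSON output, not necessarily the harness's exact code):

    # Ask the agent for its policy repository revision; empty stdout here
    # reproduces the "cannot get the revision " failure seen above.
    kubectl exec -n kube-system cilium-j4l9h -c cilium-agent -- \
      cilium policy get -o json | jq -r '.revision'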

@maintainer-s-little-helper

PR #33633 hit this flake with 91.36% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-w8zlt policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-w8zlt policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc00060cbe0>: {
        msg: "Cannot retrieve cilium pod cilium-w8zlt policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc0003f5a90>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 3
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-mf6s9 cilium-w8zlt]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-cfkl2              false     false
testds-kdn8z                  false     false
grafana-698dc95f6c-qhhxc      false     false
prometheus-669755c8c5-pk9nd   false     false
coredns-85fbf8f7dd-2kctm      false     false
Cilium agent 'cilium-mf6s9': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-w8zlt': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

Click to show.
13:19:19 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
13:19:19 STEP: Ensuring the namespace kube-system exists
13:19:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:19:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:19:19 STEP: Installing Cilium
13:19:20 STEP: Waiting for Cilium to become ready
13:20:04 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-blq48 in namespace kube-system
13:20:04 STEP: Validating if Kubernetes DNS is deployed
13:20:04 STEP: Checking if deployment is ready
13:20:04 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
13:20:04 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
13:20:04 STEP: Waiting for Kubernetes DNS to become operational
13:20:04 STEP: Checking if deployment is ready
13:20:04 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:20:05 STEP: Checking if deployment is ready
13:20:05 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:20:06 STEP: Checking if deployment is ready
13:20:06 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:20:07 STEP: Checking if deployment is ready
13:20:07 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:20:08 STEP: Checking if deployment is ready
13:20:08 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:20:09 STEP: Checking if deployment is ready
13:20:09 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:20:10 STEP: Checking if deployment is ready
13:20:10 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:20:11 STEP: Checking if deployment is ready
13:20:11 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:20:12 STEP: Checking if deployment is ready
13:20:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:20:13 STEP: Checking if deployment is ready
13:20:13 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:20:14 STEP: Checking if deployment is ready
13:20:14 STEP: Checking if kube-dns service is plumbed correctly
13:20:14 STEP: Checking if pods have identity
13:20:14 STEP: Checking if DNS can resolve
13:20:15 STEP: Validating Cilium Installation
13:20:15 STEP: Performing Cilium controllers preflight check
13:20:15 STEP: Performing Cilium status preflight check
13:20:15 STEP: Performing Cilium health check
13:20:15 STEP: Checking whether host EP regenerated
13:20:16 STEP: Performing Cilium service preflight check
13:20:16 STEP: Performing K8s service preflight check
13:20:17 STEP: Waiting for cilium-operator to be ready
13:20:17 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:20:17 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:20:17 STEP: Making sure all endpoints are in ready state
13:20:24 STEP: Launching cilium monitor on "cilium-mf6s9"
13:20:24 STEP: Creating namespace 202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito
13:20:24 STEP: Deploying demo_ds.yaml in namespace 202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito
13:20:25 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-w8zlt policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc00060cbe0>: {
        msg: "Cannot retrieve cilium pod cilium-w8zlt policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc0003f5a90>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-07-09T13:20:36Z====
13:20:36 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
13:20:39 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-vpptf          0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-cfkl2                   1/1     Running             0          15s     10.0.1.77       k8s1   <none>           <none>
	 202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-t7cs8                   0/1     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-kdn8z                       2/2     Running             0          15s     10.0.1.93       k8s1   <none>           <none>
	 202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-zht9w                       0/2     ContainerCreating   0          15s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-qhhxc           0/1     Running             0          81s     10.0.0.18       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-pk9nd        1/1     Running             0          81s     10.0.0.114      k8s2   <none>           <none>
	 kube-system                                                       cilium-mf6s9                       1/1     Running             0          80s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5684d9cf58-hh4wx   1/1     Running             0          80s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5684d9cf58-rlrrt   1/1     Running             0          80s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-w8zlt                       1/1     Running             0          80s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-2kctm           1/1     Running             0          36s     10.0.0.53       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m32s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m32s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m32s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-7zhkm                   1/1     Running             0          5m14s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-vkxjs                   1/1     Running             0          2m23s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m32s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-845gq                 1/1     Running             0          97s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-rv92c                 1/1     Running             0          97s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-j9qxw               1/1     Running             0          2m15s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-tgzbz               1/1     Running             0          2m15s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-mf6s9 cilium-w8zlt]
cmd: kubectl exec -n kube-system cilium-mf6s9 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.22 (v1.22.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-bb17b982)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.33, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 201/65535 (0.31%), Flows/s: 3.92   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-09T13:20:16Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-mf6s9 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 228        Disabled           Disabled          4924       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::14f   10.0.1.93    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 406        Disabled           Disabled          3483       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::13e   10.0.1.77    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 1362       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 1518       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::170   10.0.1.142   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-w8zlt -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.22 (v1.22.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-bb17b982)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 8/254 allocated from 10.0.0.0/24, IPv6: 8/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       40/40 healthy
	 Proxy Status:            OK, ip 10.0.0.72, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 295/65535 (0.45%), Flows/s: 4.98   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-09T13:20:17Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-w8zlt -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 360        Disabled           Disabled          4763       k8s:app=prometheus                                                                                                               fd02::b3   10.0.0.114   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 529        Disabled           Disabled          3483       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::d3   10.0.0.188   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 724        Disabled           Disabled          4924       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::59   10.0.0.253   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 885        Disabled           Disabled          2016       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::60   10.0.0.53    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 1212       Disabled           Disabled          32505      k8s:app=grafana                                                                                                                  fd02::ca   10.0.0.18    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1291       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 2541       Disabled           Disabled          10226      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::b2   10.0.0.5     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 3318       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::27   10.0.0.234   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:21:20 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
13:21:20 STEP: Deleting deployment demo_ds.yaml
13:21:21 STEP: Deleting namespace 202407091320k8sdatapathconfigmonitoraggregationchecksthatmonito
13:21:35 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|c1ae6f9a_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//670/artifact/c1ae6f9a_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//670/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//670/artifact/test_results_Cilium-PR-K8s-1.22-kernel-4.19_670_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19/670/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
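
Both runs show the same pattern: the agent pod is Ready, but its policy API briefly answers with empty output right after startup. A preflight loop that waits until every agent reports a numeric revision before policies are applied would paper over this window; a rough sketch assuming only kubectl, jq, and the agent CLI (the label selector, retry count, and sleep interval are illustrative):

    # Wait until every cilium agent pod returns a numeric policy revision.
    for pod in $(kubectl -n kube-system get pods -l k8s-app=cilium \
                 -o jsonpath='{.items[*].metadata.name}'); do
      for _ in $(seq 1 30); do
        rev=$(kubectl exec -n kube-system "$pod" -c cilium-agent -- \
              cilium policy get -o json 2>/dev/null | jq -r '.revision')
        [ -n "$rev" ] && [ "$rev" != "null" ] && break
        sleep 2
      done
    done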

@maintainer-s-little-helper

PR #33633 hit this flake with 92.04% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-6lnfw policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-6lnfw policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000ba0540>: {
        msg: "Cannot retrieve cilium pod cilium-6lnfw policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc00062d430>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Cilium pods: [cilium-6lnfw cilium-6nn49]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-sqlqq              false     false
testds-xs5d4                  false     false
grafana-698dc95f6c-flqdw      false     false
prometheus-669755c8c5-zn425   false     false
coredns-85fbf8f7dd-p4jwg      false     false
Cilium agent 'cilium-6lnfw': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-6nn49': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

Click to show.
13:33:01 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
13:33:01 STEP: Ensuring the namespace kube-system exists
13:33:02 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:33:02 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:33:02 STEP: Installing Cilium
13:33:02 STEP: Waiting for Cilium to become ready
13:33:47 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-h6dpv in namespace kube-system
13:33:47 STEP: Validating if Kubernetes DNS is deployed
13:33:47 STEP: Checking if deployment is ready
13:33:47 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
13:33:47 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
13:33:47 STEP: Waiting for Kubernetes DNS to become operational
13:33:47 STEP: Checking if deployment is ready
13:33:47 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:33:48 STEP: Checking if deployment is ready
13:33:48 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:33:49 STEP: Checking if deployment is ready
13:33:49 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:33:50 STEP: Checking if deployment is ready
13:33:50 STEP: Checking if kube-dns service is plumbed correctly
13:33:50 STEP: Checking if DNS can resolve
13:33:50 STEP: Checking if pods have identity
13:33:51 STEP: Validating Cilium Installation
13:33:51 STEP: Performing Cilium health check
13:33:51 STEP: Performing Cilium status preflight check
13:33:51 STEP: Checking whether host EP regenerated
13:33:51 STEP: Performing Cilium controllers preflight check
13:33:52 STEP: Performing Cilium service preflight check
13:33:52 STEP: Performing K8s service preflight check
13:33:54 STEP: Waiting for cilium-operator to be ready
13:33:54 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:33:54 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:33:54 STEP: Making sure all endpoints are in ready state
13:33:56 STEP: Launching cilium monitor on "cilium-6nn49"
13:33:56 STEP: Creating namespace 202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito
13:33:56 STEP: Deploying demo_ds.yaml in namespace 202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito
13:33:57 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-6lnfw policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc000ba0540>: {
        msg: "Cannot retrieve cilium pod cilium-6lnfw policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc00062d430>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-07-09T13:34:07Z====
13:34:07 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
13:34:10 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-5924w          0/2     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-d5h7d                   0/1     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-sqlqq                   1/1     Running             0          16s     10.0.1.170      k8s1   <none>           <none>
	 202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-9rcln                       0/2     ContainerCreating   0          16s     <none>          k8s2   <none>           <none>
	 202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-xs5d4                       2/2     Running             0          16s     10.0.1.144      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-flqdw           0/1     Running             0          71s     10.0.0.111      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-zn425        0/1     ContainerCreating   0          71s     <none>          k8s2   <none>           <none>
	 kube-system                                                       cilium-6lnfw                       1/1     Running             0          70s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-6nn49                       1/1     Running             0          70s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5684d9cf58-l2sct   1/1     Running             0          70s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-5684d9cf58-vsqzb   1/1     Running             0          70s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-p4jwg           1/1     Running             0          25s     10.0.0.231      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m47s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m47s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m47s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-4nfls                   1/1     Running             0          5m26s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-ttk4d                   1/1     Running             0          2m8s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m47s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-4gcgb                 1/1     Running             0          87s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-wx5fz                 1/1     Running             0          87s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-jmz2j               1/1     Running             0          2m6s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-nqqct               1/1     Running             0          2m6s    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-6lnfw cilium-6nn49]
cmd: kubectl exec -n kube-system cilium-6lnfw -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-bb17b982)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.203, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 226/65535 (0.34%), Flows/s: 5.21   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-09T13:33:53Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-6lnfw -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 67         Disabled           Disabled          4          reserved:health                                                                                                                  fd02::9e   10.0.0.237   ready   
	 334        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 640        Disabled           Disabled          1633       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::86   10.0.0.141   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1268       Disabled           Disabled          3319       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::43   10.0.0.231   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 3130       Disabled           Disabled          11136      k8s:app=grafana                                                                                                                  fd02::fa   10.0.0.111   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 3504       Disabled           Disabled          33073      k8s:app=prometheus                                                                                                               fd02::64   10.0.0.162   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 3720       Disabled           Disabled          10874      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::6a   10.0.0.45    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 3920       Disabled           Disabled          4778       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::2c   10.0.0.14    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-6nn49 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.17 (v1.13.17-bb17b982)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.28, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 184/65535 (0.28%), Flows/s: 4.70   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-09T13:33:54Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-6nn49 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 191        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 1582       Disabled           Disabled          10874      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1ef   10.0.1.144   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 2846       Disabled           Disabled          1633       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1d6   10.0.1.170   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 3010       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1be   10.0.1.224   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:34:53 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
13:34:53 STEP: Deleting deployment demo_ds.yaml
13:34:54 STEP: Deleting namespace 202407091333k8sdatapathconfigmonitoraggregationchecksthatmonito
13:35:09 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|324eb108_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//639/artifact/324eb108_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//639/artifact/cf143d8d_K8sDatapathConfig_Host_firewall_Check_connectivity_with_IPv6_disabled.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//639/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//639/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_639_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/639/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@pippolo84
Member

Another hit here:

Full Logs: logs_25927356294.zip
Sysdump: cilium-sysdumps (4).zip

@maintainer-s-little-helper
Author

PR #33859 hit this flake with 91.90% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-sv48l policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-sv48l policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0000bc180>: {
        msg: "Cannot retrieve cilium pod cilium-sv48l policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000f66390>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718
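
The `*fmt.wrapError` above is the standard shape produced by `fmt.Errorf` with the `%w` verb. A minimal, self-contained sketch of how an error of this shape is built and unwrapped, using illustrative function names rather than the actual test helpers:

```go
package main

import (
	"errors"
	"fmt"
)

// queryRevision stands in for the agent query that failed; the inner
// error string matches the one captured in the stacktrace above.
func queryRevision() (int, error) {
	return 0, errors.New("cannot get the revision ")
}

// policyRevision wraps the inner error with pod context via %w, which
// yields exactly the *fmt.wrapError structure shown in the failure.
func policyRevision(pod string) (int, error) {
	rev, err := queryRevision()
	if err != nil {
		return 0, fmt.Errorf("Cannot retrieve cilium pod %s policy revision: %w", pod, err)
	}
	return rev, nil
}

func main() {
	_, err := policyRevision("cilium-sv48l")
	fmt.Println(err)                // wrapped message, as in the FAIL line
	fmt.Println(errors.Unwrap(err)) // inner "cannot get the revision " error
}
```

The test then asserts the returned error is nil (Gomega's "Expected ... to be nil" output above), so any transient failure to read the revision fails the whole test.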

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Cilium pods: [cilium-cgvs6 cilium-sv48l]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-wg5pt              false     false
testds-h6rcm                  false     false
grafana-698dc95f6c-9pl9t      false     false
prometheus-669755c8c5-j6cr7   false     false
coredns-85fbf8f7dd-6psx8      false     false
Cilium agent 'cilium-cgvs6': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-sv48l': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

Click to show.
11:16:22 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
11:16:22 STEP: Ensuring the namespace kube-system exists
11:16:22 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
11:16:22 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
11:16:22 STEP: Installing Cilium
11:16:23 STEP: Waiting for Cilium to become ready
11:17:10 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-cjpjl in namespace kube-system
11:17:10 STEP: Validating if Kubernetes DNS is deployed
11:17:10 STEP: Checking if deployment is ready
11:17:10 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
11:17:10 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
11:17:10 STEP: Waiting for Kubernetes DNS to become operational
11:17:10 STEP: Checking if deployment is ready
11:17:10 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:11 STEP: Checking if deployment is ready
11:17:11 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:12 STEP: Checking if deployment is ready
11:17:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:13 STEP: Checking if deployment is ready
11:17:13 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:14 STEP: Checking if deployment is ready
11:17:14 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:15 STEP: Checking if deployment is ready
11:17:15 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:16 STEP: Checking if deployment is ready
11:17:16 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:17 STEP: Checking if deployment is ready
11:17:17 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:18 STEP: Checking if deployment is ready
11:17:18 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:19 STEP: Checking if deployment is ready
11:17:19 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:20 STEP: Checking if deployment is ready
11:17:20 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:21 STEP: Checking if deployment is ready
11:17:21 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:17:22 STEP: Checking if deployment is ready
11:17:22 STEP: Checking if kube-dns service is plumbed correctly
11:17:22 STEP: Checking if pods have identity
11:17:22 STEP: Checking if DNS can resolve
11:17:23 STEP: Validating Cilium Installation
11:17:23 STEP: Performing Cilium controllers preflight check
11:17:23 STEP: Performing Cilium health check
11:17:23 STEP: Checking whether host EP regenerated
11:17:23 STEP: Performing Cilium status preflight check
11:17:24 STEP: Performing Cilium service preflight check
11:17:24 STEP: Performing K8s service preflight check
11:17:25 STEP: Waiting for cilium-operator to be ready
11:17:25 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
11:17:25 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
11:17:25 STEP: Making sure all endpoints are in ready state
11:17:33 STEP: Launching cilium monitor on "cilium-cgvs6"
11:17:33 STEP: Creating namespace 202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito
11:17:33 STEP: Deploying demo_ds.yaml in namespace 202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito
11:17:34 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-sv48l policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0000bc180>: {
        msg: "Cannot retrieve cilium pod cilium-sv48l policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000f66390>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-07-17T11:17:45Z====
11:17:45 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
11:17:49 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-8wnp6          0/2     ContainerCreating   0          17s     <none>          k8s2   <none>           <none>
	 202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-bkssj                   0/1     ContainerCreating   0          17s     <none>          k8s2   <none>           <none>
	 202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-wg5pt                   1/1     Running             0          17s     10.0.1.63       k8s1   <none>           <none>
	 202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-b4k8h                       0/2     ContainerCreating   0          17s     <none>          k8s2   <none>           <none>
	 202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-h6rcm                       2/2     Running             0          17s     10.0.1.28       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-9pl9t           0/1     Running             0          89s     10.0.0.81       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-j6cr7        1/1     Running             0          89s     10.0.0.154      k8s2   <none>           <none>
	 kube-system                                                       cilium-cgvs6                       1/1     Running             0          88s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-6cd7899ff5-75h4h   1/1     Running             0          88s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-6cd7899ff5-8rd4x   1/1     Running             0          88s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-sv48l                       1/1     Running             0          88s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-6psx8           1/1     Running             0          41s     10.0.0.171      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          6m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          6m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          6m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-f4hgn                   1/1     Running             0          2m27s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-zjgqz                   1/1     Running             0          5m39s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          6m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-7xcpv                 1/1     Running             0          105s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-nclpg                 1/1     Running             0          105s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-cjpml               1/1     Running             0          2m25s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-lkbsf               1/1     Running             0          2m25s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-cgvs6 cilium-sv48l]
cmd: kubectl exec -n kube-system cilium-cgvs6 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.18 (v1.13.18-a163d22b)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.200, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 214/65535 (0.33%), Flows/s: 3.76   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-17T11:17:24Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-cgvs6 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 43         Disabled           Disabled          1183       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1b8   10.0.1.28    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 822        Disabled           Disabled          51708      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1de   10.0.1.63    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 1723       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 3058       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1e2   10.0.1.182   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-sv48l -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.18 (v1.13.18-a163d22b)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.36, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 295/65535 (0.45%), Flows/s: 5.36   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-17T11:17:25Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-sv48l -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 148        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 345        Disabled           Disabled          1183       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::b    10.0.0.33    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 462        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::bb   10.0.0.239   ready   
	 1354       Disabled           Disabled          31970      k8s:app=prometheus                                                                                                               fd02::f5   10.0.0.154   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2587       Disabled           Disabled          51708      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::d9   10.0.0.48    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 2655       Disabled           Disabled          16460      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::fd   10.0.0.85    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 2908       Disabled           Disabled          50612      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::66   10.0.0.171   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 3286       Disabled           Disabled          51315      k8s:app=grafana                                                                                                                  fd02::c4   10.0.0.81    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
11:18:31 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
11:18:31 STEP: Deleting deployment demo_ds.yaml
11:18:31 STEP: Deleting namespace 202407171117k8sdatapathconfigmonitoraggregationchecksthatmonito
11:18:46 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|6dacf532_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//645/artifact/6dacf532_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//645/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//645/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_645_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/645/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper
Author

PR #33859 hit this flake with 92.33% similarity:

Click to show.

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-2p4ks policy revision: cannot get the revision 

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-2p4ks policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0009804a0>: {
        msg: "Cannot retrieve cilium pod cilium-2p4ks policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000b205c0>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718
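
The inner "cannot get the revision " error indicates the revision query against the agent returned an empty or non-numeric reply. A hypothetical sketch of such a query, assuming a kubectl exec of the cilium CLI with jsonpath output (the command shape and the helper name are assumptions for illustration, not the exact test code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// ciliumPolicyRevision is a hypothetical helper: it asks a Cilium agent pod
// for its current policy revision. The jsonpath expression is an assumed
// reconstruction of what the test harness runs, not verified source.
func ciliumPolicyRevision(pod string) (int, error) {
	out, err := exec.Command("kubectl", "exec", "-n", "kube-system", pod,
		"-c", "cilium-agent", "--",
		"cilium", "policy", "get", "-o", "jsonpath={.revision}").Output()
	if err != nil {
		return 0, fmt.Errorf("cannot get the revision: %w", err)
	}
	rev, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		// An empty or non-numeric reply (e.g. the agent briefly not
		// reporting a revision) surfaces as this flake's error.
		return 0, fmt.Errorf("cannot get the revision: %w", err)
	}
	return rev, nil
}

func main() {
	rev, err := ciliumPolicyRevision("cilium-2p4ks")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("policy revision:", rev)
}
```

Retrying this query for a few seconds before declaring failure would make the step robust against the agent momentarily returning no revision.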

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Cilium pods: [cilium-2p4ks cilium-r6wh4]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-669755c8c5-lb6jl   false     false
coredns-85fbf8f7dd-4z8ld      false     false
testclient-fc2qj              false     false
testds-b76bb                  false     false
grafana-698dc95f6c-w6ql6      false     false
Cilium agent 'cilium-2p4ks': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-r6wh4': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0


Standard Error

Click to show.
09:05:30 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
09:05:30 STEP: Ensuring the namespace kube-system exists
09:05:30 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
09:05:30 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
09:05:30 STEP: Installing Cilium
09:05:31 STEP: Waiting for Cilium to become ready
09:06:16 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-gnkkr in namespace kube-system
09:06:16 STEP: Validating if Kubernetes DNS is deployed
09:06:16 STEP: Checking if deployment is ready
09:06:16 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
09:06:16 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
09:06:16 STEP: Waiting for Kubernetes DNS to become operational
09:06:16 STEP: Checking if deployment is ready
09:06:16 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:17 STEP: Checking if deployment is ready
09:06:17 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:18 STEP: Checking if deployment is ready
09:06:18 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:19 STEP: Checking if deployment is ready
09:06:19 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:20 STEP: Checking if deployment is ready
09:06:20 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:21 STEP: Checking if deployment is ready
09:06:21 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:22 STEP: Checking if deployment is ready
09:06:22 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:23 STEP: Checking if deployment is ready
09:06:23 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:24 STEP: Checking if deployment is ready
09:06:24 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:25 STEP: Checking if deployment is ready
09:06:25 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:26 STEP: Checking if deployment is ready
09:06:26 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:27 STEP: Checking if deployment is ready
09:06:27 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:28 STEP: Checking if deployment is ready
09:06:28 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:29 STEP: Checking if deployment is ready
09:06:29 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:30 STEP: Checking if deployment is ready
09:06:30 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:31 STEP: Checking if deployment is ready
09:06:31 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:32 STEP: Checking if deployment is ready
09:06:32 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:06:33 STEP: Checking if deployment is ready
09:06:33 STEP: Checking if kube-dns service is plumbed correctly
09:06:33 STEP: Checking if pods have identity
09:06:33 STEP: Checking if DNS can resolve
09:06:34 STEP: Validating Cilium Installation
09:06:34 STEP: Performing Cilium controllers preflight check
09:06:34 STEP: Performing Cilium status preflight check
09:06:34 STEP: Performing Cilium health check
09:06:34 STEP: Checking whether host EP regenerated
09:06:34 STEP: Performing Cilium service preflight check
09:06:34 STEP: Performing K8s service preflight check
09:06:36 STEP: Waiting for cilium-operator to be ready
09:06:36 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
09:06:36 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
09:06:36 STEP: Making sure all endpoints are in ready state
09:06:39 STEP: Launching cilium monitor on "cilium-r6wh4"
09:06:39 STEP: Creating namespace 202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito
09:06:39 STEP: Deploying demo_ds.yaml in namespace 202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito
09:06:41 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-2p4ks policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0009804a0>: {
        msg: "Cannot retrieve cilium pod cilium-2p4ks policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc000b205c0>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-07-18T09:06:51Z====
09:06:51 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
09:06:55 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-r5hvt          0/2     ContainerCreating   0          17s     <none>          k8s2   <none>           <none>
	 202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-fc2qj                   1/1     Running             0          17s     10.0.1.7        k8s1   <none>           <none>
	 202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-nkl8j                   0/1     ContainerCreating   0          17s     <none>          k8s2   <none>           <none>
	 202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-97zzj                       0/2     ContainerCreating   0          17s     <none>          k8s2   <none>           <none>
	 202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-b76bb                       2/2     Running             0          17s     10.0.1.138      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-w6ql6           0/1     Running             0          87s     10.0.0.169      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-lb6jl        1/1     Running             0          87s     10.0.0.170      k8s2   <none>           <none>
	 kube-system                                                       cilium-2p4ks                       1/1     Running             0          86s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-6cd7899ff5-7wq6x   1/1     Running             0          86s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-6cd7899ff5-b9lwx   1/1     Running             0          86s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-r6wh4                       1/1     Running             0          86s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-4z8ld           1/1     Running             0          41s     10.0.0.130      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          6m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          6m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          6m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-b4x6p                   1/1     Running             0          5m36s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-vjzkb                   1/1     Running             0          2m26s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          6m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-nkt7t                 1/1     Running             0          104s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-zfhfp                 1/1     Running             0          104s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-426q7               1/1     Running             0          2m22s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-ppkxp               1/1     Running             0          2m22s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-2p4ks cilium-r6wh4]
cmd: kubectl exec -n kube-system cilium-2p4ks -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.18 (v1.13.18-a163d22b)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.86, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 227/65535 (0.35%), Flows/s: 3.85   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-18T09:06:35Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-2p4ks -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 426        Disabled           Disabled          12136      k8s:app=prometheus                                                                                                               fd02::7    10.0.0.170   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 754        Disabled           Disabled          14644      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::50   10.0.0.130   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 1105       Disabled           Disabled          2569       k8s:app=grafana                                                                                                                  fd02::6f   10.0.0.169   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1522       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 2470       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::e8   10.0.0.171   ready   
	 2529       Disabled           Disabled          54470      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::9a   10.0.0.159   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 3365       Disabled           Disabled          10085      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::b8   10.0.0.11    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 3843       Disabled           Disabled          27633      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::ea   10.0.0.30    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-r6wh4 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.18 (v1.13.18-a163d22b)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.1.196, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 153/65535 (0.23%), Flows/s: 2.23   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-18T09:06:36Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-r6wh4 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 967        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1dc   10.0.1.102   ready   
	 1349       Disabled           Disabled          27633      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1d7   10.0.1.7     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 2319       Disabled           Disabled          54470      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1d9   10.0.1.138   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3597       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
09:07:36 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
09:07:36 STEP: Deleting deployment demo_ds.yaml
09:07:37 STEP: Deleting namespace 202407180906k8sdatapathconfigmonitoraggregationchecksthatmonito
09:07:51 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|35727dec_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//649/artifact/35727dec_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//649/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//649/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_649_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/649/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
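
For anyone triaging this by hand: the failing step is the test helper's initial read of the agent's policy revision, taken from "cilium policy get -o json" before the manifest is applied. A rough manual equivalent, using the pod name from this run and assuming the revision sits in a top-level "revision" field (the jq filter is an illustration, not the framework's exact code):

# Read the agent's current policy revision; an empty or failed read here
# corresponds to the "cannot get the revision " error above.
kubectl -n kube-system exec cilium-tqqd5 -c cilium-agent -- \
  cilium policy get -o json | jq '.revision'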

@maintainer-s-little-helper

PR #33966 hit this flake with 90.53% similarity:


Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-8vb5l policy revision: cannot get the revision 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-8vb5l policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0000ba800>: {
        msg: "Cannot retrieve cilium pod cilium-8vb5l policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc00084a1a0>{
            s: "cannot get the revision ",
        },
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to get node resource
Waiting for k8s node information
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-8vb5l cilium-scl4r]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testds-hknjs                  false     false
grafana-698dc95f6c-46gv6      false     false
prometheus-669755c8c5-kvksn   false     false
coredns-85fbf8f7dd-b85r4      false     false
testclient-hd478              false     false
Cilium agent 'cilium-8vb5l': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-scl4r': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

14:38:34 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
14:38:34 STEP: Ensuring the namespace kube-system exists
14:38:34 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
14:38:34 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
14:38:34 STEP: Installing Cilium
14:38:35 STEP: Waiting for Cilium to become ready
14:39:19 STEP: Restarting unmanaged pods coredns-85fbf8f7dd-pwxz4 in namespace kube-system
14:39:19 STEP: Validating if Kubernetes DNS is deployed
14:39:19 STEP: Checking if deployment is ready
14:39:19 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
14:39:19 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
14:39:19 STEP: Waiting for Kubernetes DNS to become operational
14:39:19 STEP: Checking if deployment is ready
14:39:20 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:39:20 STEP: Checking if deployment is ready
14:39:21 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:39:21 STEP: Checking if deployment is ready
14:39:22 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:39:22 STEP: Checking if deployment is ready
14:39:23 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:39:23 STEP: Checking if deployment is ready
14:39:24 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:39:24 STEP: Checking if deployment is ready
14:39:25 STEP: Checking if kube-dns service is plumbed correctly
14:39:25 STEP: Checking if pods have identity
14:39:25 STEP: Checking if DNS can resolve
14:39:25 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
14:39:25 STEP: Checking if deployment is ready
14:39:26 STEP: Checking if kube-dns service is plumbed correctly
14:39:26 STEP: Checking if pods have identity
14:39:26 STEP: Checking if DNS can resolve
14:39:26 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
14:39:26 STEP: Checking if deployment is ready
14:39:27 STEP: Checking if kube-dns service is plumbed correctly
14:39:27 STEP: Checking if DNS can resolve
14:39:27 STEP: Checking if pods have identity
14:39:27 STEP: Validating Cilium Installation
14:39:27 STEP: Performing Cilium controllers preflight check
14:39:27 STEP: Performing Cilium status preflight check
14:39:27 STEP: Performing Cilium health check
14:39:27 STEP: Checking whether host EP regenerated
14:39:28 STEP: Performing Cilium service preflight check
14:39:28 STEP: Performing K8s service preflight check
14:39:29 STEP: Waiting for cilium-operator to be ready
14:39:30 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
14:39:30 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
14:39:30 STEP: Making sure all endpoints are in ready state
14:39:37 STEP: Launching cilium monitor on "cilium-scl4r"
14:39:37 STEP: Creating namespace 202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito
14:39:37 STEP: Deploying demo_ds.yaml in namespace 202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito
14:39:38 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-8vb5l policy revision: cannot get the revision 
Expected
    <*fmt.wrapError | 0xc0000ba800>: {
        msg: "Cannot retrieve cilium pod cilium-8vb5l policy revision: cannot get the revision ",
        err: <*errors.errorString | 0xc00084a1a0>{
            s: "cannot get the revision ",
        },
    }
to be nil
=== Test Finished at 2024-07-23T14:39:48Z====
14:39:48 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
14:39:49 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-794579c97-rg9ll          0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-hd478                   1/1     Running             0          13s     10.0.1.169      k8s1   <none>           <none>
	 202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-hv9p5                   0/1     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-hknjs                       2/2     Running             0          13s     10.0.1.138      k8s1   <none>           <none>
	 202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-rznzw                       0/2     ContainerCreating   0          13s     <none>          k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-698dc95f6c-46gv6           0/1     Running             0          77s     10.0.0.250      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-669755c8c5-kvksn        1/1     Running             0          77s     10.0.0.153      k8s2   <none>           <none>
	 kube-system                                                       cilium-8vb5l                       1/1     Running             0          76s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-78c9cf99b5-gqwnn   1/1     Running             0          76s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-78c9cf99b5-qwj29   1/1     Running             0          76s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-scl4r                       1/1     Running             0          76s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-85fbf8f7dd-b85r4           1/1     Running             0          32s     10.0.1.52       k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-2qphv                   1/1     Running             0          5m12s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-t9phk                   1/1     Running             0          2m15s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m35s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-65lh9                 1/1     Running             0          94s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-lwv4c                 1/1     Running             0          94s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-d2q7w               1/1     Running             0          2m12s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-xflp4               1/1     Running             0          2m12s   192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-8vb5l cilium-scl4r]
cmd: kubectl exec -n kube-system cilium-8vb5l -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.18 (v1.13.18-9777baea)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       35/35 healthy
	 Proxy Status:            OK, ip 10.0.0.25, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 247/65535 (0.38%), Flows/s: 3.71   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-23T14:39:28Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-8vb5l -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 194        Disabled           Disabled          12898      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::6    10.0.0.31    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 423        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                               ready   
	                                                            reserved:host                                                                                                                                                    
	 479        Disabled           Disabled          54152      k8s:app=prometheus                                                                                                               fd02::19   10.0.0.153   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 701        Disabled           Disabled          10870      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::4d   10.0.0.246   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=test-k8s2                                                                                                                                             
	 1519       Disabled           Disabled          55085      k8s:app=grafana                                                                                                                  fd02::3b   10.0.0.250   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2440       Disabled           Disabled          1994       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::6e   10.0.0.177   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 2763       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::f5   10.0.0.88    ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-scl4r -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.21 (v1.21.14) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.18 (v1.13.18-9777baea)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.25, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 220/65535 (0.34%), Flows/s: 4.84   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-07-23T14:39:29Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-scl4r -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 286        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                                
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            reserved:host                                                                                                                                                     
	 713        Disabled           Disabled          48976      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::1ac   10.0.1.52    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                       
	                                                            k8s:k8s-app=kube-dns                                                                                                                                              
	 948        Disabled           Disabled          1994       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::114   10.0.1.169   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 1726       Disabled           Disabled          12898      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::148   10.0.1.138   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3031       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::161   10.0.1.148   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
14:40:31 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
14:40:31 STEP: Deleting deployment demo_ds.yaml
14:40:31 STEP: Deleting namespace 202407231439k8sdatapathconfigmonitoraggregationchecksthatmonito
14:40:46 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|8b505b5f_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//659/artifact/8b505b5f_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//659/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//659/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_659_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/659/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
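
Note that both occurrences fail on this initial revision read rather than on the policy import itself; after a successful read, the framework effectively waits for each agent's revision to move past its pre-apply value. A hand-rolled sketch of that apply-and-wait loop, with the pod name, timeout, and jq filter as illustrative assumptions rather than the test's literal code:

# Record the pre-apply revision, apply the policy, then poll for a bump.
POD=cilium-8vb5l
rev=$(kubectl -n kube-system exec "$POD" -c cilium-agent -- \
  cilium policy get -o json | jq '.revision')
kubectl apply -f test/k8s/manifests/l3-policy-demo.yaml
# Poll until the agent reports a newer policy revision, giving up after ~30s.
for _ in $(seq 1 30); do
  cur=$(kubectl -n kube-system exec "$POD" -c cilium-agent -- \
    cilium policy get -o json | jq '.revision')
  if [ "${cur:-0}" -gt "$rev" ]; then break; fi
  sleep 1
done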
