
CI: K8sKafkaPolicyTest Kafka Policy Tests KafkaPolicies #21533

Closed
maintainer-s-little-helper bot opened this issue Sep 30, 2022 · 2 comments
Labels: ci/flake, stale

Comments

@maintainer-s-little-helper

Test Name

K8sKafkaPolicyTest Kafka Policy Tests KafkaPolicies

Failure Output

FAIL: L7 policy cannot be imported correctly

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
L7 policy cannot be imported correctly
Expected
    <*errors.errorString | 0xc0003766b0>: {
        s: "Timed out while waiting for policies to be enforced: 4m0s timeout expired",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-4.19/src/github.com/cilium/cilium/test/k8s/kafka_policies.go:181
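
For context, the failing step applies an L7 Kafka CiliumNetworkPolicy and then waits for the Cilium agents to report it as enforced, giving up after 4 minutes. The policy name `kafka-sw-security-policy` and the topics come from the logs below, but the selectors and rule shape in this sketch are assumptions based on Cilium's public Kafka getting-started guide, not the actual manifest in the test tree; the polling loop only emulates the test's Go helper with the `cilium` CLI, and checks presence in the policy repository rather than full per-endpoint enforcement:

```sh
# Hypothetical approximation of the policy the test applies; the real
# kafka-sw-security-policy may differ in selectors and rules.
kubectl apply -f - <<'EOF'
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "kafka-sw-security-policy"
spec:
  endpointSelector:
    matchLabels:
      app: kafka
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: empire-hq
    toPorts:
    - ports:
      - port: "9092"
        protocol: TCP
      rules:
        kafka:
        - role: "produce"
          topic: "empire-announce"
        - role: "produce"
          topic: "deathstar-plans"
EOF

# Rough emulation of the wait that timed out here: poll one agent's policy
# repository until the CNP shows up, with the same 4m0s (240s) budget.
timeout 240 sh -c '
  until kubectl exec -n kube-system cilium-bvnw6 -c cilium-agent -- \
      cilium policy get | grep -q kafka-sw-security-policy; do
    sleep 5
  done'
```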

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
⚠️  Number of "context deadline exceeded" in logs: 7
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 8
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Key allocation attempt failed
Disabling socket-LB tracing as it requires kernel 5.7 or newer
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-bvnw6 cilium-vrh2s]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::kafka-sw-security-policy 
Endpoint Policy Enforcement:
Pod                                    Ingress   Egress
empire-outpost-8888-768db5bcb4-77c2q   false     false
empire-outpost-9999-59f5f845cb-tlthc   false     false
kafka-broker-6c8bb9cf6-8kllm           false     false
coredns-8c79ffd8b-6sh28                false     false
grafana-b96dcb76b-c7shx                false     false
prometheus-5c59d656f5-wxf87            false     false
empire-backup-65bdf9b8bb-xjfpt         false     false
empire-hq-6dc4796877-mn2k8             false     false
⚠️  Cilium agent 'cilium-bvnw6': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 40 Failed 1
Failed controllers:
 controller sync-cnp-policy-status (v2 default/kafka-sw-security-policy) failure 'context deadline exceeded'
Cilium agent 'cilium-vrh2s': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 34 Failed 0
Failed controllers:
 controller sync-cnp-policy-status (v2 default/kafka-sw-security-policy) failure 'context deadline exceeded'
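
The snapshot above is the interesting part: the CNP is loaded on both agents, yet the enforcement summary still shows `false` for every endpoint, and the `sync-cnp-policy-status` controller, which writes enforcement status back to the CiliumNetworkPolicy object through the kube-apiserver, is failing with 'context deadline exceeded'. When triaging a run like this, the following commands (reusing the pod names from this failure) surface the per-endpoint enforcement state and the failing controller's last error:

```sh
# List all controllers, including succeeding ones, to see the failure count
# and last error for sync-cnp-policy-status.
kubectl exec -n kube-system cilium-bvnw6 -c cilium-agent -- \
  cilium status --all-controllers

# Per-endpoint ingress/egress enforcement as the agent sees it.
kubectl exec -n kube-system cilium-bvnw6 -c cilium-agent -- \
  cilium endpoint list

# The status object the failing controller was trying to update.
kubectl describe cnp -n default kafka-sw-security-policy
```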


Standard Error

15:49:18 STEP: Running BeforeAll block for EntireTestsuite K8sKafkaPolicyTest Kafka Policy Tests
15:49:18 STEP: Ensuring the namespace kube-system exists
15:49:18 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
15:49:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
15:49:19 STEP: Installing Cilium
15:49:20 STEP: Waiting for Cilium to become ready
15:49:39 STEP: Validating if Kubernetes DNS is deployed
15:49:39 STEP: Checking if deployment is ready
15:49:39 STEP: Checking if kube-dns service is plumbed correctly
15:49:39 STEP: Checking if DNS can resolve
15:49:39 STEP: Checking if pods have identity
15:49:39 STEP: Kubernetes DNS is not ready: %!s(<nil>)
15:49:39 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
15:49:40 STEP: Waiting for Kubernetes DNS to become operational
15:49:40 STEP: Checking if deployment is ready
15:49:40 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:49:41 STEP: Checking if deployment is ready
15:49:41 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:49:42 STEP: Checking if deployment is ready
15:49:42 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:49:43 STEP: Checking if deployment is ready
15:49:43 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:49:44 STEP: Checking if deployment is ready
15:49:44 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:49:45 STEP: Checking if deployment is ready
15:49:45 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:49:46 STEP: Checking if deployment is ready
15:49:46 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:49:47 STEP: Checking if deployment is ready
15:49:47 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:49:48 STEP: Checking if deployment is ready
15:49:48 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
15:49:49 STEP: Checking if deployment is ready
15:49:49 STEP: Checking if kube-dns service is plumbed correctly
15:49:49 STEP: Checking if DNS can resolve
15:49:49 STEP: Checking if pods have identity
15:49:49 STEP: Validating Cilium Installation
15:49:49 STEP: Performing Cilium controllers preflight check
15:49:49 STEP: Performing Cilium status preflight check
15:49:49 STEP: Performing Cilium health check
15:49:49 STEP: Checking whether host EP regenerated
15:49:50 STEP: Performing Cilium service preflight check
15:49:50 STEP: Performing K8s service preflight check
15:49:52 STEP: Waiting for cilium-operator to be ready
15:49:52 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
15:49:52 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
15:49:52 STEP: WaitforPods(namespace="default", filter="-l zgroup=kafkaTestApp")
15:51:03 STEP: WaitforPods(namespace="default", filter="-l zgroup=kafkaTestApp") => <nil>
15:51:04 STEP: Wait for Kafka broker to be up
15:51:05 STEP: Creating new kafka topic empire-announce
15:51:06 STEP: Creating new kafka topic deathstar-plans
15:51:08 STEP: Waiting for DNS to resolve within pods for kafka-service
15:51:08 STEP: Testing basic Kafka Produce and Consume
15:51:15 STEP: Apply L7 kafka policy and wait
FAIL: L7 policy cannot be imported correctly
Expected
    <*errors.errorString | 0xc0003766b0>: {
        s: "Timed out while waiting for policies to be enforced: 4m0s timeout expired",
    }
to be nil
=== Test Finished at 2022-09-30T15:55:15Z====
15:55:15 STEP: Running JustAfterEach block for EntireTestsuite K8sKafkaPolicyTest Kafka Policy Tests
===================== TEST FAILED =====================
15:55:16 STEP: Running AfterFailed block for EntireTestsuite K8sKafkaPolicyTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                                   READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-b96dcb76b-c7shx                1/1     Running   0          14m     10.0.0.204      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-5c59d656f5-wxf87            1/1     Running   0          14m     10.0.0.228      k8s1   <none>           <none>
	 default             empire-backup-65bdf9b8bb-xjfpt         1/1     Running   0          5m25s   10.0.1.225      k8s2   <none>           <none>
	 default             empire-hq-6dc4796877-mn2k8             1/1     Running   0          5m25s   10.0.0.41       k8s1   <none>           <none>
	 default             empire-outpost-8888-768db5bcb4-77c2q   1/1     Running   0          5m25s   10.0.1.59       k8s2   <none>           <none>
	 default             empire-outpost-9999-59f5f845cb-tlthc   1/1     Running   0          5m25s   10.0.1.19       k8s2   <none>           <none>
	 default             kafka-broker-6c8bb9cf6-8kllm           1/1     Running   0          5m25s   10.0.1.95       k8s2   <none>           <none>
	 kube-system         cilium-bvnw6                           1/1     Running   0          5m57s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-66dfc5464f-wh7xs       1/1     Running   0          5m57s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-66dfc5464f-xwlkw       1/1     Running   0          5m57s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-vrh2s                           1/1     Running   0          5m57s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         coredns-8c79ffd8b-6sh28                1/1     Running   0          5m37s   10.0.1.86       k8s2   <none>           <none>
	 kube-system         etcd-k8s1                              1/1     Running   0          18m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                    1/1     Running   0          18m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1           1/1     Running   0          18m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-vzcpn                       1/1     Running   0          17m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-wldk2                       1/1     Running   0          14m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                    1/1     Running   0          18m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-gdpdt                     1/1     Running   0          14m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-nmbf6                     1/1     Running   0          14m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-h57b6                   1/1     Running   0          14m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-m5vmk                   1/1     Running   0          14m     192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-bvnw6 cilium-vrh2s]
cmd: kubectl exec -n kube-system cilium-bvnw6 -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                            
	 2    10.96.0.1:443        ClusterIP      1 => 192.168.56.11:6443 (active)   
	 3    10.96.0.10:53        ClusterIP      1 => 10.0.1.86:53 (active)         
	 4    10.96.0.10:9153      ClusterIP      1 => 10.0.1.86:9153 (active)       
	 5    10.96.206.45:3000    ClusterIP      1 => 10.0.0.204:3000 (active)      
	 6    10.97.103.130:9090   ClusterIP      1 => 10.0.0.228:9090 (active)      
	 7    10.108.234.220:443   ClusterIP      1 => 192.168.56.11:4244 (active)   
	                                          2 => 192.168.56.12:4244 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-bvnw6 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                        
	 121        Disabled           Disabled          19546      k8s:app=empire-backup                                                        fd02::1f1   10.0.1.225   regenerating   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                               
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                             
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                              
	                                                            k8s:zgroup=kafkaTestApp                                                                                              
	 676        Disabled           Disabled          48589      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   fd02::114   10.0.1.86    ready          
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                             
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                          
	                                                            k8s:k8s-app=kube-dns                                                                                                 
	 739        Enabled            Enabled           2715       k8s:app=kafka                                                                fd02::119   10.0.1.95    ready          
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                               
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                             
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                              
	                                                            k8s:zgroup=kafkaTestApp                                                                                              
	 935        Disabled           Disabled          47519      k8s:app=empire-outpost                                                       fd02::15a   10.0.1.19    ready          
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                               
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                             
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                              
	                                                            k8s:outpostid=9999                                                                                                   
	                                                            k8s:zgroup=kafkaTestApp                                                                                              
	 1889       Disabled           Disabled          4          reserved:health                                                              fd02::14d   10.0.1.182   ready          
	 3408       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                            ready          
	                                                            reserved:host                                                                                                        
	 3457       Disabled           Disabled          26657      k8s:app=empire-outpost                                                       fd02::167   10.0.1.59    ready          
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                               
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                             
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                              
	                                                            k8s:outpostid=8888                                                                                                   
	                                                            k8s:zgroup=kafkaTestApp                                                                                              
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-vrh2s -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                            
	 1    10.96.0.1:443        ClusterIP      1 => 192.168.56.11:6443 (active)   
	 3    10.96.0.10:53        ClusterIP      1 => 10.0.1.86:53 (active)         
	 4    10.96.0.10:9153      ClusterIP      1 => 10.0.1.86:9153 (active)       
	 5    10.96.206.45:3000    ClusterIP      1 => 10.0.0.204:3000 (active)      
	 6    10.97.103.130:9090   ClusterIP      1 => 10.0.0.228:9090 (active)      
	 7    10.108.234.220:443   ClusterIP      1 => 192.168.56.11:4244 (active)   
	                                          2 => 192.168.56.12:4244 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-vrh2s -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                             
	 204        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                 ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                          
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                        
	                                                            reserved:host                                                                                                      
	 802        Disabled           Disabled          44753      k8s:app=grafana                                                                    fd02::29   10.0.0.204   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 857        Disabled           Disabled          26872      k8s:app=prometheus                                                                 fd02::2    10.0.0.228   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                             
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 1045       Disabled           Disabled          4          reserved:health                                                                    fd02::71   10.0.0.249   ready   
	 2239       Disabled           Disabled          65461      k8s:app=empire-hq                                                                  fd02::1    10.0.0.41    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                             
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=kafkaTestApp                                                                                            
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
15:55:31 STEP: Running AfterEach for block EntireTestsuite K8sKafkaPolicyTest Kafka Policy Tests
15:55:31 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|75008753_K8sKafkaPolicyTest_Kafka_Policy_Tests_KafkaPolicies.zip]]
15:55:34 STEP: Running AfterAll block for EntireTestsuite K8sKafkaPolicyTest
15:55:34 STEP: Removing Cilium installation using generated helm manifest


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-4.19//481/artifact/75008753_K8sKafkaPolicyTest_Kafka_Policy_Tests_KafkaPolicies.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-4.19//481/artifact/dc366d7d_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-4.19//481/artifact/test_results_Cilium-PR-K8s-1.24-kernel-4.19_481_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-4.19/481/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

maintainer-s-little-helper bot added the ci/flake label on Sep 30, 2022
@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

github-actions bot added the stale label on Nov 30, 2022
@github-actions

This issue has not seen any activity since it was marked stale.
Closing.

github-actions bot closed this as not planned on Dec 15, 2022