
CI: K8sKafkaPolicyTest Kafka Policy Tests KafkaPolicies #11013

Closed
pchaigno opened this issue Apr 16, 2020 · 0 comments · Fixed by #11020
Assignees: aanm
Labels: area/CI (Continuous Integration testing issue or flake), kind/bug (This is a bug in the Cilium logic.), priority/high (This is considered vital to an upcoming release.)

Comments

@pchaigno (Member) commented:

The logs have the actual panic backtrace:

Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 498 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1f625a0, 0x39a94d0)
	/go/src/github.com/cilium/cilium/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/cilium/cilium/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x1f625a0, 0x39a94d0)
	/usr/local/go/src/runtime/panic.go:967 +0x166
fmt.(*buffer).writeString(...)
	/usr/local/go/src/fmt/print.go:82
fmt.(*fmt).padString(0xc001254e10, 0x0, 0x30)
	/usr/local/go/src/fmt/format.go:110 +0x8c
fmt.(*fmt).fmtS(0xc001254e10, 0x0, 0x30)
	/usr/local/go/src/fmt/format.go:359 +0x61
fmt.(*pp).fmtString(0xc001254dd0, 0x0, 0x30, 0x76)
	/usr/local/go/src/fmt/print.go:447 +0x131
fmt.(*pp).printValue(0xc001254dd0, 0x1e81940, 0xc0012abf80, 0x98, 0xc000000076, 0x1)
	/usr/local/go/src/fmt/print.go:761 +0x2153
fmt.(*pp).printValue(0xc001254dd0, 0x1f41ea0, 0xc001343fb0, 0x15, 0xc000000076, 0x0)
	/usr/local/go/src/fmt/print.go:782 +0xddb
fmt.(*pp).printArg(0xc001254dd0, 0x1f41ea0, 0xc001343fb0, 0x76)
	/usr/local/go/src/fmt/print.go:716 +0x292
fmt.(*pp).doPrint(0xc001254dd0, 0xc000a08ba0, 0x1, 0x1)
	/usr/local/go/src/fmt/print.go:1161 +0xf8
fmt.Sprint(0xc000a08ba0, 0x1, 0x1, 0x907, 0x0)
	/usr/local/go/src/fmt/print.go:249 +0x52
github.com/sirupsen/logrus.(*TextFormatter).appendValue(0xc0004ca640, 0xc000e98c90, 0x1f41ea0, 0xc001343fb0)
	/go/src/github.com/cilium/cilium/vendor/github.com/sirupsen/logrus/text_formatter.go:287 +0x165
github.com/sirupsen/logrus.(*TextFormatter).appendKeyValue(0xc0004ca640, 0xc000e98c90, 0x22d93dd, 0xf, 0x1f41ea0, 0xc001343fb0)
	/go/src/github.com/cilium/cilium/vendor/github.com/sirupsen/logrus/text_formatter.go:281 +0x8f
github.com/sirupsen/logrus.(*TextFormatter).Format(0xc0004ca640, 0xc0001467e0, 0x4741a2, 0x1, 0x0, 0x456cd9, 0xc000a09200)
	/go/src/github.com/cilium/cilium/vendor/github.com/sirupsen/logrus/text_formatter.go:198 +0xa2d
github.com/sirupsen/logrus.(*Entry).write(0xc0001467e0)
	/go/src/github.com/cilium/cilium/vendor/github.com/sirupsen/logrus/entry.go:255 +0x7c
github.com/sirupsen/logrus.Entry.log(0xc0000d7ea0, 0xc0010f4cc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cilium/cilium/vendor/github.com/sirupsen/logrus/entry.go:231 +0x19e
github.com/sirupsen/logrus.(*Entry).Log(0xc000146770, 0xc000000005, 0xc000a09490, 0x1, 0x1)
	/go/src/github.com/cilium/cilium/vendor/github.com/sirupsen/logrus/entry.go:268 +0xeb
github.com/sirupsen/logrus.(*Entry).Debug(...)
	/go/src/github.com/cilium/cilium/vendor/github.com/sirupsen/logrus/entry.go:277
github.com/cilium/cilium/pkg/k8s/watchers.(*K8sWatcher).updateCiliumNetworkPolicyV2(0xc000185080, 0x26d6de0, 0x39d2250, 0x0, 0x0, 0xc000a7aa28, 0xc00088e3d0, 0xc000a773e0, 0x20)
	/go/src/github.com/cilium/cilium/pkg/k8s/watchers/cilium_network_policy.go:285 +0xb5a
github.com/cilium/cilium/pkg/k8s/watchers.(*K8sWatcher).ciliumNetworkPoliciesInit.func2(0x227ef00, 0xc000a7aa28, 0x227ef00, 0xc00088e3d0)
	/go/src/github.com/cilium/cilium/pkg/k8s/watchers/cilium_network_policy.go:144 +0x1d0
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate(...)
	/go/src/github.com/cilium/cilium/vendor/k8s.io/client-go/tools/cache/controller.go:225
github.com/cilium/cilium/pkg/k8s/informer.NewInformerWithStore.func1(0x1f95a00, 0xc0011b4fc0, 0x1, 0xc0011b4fc0)
	/go/src/github.com/cilium/cilium/pkg/k8s/informer/informer.go:77 +0x2b5
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc00081a210, 0xc00073a210, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/cilium/cilium/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:492 +0x235
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc00038b900)
	/go/src/github.com/cilium/cilium/vendor/k8s.io/client-go/tools/cache/controller.go:173 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0008f9f80)
	/go/src/github.com/cilium/cilium/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a09f80, 0x26c1c00, 0xc00073a870, 0xc00000f901, 0xc000094540)
	/go/src/github.com/cilium/cilium/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008f9f80, 0x3b9aca00, 0x0, 0xc00094c501, 0xc000094540)
	/go/src/github.com/cilium/cilium/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0xe2
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/src/github.com/cilium/cilium/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*controller).Run(0xc00038b900, 0xc000094540)
	/go/src/github.com/cilium/cilium/vendor/k8s.io/client-go/tools/cache/controller.go:145 +0x2c4
created by github.com/cilium/cilium/pkg/k8s/watchers.(*K8sWatcher).ciliumNetworkPoliciesInit
	/go/src/github.com/cilium/cilium/pkg/k8s/watchers/cilium_network_policy.go:175 +0x497

Stacktrace

Found a "panic:" in Cilium Logs

Standard output

⚠️  Found a "panic:" in logs
⚠️  Found a "panic:" in logs
⚠️  Found a "panic:" in logs
⚠️  Found a "panic:" in logs
Number of "context deadline exceeded" in logs: 0
⚠️  Number of "level=error" in logs: 64
⚠️  Number of "level=warning" in logs: 81
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 5 errors/warnings:
\t/go/src/github.com/cilium/cilium/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
\t/go/src/github.com/cilium/cilium/vendor/github.com/sirupsen/logrus/text_formatter.go:281 +0x8f
\t/go/src/github.com/cilium/cilium/vendor/k8s.io/client-go/tools/cache/controller.go:145 +0x2c4
\t/go/src/github.com/cilium/cilium/pkg/k8s/watchers/cilium_network_policy.go:175 +0x497
fmt.(*buffer).writeString(...)
Cilium pods: [cilium-8q9gm cilium-t7nq5]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::kafka-sw-security-policy 
Endpoint Policy Enforcement:
Pod                                    Ingress   Egress
empire-hq-7bc44dbf44-j6zk7                       
empire-outpost-8888-55d467647c-5mwx5             
empire-outpost-9999-9fd89ddbf-rxnsn              
kafka-broker-74c9f96688-tfrjk                    
coredns-767d4c6dd7-7sncc                         
empire-backup-7c58dffd99-dnljx                   
Cilium agent 'cilium-8q9gm': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 34 Failed 0
Cilium agent 'cilium-t7nq5': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 18 Failed 0

Standard error

STEP: Installing Cilium
STEP: Installing DNS Deployment
STEP: Restarting DNS Pods
STEP: Performing Cilium preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium controllers preflight check
STEP: Performing Cilium health check
STEP: Performing Cilium service preflight check
STEP: Performing K8s service preflight check
STEP: Waiting for cilium-operator to be ready
STEP: Waiting for kube-dns to be ready
STEP: Running kube-dns preflight check
STEP: Performing K8s service preflight check
STEP: Wait for Kafka broker to be up
STEP: Creating new kafka topic empire-announce
STEP: Creating new kafka topic deathstar-plans
STEP: Waiting for DNS to resolve within pods for kafka-service
STEP: Testing basic Kafka Produce and Consume
STEP: Apply L7 kafka policy and wait
STEP: Testing Kafka L7 policy enforcement status
=== Test Finished at 2020-04-15T23:57:32Z====
===================== TEST FAILED =====================
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 default       empire-backup-7c58dffd99-dnljx         1/1     Running   0          2m49s   10.10.1.241     k8s2   <none>           <none>
	 default       empire-hq-7bc44dbf44-j6zk7             1/1     Running   0          2m49s   10.10.0.55      k8s1   <none>           <none>
	 default       empire-outpost-8888-55d467647c-5mwx5   1/1     Running   0          2m49s   10.10.1.91      k8s2   <none>           <none>
	 default       empire-outpost-9999-9fd89ddbf-rxnsn    1/1     Running   0          2m49s   10.10.1.162     k8s2   <none>           <none>
	 default       kafka-broker-74c9f96688-tfrjk          1/1     Running   0          2m49s   10.10.1.98      k8s2   <none>           <none>
	 kube-system   cilium-8q9gm                           1/1     Running   0          3m40s   192.168.36.12   k8s2   <none>           <none>
	 kube-system   cilium-operator-c7fd44fcc-jfttj        1/1     Running   0          3m40s   192.168.36.12   k8s2   <none>           <none>
	 kube-system   cilium-t7nq5                           1/1     Running   0          3m41s   192.168.36.11   k8s1   <none>           <none>
	 kube-system   coredns-767d4c6dd7-7sncc               1/1     Running   0          3m29s   10.10.1.120     k8s2   <none>           <none>
	 kube-system   etcd-k8s1                              1/1     Running   0          80m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   kube-apiserver-k8s1                    1/1     Running   0          80m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   kube-controller-manager-k8s1           1/1     Running   0          80m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   kube-proxy-2zbtc                       1/1     Running   0          80m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   kube-proxy-x8r7q                       1/1     Running   0          77m     192.168.36.12   k8s2   <none>           <none>
	 kube-system   kube-scheduler-k8s1                    1/1     Running   0          80m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   log-gatherer-95947                     1/1     Running   0          75m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   log-gatherer-dctvn                     1/1     Running   0          75m     192.168.36.12   k8s2   <none>           <none>
	 kube-system   registry-adder-qgbxd                   1/1     Running   0          77m     192.168.36.12   k8s2   <none>           <none>
	 kube-system   registry-adder-tcbg5                   1/1     Running   0          77m     192.168.36.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-8q9gm cilium-t7nq5]
cmd: kubectl exec -n kube-system cilium-8q9gm -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.96.0.1:443        ClusterIP      1 => 192.168.36.11:6443   
	 2    10.96.0.10:9153      ClusterIP      1 => 10.10.1.120:9153     
	 3    10.96.0.10:53        ClusterIP      1 => 10.10.1.120:53       
	 16   10.103.151.92:2379   ClusterIP                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-8q9gm -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6                   IPv4          STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                         
	 62         Disabled           Disabled          17065      k8s:io.cilium.k8s.policy.cluster=default          f00d::a0a:100:0:5bd8   10.10.1.120   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                    
	                                                            k8s:k8s-app=kube-dns                                                                           
	 398        Disabled           Disabled          4          reserved:health                                   f00d::a0a:100:0:f312   10.10.1.187   ready   
	 437        Disabled           Disabled          29725      k8s:app=empire-outpost                            f00d::a0a:100:0:8faf   10.10.1.91    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                       
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                
	                                                            k8s:io.kubernetes.pod.namespace=default                                                        
	                                                            k8s:outpostid=8888                                                                             
	                                                            k8s:zgroup=kafkaTestApp                                                                        
	 1471       Disabled           Disabled          15725      k8s:app=empire-outpost                            f00d::a0a:100:0:2410   10.10.1.162   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                       
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                
	                                                            k8s:io.kubernetes.pod.namespace=default                                                        
	                                                            k8s:outpostid=9999                                                                             
	                                                            k8s:zgroup=kafkaTestApp                                                                        
	 2005       Enabled            Enabled           3196       k8s:app=kafka                                     f00d::a0a:100:0:82c9   10.10.1.98    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                       
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                
	                                                            k8s:io.kubernetes.pod.namespace=default                                                        
	                                                            k8s:zgroup=kafkaTestApp                                                                        
	 2246       Disabled           Disabled          9920       k8s:app=empire-backup                             f00d::a0a:100:0:9b17   10.10.1.241   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                       
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                
	                                                            k8s:io.kubernetes.pod.namespace=default                                                        
	                                                            k8s:zgroup=kafkaTestApp                                                                        
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-t7nq5 -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.96.0.1:443        ClusterIP      1 => 192.168.36.11:6443   
	 2    10.96.0.10:53        ClusterIP      1 => 10.10.1.120:53       
	 3    10.96.0.10:9153      ClusterIP      1 => 10.10.1.120:9153     
	 16   10.103.151.92:2379   ClusterIP                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-t7nq5 -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6                 IPv4          STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                       
	 1057       Disabled           Disabled          7990       k8s:app=empire-hq                                 f00d::a0a:0:0:dee1   10.10.0.55    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                              
	                                                            k8s:io.kubernetes.pod.namespace=default                                                      
	                                                            k8s:zgroup=kafkaTestApp                                                                      
	 2913       Disabled           Disabled          4          reserved:health                                   f00d::a0a:0:0:11c6   10.10.0.123   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================

test_results_Cilium-PR-Ginkgo-Tests-Kernel_502_BDD-Test-PR.zip

@pchaigno pchaigno added kind/bug This is a bug in the Cilium logic. area/CI Continuous Integration testing issue or flake needs/triage This issue requires triaging to establish severity and next steps. labels Apr 16, 2020
@aanm aanm self-assigned this Apr 16, 2020
@aanm aanm added priority/high This is considered vital to an upcoming release. and removed needs/triage This issue requires triaging to establish severity and next steps. labels Apr 16, 2020