
CI: K8sDatapathConfig Wireguard encryption Pod2pod is encrypted in tunneling mode #15974

Closed
qmonnet opened this issue May 1, 2021 · 0 comments · Fixed by #16011
Assignees
@gandro
Labels
area/CI Continuous Integration testing issue or flake area/encryption Impacts encryption support such as IPSec, WireGuard, or kTLS. ci/flake This is a known failure that occurs in the tree. Please investigate me!

Comments


qmonnet commented May 1, 2021

CI failure - Wireguard encryption Pod2pod is encrypted in tunneling mode

Context

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Expected
    <string>: 15:57:51.662652 IP 10.0.0.1.45268 > 10.0.1.52.80: Flags [S], seq 1992751319, win 64860, options [mss 1410,sackOK,TS val 809699386 ecr 0,nop,wscale 7], length 0
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on cilium_vxlan, link-type EN10MB (Ethernet), capture size 262144 bytes
    1 packet captured
    1 packet received by filter
    0 packets dropped by kernel
    
not to contain substring
    <string>: 1 packet captured
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:582
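The assertion in `test/k8sT/DatapathConfiguration.go` boils down to a check on the tcpdump summary: with WireGuard enabled, pod-to-pod traffic should leave the node via `cilium_wg0`, so a capture on `cilium_vxlan` must report zero packets. A minimal local sketch of that pass/fail criterion, reusing the summary lines from the failure above (this is not the suite's actual code, just the criterion it applies):

```shell
#!/bin/sh
# Sketch of the test's pass/fail criterion: the tcpdump summary from the
# capture on cilium_vxlan must report zero packets captured. The sample
# summary below is copied from the failure output in this issue.
tcpdump_summary="1 packet captured
1 packet received by filter
0 packets dropped by kernel"

# Extract the packet count from the "... packet(s) captured" summary line.
captured=$(printf '%s\n' "$tcpdump_summary" | awk '/packet(s)? captured/ {print $1}')
captured=${captured:-0}

if [ "$captured" -eq 0 ]; then
  echo "PASS: no cleartext packets on cilium_vxlan"
else
  echo "FAIL: $captured cleartext packet(s) seen on cilium_vxlan"
fi
```

With the sample summary from this failure, the check takes the FAIL branch, matching the Ginkgo `not to contain substring "1 packet captured"` expectation above.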

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-8zjzd cilium-mvlfq]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
test-k8s2-5b756fd6c5-g9n2t             
testclient-dq6mn                       
testclient-f67wf                       
testds-46x4q                           
testds-7tjvz                           
coredns-5495c8f48d-rjxkn               
Cilium agent 'cilium-8zjzd': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 29 Failed 0
Cilium agent 'cilium-mvlfq': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0

Standard Error

15:56:21 STEP: Installing Cilium
15:56:22 STEP: Waiting for Cilium to become ready
15:57:29 STEP: Validating if Kubernetes DNS is deployed
15:57:29 STEP: Checking if deployment is ready
15:57:29 STEP: Checking if kube-dns service is plumbed correctly
15:57:29 STEP: Checking if pods have identity
15:57:29 STEP: Checking if DNS can resolve
15:57:33 STEP: Kubernetes DNS is up and operational
15:57:33 STEP: Validating Cilium Installation
15:57:33 STEP: Performing Cilium controllers preflight check
15:57:33 STEP: Performing Cilium health check
15:57:33 STEP: Performing Cilium status preflight check
15:57:38 STEP: Performing Cilium service preflight check
15:57:38 STEP: Performing K8s service preflight check
15:57:38 STEP: Waiting for cilium-operator to be ready
15:57:38 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
15:57:38 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
15:57:38 STEP: Making sure all endpoints are in ready state
15:57:39 STEP: Creating namespace 202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp
15:57:39 STEP: Deploying demo_ds.yaml in namespace 202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp
15:57:40 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
15:57:40 STEP: WaitforNPods(namespace="202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp", filter="")
15:57:50 STEP: WaitforNPods(namespace="202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp", filter="") => <nil>
15:57:50 STEP: WaitforPods(namespace="202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp", filter="-l zgroup=testDSClient")
15:57:50 STEP: WaitforPods(namespace="202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp", filter="-l zgroup=testDSClient") => <nil>
15:57:50 STEP: WaitforPods(namespace="202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp", filter="-l zgroup=testDS")
15:57:50 STEP: WaitforPods(namespace="202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp", filter="-l zgroup=testDS") => <nil>
FAIL: Expected
    <string>: 15:57:51.662652 IP 10.0.0.1.45268 > 10.0.1.52.80: Flags [S], seq 1992751319, win 64860, options [mss 1410,sackOK,TS val 809699386 ecr 0,nop,wscale 7], length 0
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on cilium_vxlan, link-type EN10MB (Ethernet), capture size 262144 bytes
    1 packet captured
    1 packet received by filter
    0 packets dropped by kernel
    
not to contain substring
    <string>: 1 packet captured
=== Test Finished at 2021-04-30T15:57:52Z====
15:57:52 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
15:57:53 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
	 202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp   test-k8s2-5b756fd6c5-g9n2t         2/2     Running   0          18s   10.0.1.253      k8s2   <none>           <none>
	 202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp   testclient-dq6mn                   1/1     Running   0          18s   10.0.0.1        k8s1   <none>           <none>
	 202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp   testclient-f67wf                   1/1     Running   0          18s   10.0.1.15       k8s2   <none>           <none>
	 202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp   testds-46x4q                       2/2     Running   0          18s   10.0.1.52       k8s2   <none>           <none>
	 202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp   testds-7tjvz                       2/2     Running   0          18s   10.0.0.85       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-7fd557d749-phg65           0/1     Running   0          92m   10.0.0.82       k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-d87f8f984-9dgw2         1/1     Running   0          92m   10.0.0.50       k8s1   <none>           <none>
	 kube-system                                                       cilium-8zjzd                       1/1     Running   0          95s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-mvlfq                       1/1     Running   0          95s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-75bb5656fc-gbqpk   1/1     Running   0          95s   192.168.36.13   k8s3   <none>           <none>
	 kube-system                                                       cilium-operator-75bb5656fc-mg22t   1/1     Running   0          95s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-5495c8f48d-rjxkn           1/1     Running   0          83m   10.0.1.206      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          95m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          95m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          95m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          95m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-cf579                 1/1     Running   0          92m   192.168.36.13   k8s3   <none>           <none>
	 kube-system                                                       log-gatherer-l46jt                 1/1     Running   0          92m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-scwvw                 1/1     Running   0          92m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-c6sll               1/1     Running   0          93m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-fnp4x               1/1     Running   0          93m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-jsx6g               1/1     Running   0          93m   192.168.36.13   k8s3   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-8zjzd cilium-mvlfq]
cmd: kubectl exec -n kube-system cilium-8zjzd -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.16 (v1.16.15) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.36.11 fd04::11 (Direct Routing)]
	 Cilium:                 Ok   1.10.90 (v1.10.90-cef4ec5)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 BandwidthManager:       EDT with BPF   [enp0s3, enp0s8]
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      29/29 healthy
	 Proxy Status:           No managed proxy redirect
	 Hubble:                 Ok              Current/Max Flows: 602/4095 (14.70%), Flows/s: 4.98   Metrics: Disabled
	 Encryption:             Wireguard       [cilium_wg0 (Pubkey: b7uG8vpe51Yimk9p+cfqFBpOuAUNEmpuqsuxvaXpHg0=, Port: 51871, Peers: 1)]
	 Cluster health:         2/2 reachable   (2021-04-30T15:57:34Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-8zjzd -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                           
	 380        Disabled           Disabled          4          reserved:health                                                                                   fd02::ea   10.0.0.5    ready   
	 1603       Disabled           Disabled          3556       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::db   10.0.0.85   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp                                  
	                                                            k8s:zgroup=testDS                                                                                                                
	 3041       Disabled           Disabled          42160      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::bb   10.0.0.1    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp                                  
	                                                            k8s:zgroup=testDSClient                                                                                                          
	 3436       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                               
	                                                            reserved:host                                                                                                                    
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-mvlfq -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.16 (v1.16.15) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.36.12 fd04::12 (Direct Routing)]
	 Cilium:                 Ok   1.10.90 (v1.10.90-cef4ec5)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120
	 BandwidthManager:       EDT with BPF   [enp0s3, enp0s8]
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      38/38 healthy
	 Proxy Status:           No managed proxy redirect
	 Hubble:                 Ok              Current/Max Flows: 576/4095 (14.07%), Flows/s: 5.97   Metrics: Disabled
	 Encryption:             Wireguard       [cilium_wg0 (Pubkey: ZpCPLuz9GZjN4FLj1aCmA7dPsrZgvrKutWBr1hnPRUo=, Port: 51871, Peers: 1)]
	 Cluster health:         2/2 reachable   (2021-04-30T15:57:36Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-mvlfq -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 222        Disabled           Disabled          42160      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1bd   10.0.1.15    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 759        Disabled           Disabled          5282       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::139   10.0.1.206   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                                               
	 1705       Disabled           Disabled          3556       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::171   10.0.1.52    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 1861       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                 ready   
	                                                            reserved:host                                                                                                                      
	 3493       Disabled           Disabled          4          reserved:health                                                                                   fd02::103   10.0.1.20    ready   
	 3940       Disabled           Disabled          5563       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::182   10.0.1.253   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp                                    
	                                                            k8s:zgroup=test-k8s2                                                                                                               
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
15:58:29 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
15:58:29 STEP: Deleting deployment demo_ds.yaml
15:58:30 STEP: Deleting namespace 202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp
15:58:30 STEP: Deleting namespace 202104301557k8sdatapathconfigwireguardencryptionpod2podisencryp
15:58:42 STEP: Running AfterEach for block EntireTestsuite

/Cc @brb

@qmonnet qmonnet added area/CI Continuous Integration testing issue or flake area/encryption Impacts encryption support such as IPSec, WireGuard, or kTLS. ci/flake This is a known failure that occurs in the tree. Please investigate me! labels May 1, 2021
@gandro gandro self-assigned this May 4, 2021