CI: K8sFQDNTest Validate that multiple specs are working correctly #21177

Closed
maintainer-s-little-helper bot opened this issue Sep 1, 2022 · 1 comment
Labels
ci/flake This is a known failure that occurs in the tree. Please investigate me!

Comments

@maintainer-s-little-helper

Test Name

K8sFQDNTest Validate that multiple specs are working correctly

Failure Output

FAIL: failed due to BeforeAll failure

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:465
failed due to BeforeAll failure
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:647

Standard Output

Cilium pods: [cilium-6hd4t cilium-dksfw]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
grafana-7fd557d749-2ld27     false     false
prometheus-d87f8f984-njpvf   false     false
coredns-8cfc78c54-bdd48      false     false
Cilium agent 'cilium-6hd4t': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 18 Failed 0
Cilium agent 'cilium-dksfw': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
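Triage note: the status line above is only a one-line summary. A fuller view can be pulled from each agent with cilium status --verbose; a minimal sketch, reusing the pod names from this run (these commands are illustrative and not part of the captured CI output):

# Hypothetical follow-up, not part of the captured CI output.
kubectl exec -n kube-system cilium-6hd4t -c cilium-agent -- cilium status --verbose
kubectl exec -n kube-system cilium-dksfw -c cilium-agent -- cilium status --verbose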


Standard Error

FAIL: failed due to BeforeAll failure
===================== TEST FAILED =====================
06:10:03 STEP: Running AfterFailed block for EntireTestsuite K8sFQDNTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS             RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-7fd557d749-2ld27           0/1     Running            0          63s     10.0.1.72       k8s2     <none>           <none>
	 cilium-monitoring   prometheus-d87f8f984-njpvf         1/1     Running            0          63s     10.0.1.185      k8s2     <none>           <none>
	 kube-system         cilium-6hd4t                       1/1     Running            0          60s     192.168.56.11   k8s1     <none>           <none>
	 kube-system         cilium-dksfw                       1/1     Running            0          60s     192.168.56.12   k8s2     <none>           <none>
	 kube-system         cilium-operator-59f9fb7675-57p6x   1/1     Running            0          60s     192.168.56.11   k8s1     <none>           <none>
	 kube-system         cilium-operator-59f9fb7675-psrxf   1/1     Running            1          60s     192.168.56.12   k8s2     <none>           <none>
	 kube-system         coredns-8cfc78c54-f6pwq            0/1     Pending            0          23s     <none>          <none>   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running            0          3m44s   192.168.56.11   k8s1     <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running            0          3m37s   192.168.56.11   k8s1     <none>           <none>
	 kube-system         kube-controller-manager-k8s1       0/1     CrashLoopBackOff   2          4m19s   192.168.56.11   k8s1     <none>           <none>
	 kube-system         kube-proxy-f2f7h                   1/1     Running            0          100s    192.168.56.12   k8s2     <none>           <none>
	 kube-system         kube-proxy-lcbw9                   1/1     Running            0          2m31s   192.168.56.11   k8s1     <none>           <none>
	 kube-system         kube-scheduler-k8s1                0/1     Error              2          4m19s   192.168.56.11   k8s1     <none>           <none>
	 kube-system         log-gatherer-q45zm                 1/1     Running            0          66s     192.168.56.11   k8s1     <none>           <none>
	 kube-system         log-gatherer-sh8mb                 1/1     Running            0          66s     192.168.56.12   k8s2     <none>           <none>
	 kube-system         registry-adder-j7r47               1/1     Running            0          98s     192.168.56.12   k8s2     <none>           <none>
	 kube-system         registry-adder-q9q62               1/1     Running            0          98s     192.168.56.11   k8s1     <none>           <none>
	 
Stderr:
 	 

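Triage note: the listing above shows kube-controller-manager-k8s1 in CrashLoopBackOff, kube-scheduler-k8s1 in Error, and a coredns replica stuck in Pending, which points at an unhealthy control plane on k8s1 rather than a Cilium regression and would explain the BeforeAll setup failure. A minimal sketch of follow-up checks, assuming the pod names from this run (not part of the captured CI output):

# Hypothetical follow-up, reusing pod names from the listing above.
kubectl -n kube-system logs kube-controller-manager-k8s1 --previous
kubectl -n kube-system logs kube-scheduler-k8s1 --previous
kubectl -n kube-system describe pod coredns-8cfc78c54-f6pwq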
Fetching command output from pods [cilium-6hd4t cilium-dksfw]
cmd: kubectl exec -n kube-system cilium-6hd4t -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend              Service Type   Backend                            
	 1    10.96.0.10:53         ClusterIP                                         
	 2    10.96.0.10:9153       ClusterIP                                         
	 3    10.106.229.255:3000   ClusterIP                                         
	 4    10.108.199.87:9090    ClusterIP      1 => 10.0.1.185:9090 (active)      
	 5    10.97.122.172:443     ClusterIP      1 => 192.168.56.12:4244 (active)   
	                                           2 => 192.168.56.11:4244 (active)   
	 6    10.96.0.1:443         ClusterIP      1 => 192.168.56.11:6443 (active)   
	 
Stderr:
 	 

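Triage note: in the service list above the kube-dns ClusterIP 10.96.0.10 (ports 53 and 9153) has no backends, consistent with the coredns pod still being Pending. A minimal sketch for confirming this from the cluster side, assuming the same pod names (not part of the captured CI output):

# Hypothetical check: kube-dns should list coredns backends once the pod is ready.
kubectl -n kube-system get endpoints kube-dns
kubectl exec -n kube-system cilium-6hd4t -c cilium-agent -- cilium service list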
cmd: kubectl exec -n kube-system cilium-6hd4t -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])          IPv6       IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                              
	 927        Disabled           Disabled          4          reserved:health                      fd02::ea   10.0.0.41   ready   
	 1630       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                  ready   
	                                                            k8s:node-role.kubernetes.io/master                                  
	                                                            reserved:host                                                       
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-dksfw -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend              Service Type   Backend                            
	 1    10.106.229.255:3000   ClusterIP                                         
	 2    10.108.199.87:9090    ClusterIP      1 => 10.0.1.185:9090 (active)      
	 3    10.97.122.172:443     ClusterIP      1 => 192.168.56.12:4244 (active)   
	                                           2 => 192.168.56.11:4244 (active)   
	 4    10.96.0.1:443         ClusterIP      1 => 192.168.56.11:6443 (active)   
	 5    10.96.0.10:53         ClusterIP                                         
	 6    10.96.0.10:9153       ClusterIP                                         
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-dksfw -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                    
	 812        Disabled           Disabled          4          reserved:health                                          fd02::1f6   10.0.1.230   ready   
	 890        Disabled           Disabled          14613      k8s:app=grafana                                          fd02::136   10.0.1.72    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                         
	 996        Disabled           Disabled          7377       k8s:app=prometheus                                       fd02::1dd   10.0.1.185   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                         
	 3650       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                        ready   
	                                                            reserved:host                                                                             
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
06:10:10 STEP: Running AfterEach for block EntireTestsuite K8sFQDNTest
06:10:10 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|a65dbda6_K8sFQDNTest_Validate_that_multiple_specs_are_working_correctly.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//2499/artifact/6e5b8163_K8sFQDNTest_Restart_Cilium_validate_that_FQDN_is_still_working.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//2499/artifact/7f595312_K8sFQDNTest_Validate_that_FQDN_policy_continues_to_work_after_being_updated.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//2499/artifact/a65dbda6_K8sFQDNTest_Validate_that_multiple_specs_are_working_correctly.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//2499/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_2499_BDD-Test-PR.zip
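The per-spec archive for this failure is the a65dbda6 zip above; a minimal sketch for pulling the underlying BeforeAll error out of it locally (commands are illustrative, not from the CI run):

# Hypothetical local triage of the attached artifact.
curl -LO https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//2499/artifact/a65dbda6_K8sFQDNTest_Validate_that_multiple_specs_are_working_correctly.zip
unzip -o a65dbda6_K8sFQDNTest_Validate_that_multiple_specs_are_working_correctly.zip -d a65dbda6
grep -ri "BeforeAll" a65dbda6 | head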

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/2499/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

maintainer-s-little-helper bot added the ci/flake label on Sep 1, 2022
@joestringer
Member

Likely caused by the same root cause as #21176.

@joestringer closed this as not planned on Sep 1, 2022