[v1.9] CI: K8sConformance Portmap Chaining: connectivity-check pods are not ready after timeout #16873

Closed
jibi opened this issue Jul 14, 2021 · 8 comments
Labels: area/CI (Continuous Integration testing issue or flake), ci/flake (This is a known failure that occurs in the tree. Please investigate me!), stale (The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.)

jibi commented Jul 14, 2021

First seen in #16781

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.13-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:514
connectivity-check pods are not ready after timeout
Expected
    <*errors.errorString | 0xc002dfa2a0>: {
        s: "timed out waiting for pods with filter  to be ready: 4m0s timeout expired",
    }
to be nil
/usr/local/go/src/reflect/value.go:476
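
For local triage, the failing check is simply a readiness wait over every pod in the default namespace (hence the empty filter in the error message). A rough equivalent of that wait with plain kubectl, assuming access to the test cluster's kubeconfig, is:

    # Show which connectivity-check pods are stuck and on which node (namespace matches this run).
    kubectl get pods -n default -o wide

    # Re-run the same 4m0s readiness wait the test helper performs across the whole namespace.
    kubectl wait --for=condition=Ready pod --all -n default --timeout=4m0s

    # Dig into the events of any pod that never turns Ready (placeholder pod name).
    kubectl describe pod -n default <pod-name>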

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-2j4zc cilium-9482w]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::echo-c default::pod-to-a-allowed-cnp default::pod-to-a-denied-cnp default::pod-to-a-intra-node-proxy-egress-policy default::pod-to-a-multi-node-proxy-egress-policy default::pod-to-c-intra-node-proxy-to-proxy-policy default::pod-to-c-multi-node-proxy-to-proxy-policy default::pod-to-external-fqdn-allow-google-cnp 
Endpoint Policy Enforcement:
Pod                                                          Ingress   Egress
grafana-b4dbb994f-6xsqb                                                
pod-to-c-intra-node-proxy-to-proxy-policy-7f8c67c676-ll6kp             
pod-to-c-multi-node-proxy-to-proxy-policy-5bd77b9497-ppsbz             
pod-to-a-denied-cnp-549769756c-ljnhs                                   
pod-to-a-intra-node-proxy-egress-policy-6886ddbfc9-kv2hq               
pod-to-a-multi-node-proxy-egress-policy-6d5f4fcc98-66k8l               
pod-to-b-multi-node-headless-58755dd4fc-qtr76                          
pod-to-c-multi-node-proxy-ingress-policy-7cbfbbcbb-pkql5               
pod-to-external-1111-dd4c476f5-jqkwz                                   
pod-to-b-multi-node-clusterip-7d59cf79bf-tqdl8                         
pod-to-b-multi-node-hostport-dc85cc667-24mn2                           
prometheus-688959f59d-smvrt                                            
echo-a-68594567f4-qn5s2                                                
echo-b-6d8476f798-h4xv4                                                
echo-c-6687fccd59-jtmtl                                                
pod-to-a-7f974698df-5z2lq                                              
pod-to-b-intra-node-hostport-5bd6c997c9-vgqm8                          
pod-to-c-intra-node-proxy-ingress-policy-fd8d47479-kb2ls               
coredns-7ff984754c-v84dz                                               
pod-to-a-allowed-cnp-fd5766ff7-hhgdc                                   
pod-to-b-intra-node-nodeport-77b97885cc-h6znl                          
pod-to-b-multi-node-nodeport-5746df777-h9v9d                           
pod-to-external-fqdn-allow-google-cnp-75877ff4ff-gvx72                 
Cilium agent 'cilium-2j4zc': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 69 Failed 0
Cilium agent 'cilium-9482w': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 73 Failed 0
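
When this flake reproduces, the agent-side view of the affected endpoints helps narrow down whether the portmap-chained datapath ever plumbed the probes' services. A minimal sketch, assuming the cilium pod names from this run (cilium-2j4zc / cilium-9482w) are still present:

    # Verbose agent health, expanding on the one-line status summary above.
    kubectl -n kube-system exec cilium-2j4zc -- cilium status --verbose

    # Endpoint and policy state for the connectivity-check pods.
    kubectl -n kube-system exec cilium-2j4zc -- cilium endpoint list

    # Service/backend plumbing, relevant to the failing hostport/nodeport/clusterip probes below.
    kubectl -n kube-system exec cilium-2j4zc -- cilium service list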

Standard Error

21:06:27 STEP: Running BeforeAll block for EntireTestsuite K8sConformance
21:06:27 STEP: Ensuring the namespace kube-system exists
21:06:27 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
21:06:27 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
21:06:27 STEP: Installing Cilium
21:06:28 STEP: Waiting for Cilium to become ready
21:06:28 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:29 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:30 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:31 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:32 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:33 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:34 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:35 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:36 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:37 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:38 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:40 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:41 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:42 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:06:43 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:06:44 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:06:45 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:06:46 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:06:47 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:06:48 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:06:49 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:06:51 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:06:52 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:07:00 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:07:01 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:07:02 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:07:03 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:07:04 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:07:05 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:07:06 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:07:07 STEP: Number of ready Cilium pods: 2
21:07:08 STEP: Restarting unmanaged pods coredns-7ff984754c-lcz7c in namespace kube-system
21:07:15 STEP: Validating if Kubernetes DNS is deployed
21:07:15 STEP: Checking if deployment is ready
21:07:15 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
21:07:15 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
21:07:26 STEP: Waiting for Kubernetes DNS to become operational
21:07:26 STEP: Checking if deployment is ready
21:07:26 STEP: Checking if kube-dns service is plumbed correctly
21:07:26 STEP: Checking if DNS can resolve
21:07:26 STEP: Checking if pods have identity
21:07:27 STEP: Kubernetes DNS is not ready yet: pod kube-system/coredns-7ff984754c-v84dz has no CiliumIdentity
21:07:27 STEP: Checking if deployment is ready
21:07:27 STEP: Checking if kube-dns service is plumbed correctly
21:07:27 STEP: Checking if pods have identity
21:07:27 STEP: Checking if DNS can resolve
21:07:29 STEP: Kubernetes DNS is not ready yet: pod kube-system/coredns-7ff984754c-v84dz has no CiliumIdentity
21:07:29 STEP: Checking if deployment is ready
21:07:29 STEP: Checking if kube-dns service is plumbed correctly
21:07:29 STEP: Checking if pods have identity
21:07:29 STEP: Checking if DNS can resolve
21:07:30 STEP: Kubernetes DNS is not ready yet: pod kube-system/coredns-7ff984754c-v84dz has no CiliumIdentity
21:07:30 STEP: Checking if deployment is ready
21:07:30 STEP: Checking if kube-dns service is plumbed correctly
21:07:30 STEP: Checking if pods have identity
21:07:30 STEP: Checking if DNS can resolve
21:07:31 STEP: Kubernetes DNS is not ready yet: pod kube-system/coredns-7ff984754c-v84dz has no CiliumIdentity
21:07:31 STEP: Checking if deployment is ready
21:07:31 STEP: Checking if kube-dns service is plumbed correctly
21:07:31 STEP: Checking if pods have identity
21:07:31 STEP: Checking if DNS can resolve
21:07:32 STEP: Kubernetes DNS is not ready yet: pod kube-system/coredns-7ff984754c-v84dz has no CiliumIdentity
21:07:32 STEP: Checking if deployment is ready
21:07:32 STEP: Checking if kube-dns service is plumbed correctly
21:07:32 STEP: Checking if pods have identity
21:07:32 STEP: Checking if DNS can resolve
21:07:33 STEP: Kubernetes DNS is not ready yet: pod kube-system/coredns-7ff984754c-v84dz has no CiliumIdentity
21:07:33 STEP: Checking if deployment is ready
21:07:33 STEP: Checking if kube-dns service is plumbed correctly
21:07:33 STEP: Checking if pods have identity
21:07:33 STEP: Checking if DNS can resolve
21:07:34 STEP: Kubernetes DNS is not ready yet: pod kube-system/coredns-7ff984754c-v84dz has no CiliumIdentity
21:07:34 STEP: Checking if deployment is ready
21:07:34 STEP: Checking if kube-dns service is plumbed correctly
21:07:34 STEP: Checking if DNS can resolve
21:07:34 STEP: Checking if pods have identity
21:07:35 STEP: Kubernetes DNS is not ready yet: pod kube-system/coredns-7ff984754c-v84dz has no CiliumIdentity
21:07:35 STEP: Checking if deployment is ready
21:07:35 STEP: Checking if kube-dns service is plumbed correctly
21:07:35 STEP: Checking if pods have identity
21:07:35 STEP: Checking if DNS can resolve
21:07:37 STEP: Kubernetes DNS is not ready yet: pod kube-system/coredns-7ff984754c-v84dz has no CiliumIdentity
21:07:37 STEP: Checking if deployment is ready
21:07:37 STEP: Checking if kube-dns service is plumbed correctly
21:07:37 STEP: Checking if pods have identity
21:07:37 STEP: Checking if DNS can resolve
21:07:38 STEP: Validating Cilium Installation
21:07:38 STEP: Performing Cilium status preflight check
21:07:38 STEP: Performing Cilium controllers preflight check
21:07:38 STEP: Performing Cilium health check
21:07:40 STEP: Performing Cilium service preflight check
21:07:40 STEP: Performing K8s service preflight check
21:07:41 STEP: Waiting for cilium-operator to be ready
21:07:41 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
21:07:41 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
21:07:41 STEP: Making sure all endpoints are in ready state
21:07:43 STEP: WaitforPods(namespace="default", filter="")
21:11:43 STEP: WaitforPods(namespace="default", filter="") => timed out waiting for pods with filter  to be ready: 4m0s timeout expired
21:11:43 STEP: cmd: kubectl describe pods -n default 
Exitcode: 0 
Stdout:
 	 Name:               echo-a-68594567f4-qn5s2
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=echo-a
	                     pod-template-hash=68594567f4
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.64
	 Controlled By:      ReplicaSet/echo-a-68594567f4
	 Containers:
	   echo-a-container:
	     Container ID:   docker://4933db79185a3a34fe41bad285a089aa4a67c65ac8cf47de8acd4267bac5fa3a
	     Image:          docker.io/cilium/json-mock:1.2
	     Image ID:       docker-pullable://cilium/json-mock@sha256:941e03da57551dd4a71f351b35650c152a1192ac1df717e43ee58b5aa2b8e241
	     Port:           8080/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:46 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:
	       PORT:  8080
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned default/echo-a-68594567f4-qn5s2 to k8s2
	   Normal  Pulled     3m58s  kubelet, k8s2      Container image "docker.io/cilium/json-mock:1.2" already present on machine
	   Normal  Created    3m58s  kubelet, k8s2      Created container
	   Normal  Started    3m57s  kubelet, k8s2      Started container
	 
	 
	 Name:               echo-b-6d8476f798-h4xv4
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=echo-b
	                     pod-template-hash=6d8476f798
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.108
	 Controlled By:      ReplicaSet/echo-b-6d8476f798
	 Containers:
	   echo-b-container:
	     Container ID:   docker://a5a2d6342c1e0dba549590887927610fe86ebc2dd025d2afea6511071781c4c0
	     Image:          docker.io/cilium/json-mock:1.2
	     Image ID:       docker-pullable://cilium/json-mock@sha256:941e03da57551dd4a71f351b35650c152a1192ac1df717e43ee58b5aa2b8e241
	     Port:           8080/TCP
	     Host Port:      40000/TCP
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:54 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:
	       PORT:  8080
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned default/echo-b-6d8476f798-h4xv4 to k8s2
	   Normal  Pulled     3m50s  kubelet, k8s2      Container image "docker.io/cilium/json-mock:1.2" already present on machine
	   Normal  Created    3m50s  kubelet, k8s2      Created container
	   Normal  Started    3m49s  kubelet, k8s2      Started container
	 
	 
	 Name:               echo-b-host-7b4585cc8c-wc2df
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=echo-b-host
	                     pod-template-hash=7b4585cc8c
	 Annotations:        <none>
	 Status:             Running
	 IP:                 192.168.36.12
	 Controlled By:      ReplicaSet/echo-b-host-7b4585cc8c
	 Containers:
	   echo-b-host-container:
	     Container ID:   docker://fddf10de8ce298962f6beb4053b4eda5af0bc9b51d842df8afb9de8132934b4d
	     Image:          docker.io/cilium/json-mock:1.2
	     Image ID:       docker-pullable://cilium/json-mock@sha256:941e03da57551dd4a71f351b35650c152a1192ac1df717e43ee58b5aa2b8e241
	     Port:           <none>
	     Host Port:      <none>
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:46 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:41000] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:41000] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:
	       PORT:  41000
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned default/echo-b-host-7b4585cc8c-wc2df to k8s2
	   Normal  Pulled     3m58s  kubelet, k8s2      Container image "docker.io/cilium/json-mock:1.2" already present on machine
	   Normal  Created    3m58s  kubelet, k8s2      Created container
	   Normal  Started    3m57s  kubelet, k8s2      Started container
	 
	 
	 Name:               echo-c-6687fccd59-jtmtl
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=echo-c
	                     pod-template-hash=6687fccd59
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.46
	 Controlled By:      ReplicaSet/echo-c-6687fccd59
	 Containers:
	   echo-c-container:
	     Container ID:   docker://b5f9da644ccda65a8e8749b1227afdcfe264472dc2ae12be149d377bbc2f360d
	     Image:          docker.io/cilium/json-mock:1.2
	     Image ID:       docker-pullable://cilium/json-mock@sha256:941e03da57551dd4a71f351b35650c152a1192ac1df717e43ee58b5aa2b8e241
	     Port:           8080/TCP
	     Host Port:      40001/TCP
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:56 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:
	       PORT:  8080
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned default/echo-c-6687fccd59-jtmtl to k8s2
	   Normal  Pulled     3m47s  kubelet, k8s2      Container image "docker.io/cilium/json-mock:1.2" already present on machine
	   Normal  Created    3m47s  kubelet, k8s2      Created container
	   Normal  Started    3m47s  kubelet, k8s2      Started container
	 
	 
	 Name:               echo-c-host-687b8bb5b-f576b
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=echo-c-host
	                     pod-template-hash=687b8bb5b
	 Annotations:        <none>
	 Status:             Running
	 IP:                 192.168.36.12
	 Controlled By:      ReplicaSet/echo-c-host-687b8bb5b
	 Containers:
	   echo-c-host-container:
	     Container ID:   docker://618be609034326c3053b3d825aa89de04688f9a8075c203d5185df3f71f3ebcb
	     Image:          docker.io/cilium/json-mock:1.2
	     Image ID:       docker-pullable://cilium/json-mock@sha256:941e03da57551dd4a71f351b35650c152a1192ac1df717e43ee58b5aa2b8e241
	     Port:           <none>
	     Host Port:      <none>
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:46 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:41002] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:41002] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:
	       PORT:  41002
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned default/echo-c-host-687b8bb5b-f576b to k8s2
	   Normal  Pulled     3m58s  kubelet, k8s2      Container image "docker.io/cilium/json-mock:1.2" already present on machine
	   Normal  Created    3m57s  kubelet, k8s2      Created container
	   Normal  Started    3m57s  kubelet, k8s2      Started container
	 
	 
	 Name:               host-to-b-multi-node-clusterip-75766487bc-l2hss
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s1/192.168.36.11
	 Start Time:         Tue, 13 Jul 2021 21:07:45 +0000
	 Labels:             name=host-to-b-multi-node-clusterip
	                     pod-template-hash=75766487bc
	 Annotations:        <none>
	 Status:             Running
	 IP:                 192.168.36.11
	 Controlled By:      ReplicaSet/host-to-b-multi-node-clusterip-75766487bc
	 Containers:
	   host-to-b-multi-node-clusterip-container:
	     Container ID:  docker://e666fbc90b9a5d582d8cc9c69cb1967ebd05fe4812543e2fb3b82214bccaa31c
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:46 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age                    From               Message
	   ----     ------     ----                   ----               -------
	   Normal   Scheduled  3m58s                  default-scheduler  Successfully assigned default/host-to-b-multi-node-clusterip-75766487bc-l2hss to k8s1
	   Normal   Pulled     3m57s                  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m57s                  kubelet, k8s1      Created container
	   Normal   Started    3m57s                  kubelet, k8s1      Started container
	   Warning  Unhealthy  3m45s (x2 over 3m54s)  kubelet, k8s1      Readiness probe failed: curl: (7) Failed to connect to echo-b port 8080: Connection refused
	   Warning  Unhealthy  3m44s (x2 over 3m54s)  kubelet, k8s1      Liveness probe failed: curl: (7) Failed to connect to echo-b port 8080: Connection refused
	 
	 
	 Name:               host-to-b-multi-node-headless-6db57f94f-6nsts
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s1/192.168.36.11
	 Start Time:         Tue, 13 Jul 2021 21:07:45 +0000
	 Labels:             name=host-to-b-multi-node-headless
	                     pod-template-hash=6db57f94f
	 Annotations:        <none>
	 Status:             Running
	 IP:                 192.168.36.11
	 Controlled By:      ReplicaSet/host-to-b-multi-node-headless-6db57f94f
	 Containers:
	   host-to-b-multi-node-headless-container:
	     Container ID:  docker://316296ece204f2faf50f1da937ff5315a84ea366b7dc9323f7b2eeeb765c677b
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:08:40 +0000
	     Last State:     Terminated
	       Reason:       Error
	       Exit Code:    137
	       Started:      Tue, 13 Jul 2021 21:07:47 +0000
	       Finished:     Tue, 13 Jul 2021 21:08:40 +0000
	     Ready:          True
	     Restart Count:  1
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-headless:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-headless:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age                    From               Message
	   ----     ------     ----                   ----               -------
	   Normal   Scheduled  3m58s                  default-scheduler  Successfully assigned default/host-to-b-multi-node-headless-6db57f94f-6nsts to k8s1
	   Warning  Unhealthy  3m33s (x3 over 3m53s)  kubelet, k8s1      Liveness probe failed: curl: (6) Could not resolve host: echo-b-headless
	   Warning  Unhealthy  3m26s (x3 over 3m47s)  kubelet, k8s1      Readiness probe failed: curl: (6) Could not resolve host: echo-b-headless
	   Normal   Pulled     3m3s (x2 over 3m56s)   kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m3s (x2 over 3m56s)   kubelet, k8s1      Created container
	   Normal   Started    3m3s (x2 over 3m56s)   kubelet, k8s1      Started container
	   Normal   Killing    3m3s                   kubelet, k8s1      Killing container with id docker://host-to-b-multi-node-headless-container:Container failed liveness probe.. Container will be killed and recreated.
	 
	 
	 Name:               pod-to-a-7f974698df-5z2lq
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=pod-to-a
	                     pod-template-hash=7f974698df
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.61
	 Controlled By:      ReplicaSet/pod-to-a-7f974698df
	 Containers:
	   pod-to-a-container:
	     Container ID:  docker://5cd64c86678fcd417806cb85ca893ce0be63b718e10ab779c6a587edabd4c7d7
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:47 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  4m     default-scheduler  Successfully assigned default/pod-to-a-7f974698df-5z2lq to k8s2
	   Normal   Pulled     3m57s  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m56s  kubelet, k8s2      Created container
	   Normal   Started    3m56s  kubelet, k8s2      Started container
	   Warning  Unhealthy  3m48s  kubelet, k8s2      Liveness probe failed: curl: (28) Connection timed out after 5002 milliseconds
	   Warning  Unhealthy  3m42s  kubelet, k8s2      Readiness probe failed: curl: (28) Connection timed out after 5000 milliseconds
	 
	 
	 Name:               pod-to-a-allowed-cnp-fd5766ff7-hhgdc
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s1/192.168.36.11
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=pod-to-a-allowed-cnp
	                     pod-template-hash=fd5766ff7
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.1.52
	 Controlled By:      ReplicaSet/pod-to-a-allowed-cnp-fd5766ff7
	 Containers:
	   pod-to-a-allowed-cnp-container:
	     Container ID:  docker://398f1a386a538e60d65741b78ff451aa6054d93740d6b4fef6f72708cec5f7ee
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:49 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  4m     default-scheduler  Successfully assigned default/pod-to-a-allowed-cnp-fd5766ff7-hhgdc to k8s1
	   Normal   Pulled     3m54s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m54s  kubelet, k8s1      Created container
	   Normal   Started    3m54s  kubelet, k8s1      Started container
	   Warning  Unhealthy  3m48s  kubelet, k8s1      Readiness probe failed: curl: (28) Connection timed out after 5000 milliseconds
	   Warning  Unhealthy  3m46s  kubelet, k8s1      Liveness probe failed: curl: (28) Connection timed out after 5001 milliseconds
	 
	 
	 Name:               pod-to-a-denied-cnp-549769756c-ljnhs
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=pod-to-a-denied-cnp
	                     pod-template-hash=549769756c
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.118
	 Controlled By:      ReplicaSet/pod-to-a-denied-cnp-549769756c
	 Containers:
	   pod-to-a-denied-cnp-container:
	     Container ID:  docker://f9fb83aec336711498edeb7fd4ad490bfbb6842d7577d91e3419646b9c42d5b4
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:51 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-a:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-a:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned default/pod-to-a-denied-cnp-549769756c-ljnhs to k8s2
	   Normal  Pulled     3m53s  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m53s  kubelet, k8s2      Created container
	   Normal  Started    3m52s  kubelet, k8s2      Started container
	 
	 
	 Name:               pod-to-a-intra-node-proxy-egress-policy-6886ddbfc9-kv2hq
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=pod-to-a-intra-node-proxy-egress-policy
	                     pod-template-hash=6886ddbfc9
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.119
	 Controlled By:      ReplicaSet/pod-to-a-intra-node-proxy-egress-policy-6886ddbfc9
	 Containers:
	   pod-to-a-intra-node-proxy-egress-policy-allow-container:
	     Container ID:  docker://d0e8724d626bd0fbdaa4fb66a47a81eed691b08c7232b40522af2d931a0b3f82
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:53 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	   pod-to-a-intra-node-proxy-egress-policy-reject-container:
	     Container ID:  docker://632336e5b95d74d0aabae32b0961401babd2696f8e3616d50a4f199180164118
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:54 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-a:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  4m     default-scheduler  Successfully assigned default/pod-to-a-intra-node-proxy-egress-policy-6886ddbfc9-kv2hq to k8s2
	   Normal   Pulled     3m50s  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m50s  kubelet, k8s2      Created container
	   Normal   Started    3m50s  kubelet, k8s2      Started container
	   Normal   Pulled     3m50s  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m49s  kubelet, k8s2      Created container
	   Normal   Started    3m49s  kubelet, k8s2      Started container
	   Warning  Unhealthy  3m42s  kubelet, k8s2      Liveness probe failed: curl: (28) Connection timed out after 5002 milliseconds
	 
	 
	 Name:               pod-to-a-multi-node-proxy-egress-policy-6d5f4fcc98-66k8l
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s1/192.168.36.11
	 Start Time:         Tue, 13 Jul 2021 21:07:44 +0000
	 Labels:             name=pod-to-a-multi-node-proxy-egress-policy
	                     pod-template-hash=6d5f4fcc98
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.1.205
	 Controlled By:      ReplicaSet/pod-to-a-multi-node-proxy-egress-policy-6d5f4fcc98
	 Containers:
	   pod-to-a-multi-node-proxy-egress-policy-allow-container:
	     Container ID:  docker://7afaf23ef971d431d5467ec34690c31e1382b3fe0180b7f71de95add25a68b4a
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:49 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	   pod-to-a-multi-node-proxy-egress-policy-reject-container:
	     Container ID:  docker://6ece9cd1c6381aab2337f8c9367a5bafdc18b347043b110ccf4d334f5d6e1fcb
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:50 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-a:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  3m59s  default-scheduler  Successfully assigned default/pod-to-a-multi-node-proxy-egress-policy-6d5f4fcc98-66k8l to k8s1
	   Normal   Pulled     3m54s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m54s  kubelet, k8s1      Created container
	   Normal   Started    3m54s  kubelet, k8s1      Started container
	   Normal   Pulled     3m54s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m54s  kubelet, k8s1      Created container
	   Normal   Started    3m53s  kubelet, k8s1      Started container
	   Warning  Unhealthy  3m46s  kubelet, k8s1      Readiness probe failed: curl: (28) Connection timed out after 5001 milliseconds
	   Warning  Unhealthy  3m45s  kubelet, k8s1      Liveness probe failed: curl: (28) Connection timed out after 5001 milliseconds
	 
	 
	 Name:               pod-to-b-intra-node-hostport-5bd6c997c9-vgqm8
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:46 +0000
	 Labels:             name=pod-to-b-intra-node-hostport
	                     pod-template-hash=5bd6c997c9
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.121
	 Controlled By:      ReplicaSet/pod-to-b-intra-node-hostport-5bd6c997c9
	 Containers:
	   pod-to-b-intra-node-hostport-container:
	     Container ID:  docker://ef49d7cf50ad5424b93ac642d4da7456025bf7747bde628e958b80909bb386d6
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:56 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-host-headless:40000/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-host-headless:40000/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  3m57s  default-scheduler  Successfully assigned default/pod-to-b-intra-node-hostport-5bd6c997c9-vgqm8 to k8s2
	   Normal  Pulled     3m47s  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m47s  kubelet, k8s2      Created container
	   Normal  Started    3m47s  kubelet, k8s2      Started container
	 
	 
	 Name:               pod-to-b-intra-node-nodeport-77b97885cc-h6znl
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:46 +0000
	 Labels:             name=pod-to-b-intra-node-nodeport
	                     pod-template-hash=77b97885cc
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.218
	 Controlled By:      ReplicaSet/pod-to-b-intra-node-nodeport-77b97885cc
	 Containers:
	   pod-to-b-intra-node-nodeport-container:
	     Container ID:  docker://7f4276feb723b62aabe333eef41b716b63bb12d3b41583e40650aa275f84908b
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:56 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-host-headless:31414/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-host-headless:31414/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  3m57s  default-scheduler  Successfully assigned default/pod-to-b-intra-node-nodeport-77b97885cc-h6znl to k8s2
	   Normal  Pulled     3m47s  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m47s  kubelet, k8s2      Created container
	   Normal  Started    3m47s  kubelet, k8s2      Started container
	 
	 
	 Name:               pod-to-b-multi-node-clusterip-7d59cf79bf-tqdl8
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s1/192.168.36.11
	 Start Time:         Tue, 13 Jul 2021 21:07:45 +0000
	 Labels:             name=pod-to-b-multi-node-clusterip
	                     pod-template-hash=7d59cf79bf
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.1.131
	 Controlled By:      ReplicaSet/pod-to-b-multi-node-clusterip-7d59cf79bf
	 Containers:
	   pod-to-b-multi-node-clusterip-container:
	     Container ID:  docker://878678b9363c603fde90dda3cc960dad3759ede8b338f1b3a4bd6d0065659cdc
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:54 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  3m58s  default-scheduler  Successfully assigned default/pod-to-b-multi-node-clusterip-7d59cf79bf-tqdl8 to k8s1
	   Normal   Pulled     3m49s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m49s  kubelet, k8s1      Created container
	   Normal   Started    3m49s  kubelet, k8s1      Started container
	   Warning  Unhealthy  3m43s  kubelet, k8s1      Readiness probe failed: curl: (28) Connection timed out after 5001 milliseconds
	 
	 
	 Name:               pod-to-b-multi-node-headless-58755dd4fc-qtr76
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s1/192.168.36.11
	 Start Time:         Tue, 13 Jul 2021 21:07:45 +0000
	 Labels:             name=pod-to-b-multi-node-headless
	                     pod-template-hash
...[truncated 4741 chars]...
     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  3m57s  default-scheduler  Successfully assigned default/pod-to-b-multi-node-hostport-dc85cc667-24mn2 to k8s1
	   Normal  Pulled     3m42s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m42s  kubelet, k8s1      Created container
	   Normal  Started    3m42s  kubelet, k8s1      Started container
	 
	 
	 Name:               pod-to-b-multi-node-nodeport-5746df777-h9v9d
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s1/192.168.36.11
	 Start Time:         Tue, 13 Jul 2021 21:07:46 +0000
	 Labels:             name=pod-to-b-multi-node-nodeport
	                     pod-template-hash=5746df777
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.1.41
	 Controlled By:      ReplicaSet/pod-to-b-multi-node-nodeport-5746df777
	 Containers:
	   pod-to-b-multi-node-nodeport-container:
	     Container ID:  docker://0882e6a0830bb0cf81dd9f3e7303cdc715b5e5272f3e8640e6cc3eaa374ed567
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:08:04 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-host-headless:31414/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-host-headless:31414/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason                  Age    From               Message
	   ----     ------                  ----   ----               -------
	   Normal   Scheduled               3m57s  default-scheduler  Successfully assigned default/pod-to-b-multi-node-nodeport-5746df777-h9v9d to k8s1
	   Warning  FailedCreatePodSandBox  3m41s  kubelet, k8s1      Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c2fab30c63147b9444aff321783deac0a4e5e9b94d6f98089ffc16795a181cf1" network for pod "pod-to-b-multi-node-nodeport-5746df777-h9v9d": NetworkPlugin cni failed to set up pod "pod-to-b-multi-node-nodeport-5746df777-h9v9d_default" network: Unable to create endpoint: response status code does not match any response statuses defined for this endpoint in the swagger spec (status 429): {}
	   Normal   SandboxChanged          3m41s  kubelet, k8s1      Pod sandbox changed, it will be killed and re-created.
	   Normal   Pulled                  3m39s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created                 3m39s  kubelet, k8s1      Created container
	   Normal   Started                 3m39s  kubelet, k8s1      Started container
	 
	 
	 Name:               pod-to-c-intra-node-proxy-ingress-policy-fd8d47479-kb2ls
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:44 +0000
	 Labels:             name=pod-to-c-intra-node-proxy-ingress-policy
	                     pod-template-hash=fd8d47479
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.86
	 Controlled By:      ReplicaSet/pod-to-c-intra-node-proxy-ingress-policy-fd8d47479
	 Containers:
	   pod-to-c-intra-node-proxy-ingress-policy-allow-container:
	     Container ID:  docker://1dfc976c8a567271beb35aec8619b3c4c04bf60003124996f24a69ca6b246fa0
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:51 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	   pod-to-c-intra-node-proxy-ingress-policy-reject-container:
	     Container ID:  docker://103ede0a309e3ef161a10a8e90e74ec6d34c26ef6ee2539656faed5a15997a05
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:51 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-c:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  3m59s  default-scheduler  Successfully assigned default/pod-to-c-intra-node-proxy-ingress-policy-fd8d47479-kb2ls to k8s2
	   Normal   Pulled     3m53s  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m53s  kubelet, k8s2      Created container
	   Normal   Started    3m52s  kubelet, k8s2      Started container
	   Normal   Pulled     3m52s  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m52s  kubelet, k8s2      Created container
	   Normal   Started    3m52s  kubelet, k8s2      Started container
	   Warning  Unhealthy  3m44s  kubelet, k8s2      Readiness probe failed: curl: (28) Connection timed out after 5003 milliseconds
	   Warning  Unhealthy  3m38s  kubelet, k8s2      Liveness probe failed: curl: (28) Connection timed out after 5002 milliseconds
	 
	 
	 Name:               pod-to-c-intra-node-proxy-to-proxy-policy-7f8c67c676-ll6kp
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:44 +0000
	 Labels:             name=pod-to-c-intra-node-proxy-to-proxy-policy
	                     pod-template-hash=7f8c67c676
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.81
	 Controlled By:      ReplicaSet/pod-to-c-intra-node-proxy-to-proxy-policy-7f8c67c676
	 Containers:
	   pod-to-c-intra-node-proxy-to-proxy-policy-allow-container:
	     Container ID:  docker://4c59618198fe1899b386c335c28d502f5699a4dc46f4d8538571a127964e98a8
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:10:51 +0000
	     Last State:     Terminated
	       Reason:       Error
	       Exit Code:    137
	       Started:      Tue, 13 Jul 2021 21:09:51 +0000
	       Finished:     Tue, 13 Jul 2021 21:10:51 +0000
	     Ready:          False
	     Restart Count:  3
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	   pod-to-c-intra-node-proxy-to-proxy-policy-reject-container:
	     Container ID:  docker://705e34400711ba9b04b7c48d238d44d02c27748f4caa1493eba5226a2804e44b
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:54 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-c:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             False 
	   ContainersReady   False 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age                    From               Message
	   ----     ------     ----                   ----               -------
	   Normal   Scheduled  3m59s                  default-scheduler  Successfully assigned default/pod-to-c-intra-node-proxy-to-proxy-policy-7f8c67c676-ll6kp to k8s2
	   Normal   Started    3m49s                  kubelet, k8s2      Started container
	   Normal   Pulled     3m49s                  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m49s                  kubelet, k8s2      Created container
	   Warning  Unhealthy  3m32s                  kubelet, k8s2      Liveness probe failed: curl: (28) Connection timed out after 5006 milliseconds
	   Warning  Unhealthy  3m27s                  kubelet, k8s2      Readiness probe failed: curl: (28) Connection timed out after 5002 milliseconds
	   Normal   Created    2m52s (x2 over 3m50s)  kubelet, k8s2      Created container
	   Warning  Unhealthy  2m52s                  kubelet, k8s2      Readiness probe failed: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown
	   Normal   Pulled     2m52s (x2 over 3m50s)  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Killing    2m52s                  kubelet, k8s2      Killing container with id docker://pod-to-c-intra-node-proxy-to-proxy-policy-allow-container:Container failed liveness probe.. Container will be killed and recreated.
	   Normal   Started    2m52s (x2 over 3m49s)  kubelet, k8s2      Started container
	   Warning  Unhealthy  2m32s (x4 over 3m42s)  kubelet, k8s2      Liveness probe failed: curl: (28) Connection timed out after 5001 milliseconds
	   Warning  Unhealthy  2m27s (x6 over 3m37s)  kubelet, k8s2      Readiness probe failed: curl: (28) Connection timed out after 5001 milliseconds
	   Warning  Unhealthy  2m22s                  kubelet, k8s2      Liveness probe failed: curl: (28) Connection timed out after 5003 milliseconds
	   Warning  Unhealthy  2m17s                  kubelet, k8s2      Readiness probe failed: curl: (28) Connection timed out after 5000 milliseconds
	 
	 
	 Name:               pod-to-c-multi-node-proxy-ingress-policy-7cbfbbcbb-pkql5
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s1/192.168.36.11
	 Start Time:         Tue, 13 Jul 2021 21:07:44 +0000
	 Labels:             name=pod-to-c-multi-node-proxy-ingress-policy
	                     pod-template-hash=7cbfbbcbb
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.1.124
	 Controlled By:      ReplicaSet/pod-to-c-multi-node-proxy-ingress-policy-7cbfbbcbb
	 Containers:
	   pod-to-c-multi-node-proxy-ingress-policy-allow-container:
	     Container ID:  docker://9b17571d50fc6a92bc7b492a78443e93dbf131454ab1de27c5984c08ad1c6e88
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:52 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	   pod-to-c-multi-node-proxy-ingress-policy-reject-container:
	     Container ID:  docker://b62bd1be10d4e972a143c860f20efae3a066461b3dc9446d51c669e39fcb1835
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:52 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-c:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  3m59s  default-scheduler  Successfully assigned default/pod-to-c-multi-node-proxy-ingress-policy-7cbfbbcbb-pkql5 to k8s1
	   Normal   Pulled     3m52s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m52s  kubelet, k8s1      Created container
	   Normal   Started    3m51s  kubelet, k8s1      Started container
	   Normal   Pulled     3m51s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m51s  kubelet, k8s1      Created container
	   Normal   Started    3m51s  kubelet, k8s1      Started container
	   Warning  Unhealthy  3m41s  kubelet, k8s1      Liveness probe failed: curl: (28) Connection timed out after 5004 milliseconds
	   Warning  Unhealthy  3m37s  kubelet, k8s1      Readiness probe failed: curl: (28) Connection timed out after 5001 milliseconds
	 
	 
	 Name:               pod-to-c-multi-node-proxy-to-proxy-policy-5bd77b9497-ppsbz
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s1/192.168.36.11
	 Start Time:         Tue, 13 Jul 2021 21:07:44 +0000
	 Labels:             name=pod-to-c-multi-node-proxy-to-proxy-policy
	                     pod-template-hash=5bd77b9497
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.1.246
	 Controlled By:      ReplicaSet/pod-to-c-multi-node-proxy-to-proxy-policy-5bd77b9497
	 Containers:
	   pod-to-c-multi-node-proxy-to-proxy-policy-allow-container:
	     Container ID:  docker://895cd3793e59210fa8b89b5523a20eb6d04be8184c8891171b3f4775c047b13d
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:52 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	   pod-to-c-multi-node-proxy-to-proxy-policy-reject-container:
	     Container ID:  docker://9d10ee035e2af91f714dd32cf9b514d094dabe9413f6d455197085f057d85554
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:52 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-c:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  3m59s  default-scheduler  Successfully assigned default/pod-to-c-multi-node-proxy-to-proxy-policy-5bd77b9497-ppsbz to k8s1
	   Normal   Pulled     3m51s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m51s  kubelet, k8s1      Created container
	   Normal   Started    3m51s  kubelet, k8s1      Started container
	   Normal   Pulled     3m51s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m51s  kubelet, k8s1      Created container
	   Normal   Started    3m51s  kubelet, k8s1      Started container
	   Warning  Unhealthy  3m42s  kubelet, k8s1      Readiness probe failed: curl: (28) Connection timed out after 5001 milliseconds
	   Warning  Unhealthy  3m35s  kubelet, k8s1      Liveness probe failed: curl: (28) Connection timed out after 5001 milliseconds
	 
	 
	 Name:               pod-to-external-1111-dd4c476f5-jqkwz
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s1/192.168.36.11
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=pod-to-external-1111
	                     pod-template-hash=dd4c476f5
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.1.27
	 Controlled By:      ReplicaSet/pod-to-external-1111-dd4c476f5
	 Containers:
	   pod-to-external-1111-container:
	     Container ID:  docker://3cc1dc7267255adb769cff3f1beb693b4864fe0dc2e2b6fc8be039e4949a5b61
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:49 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null 1.1.1.1] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null 1.1.1.1] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned default/pod-to-external-1111-dd4c476f5-jqkwz to k8s1
	   Normal  Pulled     3m54s  kubelet, k8s1      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m54s  kubelet, k8s1      Created container
	   Normal  Started    3m54s  kubelet, k8s1      Started container
	 
	 
	 Name:               pod-to-external-fqdn-allow-google-cnp-75877ff4ff-gvx72
	 Namespace:          default
	 Priority:           0
	 PriorityClassName:  <none>
	 Node:               k8s2/192.168.36.12
	 Start Time:         Tue, 13 Jul 2021 21:07:43 +0000
	 Labels:             name=pod-to-external-fqdn-allow-google-cnp
	                     pod-template-hash=75877ff4ff
	 Annotations:        <none>
	 Status:             Running
	 IP:                 10.0.0.151
	 Controlled By:      ReplicaSet/pod-to-external-fqdn-allow-google-cnp-75877ff4ff
	 Containers:
	   pod-to-external-fqdn-allow-google-cnp-container:
	     Container ID:  docker://2810ecb68ec3d1fc87b9cd4bae28ac12a75fd463bcdf499d5fee5f5864753f67
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Tue, 13 Jul 2021 21:07:50 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null www.google.com] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null www.google.com] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-vpmjh (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-vpmjh:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-vpmjh
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
	                  node.kubernetes.io/unreachable:NoExecute for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned default/pod-to-external-fqdn-allow-google-cnp-75877ff4ff-gvx72 to k8s2
	   Normal  Pulled     3m53s  kubelet, k8s2      Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m53s  kubelet, k8s2      Created container
	   Normal  Started    3m53s  kubelet, k8s2      Started container
	 
Stderr:
 	 

FAIL: connectivity-check pods are not ready after timeout
Expected
    <*errors.errorString | 0xc002dfa2a0>: {
        s: "timed out waiting for pods with filter  to be ready: 4m0s timeout expired",
    }
to be nil
=== Test Finished at 2021-07-13T21:11:43Z====
21:11:43 STEP: Running JustAfterEach block for EntireTestsuite K8sConformance
===================== TEST FAILED =====================
21:11:44 STEP: Running AfterFailed block for EntireTestsuite K8sConformance
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                                                         READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-b4dbb994f-6xsqb                                      1/1     Running   0          5m21s   10.0.1.247      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-688959f59d-smvrt                                  1/1     Running   0          5m21s   10.0.1.228      k8s1   <none>           <none>
	 default             echo-a-68594567f4-qn5s2                                      1/1     Running   0          4m5s    10.0.0.64       k8s2   <none>           <none>
	 default             echo-b-6d8476f798-h4xv4                                      1/1     Running   0          4m5s    10.0.0.108      k8s2   <none>           <none>
	 default             echo-b-host-7b4585cc8c-wc2df                                 1/1     Running   0          4m5s    192.168.36.12   k8s2   <none>           <none>
	 default             echo-c-6687fccd59-jtmtl                                      1/1     Running   0          4m5s    10.0.0.46       k8s2   <none>           <none>
	 default             echo-c-host-687b8bb5b-f576b                                  1/1     Running   0          4m5s    192.168.36.12   k8s2   <none>           <none>
	 default             host-to-b-multi-node-clusterip-75766487bc-l2hss              1/1     Running   0          4m3s    192.168.36.11   k8s1   <none>           <none>
	 default             host-to-b-multi-node-headless-6db57f94f-6nsts                1/1     Running   1          4m3s    192.168.36.11   k8s1   <none>           <none>
	 default             pod-to-a-7f974698df-5z2lq                                    1/1     Running   0          4m5s    10.0.0.61       k8s2   <none>           <none>
	 default             pod-to-a-allowed-cnp-fd5766ff7-hhgdc                         1/1     Running   0          4m5s    10.0.1.52       k8s1   <none>           <none>
	 default             pod-to-a-denied-cnp-549769756c-ljnhs                         1/1     Running   0          4m5s    10.0.0.118      k8s2   <none>           <none>
	 default             pod-to-a-intra-node-proxy-egress-policy-6886ddbfc9-kv2hq     2/2     Running   0          4m5s    10.0.0.119      k8s2   <none>           <none>
	 default             pod-to-a-multi-node-proxy-egress-policy-6d5f4fcc98-66k8l     2/2     Running   0          4m4s    10.0.1.205      k8s1   <none>           <none>
	 default             pod-to-b-intra-node-hostport-5bd6c997c9-vgqm8                1/1     Running   0          4m2s    10.0.0.121      k8s2   <none>           <none>
	 default             pod-to-b-intra-node-nodeport-77b97885cc-h6znl                1/1     Running   0          4m2s    10.0.0.218      k8s2   <none>           <none>
	 default             pod-to-b-multi-node-clusterip-7d59cf79bf-tqdl8               1/1     Running   0          4m3s    10.0.1.131      k8s1   <none>           <none>
	 default             pod-to-b-multi-node-headless-58755dd4fc-qtr76                1/1     Running   0          4m3s    10.0.1.102      k8s1   <none>           <none>
	 default             pod-to-b-multi-node-hostport-dc85cc667-24mn2                 1/1     Running   0          4m2s    10.0.1.12       k8s1   <none>           <none>
	 default             pod-to-b-multi-node-nodeport-5746df777-h9v9d                 1/1     Running   0          4m2s    10.0.1.41       k8s1   <none>           <none>
	 default             pod-to-c-intra-node-proxy-ingress-policy-fd8d47479-kb2ls     2/2     Running   0          4m4s    10.0.0.86       k8s2   <none>           <none>
	 default             pod-to-c-intra-node-proxy-to-proxy-policy-7f8c67c676-ll6kp   1/2     Running   3          4m4s    10.0.0.81       k8s2   <none>           <none>
	 default             pod-to-c-multi-node-proxy-ingress-policy-7cbfbbcbb-pkql5     2/2     Running   0          4m4s    10.0.1.124      k8s1   <none>           <none>
	 default             pod-to-c-multi-node-proxy-to-proxy-policy-5bd77b9497-ppsbz   2/2     Running   0          4m4s    10.0.1.246      k8s1   <none>           <none>
	 default             pod-to-external-1111-dd4c476f5-jqkwz                         1/1     Running   0          4m5s    10.0.1.27       k8s1   <none>           <none>
	 default             pod-to-external-fqdn-allow-google-cnp-75877ff4ff-gvx72       1/1     Running   0          4m5s    10.0.0.151      k8s2   <none>           <none>
	 kube-system         cilium-2j4zc                                                 1/1     Running   0          5m20s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         cilium-9482w                                                 1/1     Running   0          5m20s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-76759448fd-52phx                             1/1     Running   0          5m20s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         coredns-7ff984754c-v84dz                                     1/1     Running   0          4m33s   10.0.0.12       k8s2   <none>           <none>
	 kube-system         etcd-k8s1                                                    1/1     Running   0          14m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                                          1/1     Running   0          14m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1                                 1/1     Running   4          14m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-knvfq                                             1/1     Running   0          14m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-vkjkm                                             1/1     Running   0          6m47s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                                          1/1     Running   3          14m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-qr8xh                                           1/1     Running   0          5m32s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-twrv9                                           1/1     Running   0          5m32s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         registry-adder-glqp2                                         1/1     Running   0          6m40s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         registry-adder-w8c9q                                         1/1     Running   0          6m40s   192.168.36.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-2j4zc cilium-9482w]
cmd: kubectl exec -n kube-system cilium-2j4zc -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                    
	 122        Disabled           Disabled          57520      k8s:io.cilium.k8s.policy.cluster=default                 fd00::15c   10.0.1.27    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:name=pod-to-external-1111                                                             
	 441        Disabled           Enabled           4360       k8s:io.cilium.k8s.policy.cluster=default                 fd00::1e6   10.0.1.52    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:name=pod-to-a-allowed-cnp                                                             
	 718        Disabled           Enabled           17201      k8s:io.cilium.k8s.policy.cluster=default                 fd00::193   10.0.1.205   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:name=pod-to-a-multi-node-proxy-egress-policy                                          
	 857        Disabled           Disabled          37999      k8s:io.cilium.k8s.policy.cluster=default                 fd00::12c   10.0.1.124   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:name=pod-to-c-multi-node-proxy-ingress-policy                                         
	 1185       Disabled           Disabled          30226      k8s:io.cilium.k8s.policy.cluster=default                 fd00::149   10.0.1.12    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:name=pod-to-b-multi-node-hostport                                                     
	 1526       Disabled           Disabled          4          reserved:health                                          fd00::1d4   10.0.1.33    ready   
	 1556       Disabled           Disabled          34149      k8s:io.cilium.k8s.policy.cluster=default                 fd00::1c5   10.0.1.131   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:name=pod-to-b-multi-node-clusterip                                                    
	 1880       Disabled           Disabled          62533      k8s:io.cilium.k8s.policy.cluster=default                 fd00::1a8   10.0.1.41    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:name=pod-to-b-multi-node-nodeport                                                     
	 1959       Disabled           Disabled          53999      k8s:app=prometheus                                       fd00::15b   10.0.1.228   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                         
	 2802       Disabled           Disabled          30007      k8s:app=grafana                                          fd00::11d   10.0.1.247   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                         
	 2853       Disabled           Disabled          34078      k8s:io.cilium.k8s.policy.cluster=default                 fd00::1c0   10.0.1.102   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:name=pod-to-b-multi-node-headless                                                     
	 3319       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                        ready   
	                                                            k8s:node-role.kubernetes.io/master                                                        
	                                                            reserved:host                                                                             
	 3664       Disabled           Enabled           39524      k8s:io.cilium.k8s.policy.cluster=default                 fd00::1ce   10.0.1.246   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:name=pod-to-c-multi-node-proxy-to-proxy-policy                                        
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-9482w -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                          IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                               
	 224        Disabled           Disabled          20377      k8s:io.cilium.k8s.policy.cluster=default             fd00::3b   10.0.0.121   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=pod-to-b-intra-node-hostport                                                
	 363        Disabled           Disabled          17697      k8s:io.cilium.k8s.policy.cluster=default             fd00::b3   10.0.0.86    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=pod-to-c-intra-node-proxy-ingress-policy                                    
	 573        Disabled           Disabled          2331       k8s:io.cilium.k8s.policy.cluster=default             fd00::b7   10.0.0.218   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=pod-to-b-intra-node-nodeport                                                
	 574        Disabled           Disabled          3817       k8s:io.cilium.k8s.policy.cluster=default             fd00::7b   10.0.0.64    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=echo-a                                                                      
	 794        Disabled           Disabled          40698      k8s:io.cilium.k8s.policy.cluster=default             fd00::2b   10.0.0.12    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                      
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                          
	                                                            k8s:k8s-app=kube-dns                                                                 
	 1081       Disabled           Enabled           19067      k8s:io.cilium.k8s.policy.cluster=default             fd00::e2   10.0.0.119   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=pod-to-a-intra-node-proxy-egress-policy                                     
	 1494       Disabled           Disabled          4          reserved:health                                      fd00::7    10.0.0.17    ready   
	 1496       Enabled            Disabled          321        k8s:io.cilium.k8s.policy.cluster=default             fd00::9d   10.0.0.46    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=echo-c                                                                      
	 1852       Disabled           Enabled           20022      k8s:io.cilium.k8s.policy.cluster=default             fd00::5f   10.0.0.118   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=pod-to-a-denied-cnp                                                         
	 2253       Disabled           Disabled          8768       k8s:io.cilium.k8s.policy.cluster=default             fd00::60   10.0.0.108   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=echo-b                                                                      
	 2944       Disabled           Disabled          29253      k8s:io.cilium.k8s.policy.cluster=default             fd00::9c   10.0.0.61    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=pod-to-a                                                                    
	 3516       Disabled           Enabled           22979      k8s:io.cilium.k8s.policy.cluster=default             fd00::a9   10.0.0.81    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=pod-to-c-intra-node-proxy-to-proxy-policy                                   
	 3939       Disabled           Enabled           2294       k8s:io.cilium.k8s.policy.cluster=default             fd00::37   10.0.0.151   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                      
	                                                            k8s:io.kubernetes.pod.namespace=default                                              
	                                                            k8s:name=pod-to-external-fqdn-allow-google-cnp                                       
	 3979       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                   ready   
	                                                            reserved:host                                                                        
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
21:12:30 STEP: Running AfterEach for block EntireTestsuite K8sConformance
21:13:32 STEP: Running AfterEach for block EntireTestsuite
jibi added the area/CI and ci/flake labels on Jul 14, 2021
pchaigno changed the title from "CI: K8sConformance Portmap Chaining: connectivity-check pods are not ready after timeout" to "[v1.9] CI: K8sConformance Portmap Chaining: connectivity-check pods are not ready after timeout" on Jul 14, 2021
jibi (Member, Author) commented Jul 14, 2021

I can reproduce it locally with:

➜  test git:(pr/v1.9-backport-2021-07-05-2) for i in $(seq 1 10); do K8S_VERSION=1.13 KERNEL=49 KUBEPROXY=1 ginkgo -v --focus="K8sConformance" -- -cilium.provision=true -cilium.passCLIEnvironment=true -cilium.holdEnvironment=true; done

(takes a few runs to hit the flake).
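
For convenience, the loop can also stop at the first failing run, so that the environment kept around by -cilium.holdEnvironment=true (assuming that flag keeps the cluster up after a failure, as its name suggests) is the one that actually hit the flake. This is just a shell sketch around the same command, not part of the test harness:

for i in $(seq 1 10); do
  K8S_VERSION=1.13 KERNEL=49 KUBEPROXY=1 ginkgo -v --focus="K8sConformance" -- \
    -cilium.provision=true -cilium.passCLIEnvironment=true -cilium.holdEnvironment=true \
    || break  # stop as soon as a run fails
done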

Looking at the Cilium logs, I see a lot of ipcache-related warnings:

vagrant@k8s1:~$ ks logs cilium-9zz28 | egrep 'level=(error|warning).*ipcache'
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.220 owned by kvstore or agent" k8sNamespace=kube-system k8sPodName=coredns-7ff984754c-5l2wc new-hostIP=10.0.1.220 new-podIP=10.0.1.220 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.24 owned by kvstore or agent" k8sNamespace=cilium-monitoring k8sPodName=grafana-b4dbb994f-x8sfj new-hostIP=10.0.1.24 new-podIP=10.0.1.24 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.244 owned by kvstore or agent" k8sNamespace=cilium-monitoring k8sPodName=prometheus-688959f59d-xlwcl new-hostIP=10.0.1.244 new-podIP=10.0.1.244 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.24 owned by kvstore or agent" k8sNamespace=cilium-monitoring k8sPodName=grafana-b4dbb994f-x8sfj new-hostIP=10.0.1.24 new-podIP=10.0.1.24 new-podIPs="[]" old-hostIP=10.0.1.24 old-podIP=10.0.1.24 old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.131 owned by kvstore or agent" k8sNamespace=kube-system k8sPodName=coredns-7ff984754c-dlj4j new-hostIP=10.0.1.131 new-podIP=10.0.1.131 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.30 owned by kvstore or agent" k8sNamespace=kube-system k8sPodName=coredns-7ff984754c-7tgx9 new-hostIP=10.0.1.30 new-podIP=10.0.1.30 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.247 owned by kvstore or agent" k8sNamespace=default k8sPodName=echo-a-68594567f4-xq4vh new-hostIP=10.0.1.247 new-podIP=10.0.1.247 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.201 owned by kvstore or agent" k8sNamespace=default k8sPodName=echo-c-6687fccd59-gpzhg new-hostIP=10.0.1.201 new-podIP=10.0.1.201 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.154 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-external-1111-dd4c476f5-8cfn9 new-hostIP=10.0.1.154 new-podIP=10.0.1.154 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.74 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-a-allowed-cnp-fd5766ff7-ptjbc new-hostIP=10.0.1.74 new-podIP=10.0.1.74 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.201 owned by kvstore or agent" k8sNamespace=default k8sPodName=echo-c-6687fccd59-gpzhg new-hostIP=10.0.1.201 new-podIP=10.0.1.201 new-podIPs="[]" old-hostIP=10.0.1.201 old-podIP=10.0.1.201 old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.76 owned by kvstore or agent" k8sNamespace=default k8sPodName=echo-b-6d8476f798-tgw9b new-hostIP=10.0.1.76 new-podIP=10.0.1.76 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.154 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-external-1111-dd4c476f5-8cfn9 new-hostIP=10.0.1.154 new-podIP=10.0.1.154 new-podIPs="[]" old-hostIP=10.0.1.154 old-podIP=10.0.1.154 old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.219 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-c-intra-node-proxy-ingress-policy-fd8d47479-5rfjh new-hostIP=10.0.1.219 new-podIP=10.0.1.219 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.247 owned by kvstore or agent" k8sNamespace=default k8sPodName=echo-a-68594567f4-xq4vh new-hostIP=10.0.1.247 new-podIP=10.0.1.247 new-podIPs="[]" old-hostIP=10.0.1.247 old-podIP=10.0.1.247 old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.120 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-a-intra-node-proxy-egress-policy-6886ddbfc9-5szlh new-hostIP=10.0.1.120 new-podIP=10.0.1.120 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.125 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-c-intra-node-proxy-to-proxy-policy-7f8c67c676-h79nw new-hostIP=10.0.1.125 new-podIP=10.0.1.125 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.74 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-a-allowed-cnp-fd5766ff7-ptjbc new-hostIP=10.0.1.74 new-podIP=10.0.1.74 new-podIPs="[]" old-hostIP=10.0.1.74 old-podIP=10.0.1.74 old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.32 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-b-intra-node-hostport-5bd6c997c9-7pr2s new-hostIP=10.0.1.32 new-podIP=10.0.1.32 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.120 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-a-intra-node-proxy-egress-policy-6886ddbfc9-5szlh new-hostIP=10.0.1.120 new-podIP=10.0.1.120 new-podIPs="[]" old-hostIP=10.0.1.120 old-podIP=10.0.1.120 old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.219 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-c-intra-node-proxy-ingress-policy-fd8d47479-5rfjh new-hostIP=10.0.1.219 new-podIP=10.0.1.219 new-podIPs="[]" old-hostIP=10.0.1.219 old-podIP=10.0.1.219 old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.125 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-c-intra-node-proxy-to-proxy-policy-7f8c67c676-h79nw new-hostIP=10.0.1.125 new-podIP=10.0.1.125 new-podIPs="[]" old-hostIP=10.0.1.125 old-podIP=10.0.1.125 old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.76 owned by kvstore or agent" k8sNamespace=default k8sPodName=echo-b-6d8476f798-tgw9b new-hostIP=10.0.1.76 new-podIP=10.0.1.76 new-podIPs="[]" old-hostIP=10.0.1.76 old-podIP=10.0.1.76 old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.167 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-b-intra-node-nodeport-77b97885cc-gn9fc new-hostIP=10.0.1.167 new-podIP=10.0.1.167 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher

Waiting a bit and then killing the crash-looping pods was enough to bring everything back to a healthy state (a minimal cleanup sketch follows the listing below):

vagrant@k8s1:~$ k get pods
NAME                                                         READY   STATUS    RESTARTS   AGE
echo-a-68594567f4-v8wph                                      1/1     Running   0          6m54s
echo-b-6d8476f798-n6rcr                                      1/1     Running   0          6m22s
echo-b-host-7b4585cc8c-2csjs                                 1/1     Running   3          5m46s
echo-c-6687fccd59-kqg22                                      1/1     Running   0          5m6s
echo-c-host-687b8bb5b-j8lff                                  1/1     Running   3          4m26s
host-to-b-multi-node-clusterip-75766487bc-ddvq7              1/1     Running   0          3m46s
host-to-b-multi-node-headless-6db57f94f-7w4vg                1/1     Running   0          3m43s
pod-to-a-7f974698df-qnm72                                    1/1     Running   0          3m11s
pod-to-a-allowed-cnp-fd5766ff7-5824f                         1/1     Running   0          2m38s
pod-to-a-denied-cnp-549769756c-mkcsw                         1/1     Running   0          2m6s
pod-to-a-intra-node-proxy-egress-policy-6886ddbfc9-mp759     2/2     Running   0          83s
pod-to-a-multi-node-proxy-egress-policy-6d5f4fcc98-qq62l     2/2     Running   0          46s
pod-to-b-intra-node-hostport-5bd6c997c9-7pr2s                1/1     Running   12         35m
pod-to-b-intra-node-nodeport-77b97885cc-gn9fc                1/1     Running   12         35m
pod-to-b-multi-node-clusterip-7d59cf79bf-28zgc               1/1     Running   12         35m
pod-to-b-multi-node-headless-58755dd4fc-vlfdf                1/1     Running   12         35m
pod-to-b-multi-node-hostport-dc85cc667-vg5f9                 1/1     Running   12         35m
pod-to-b-multi-node-nodeport-5746df777-x2rt4                 1/1     Running   12         35m
pod-to-c-intra-node-proxy-ingress-policy-fd8d47479-5rfjh     2/2     Running   1          35m
pod-to-c-intra-node-proxy-to-proxy-policy-7f8c67c676-h79nw   2/2     Running   1          35m
pod-to-c-multi-node-proxy-ingress-policy-7cbfbbcbb-wj65b     2/2     Running   1          35m
pod-to-c-multi-node-proxy-to-proxy-policy-5bd77b9497-f9vmk   2/2     Running   1          35m
pod-to-external-1111-dd4c476f5-8cfn9                         1/1     Running   0          35m
pod-to-external-fqdn-allow-google-cnp-75877ff4ff-97fx4       1/1     Running   0          35m
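
For reference, here is a minimal client-go sketch of one way to automate that "kill the crash-looping pods" workaround; the kubeconfig path, namespace, and restart threshold are assumptions for the example, and in practice a manual kubectl delete pod does the same thing:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for the environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/vagrant/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	pods, err := clientset.CoreV1().Pods("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Delete any connectivity-check pod whose containers have restarted more
	// than a (hypothetical) threshold, so its Deployment reschedules a fresh replica.
	const restartThreshold = 5
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.RestartCount > restartThreshold {
				fmt.Printf("deleting crash-looping pod %s (restarts=%d)\n", pod.Name, cs.RestartCount)
				if err := clientset.CoreV1().Pods("default").Delete(ctx, pod.Name, metav1.DeleteOptions{}); err != nil {
					log.Printf("failed to delete %s: %v", pod.Name, err)
				}
				break
			}
		}
	}
}

Since all of these pods are managed by Deployments, deleting them just forces a new replica to be scheduled, which is what brought them back to Running in the listing above.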

@jibi
Member Author

jibi commented Jul 14, 2021

The ipcache errors might be a red herring, but they seem to correlate with whether or not a pod is able to run:

vagrant@k8s1:~$ ks logs cilium-m4vqd | egrep 'pod-to-a-7f974698df-czs26|pod-to-a-7f974698df-5pdmd'
level=debug msg="Updated ipcache map entry on pod add" hostIP= k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 podIP= podIPs="[]" subsys=k8s-watcher
level=debug msg="Updated ipcache map entry on pod add" k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 new-hostIP= new-podIP= new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=debug msg="Allocated random IP" ip="fd00::154" owner=default/pod-to-a-7f974698df-czs26 subsys=ipam
level=debug msg="Allocated random IP" ip=10.0.1.121 owner=default/pod-to-a-7f974698df-czs26 subsys=ipam
level=debug msg="PUT /endpoint/{id} request" endpoint="{Addressing:0xc001d46380 ContainerID:c7b921a5e93d46bd751f85d4e483eff8f07a0f42ee0137d2ecf083098eb9422f ContainerName: DatapathConfiguration:<nil> DatapathMapID:0 DockerEndpointID: DockerNetworkID: HostMac:56:bb:6c:e9:dd:a4 ID:0 InterfaceIndex:196 InterfaceName:lxcbde9a9b852ee K8sNamespace:default K8sPodName:pod-to-a-7f974698df-czs26 Labels:[] Mac:4a:37:7a:f6:a9:47 Pid:0 PolicyEnabled:false State:waiting-for-identity SyncBuildEndpoint:true}" subsys=daemon
level=info msg="Create endpoint request" addressing="&{10.0.1.121 7326c2cd-e489-11eb-b6fc-0800279c1efe fd00::154 7326c378-e489-11eb-b6fc-0800279c1efe}" containerID=c7b921a5e93d46bd751f85d4e483eff8f07a0f42ee0137d2ecf083098eb9422f datapathConfiguration="<nil>" interface=lxcbde9a9b852ee k8sPodName=default/pod-to-a-7f974698df-czs26 labels="[]" subsys=daemon sync-build=true
level=debug msg="Connecting to k8s local stores to retrieve labels for pod" k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 subsys=k8s
level=debug msg="No sidecar.istio.io/status annotation" k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 subsys=k8s
level=debug msg="Upserting IP into ipcache layer" identity="{unmanaged custom-resource false}" ipAddr=10.0.1.121 k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 key=0 namedPorts="map[]" subsys=ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{unmanaged custom-resource false}" ipAddr="fd00::154" k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 key=0 namedPorts="map[]" subsys=ipcache
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.121 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 new-hostIP=10.0.1.121 new-podIP=10.0.1.121 new-podIPs="[]" old-hostIP= old-podIP= old-podIPs="[]" subsys=k8s-watcher
level=debug msg="Upserting IP into ipcache layer" identity="{16591 custom-resource false}" ipAddr=10.0.1.121 k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 key=0 namedPorts="map[]" subsys=ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{16591 custom-resource false}" ipAddr="fd00::154" k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 key=0 namedPorts="map[]" subsys=ipcache
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.121 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 new-hostIP=10.0.1.121 new-podIP=10.0.1.121 new-podIPs="[]" old-hostIP=10.0.1.121 old-podIP=10.0.1.121 old-podIPs="[]" subsys=k8s-watcher
level=warning msg="Unable to update ipcache map entry on pod add" error="ipcache entry for podIP 10.0.1.121 owned by kvstore or agent" k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 new-hostIP=10.0.1.121 new-podIP=10.0.1.121 new-podIPs="[]" old-hostIP=10.0.1.121 old-podIP=10.0.1.121 old-podIPs="[]" subsys=k8s-watcher
level=debug msg="Upserting IP into ipcache layer" identity="{unmanaged custom-resource false}" ipAddr=10.0.0.243 k8sNamespace=default k8sPodName=pod-to-a-7f974698df-5pdmd key=0 namedPorts="map[]" subsys=ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{unmanaged custom-resource false}" ipAddr="fd00::83" k8sNamespace=default k8sPodName=pod-to-a-7f974698df-5pdmd key=0 namedPorts="map[]" subsys=ipcache
level=debug msg="Released IP" ip=10.0.1.121 owner=default/pod-to-a-7f974698df-czs26 subsys=ipam
level=debug msg="Released IP" ip="fd00::154" owner=default/pod-to-a-7f974698df-czs26 subsys=ipam
level=debug msg="Skipped ipcache map delete on pod delete" error="identity for IP 10.0.1.121 does not exist in case" hostIP=192.168.36.11 k8sNamespace=default k8sPodName=pod-to-a-7f974698df-czs26 podIP=10.0.1.121 podIPs="[]" subsys=k8s-watcher
level=debug msg="Upserting IP into ipcache layer" identity="{16591 custom-resource false}" ipAddr=10.0.0.243 k8sNamespace=default k8sPodName=pod-to-a-7f974698df-5pdmd key=0 namedPorts="map[]" subsys=ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{16591 custom-resource false}" ipAddr="fd00::83" k8sNamespace=default k8sPodName=pod-to-a-7f974698df-5pdmd key=0 namedPorts="map[]" subsys=ipcache

Here pod-to-a-7f974698df-czs26 was the unhealthy pod, and pod-to-a-7f974698df-5pdmd is the new one that was scheduled after the first one was deleted.
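
For anyone else hitting this: a minimal Go sketch (not Cilium's actual implementation; the source constants and the map below are made up for illustration) of the kind of source-precedence rule that produces the "owned by kvstore or agent" warning, where the pod watcher is not allowed to overwrite an entry already claimed by a higher-precedence source:

package main

import (
	"errors"
	"fmt"
)

// source identifies which subsystem owns an ipcache entry. Higher values take
// precedence; these names are assumptions for the example only.
type source int

const (
	sourceK8s source = iota // pod watcher
	sourceKVStore
	sourceAgent
)

type entry struct {
	hostIP string
	owner  source
}

var ipcache = map[string]entry{}

// upsertFromPodWatcher refuses to overwrite an entry that is already owned by
// a higher-precedence source, which is the situation the warning describes.
func upsertFromPodWatcher(podIP, hostIP string) error {
	if old, ok := ipcache[podIP]; ok && old.owner > sourceK8s {
		return errors.New("ipcache entry for podIP " + podIP + " owned by kvstore or agent")
	}
	ipcache[podIP] = entry{hostIP: hostIP, owner: sourceK8s}
	return nil
}

func main() {
	// The agent has already claimed the entry for this pod IP.
	ipcache["10.0.1.121"] = entry{hostIP: "10.0.1.121", owner: sourceAgent}

	// The k8s watcher then tries to upsert the same IP on pod add and fails,
	// which surfaces as the "Unable to update ipcache map entry" warning.
	if err := upsertFromPodWatcher("10.0.1.121", "10.0.1.121"); err != nil {
		fmt.Println("warning:", err)
	}
}

Under such a rule, a stale entry held by a higher-precedence source would silently block the watcher's update, which would be consistent with the correlation between the warnings and the unhealthy pods observed above.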

@jibi
Member Author

jibi commented Jul 14, 2021

Just noticed that #16381 was not backported to 1.9 🤔
I'll do the backport and see if that helps with this flake

@pchaigno
Member

I'll do the backport and see if that helps with this flake

Unless it's expected to cause non-trivial conflicts, I'd just leave it to the backporters. There are already several v1.9 backport PRs open, so adding more ad-hoc backports will just complicate the rebasing.

@jibi
Member Author

jibi commented Jul 14, 2021

After backporting #16381 to v1.9, I'm no longer able to reproduce this locally 🎉

@joestringer
Member

I wonder if we should just backport #16381 to v1.8 as well to mitigate CI issues there?

@jibi
Member Author

jibi commented Sep 17, 2021

I wonder if we should just backport #16381 to v1.8 as well to mitigate CI issues there?

I don't think I can take a look at the failed test in the v1.8 backports PR, but if @jrajahalme observed K8sConformance Portmap Chaining failing with ipcache-related errors in the logs, then we should probably backport this to v1.8 as well 👍

@github-actions

github-actions bot commented Jul 9, 2022

This issue has been automatically marked as stale because it has not
had recent activity. It will be closed if no further activity occurs.

@github-actions github-actions bot added the stale label Jul 9, 2022
@pchaigno pchaigno closed this as completed Jul 9, 2022