
v1.9: no connectivity with Cilium v1.9 on Minikube with the docker driver + recent kernel #14964

Closed
rolinh opened this issue Feb 12, 2021 · 12 comments
Labels
  • kind/bug — This is a bug in the Cilium logic.
  • kind/complexity-issue — Relates to BPF complexity or program size issues.
  • needs/triage — This issue requires triaging to establish severity and next steps.
  • stale — The stale bot thinks this issue is old. Add the "pinned" label to prevent this from becoming stale.

Comments

@rolinh
Member

rolinh commented Feb 12, 2021

Bug report

General Information

  • Cilium version (run cilium version): v1.9.4
  • Kernel version (run uname -a): 5.10.15-arch1-1
  • Orchestration system version in use (e.g. kubectl version, Mesos, ...): minikube v1.17.1

How to reproduce the issue

Follow the documentation to install on minikube

  1. Start minikube. If Docker is available, it is picked automatically, as it is the preferred driver on all major platforms (Linux, macOS, Windows) with recent versions of Minikube:
$ minikube start --network-plugin=cni
😄  minikube v1.17.1 on Arch 
✨  Using the docker driver based on user configuration
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=16000MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
  2. Mount the eBPF filesystem:
$ minikube ssh -- sudo mount bpffs -t bpf /sys/fs/bpf
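Before installing Cilium, it can be worth confirming the mount actually took effect inside the node (a sketch; `mount -t bpf` simply lists mounts of that filesystem type):

```shell
# List bpf-type mounts inside the minikube node; expect an entry at /sys/fs/bpf
minikube ssh -- mount -t bpf
```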
  3. Install Cilium v1.9 via the quick-install method and wait for all pods to be ready:
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
serviceaccount/cilium created
serviceaccount/cilium-operator created
configmap/cilium-config created
clusterrole.rbac.authorization.k8s.io/cilium created
clusterrole.rbac.authorization.k8s.io/cilium-operator created
clusterrolebinding.rbac.authorization.k8s.io/cilium created
clusterrolebinding.rbac.authorization.k8s.io/cilium-operator created
daemonset.apps/cilium created
deployment.apps/cilium-operator created
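Waiting for readiness can be done by watching the rollouts (a sketch; the resource names match the quick-install manifest above, which installs into kube-system):

```shell
# Block until the Cilium agent DaemonSet and operator Deployment are ready
kubectl -n kube-system rollout status daemonset/cilium --timeout=5m
kubectl -n kube-system rollout status deployment/cilium-operator --timeout=5m
```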
  4. Deploy the connectivity test:
$ kubectl create ns cilium-test
$ kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
deployment.apps/pod-to-a created
deployment.apps/pod-to-external-1111 created
deployment.apps/pod-to-a-denied-cnp created
deployment.apps/pod-to-a-allowed-cnp created
deployment.apps/pod-to-external-fqdn-allow-google-cnp created
deployment.apps/pod-to-b-multi-node-clusterip created
deployment.apps/pod-to-b-multi-node-headless created
deployment.apps/host-to-b-multi-node-clusterip created
deployment.apps/host-to-b-multi-node-headless created
deployment.apps/pod-to-b-multi-node-nodeport created
deployment.apps/pod-to-b-intra-node-nodeport created
service/echo-a created
service/echo-b created
service/echo-b-headless created
service/echo-b-host-headless created
ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created
  5. Observe that the images cannot even be pulled:
$ kubectl get pods -n cilium-test 
NAME                                                    READY   STATUS             RESTARTS   AGE
echo-a-dc9bcfd8f-cbwpj                                  0/1     ImagePullBackOff   0          10m
echo-b-5884b7dc69-s2hnn                                 0/1     ImagePullBackOff   0          10m
echo-b-host-cfdd57978-jd4dk                             0/1     ImagePullBackOff   0          10m
host-to-b-multi-node-clusterip-c4ff7ff64-gzs7r          0/1     Pending            0          10m
host-to-b-multi-node-headless-84d8f6f4c4-6649v          0/1     Pending            0          10m
pod-to-a-5cdfd4754d-r7dhd                               0/1     ImagePullBackOff   0          10m
pod-to-a-allowed-cnp-7d7c8f9f9b-fr6rm                   0/1     ImagePullBackOff   0          10m
pod-to-a-denied-cnp-75cb89dfd-nd5m6                     0/1     ImagePullBackOff   0          10m
pod-to-b-intra-node-nodeport-99b499f7d-dd7tq            0/1     ImagePullBackOff   0          10m
pod-to-b-multi-node-clusterip-cd4d764b6-qn8k4           0/1     Pending            0          10m
pod-to-b-multi-node-headless-6696c5f8cd-qb2w4           0/1     Pending            0          10m
pod-to-b-multi-node-nodeport-7ff5595558-mkjsh           0/1     Pending            0          10m
pod-to-external-1111-d5c7bb4c4-tzpzw                    0/1     ImagePullBackOff   0          10m
pod-to-external-fqdn-allow-google-cnp-f48574954-z2xm7   0/1     ErrImagePull       0          10m
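The ImagePullBackOff status suggests the node itself has no outbound connectivity. A first diagnostic pass might look like this (illustrative commands only; the pod name is taken from the listing above, and the registry endpoint is just one example target):

```shell
# Inspect the pull error details for one of the failing pods
kubectl -n cilium-test describe pod echo-a-dc9bcfd8f-cbwpj | tail -n 20

# Check whether the minikube node can resolve and reach an external registry at all
minikube ssh -- nslookup docker.io
minikube ssh -- curl -sSI https://registry-1.docker.io/v2/
```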

Note: as per @brb's suggestion, I also tried setting `enable-bpf-masquerade: false`, but the result is the same.
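For reference, toggling that option amounts to editing the cilium-config ConfigMap and restarting the agent pods (a sketch of one way to do it; the patch payload shown is illustrative):

```shell
# Set enable-bpf-masquerade to "false" in the Cilium ConfigMap
kubectl -n kube-system patch configmap cilium-config \
  --type merge -p '{"data":{"enable-bpf-masquerade":"false"}}'

# Restart the agent pods so the new setting is picked up
kubectl -n kube-system rollout restart daemonset/cilium
```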

Cilium logs:
level=info msg="Skipped reading configuration file" reason="Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=config
level=info msg="Started gops server" address="127.0.0.1:9890" subsys=daemon
level=info msg="Memory available for map entries (0.003% of 67445800960B): 168614502B" subsys=config
level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 591629" subsys=config
level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 295814" subsys=config
level=info msg="option bpf-nat-global-max set by dynamic sizing to 591629" subsys=config
level=info msg="option bpf-neigh-global-max set by dynamic sizing to 591629" subsys=config
level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 295814" subsys=config
level=info msg="  --agent-health-port='9876'" subsys=daemon
level=info msg="  --agent-labels=''" subsys=daemon
level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
level=info msg="  --allow-localhost='auto'" subsys=daemon
level=info msg="  --annotate-k8s-node='true'" subsys=daemon
level=info msg="  --api-rate-limit='map[]'" subsys=daemon
level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
level=info msg="  --auto-direct-node-routes='false'" subsys=daemon
level=info msg="  --blacklist-conflicting-routes='false'" subsys=daemon
level=info msg="  --bpf-compile-debug='false'" subsys=daemon
level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
level=info msg="  --bpf-lb-mode='snat'" subsys=daemon
level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
level=info msg="  --bpf-root=''" subsys=daemon
level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
level=info msg="  --cgroup-root=''" subsys=daemon
level=info msg="  --cluster-id=''" subsys=daemon
level=info msg="  --cluster-name='default'" subsys=daemon
level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
level=info msg="  --cmdref=''" subsys=daemon
level=info msg="  --config=''" subsys=daemon
level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
level=info msg="  --datapath-mode='veth'" subsys=daemon
level=info msg="  --debug='false'" subsys=daemon
level=info msg="  --debug-verbose=''" subsys=daemon
level=info msg="  --device=''" subsys=daemon
level=info msg="  --devices=''" subsys=daemon
level=info msg="  --direct-routing-device=''" subsys=daemon
level=info msg="  --disable-cnp-status-updates='true'" subsys=daemon
level=info msg="  --disable-conntrack='false'" subsys=daemon
level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon
level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
level=info msg="  --enable-bpf-clock-probe='true'" subsys=daemon
level=info msg="  --enable-bpf-masquerade='false'" subsys=daemon
level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
level=info msg="  --enable-endpoint-routes='false'" subsys=daemon
level=info msg="  --enable-external-ips='true'" subsys=daemon
level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
level=info msg="  --enable-health-checking='true'" subsys=daemon
level=info msg="  --enable-host-firewall='false'" subsys=daemon
level=info msg="  --enable-host-legacy-routing='false'" subsys=daemon
level=info msg="  --enable-host-port='true'" subsys=daemon
level=info msg="  --enable-host-reachable-services='false'" subsys=daemon
level=info msg="  --enable-hubble='true'" subsys=daemon
level=info msg="  --enable-identity-mark='true'" subsys=daemon
level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
level=info msg="  --enable-ipsec='false'" subsys=daemon
level=info msg="  --enable-ipv4='true'" subsys=daemon
level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
level=info msg="  --enable-ipv6='false'" subsys=daemon
level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
level=info msg="  --enable-k8s-event-handover='false'" subsys=daemon
level=info msg="  --enable-l7-proxy='true'" subsys=daemon
level=info msg="  --enable-local-node-route='true'" subsys=daemon
level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
level=info msg="  --enable-monitor='true'" subsys=daemon
level=info msg="  --enable-node-port='false'" subsys=daemon
level=info msg="  --enable-policy='default'" subsys=daemon
level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
level=info msg="  --enable-selective-regeneration='true'" subsys=daemon
level=info msg="  --enable-session-affinity='true'" subsys=daemon
level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
level=info msg="  --enable-tracing='false'" subsys=daemon
level=info msg="  --enable-well-known-identities='false'" subsys=daemon
level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
level=info msg="  --encrypt-interface=''" subsys=daemon
level=info msg="  --encrypt-node='false'" subsys=daemon
level=info msg="  --endpoint-interface-name-prefix='lxc+'" subsys=daemon
level=info msg="  --endpoint-queue-size='25'" subsys=daemon
level=info msg="  --endpoint-status=''" subsys=daemon
level=info msg="  --envoy-log=''" subsys=daemon
level=info msg="  --exclude-local-address=''" subsys=daemon
level=info msg="  --fixed-identity-mapping='map[]'" subsys=daemon
level=info msg="  --flannel-master-device=''" subsys=daemon
level=info msg="  --flannel-uninstall-on-exit='false'" subsys=daemon
level=info msg="  --force-local-policy-eval-at-source='true'" subsys=daemon
level=info msg="  --gops-port='9890'" subsys=daemon
level=info msg="  --host-reachable-services-protos='tcp,udp'" subsys=daemon
level=info msg="  --http-403-msg=''" subsys=daemon
level=info msg="  --http-idle-timeout='0'" subsys=daemon
level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
level=info msg="  --http-request-timeout='3600'" subsys=daemon
level=info msg="  --http-retry-count='3'" subsys=daemon
level=info msg="  --http-retry-timeout='0'" subsys=daemon
level=info msg="  --hubble-disable-tls='false'" subsys=daemon
level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
level=info msg="  --hubble-flow-buffer-size='4095'" subsys=daemon
level=info msg="  --hubble-listen-address=':4244'" subsys=daemon
level=info msg="  --hubble-metrics=''" subsys=daemon
level=info msg="  --hubble-metrics-server=''" subsys=daemon
level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
level=info msg="  --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
level=info msg="  --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
level=info msg="  --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
level=info msg="  --install-iptables-rules='true'" subsys=daemon
level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
level=info msg="  --ipam='cluster-pool'" subsys=daemon
level=info msg="  --ipsec-key-file=''" subsys=daemon
level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
level=info msg="  --iptables-random-fully='false'" subsys=daemon
level=info msg="  --ipv4-node='auto'" subsys=daemon
level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
level=info msg="  --ipv4-range='auto'" subsys=daemon
level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
level=info msg="  --ipv4-service-range='auto'" subsys=daemon
level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
level=info msg="  --ipv6-mcast-device=''" subsys=daemon
level=info msg="  --ipv6-node='auto'" subsys=daemon
level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
level=info msg="  --ipv6-range='auto'" subsys=daemon
level=info msg="  --ipv6-service-range='auto'" subsys=daemon
level=info msg="  --ipvlan-master-device='undefined'" subsys=daemon
level=info msg="  --join-cluster='false'" subsys=daemon
level=info msg="  --k8s-api-server=''" subsys=daemon
level=info msg="  --k8s-force-json-patch='false'" subsys=daemon
level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
level=info msg="  --k8s-watcher-queue-size='1024'" subsys=daemon
level=info msg="  --keep-config='false'" subsys=daemon
level=info msg="  --kube-proxy-replacement='probe'" subsys=daemon
level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
level=info msg="  --kvstore=''" subsys=daemon
level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
level=info msg="  --kvstore-opt='map[]'" subsys=daemon
level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
level=info msg="  --label-prefix-file=''" subsys=daemon
level=info msg="  --labels=''" subsys=daemon
level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
level=info msg="  --log-driver=''" subsys=daemon
level=info msg="  --log-opt='map[]'" subsys=daemon
level=info msg="  --log-system-load='false'" subsys=daemon
level=info msg="  --masquerade='true'" subsys=daemon
level=info msg="  --max-controller-interval='0'" subsys=daemon
level=info msg="  --metrics=''" subsys=daemon
level=info msg="  --monitor-aggregation='medium'" subsys=daemon
level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
level=info msg="  --monitor-queue-size='0'" subsys=daemon
level=info msg="  --mtu='0'" subsys=daemon
level=info msg="  --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
level=info msg="  --native-routing-cidr=''" subsys=daemon
level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
level=info msg="  --node-port-algorithm='random'" subsys=daemon
level=info msg="  --node-port-bind-protection='true'" subsys=daemon
level=info msg="  --node-port-mode='snat'" subsys=daemon
level=info msg="  --node-port-range='30000,32767'" subsys=daemon
level=info msg="  --policy-audit-mode='false'" subsys=daemon
level=info msg="  --policy-queue-size='100'" subsys=daemon
level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
level=info msg="  --pprof='false'" subsys=daemon
level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
level=info msg="  --prefilter-device='undefined'" subsys=daemon
level=info msg="  --prefilter-mode='native'" subsys=daemon
level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
level=info msg="  --prometheus-serve-addr=''" subsys=daemon
level=info msg="  --proxy-connect-timeout='1'" subsys=daemon
level=info msg="  --proxy-prometheus-port='0'" subsys=daemon
level=info msg="  --read-cni-conf=''" subsys=daemon
level=info msg="  --restore='true'" subsys=daemon
level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
level=info msg="  --single-cluster-route='false'" subsys=daemon
level=info msg="  --skip-crd-creation='false'" subsys=daemon
level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
level=info msg="  --sockops-enable='false'" subsys=daemon
level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
level=info msg="  --trace-payloadlen='128'" subsys=daemon
level=info msg="  --tunnel='vxlan'" subsys=daemon
level=info msg="  --version='false'" subsys=daemon
level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
level=info msg="     _ _ _" subsys=daemon
level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
level=info msg="|  _| | | | | |     |" subsys=daemon
level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
level=info msg="Cilium 1.9.4 07b62884c 2021-02-03T11:45:44-08:00 go version go1.15.7 linux/amd64" subsys=daemon
level=info msg="cilium-envoy  version: 1177896bebde79915fe5f9092409bf0254084b4e/1.14.5/Modified/RELEASE/BoringSSL" subsys=daemon
level=info msg="clang (10.0.0) and kernel (5.10.15) versions: OK!" subsys=linux-datapath
level=info msg="linking environment: OK!" subsys=linux-datapath
level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
level=info msg="Valid label prefix configuration:" subsys=labels-filter
level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
level=info msg=" - :app.kubernetes.io" subsys=labels-filter
level=info msg=" - !:io.kubernetes" subsys=labels-filter
level=info msg=" - !:kubernetes.io" subsys=labels-filter
level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
level=info msg=" - !:k8s.io" subsys=labels-filter
level=info msg=" - !:pod-template-generation" subsys=labels-filter
level=info msg=" - !:pod-template-hash" subsys=labels-filter
level=info msg=" - !:controller-revision-hash" subsys=labels-filter
level=info msg=" - !:annotation.*" subsys=labels-filter
level=info msg=" - !:etcd_node" subsys=labels-filter
level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.1.0.0/16
level=info msg="Initializing daemon" subsys=daemon
level=info msg="Establishing connection to apiserver" host="https://10.96.0.1:443" subsys=k8s
level=info msg="Connected to apiserver" subsys=k8s
level=info msg="Inheriting MTU from external network interface" device=cni0 ipAddr=10.85.0.1 mtu=1500 subsys=mtu
level=info msg="Trying to auto-enable \"enable-node-port\", \"enable-external-ips\", \"enable-host-reachable-services\", \"enable-host-port\", \"enable-session-affinity\" features" subsys=daemon
level=info msg="BPF host routing is only available in native routing mode. Falling back to legacy host routing (enable-host-legacy-routing=true)." subsys=daemon
level=info msg="Restored services from maps" failed=0 restored=0 subsys=service
level=info msg="Reading old endpoints..." subsys=daemon
level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
level=info msg="No old endpoints found." subsys=daemon
level=error msg="Command execution failed" cmd="[iptables -t mangle -n -L CILIUM_PRE_mangle]" error="exit status 1" subsys=iptables
level=warning msg="iptables: No chain/target/match by that name." subsys=iptables
level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
level=info msg="Creating or updating CiliumNode resource" node=minikube subsys=nodediscovery
level=info msg="Successfully created CiliumNode resource" subsys=nodediscovery
level=info msg="Retrieved node information from cilium node" nodeName=minikube subsys=k8s
level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=k8s
level=info msg="Retrieved node information from cilium node" nodeName=minikube subsys=k8s
level=info msg="Received own node information from API server" ipAddr.ipv4=192.168.49.2 ipAddr.ipv6="<nil>" k8sNodeIP=192.168.49.2 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:minikube kubernetes.io/os:linux minikube.k8s.io/commit:043bdca07e54ab6e4fc0457e3064048f34133d7e-dirty minikube.k8s.io/name:minikube minikube.k8s.io/updated_at:2021_02_12T17_07_22_0700 minikube.k8s.io/version:v1.17.1 node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:]" nodeName=minikube subsys=k8s v4Prefix=10.0.0.0/24 v6Prefix="<nil>"
level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
level=info msg="Using auto-derived devices for BPF node port" devices="[eth0]" directRoutingDevice=eth0 subsys=daemon
level=info msg="Enabling k8s event listener" subsys=k8s-watcher
level=info msg="Removing stale endpoint interfaces" subsys=daemon
level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
level=info msg="Skipping kvstore configuration" subsys=daemon
level=info msg="Initializing node addressing" subsys=daemon
level=info msg="Initializing cluster-pool IPAM" subsys=ipam v4Prefix=10.0.0.0/24 v6Prefix="<nil>"
level=info msg="Restoring endpoints..." subsys=daemon
level=info msg="Endpoints restored" failed=0 restored=0 subsys=daemon
level=info msg="Addressing information:" subsys=daemon
level=info msg="  Cluster-Name: default" subsys=daemon
level=info msg="  Cluster-ID: 0" subsys=daemon
level=info msg="  Local node-name: minikube" subsys=daemon
level=info msg="  Node-IPv6: <nil>" subsys=daemon
level=info msg="  External-Node IPv4: 192.168.49.2" subsys=daemon
level=info msg="  Internal-Node IPv4: 10.0.0.82" subsys=daemon
level=info msg="  IPv4 allocation prefix: 10.0.0.0/24" subsys=daemon
level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
level=info msg="  Local IPv4 addresses:" subsys=daemon
level=info msg="  - 10.85.0.1" subsys=daemon
level=info msg="  - 192.168.49.2" subsys=daemon
level=info msg="Adding local node to cluster" node="{minikube default [{InternalIP 192.168.49.2} {CiliumInternalIP 10.0.0.82}] 10.0.0.0/24 <nil> 10.0.0.155 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:minikube kubernetes.io/os:linux minikube.k8s.io/commit:043bdca07e54ab6e4fc0457e3064048f34133d7e-dirty minikube.k8s.io/name:minikube minikube.k8s.io/updated_at:2021_02_12T17_07_22_0700 minikube.k8s.io/version:v1.17.1 node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] 6}" subsys=nodediscovery
level=info msg="Creating or updating CiliumNode resource" node=minikube subsys=nodediscovery
level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=10.0.0.82 v4Prefix=10.0.0.0/24 v4healthIP.IPv4=10.0.0.155 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
level=info msg="Initializing identity allocator" subsys=identity-cache
level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
level=info msg="Setting up BPF datapath" bpfClockSource=jiffies bpfInsnSet=v2 subsys=datapath-loader
level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1
level=warning msg="Failed to sysctl -w" error="could not open the sysctl file /proc/sys/net/core/bpf_jit_enable: open /proc/sys/net/core/bpf_jit_enable: no such file or directory" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1
level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
level=info msg="Setting sysctl" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0
level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
level=info msg="Adding new proxy port rules for cilium-dns-egress:39837" proxy port name=cilium-dns-egress subsys=proxy
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Validating configured node address ranges" subsys=daemon
level=info msg="Starting connection tracking garbage collector" subsys=daemon
level=info msg="Starting IP identity watcher" subsys=ipcache
level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
level=info msg="Regenerating restored endpoints" numRestored=0 subsys=daemon
level=info msg="Creating host endpoint" subsys=daemon
level=info msg="Finished regenerating restored endpoints" regenerated=0 subsys=daemon total=0
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2190 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2190 identityLabels="k8s:minikube.k8s.io/commit=043bdca07e54ab6e4fc0457e3064048f34133d7e-dirty,k8s:minikube.k8s.io/name=minikube,k8s:minikube.k8s.io/updated_at=2021_02_12T17_07_22_0700,k8s:minikube.k8s.io/version=v1.17.1,k8s:node-role.kubernetes.io/control-plane,k8s:node-role.kubernetes.io/master,reserved:host" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2190 identity=1 identityLabels="k8s:minikube.k8s.io/commit=043bdca07e54ab6e4fc0457e3064048f34133d7e-dirty,k8s:minikube.k8s.io/name=minikube,k8s:minikube.k8s.io/updated_at=2021_02_12T17_07_22_0700,k8s:minikube.k8s.io/version=v1.17.1,k8s:node-role.kubernetes.io/control-plane,k8s:node-role.kubernetes.io/master,reserved:host" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Launching Cilium health daemon" subsys=daemon
level=info msg="Launching Cilium health endpoint" subsys=daemon
level=info msg="Started healthz status API server" address="127.0.0.1:9876" subsys=daemon
level=info msg="Datapath signal listener running" subsys=signal
level=info msg="Initializing Cilium API" subsys=daemon
level=info msg="Daemon initialization completed" bootstrapTime=9.249064645s subsys=daemon
level=info msg="Configuring Hubble server" eventQueueSize=16384 maxFlows=4095 subsys=hubble
level=info msg="Serving cilium API at unix:///var/run/cilium/cilium.sock" subsys=daemon
level=info msg="Beginning to read perf buffer" startTime="2021-02-12 16:09:21.461103386 +0000 UTC m=+9.343482009" subsys=monitor-agent
level=info msg="Starting local Hubble server" address="unix:///var/run/cilium/hubble.sock" subsys=hubble
level=info msg="Processing API request with rate limiter" name=endpoint-delete parallelRequests=4 subsys=rate uuid=aace2e3d-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" name=endpoint-delete parallelRequests=4 subsys=rate uuid=aace2e3d-6d4c-11eb-912b-024283c0511b waitDurationTotal="24.08µs"
level=info msg="Delete endpoint request" id="container-id:afee16f9b0bb1174470c53b88c5de73d29357f37c8d6418f622c92f15333c54e" subsys=daemon
level=info msg="API call has been processed" error="endpoint not found" name=endpoint-delete processingDuration="11.72µs" subsys=rate totalDuration="48.479µs" uuid=aace2e3d-6d4c-11eb-912b-024283c0511b waitDurationTotal="24.08µs"
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=ab03a7d6-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=ab03a7d6-6d4c-11eb-912b-024283c0511b waitDurationTotal=0s
level=info msg="Create endpoint request" addressing="&{10.0.0.173 ab037b8d-6d4c-11eb-912b-024283c0511b  }" containerID=c3b7f3644e0913db9fb3d319f3dde6d9b20f508de0ae9d7f1bcf63a854cc37d9 datapathConfiguration="<nil>" interface=lxcf79b520876e9 k8sPodName=kube-system/coredns-74ff55c5b-4kwpk labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3350 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3350 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=coredns;k8s:io.kubernetes.pod.namespace=kube-system;k8s:k8s-app=kube-dns;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3350 identity=43833 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3350 identity=43833 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1305 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1305 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1305 identity=4 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Compiled new BPF template" BPFCompilationTime=1.258871748s file-path=/var/run/cilium/state/templates/186851e3dfad69cc0c322c0394a8589496ddd3df/bpf_host.o subsys=datapath-loader
level=info msg="Compiled new BPF template" BPFCompilationTime=1.05281869s file-path=/var/run/cilium/state/templates/a08e40bd100ab7240750a50562842b951f1ac730/bpf_lxc.o subsys=datapath-loader
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1305 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3350 identity=43833 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=3350 identity=43833 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=1.368092629s subsys=rate totalDuration=1.368135098s uuid=ab03a7d6-6d4c-11eb-912b-024283c0511b waitDurationTotal=0s
level=info msg="Serving cilium health API at unix:///var/run/cilium/health.sock" subsys=health-server
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2190 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Policy Add Request" ciliumNetworkPolicy="[&{EndpointSelector:{\"matchLabels\":{\"any:name\":\"pod-to-a-denied-cnp\",\"k8s:io.kubernetes.pod.namespace\":\"cilium-test\"}} NodeSelector:{} Ingress:[] IngressDeny:[] Egress:[{EgressCommonRule:{ToEndpoints:[{\"matchLabels\":{\"k8s:io.kubernetes.pod.namespace\":\"kube-system\",\"k8s:k8s-app\":\"kube-dns\"}}] ToRequires:[] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToGroups:[] AggregatedSelectors:[]} ToPorts:[{Ports:[{Port:53 Protocol:ANY}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:<nil>}] ToFQDNs:[]} {EgressCommonRule:{ToEndpoints:[{\"matchLabels\":{\"k8s:dns.operator.openshift.io/daemonset-dns\":\"default\",\"k8s:io.kubernetes.pod.namespace\":\"openshift-dns\"}}] ToRequires:[] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToGroups:[] AggregatedSelectors:[]} ToPorts:[{Ports:[{Port:5353 Protocol:UDP}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:<nil>}] ToFQDNs:[]}] EgressDeny:[] Labels:[k8s:io.cilium.k8s.policy.derived-from=CiliumNetworkPolicy k8s:io.cilium.k8s.policy.name=pod-to-a-denied-cnp k8s:io.cilium.k8s.policy.namespace=cilium-test k8s:io.cilium.k8s.policy.uid=bb531ae9-c218-46e1-bd54-f76fde583bc6] Description:}]" policyAddRequest=bab14476-6d4c-11eb-912b-024283c0511b subsys=daemon
level=info msg="Policy imported via API, recalculating..." policyAddRequest=bab14476-6d4c-11eb-912b-024283c0511b policyRevision=2 subsys=daemon
level=info msg="Imported CiliumNetworkPolicy" ciliumNetworkPolicyName=pod-to-a-denied-cnp k8sApiVersion= k8sNamespace=cilium-test subsys=k8s-watcher
level=info msg="Policy Add Request" ciliumNetworkPolicy="[&{EndpointSelector:{\"matchLabels\":{\"any:name\":\"pod-to-a-allowed-cnp\",\"k8s:io.kubernetes.pod.namespace\":\"cilium-test\"}} NodeSelector:{} Ingress:[] IngressDeny:[] Egress:[{EgressCommonRule:{ToEndpoints:[{\"matchLabels\":{\"any:name\":\"echo-a\",\"k8s:io.kubernetes.pod.namespace\":\"cilium-test\"}}] ToRequires:[] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToGroups:[] AggregatedSelectors:[]} ToPorts:[{Ports:[{Port:8080 Protocol:TCP}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:<nil>}] ToFQDNs:[]} {EgressCommonRule:{ToEndpoints:[{\"matchLabels\":{\"k8s:io.kubernetes.pod.namespace\":\"kube-system\",\"k8s:k8s-app\":\"kube-dns\"}}] ToRequires:[] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToGroups:[] AggregatedSelectors:[]} ToPorts:[{Ports:[{Port:53 Protocol:ANY}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:<nil>}] ToFQDNs:[]} {EgressCommonRule:{ToEndpoints:[{\"matchLabels\":{\"k8s:dns.operator.openshift.io/daemonset-dns\":\"default\",\"k8s:io.kubernetes.pod.namespace\":\"openshift-dns\"}}] ToRequires:[] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToGroups:[] AggregatedSelectors:[]} ToPorts:[{Ports:[{Port:5353 Protocol:UDP}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:<nil>}] ToFQDNs:[]}] EgressDeny:[] Labels:[k8s:io.cilium.k8s.policy.derived-from=CiliumNetworkPolicy k8s:io.cilium.k8s.policy.name=pod-to-a-allowed-cnp k8s:io.cilium.k8s.policy.namespace=cilium-test k8s:io.cilium.k8s.policy.uid=b3d45527-d0fc-4a03-bd87-be218225b066] Description:}]" policyAddRequest=bab21db3-6d4c-11eb-912b-024283c0511b subsys=daemon
level=info msg="Policy imported via API, recalculating..." policyAddRequest=bab21db3-6d4c-11eb-912b-024283c0511b policyRevision=3 subsys=daemon
level=info msg="Imported CiliumNetworkPolicy" ciliumNetworkPolicyName=pod-to-a-allowed-cnp k8sApiVersion= k8sNamespace=cilium-test subsys=k8s-watcher
level=info msg="Policy Add Request" ciliumNetworkPolicy="[&{EndpointSelector:{\"matchLabels\":{\"any:name\":\"pod-to-external-fqdn-allow-google-cnp\",\"k8s:io.kubernetes.pod.namespace\":\"cilium-test\"}} NodeSelector:{} Ingress:[] IngressDeny:[] Egress:[{EgressCommonRule:{ToEndpoints:[] ToRequires:[] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToGroups:[] AggregatedSelectors:[{\"matchLabels\":{\"reserved:none\":\"\"}}]} ToPorts:[] ToFQDNs:[{MatchName: MatchPattern:*.google.com}]} {EgressCommonRule:{ToEndpoints:[{\"matchLabels\":{\"k8s:io.kubernetes.pod.namespace\":\"kube-system\",\"k8s:k8s-app\":\"kube-dns\"}}] ToRequires:[] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToGroups:[] AggregatedSelectors:[]} ToPorts:[{Ports:[{Port:53 Protocol:ANY}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:0xc00034abd0}] ToFQDNs:[]} {EgressCommonRule:{ToEndpoints:[{\"matchLabels\":{\"k8s:dns.operator.openshift.io/daemonset-dns\":\"default\",\"k8s:io.kubernetes.pod.namespace\":\"openshift-dns\"}}] ToRequires:[] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToGroups:[] AggregatedSelectors:[]} ToPorts:[{Ports:[{Port:5353 Protocol:UDP}] TerminatingTLS:<nil> OriginatingTLS:<nil> Rules:0xc00034acb0}] ToFQDNs:[]}] EgressDeny:[] Labels:[k8s:io.cilium.k8s.policy.derived-from=CiliumNetworkPolicy k8s:io.cilium.k8s.policy.name=pod-to-external-fqdn-allow-google-cnp k8s:io.cilium.k8s.policy.namespace=cilium-test k8s:io.cilium.k8s.policy.uid=ca9389b4-31a5-4f34-853a-d919b347a039] Description:}]" policyAddRequest=bab31877-6d4c-11eb-912b-024283c0511b subsys=daemon
level=info msg="Policy imported via API, recalculating..." policyAddRequest=bab31877-6d4c-11eb-912b-024283c0511b policyRevision=4 subsys=daemon
level=info msg="Imported CiliumNetworkPolicy" ciliumNetworkPolicyName=pod-to-external-fqdn-allow-google-cnp k8sApiVersion= k8sNamespace=cilium-test subsys=k8s-watcher
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=bb49b7a8-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=bb49b7a8-6d4c-11eb-912b-024283c0511b waitDurationTotal=0s
level=info msg="Create endpoint request" addressing="&{10.0.0.123 bb3fbb54-6d4c-11eb-912b-024283c0511b  }" containerID=e8ed6c2e5005f9fd0e9b39425cef83cdf5849477b141927337a854c11374782b datapathConfiguration="<nil>" interface=lxc82ecd721a3f1 k8sPodName=cilium-test/echo-a-dc9bcfd8f-cbwpj labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1091 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1091 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=echo-a" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=echo-a;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1091 identity=46328 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=echo-a" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1091 identity=46328 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=bb6842c2-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=bb6842c2-6d4c-11eb-912b-024283c0511b waitDurationTotal=0s
level=info msg="Create endpoint request" addressing="&{10.0.0.211 bb6817c4-6d4c-11eb-912b-024283c0511b  }" containerID=efc35d3b68468f30c8ee4e289e3248cde1f90ffb146329b3e09a233c65f6bd9d datapathConfiguration="<nil>" interface=lxcedb65c13888d k8sPodName=cilium-test/echo-b-5884b7dc69-s2hnn labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=757 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=757 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=echo-b" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=echo-b;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=757 identity=30333 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=echo-b" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=757 identity=30333 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=1091 identity=46328 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=1091 identity=46328 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=562.231693ms subsys=rate totalDuration=562.35576ms uuid=bb49b7a8-6d4c-11eb-912b-024283c0511b waitDurationTotal=0s
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=6 rateLimiterSkipped=true subsys=rate uuid=bbac36f8-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=6 rateLimiterSkipped=true subsys=rate uuid=bbac36f8-6d4c-11eb-912b-024283c0511b waitDurationTotal=0s
level=info msg="Create endpoint request" addressing="&{10.0.0.110 bb95d8d2-6d4c-11eb-912b-024283c0511b  }" containerID=07c6ab8d2c101c04f1ce1c4e5242e46ea931bbc94ba67dc076d50fa2f1aec244 datapathConfiguration="<nil>" interface=lxc53b0062f3b16 k8sPodName=cilium-test/pod-to-external-1111-d5c7bb4c4-tzpzw labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1128 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1128 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-external-1111" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=757 identity=30333 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=757 identity=30333 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=445.986824ms subsys=rate totalDuration=446.039722ms uuid=bb6842c2-6d4c-11eb-912b-024283c0511b waitDurationTotal=0s
level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-external-1111;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1128 identity=10952 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-external-1111" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1128 identity=10952 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=7 subsys=rate uuid=bbb5ce36-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=7 limit=1.26/s maxWaitDuration=15s maxWaitDurationLimiter=14.999938722s name=endpoint-create parallelRequests=7 subsys=rate uuid=bbb5ce36-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="80.627µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.218 bba2eabc-6d4c-11eb-912b-024283c0511b  }" containerID=6219baa0e3eccc24ddd7f4b1eb08159a55a09b9a133b20ec0f4ee3fc085eb996 datapathConfiguration="<nil>" interface=lxc718b626f08f6 k8sPodName=cilium-test/pod-to-a-denied-cnp-75cb89dfd-nd5m6 labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1711 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1711 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a-denied-cnp" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-a-denied-cnp;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1711 identity=56832 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a-denied-cnp" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1711 identity=56832 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=7 subsys=rate uuid=bbd9ff5b-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=7 limit=1.26/s maxWaitDuration=15s maxWaitDurationLimiter=14.999935131s name=endpoint-create parallelRequests=7 subsys=rate uuid=bbd9ff5b-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="73.228µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.246 bbcb9d23-6d4c-11eb-912b-024283c0511b  }" containerID=872db337e43bacce7a661387f28e00219a60aaf7678b29a7ef5e0e3f99b0ae69 datapathConfiguration="<nil>" interface=lxcaf8c5a67b887 k8sPodName=cilium-test/pod-to-a-allowed-cnp-7d7c8f9f9b-fr6rm labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=293 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=7 subsys=rate uuid=bbda0d64-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=7 limit=1.26/s maxWaitDuration=15s maxWaitDurationLimiter=14.999919023s name=endpoint-create parallelRequests=7 subsys=rate uuid=bbda0d64-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="90.217µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.113 bbd341b3-6d4c-11eb-912b-024283c0511b  }" containerID=cc49d77e9b0e099b633ac6415426224a034c69144624eb7b8984f04da0d22164 datapathConfiguration="<nil>" interface=lxc18ee52e6438a k8sPodName=cilium-test/pod-to-external-fqdn-allow-google-cnp-f48574954-z2xm7 labels="[]" subsys=daemon sync-build=true
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=293 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a-allowed-cnp" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3241 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3241 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-external-fqdn-allow-google-cnp" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=7 subsys=rate uuid=bbda23d2-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=7 limit=1.26/s maxWaitDuration=15s maxWaitDurationLimiter=14.999939642s name=endpoint-create parallelRequests=7 subsys=rate uuid=bbda23d2-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="68.328µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.244 bbd9e08b-6d4c-11eb-912b-024283c0511b  }" containerID=8ae2e23810e66232128f679adb36307336f77a7a42b2b4ccce087a0e509fd973 datapathConfiguration="<nil>" interface=lxc4dad957fa11d k8sPodName=cilium-test/pod-to-a-5cdfd4754d-r7dhd labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1997 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1997 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Invalid state transition skipped" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1997 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=540 subsys=endpoint
level=info msg="Invalid state transition skipped" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3241 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=540 subsys=endpoint
level=info msg="Invalid state transition skipped" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=293 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=540 subsys=endpoint
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=7 subsys=rate uuid=bbf7333c-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=7 limit=1.26/s maxWaitDuration=15s maxWaitDurationLimiter=14.999932932s name=endpoint-create parallelRequests=7 subsys=rate uuid=bbf7333c-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="76.128µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.109 bbf70d02-6d4c-11eb-912b-024283c0511b  }" containerID=78755f09c4b04b97eccb3a9b591422556acdcb4ba7930c640542b26c8bd7cf97 datapathConfiguration="<nil>" interface=lxc33e91d0109cb k8sPodName=cilium-test/pod-to-b-intra-node-nodeport-99b499f7d-dd7tq labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=244 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=244 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-b-intra-node-nodeport" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=1711 identity=56832 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=1711 identity=56832 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=488.062793ms subsys=rate totalDuration=488.16442ms uuid=bbb5ce36-6d4c-11eb-912b-024283c0511b waitDurationTotal="80.627µs"
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=1128 identity=10952 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=1128 identity=10952 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=563.657784ms subsys=rate totalDuration=563.735112ms uuid=bbac36f8-6d4c-11eb-912b-024283c0511b waitDurationTotal=0s
level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-a-allowed-cnp;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=293 identity=35803 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a-allowed-cnp" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=293 identity=35803 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=293 identity=35803 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=293 identity=35803 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=580.106269ms subsys=rate totalDuration=580.200047ms uuid=bbd9ff5b-6d4c-11eb-912b-024283c0511b waitDurationTotal="73.228µs"
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Invalid state transition skipped" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1997 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=540 subsys=endpoint
level=info msg="Invalid state transition skipped" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=244 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=540 subsys=endpoint
level=info msg="Invalid state transition skipped" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3241 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=540 subsys=endpoint
level=info msg="Throttling request took 1.063199687s, request: GET:https://10.96.0.1:443/apis/cilium.io/v2/namespaces/cilium-test/ciliumendpoints/pod-to-b-intra-node-nodeport-99b499f7d-dd7tq" subsys=klog
level=info msg="Waiting for Hubble server TLS certificate and key files to be created" subsys=hubble
level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-external-fqdn-allow-google-cnp;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3241 identity=15814 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-external-fqdn-allow-google-cnp" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3241 identity=15814 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=3241 identity=15814 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=3241 identity=15814 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=1.826441303s subsys=rate totalDuration=1.82655568s uuid=bbda0d64-6d4c-11eb-912b-024283c0511b waitDurationTotal="90.217µs"
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Invalid state transition skipped" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1997 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=540 subsys=endpoint
level=info msg="Invalid state transition skipped" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=244 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=540 subsys=endpoint
level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-a;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1997 identity=3057 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1997 identity=3057 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=1997 identity=3057 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=1997 identity=3057 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=2.839129617s subsys=rate totalDuration=2.839220455s uuid=bbda23d2-6d4c-11eb-912b-024283c0511b waitDurationTotal="68.328µs"
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Invalid state transition skipped" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=244 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=540 subsys=endpoint
level=info msg="Allocated new global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-b-intra-node-nodeport;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=244 identity=9132 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-b-intra-node-nodeport" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=244 identity=9132 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=244 identity=9132 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=244 identity=9132 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=3.421971982s subsys=rate totalDuration=3.422070879s uuid=bbf7333c-6d4c-11eb-912b-024283c0511b waitDurationTotal="76.128µs"
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="regenerating all endpoints" reason= subsys=endpoint-manager
level=info msg="Processing API request with rate limiter" name=endpoint-delete parallelRequests=4 subsys=rate uuid=d34c870a-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" name=endpoint-delete parallelRequests=4 subsys=rate uuid=d34c870a-6d4c-11eb-912b-024283c0511b waitDurationTotal="31.379µs"
level=info msg="Delete endpoint request" id="container-id:6219baa0e3eccc24ddd7f4b1eb08159a55a09b9a133b20ec0f4ee3fc085eb996" subsys=daemon
level=info msg="Releasing key" key="[k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-test k8s:name=pod-to-a-denied-cnp]" subsys=allocator
level=info msg="Removed endpoint" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=1711 identity=56832 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="API call has been processed" name=endpoint-delete processingDuration=2.326937ms subsys=rate totalDuration=2.365326ms uuid=d34c870a-6d4c-11eb-912b-024283c0511b waitDurationTotal="31.379µs"
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=4 subsys=rate uuid=d39efbbb-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=5 limit=0.74/s maxWaitDuration=15s maxWaitDurationLimiter=14.999961501s name=endpoint-create parallelRequests=4 subsys=rate uuid=d39efbbb-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="44.009µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.126 d39eb912-6d4c-11eb-912b-024283c0511b  }" containerID=8cb661f59cad1086e2a1900e498279d4673a2686facad5275a70bfaa4d597aea datapathConfiguration="<nil>" interface=lxcb853c948926a k8sPodName=cilium-test/pod-to-a-denied-cnp-75cb89dfd-nd5m6 labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=958 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=958 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a-denied-cnp" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Reserved new local key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-a-denied-cnp;" subsys=allocator
level=info msg="Reusing existing global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-a-denied-cnp;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=958 identity=56832 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a-denied-cnp" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=958 identity=56832 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=958 identity=56832 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=958 identity=56832 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=139.994654ms subsys=rate totalDuration=140.048543ms uuid=d39efbbb-6d4c-11eb-912b-024283c0511b waitDurationTotal="44.009µs"
level=info msg="Processing API request with rate limiter" name=endpoint-delete parallelRequests=173 subsys=rate uuid=dc4c80e3-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" name=endpoint-delete parallelRequests=173 subsys=rate uuid=dc4c80e3-6d4c-11eb-912b-024283c0511b waitDurationTotal="51.468µs"
level=info msg="Delete endpoint request" id="container-id:07c6ab8d2c101c04f1ce1c4e5242e46ea931bbc94ba67dc076d50fa2f1aec244" subsys=daemon
level=info msg="Releasing key" key="[k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-test k8s:name=pod-to-external-1111]" subsys=allocator
level=info msg="Removed endpoint" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=1128 identity=10952 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="API call has been processed" name=endpoint-delete processingDuration=3.234312ms subsys=rate totalDuration=3.29833ms uuid=dc4c80e3-6d4c-11eb-912b-024283c0511b waitDurationTotal="51.468µs"
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=5 subsys=rate uuid=dc91db37-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=5 limit=0.82/s maxWaitDuration=15s maxWaitDurationLimiter=14.999965951s name=endpoint-create parallelRequests=5 subsys=rate uuid=dc91db37-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="39.959µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.16 dc9196cc-6d4c-11eb-912b-024283c0511b  }" containerID=1c77443713c7bd3257097a487b607d6e29142eeed5bd72e1c22c5401cdd00007 datapathConfiguration="<nil>" interface=lxcf33c43d0b7b9 k8sPodName=cilium-test/pod-to-external-1111-d5c7bb4c4-tzpzw labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=896 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=896 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-external-1111" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Reserved new local key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-external-1111;" subsys=allocator
level=info msg="Reusing existing global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-external-1111;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=896 identity=10952 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-external-1111" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=896 identity=10952 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=896 identity=10952 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=896 identity=10952 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=148.252129ms subsys=rate totalDuration=148.300958ms uuid=dc91db37-6d4c-11eb-912b-024283c0511b waitDurationTotal="39.959µs"
level=info msg="Processing API request with rate limiter" name=endpoint-delete parallelRequests=145 subsys=rate uuid=df4e09fa-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" name=endpoint-delete parallelRequests=145 subsys=rate uuid=df4e09fa-6d4c-11eb-912b-024283c0511b waitDurationTotal="54.878µs"
level=info msg="Delete endpoint request" id="container-id:872db337e43bacce7a661387f28e00219a60aaf7678b29a7ef5e0e3f99b0ae69" subsys=daemon
level=info msg="Releasing key" key="[k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-test k8s:name=pod-to-a-allowed-cnp]" subsys=allocator
level=info msg="Removed endpoint" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=293 identity=35803 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="API call has been processed" name=endpoint-delete processingDuration=3.510285ms subsys=rate totalDuration=3.578043ms uuid=df4e09fa-6d4c-11eb-912b-024283c0511b waitDurationTotal="54.878µs"
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=5 subsys=rate uuid=df953ece-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=6 limit=0.91/s maxWaitDuration=15s maxWaitDurationLimiter=14.999939842s name=endpoint-create parallelRequests=5 subsys=rate uuid=df953ece-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="68.628µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.172 df94fc65-6d4c-11eb-912b-024283c0511b  }" containerID=fca0f9e720d2490ab6c8b48c01964b85cba65276be7030b01c68b5ab89eac99d datapathConfiguration="<nil>" interface=lxcb51772e0cdb3 k8sPodName=cilium-test/pod-to-a-allowed-cnp-7d7c8f9f9b-fr6rm labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=500 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=500 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a-allowed-cnp" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Reserved new local key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-a-allowed-cnp;" subsys=allocator
level=info msg="Reusing existing global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-a-allowed-cnp;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=500 identity=35803 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a-allowed-cnp" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=500 identity=35803 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=500 identity=35803 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=500 identity=35803 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=159.31104ms subsys=rate totalDuration=159.399567ms uuid=df953ece-6d4c-11eb-912b-024283c0511b waitDurationTotal="68.628µs"
level=info msg="Processing API request with rate limiter" name=endpoint-delete parallelRequests=134 subsys=rate uuid=e84ed466-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" name=endpoint-delete parallelRequests=134 subsys=rate uuid=e84ed466-6d4c-11eb-912b-024283c0511b waitDurationTotal="54.889µs"
level=info msg="Delete endpoint request" id="container-id:cc49d77e9b0e099b633ac6415426224a034c69144624eb7b8984f04da0d22164" subsys=daemon
level=info msg="Releasing key" key="[k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-test k8s:name=pod-to-external-fqdn-allow-google-cnp]" subsys=allocator
level=info msg="Removed endpoint" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=3241 identity=15814 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="API call has been processed" name=endpoint-delete processingDuration=3.300291ms subsys=rate totalDuration=3.367719ms uuid=e84ed466-6d4c-11eb-912b-024283c0511b waitDurationTotal="54.889µs"
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=5 subsys=rate uuid=e893b782-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=6 limit=0.94/s maxWaitDuration=15s maxWaitDurationLimiter=14.999929461s name=endpoint-create parallelRequests=5 subsys=rate uuid=e893b782-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="79.548µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.44 e8938dce-6d4c-11eb-912b-024283c0511b  }" containerID=02c51df92d936e639b5af3f450551f3daa39a9fe5fd59f129875f521fd8ef09f datapathConfiguration="<nil>" interface=lxc6c92cc87b985 k8sPodName=cilium-test/pod-to-external-fqdn-allow-google-cnp-f48574954-z2xm7 labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=309 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=309 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-external-fqdn-allow-google-cnp" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Reserved new local key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-external-fqdn-allow-google-cnp;" subsys=allocator
level=info msg="Reusing existing global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-external-fqdn-allow-google-cnp;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=309 identity=15814 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-external-fqdn-allow-google-cnp" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=309 identity=15814 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=309 identity=15814 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=309 identity=15814 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=176.753035ms subsys=rate totalDuration=176.852043ms uuid=e893b782-6d4c-11eb-912b-024283c0511b waitDurationTotal="79.548µs"
level=info msg="Processing API request with rate limiter" name=endpoint-delete parallelRequests=131 subsys=rate uuid=eb4fc956-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" name=endpoint-delete parallelRequests=131 subsys=rate uuid=eb4fc956-6d4c-11eb-912b-024283c0511b waitDurationTotal="49.199µs"
level=info msg="Delete endpoint request" id="container-id:8ae2e23810e66232128f679adb36307336f77a7a42b2b4ccce087a0e509fd973" subsys=daemon
level=info msg="Releasing key" key="[k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-test k8s:name=pod-to-a]" subsys=allocator
level=info msg="Removed endpoint" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=1997 identity=3057 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="API call has been processed" name=endpoint-delete processingDuration=3.30547ms subsys=rate totalDuration=3.368208ms uuid=eb4fc956-6d4c-11eb-912b-024283c0511b waitDurationTotal="49.199µs"
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=5 subsys=rate uuid=eb994a71-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=6 limit=0.97/s maxWaitDuration=15s maxWaitDurationLimiter=14.999945592s name=endpoint-create parallelRequests=5 subsys=rate uuid=eb994a71-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="63.948µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.237 eb9925ff-6d4c-11eb-912b-024283c0511b  }" containerID=2e855a2aa2bb55366a1908d7ed15fee2443f915de67989f7bd8cce85a7ecba1b datapathConfiguration="<nil>" interface=lxc20d56eb5d7b6 k8sPodName=cilium-test/pod-to-a-5cdfd4754d-r7dhd labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3209 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3209 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Reserved new local key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-a;" subsys=allocator
level=info msg="Reusing existing global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-a;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3209 identity=3057 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-a" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3209 identity=3057 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=3209 identity=3057 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=3209 identity=3057 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=186.803062ms subsys=rate totalDuration=186.88527ms uuid=eb994a71-6d4c-11eb-912b-024283c0511b waitDurationTotal="63.948µs"
level=info msg="Processing API request with rate limiter" name=endpoint-delete parallelRequests=129 subsys=rate uuid=f45080d6-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" name=endpoint-delete parallelRequests=129 subsys=rate uuid=f45080d6-6d4c-11eb-912b-024283c0511b waitDurationTotal="58.948µs"
level=info msg="Delete endpoint request" id="container-id:78755f09c4b04b97eccb3a9b591422556acdcb4ba7930c640542b26c8bd7cf97" subsys=daemon
level=info msg="Releasing key" key="[k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-test k8s:name=pod-to-b-intra-node-nodeport]" subsys=allocator
level=info msg="Removed endpoint" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=244 identity=9132 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="API call has been processed" name=endpoint-delete processingDuration=3.849885ms subsys=rate totalDuration=3.922533ms uuid=f45080d6-6d4c-11eb-912b-024283c0511b waitDurationTotal="58.948µs"
level=info msg="Processing API request with rate limiter" maxWaitDuration=15s name=endpoint-create parallelRequests=5 subsys=rate uuid=f49c6fde-6d4c-11eb-912b-024283c0511b
level=info msg="API request released by rate limiter" burst=6 limit=1.00/s maxWaitDuration=15s maxWaitDurationLimiter=14.999941891s name=endpoint-create parallelRequests=5 subsys=rate uuid=f49c6fde-6d4c-11eb-912b-024283c0511b waitDurationLimiter=0s waitDurationTotal="66.868µs"
level=info msg="Create endpoint request" addressing="&{10.0.0.12 f49c484c-6d4c-11eb-912b-024283c0511b  }" containerID=2d49dded299e22abdde4a567a908b071281485be79f48a3a7e50de1d11a1c6fe datapathConfiguration="<nil>" interface=lxcee00d00551ec k8sPodName=cilium-test/pod-to-b-intra-node-nodeport-99b499f7d-dd7tq labels="[]" subsys=daemon sync-build=true
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1946 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1946 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-b-intra-node-nodeport" ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Reserved new local key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-b-intra-node-nodeport;" subsys=allocator
level=info msg="Reusing existing global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=cilium-test;k8s:name=pod-to-b-intra-node-nodeport;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1946 identity=9132 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=cilium-test,k8s:name=pod-to-b-intra-node-nodeport" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1946 identity=9132 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=4 endpointID=1946 identity=9132 ipv4= ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Successful endpoint creation" containerID= datapathPolicyRevision=4 desiredPolicyRevision=4 endpointID=1946 identity=9132 ipv4= ipv6= k8sPodName=/ subsys=daemon
level=info msg="API call has been processed" name=endpoint-create processingDuration=186.587337ms subsys=rate totalDuration=186.675405ms uuid=f49c6fde-6d4c-11eb-912b-024283c0511b waitDurationTotal="66.868µs"
level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.0006017294651368766 newInterval=7m30s subsys=map-ct
level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.0010614778205223553 newInterval=11m15s subsys=map-ct
@rolinh rolinh added the kind/bug This is a bug in the Cilium logic. label Feb 12, 2021
@brb brb added the needs/triage This issue requires triaging to establish severity and next steps. label Feb 15, 2021
@brb
Member

brb commented Feb 15, 2021

@rolinh Have you checked whether the issue persists with the latest master?

@rolinh
Member Author

rolinh commented Feb 15, 2021

@brb Yes, I forgot to mention it in the issue, but everything works as expected with master; I can only reproduce the problem with v1.9.

@pchaigno
Member

pchaigno commented Feb 15, 2021

I was able to reproduce on 5.10, but not 5.9. I'll try to bisect the exact kernel commit that impacted our complexity this afternoon.

EDIT: I bisected it to https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=75748837b7e56919679e02163f45d5818c644d03. Interestingly, Alexei (the author) didn't seem to think it would impact state pruning. The attached object file can be used to reproduce without starting or even compiling Cilium.

@rolinh
Member Author

rolinh commented Mar 4, 2021

I found out that v1.9 on a Kind cluster now works with kernel v5.10. At the time I reported the issue, I was running v5.10.15; now I'm running v5.10.19. These are probably the relevant commits in the BPF subsystem that might explain why I no longer hit the complexity issue, even with Cilium v1.9:

$ git log --pretty=oneline v5.10.15..v5.10.19 kernel/bpf/
3320bae8c115863b6f17993c2b7970f7f419da57 bpf: Fix truncation handling for mod32 dst reg wrt zero
8032bf2af9ce26b3a362b9711d15f626ab946a74 bpf: Check for integer overflow when using roundup_pow_of_two()
67afdc7d95b90aaf3ba3b2c7bccd6d73bda53265 bpf: Fix verifier jsgt branch analysis on max bound
1d16cc210fabd0a7ebf52d3025f81c2bde054a90 bpf: Fix 32 bit src register truncation on div/mod
569033c0825e4d90f7e824696dd334d239adc997 bpf: Fix verifier jmp32 pruning decision logic

@brb
Member

brb commented Mar 4, 2021

v1.9 on a Kind cluster now works with Kernel v5.10

With host-reachable-svc disabled?

@rolinh
Member Author

rolinh commented Mar 8, 2021

@brb I thought I had host-reachable services disabled but... no. The kernel complexity issue wasn't actually fixed by one of the commits above 😞 .

@ReillyTevera

There are a number of BPF fixes in 5.10.20, so it's probably worth testing that version as well. (none in 5.10.21 from what I can see)

@rolinh
Member Author

rolinh commented Mar 8, 2021

I did test with 5.10.21 today, still hitting the complexity issue unfortunately 😞

@ReillyTevera

@rolinh There are a couple of patches in the latest 5.10.24 kernel that may help. Particularly one that touches BPF XDP code. Can you test with this kernel and see if anything changes?

@rolinh
Member Author

rolinh commented Mar 18, 2021

@ReillyTevera Just tested with kernel 5.10.24, still hitting the complexity issue. I'd like to get around to bisecting the issue, but I just don't have cycles at the moment.

@stale

stale bot commented Jun 3, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@stale stale bot added the stale The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale. label Jun 3, 2021
@stale

stale bot commented Jun 26, 2021

This issue has not seen any activity since it was marked stale. Closing.

@stale stale bot closed this as completed Jun 26, 2021