cilium_vxlan operation not supported #29932

Closed
2 tasks done
RazaGR opened this issue Dec 16, 2023 · 3 comments
Labels
  • kind/bug: This is a bug in the Cilium logic.
  • kind/community-report: This was reported by a user in the Cilium community, e.g. via Slack.
  • needs/triage: This issue requires triaging to establish severity and next steps.

Comments


RazaGR commented Dec 16, 2023

Is there an existing issue for this?

  • I have searched the existing issues

What happened?

A bug happened!

Cilium Version

cilium-cli: v0.15.16 compiled with go1.21.4 on darwin/arm64
cilium image (default): v1.14.4
cilium image (stable): v1.14.5
cilium image (running): 1.15.0-pre.3

Kernel Version

Darwin MacBook-Pro.local 23.2.0 Darwin Kernel Version 23.2.0: Wed Nov 15 21:55:06 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6020 arm64

Kubernetes Version

Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.3
WARNING: version difference between client (1.29) and server (1.27) exceeds the supported minor version skew of +/-1

Sysdump

cilium-sysdump-20231216-201329.zip

Relevant log output

level=info msg="Memory available for map entries (0.003% of 8227328000B): 20568320B" subsys=config
level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 65536" subsys=config
level=info msg="option bpf-nat-global-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-neigh-global-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 65536" subsys=config
level=info msg="  --agent-health-port='9879'" subsys=daemon
level=info msg="  --agent-labels=''" subsys=daemon
level=info msg="  --agent-liveness-update-interval='1s'" subsys=daemon
level=info msg="  --agent-not-ready-taint-key='node.cilium.io/agent-not-ready'" subsys=daemon
level=info msg="  --allocator-list-timeout='3m0s'" subsys=daemon
level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
level=info msg="  --allow-localhost='auto'" subsys=daemon
level=info msg="  --annotate-k8s-node='false'" subsys=daemon
level=info msg="  --api-rate-limit=''" subsys=daemon
level=info msg="  --arping-refresh-period='30s'" subsys=daemon
level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
level=info msg="  --auto-direct-node-routes='false'" subsys=daemon
level=info msg="  --bgp-announce-lb-ip='false'" subsys=daemon
level=info msg="  --bgp-announce-pod-cidr='false'" subsys=daemon
level=info msg="  --bgp-config-path='/var/lib/cilium/bgp/config.yaml'" subsys=daemon
level=info msg="  --bpf-auth-map-max='524288'" subsys=daemon
level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp='2h13m20s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-tcp='2h13m20s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-tcp-grace='1m0s'" subsys=daemon
level=info msg="  --bpf-filter-priority='1'" subsys=daemon
level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
level=info msg="  --bpf-lb-affinity-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
level=info msg="  --bpf-lb-dev-ip-addr-inherit=''" subsys=daemon
level=info msg="  --bpf-lb-dsr-dispatch='opt'" subsys=daemon
level=info msg="  --bpf-lb-dsr-l4-xlate='frontend'" subsys=daemon
level=info msg="  --bpf-lb-external-clusterip='false'" subsys=daemon
level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
level=info msg="  --bpf-lb-maglev-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
level=info msg="  --bpf-lb-mode='snat'" subsys=daemon
level=info msg="  --bpf-lb-rev-nat-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-rss-ipv4-src-cidr=''" subsys=daemon
level=info msg="  --bpf-lb-rss-ipv6-src-cidr=''" subsys=daemon
level=info msg="  --bpf-lb-service-backend-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-service-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-sock='false'" subsys=daemon
level=info msg="  --bpf-lb-sock-hostns-only='false'" subsys=daemon
level=info msg="  --bpf-lb-source-range-map-max='0'" subsys=daemon
level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
level=info msg="  --bpf-map-event-buffers=''" subsys=daemon
level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
level=info msg="  --bpf-policy-map-full-reconciliation-interval='15m0s'" subsys=daemon
level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
level=info msg="  --bpf-root='/sys/fs/bpf'" subsys=daemon
level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
level=info msg="  --bypass-ip-availability-upon-restore='false'" subsys=daemon
level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
level=info msg="  --cflags=''" subsys=daemon
level=info msg="  --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
level=info msg="  --cilium-endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg="  --cluster-health-port='4240'" subsys=daemon
level=info msg="  --cluster-id='0'" subsys=daemon
level=info msg="  --cluster-name='default'" subsys=daemon
level=info msg="  --cluster-pool-ipv4-cidr='10.0.0.0/8'" subsys=daemon
level=info msg="  --cluster-pool-ipv4-mask-size='24'" subsys=daemon
level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
level=info msg="  --clustermesh-ip-identities-sync-timeout='1m0s'" subsys=daemon
level=info msg="  --cmdref=''" subsys=daemon
level=info msg="  --cni-chaining-mode='none'" subsys=daemon
level=info msg="  --cni-chaining-target=''" subsys=daemon
level=info msg="  --cni-exclusive='true'" subsys=daemon
level=info msg="  --cni-external-routing='false'" subsys=daemon
level=info msg="  --cni-log-file='/var/run/cilium/cilium-cni.log'" subsys=daemon
level=info msg="  --config=''" subsys=daemon
level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
level=info msg="  --config-sources='config-map:kube-system/cilium-config'" subsys=daemon
level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
level=info msg="  --conntrack-gc-max-interval='0s'" subsys=daemon
level=info msg="  --controller-group-metrics=''" subsys=daemon
level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
level=info msg="  --custom-cni-conf='false'" subsys=daemon
level=info msg="  --datapath-mode='veth'" subsys=daemon
level=info msg="  --debug='false'" subsys=daemon
level=info msg="  --debug-verbose=''" subsys=daemon
level=info msg="  --derive-masquerade-ip-addr-from-device=''" subsys=daemon
level=info msg="  --devices='eth0'" subsys=daemon
level=info msg="  --direct-routing-device=''" subsys=daemon
level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
level=info msg="  --dns-policy-unload-on-shutdown='false'" subsys=daemon
level=info msg="  --dnsproxy-concurrency-limit='0'" subsys=daemon
level=info msg="  --dnsproxy-concurrency-processing-grace-period='0s'" subsys=daemon
level=info msg="  --dnsproxy-lock-count='131'" subsys=daemon
level=info msg="  --dnsproxy-lock-timeout='500ms'" subsys=daemon
level=info msg="  --egress-gateway-policy-map-max='16384'" subsys=daemon
level=info msg="  --egress-gateway-reconciliation-trigger-interval='1s'" subsys=daemon
level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon
level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
level=info msg="  --enable-bbr='false'" subsys=daemon
level=info msg="  --enable-bgp-control-plane='false'" subsys=daemon
level=info msg="  --enable-bpf-clock-probe='false'" subsys=daemon
level=info msg="  --enable-bpf-masquerade='false'" subsys=daemon
level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
level=info msg="  --enable-cilium-api-server-access='*'" subsys=daemon
level=info msg="  --enable-cilium-endpoint-slice='false'" subsys=daemon
level=info msg="  --enable-cilium-health-api-server-access='*'" subsys=daemon
level=info msg="  --enable-custom-calls='false'" subsys=daemon
level=info msg="  --enable-encryption-strict-mode='false'" subsys=daemon
level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
level=info msg="  --enable-endpoint-routes='false'" subsys=daemon
level=info msg="  --enable-envoy-config='false'" subsys=daemon
level=info msg="  --enable-external-ips='false'" subsys=daemon
level=info msg="  --enable-health-check-loadbalancer-ip='false'" subsys=daemon
level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
level=info msg="  --enable-health-checking='true'" subsys=daemon
level=info msg="  --enable-high-scale-ipcache='false'" subsys=daemon
level=info msg="  --enable-host-firewall='false'" subsys=daemon
level=info msg="  --enable-host-legacy-routing='false'" subsys=daemon
level=info msg="  --enable-host-port='false'" subsys=daemon
level=info msg="  --enable-hubble='true'" subsys=daemon
level=info msg="  --enable-hubble-recorder-api='true'" subsys=daemon
level=info msg="  --enable-icmp-rules='true'" subsys=daemon
level=info msg="  --enable-identity-mark='true'" subsys=daemon
level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
level=info msg="  --enable-ipsec='false'" subsys=daemon
level=info msg="  --enable-ipsec-key-watcher='true'" subsys=daemon
level=info msg="  --enable-ipv4='true'" subsys=daemon
level=info msg="  --enable-ipv4-big-tcp='false'" subsys=daemon
level=info msg="  --enable-ipv4-egress-gateway='false'" subsys=daemon
level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
level=info msg="  --enable-ipv4-masquerade='true'" subsys=daemon
level=info msg="  --enable-ipv6='false'" subsys=daemon
level=info msg="  --enable-ipv6-big-tcp='false'" subsys=daemon
level=info msg="  --enable-ipv6-masquerade='true'" subsys=daemon
level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
level=info msg="  --enable-k8s='true'" subsys=daemon
level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
level=info msg="  --enable-k8s-networkpolicy='true'" subsys=daemon
level=info msg="  --enable-k8s-terminating-endpoint='true'" subsys=daemon
level=info msg="  --enable-l2-announcements='true'" subsys=daemon
level=info msg="  --enable-l2-neigh-discovery='true'" subsys=daemon
level=info msg="  --enable-l2-pod-announcements='false'" subsys=daemon
level=info msg="  --enable-l7-proxy='true'" subsys=daemon
level=info msg="  --enable-local-node-route='true'" subsys=daemon
level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
level=info msg="  --enable-masquerade-to-route-source='false'" subsys=daemon
level=info msg="  --enable-metrics='true'" subsys=daemon
level=info msg="  --enable-mke='false'" subsys=daemon
level=info msg="  --enable-monitor='true'" subsys=daemon
level=info msg="  --enable-nat46x64-gateway='false'" subsys=daemon
level=info msg="  --enable-node-port='false'" subsys=daemon
level=info msg="  --enable-pmtu-discovery='false'" subsys=daemon
level=info msg="  --enable-policy='default'" subsys=daemon
level=info msg="  --enable-recorder='false'" subsys=daemon
level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
level=info msg="  --enable-runtime-device-detection='false'" subsys=daemon
level=info msg="  --enable-sctp='false'" subsys=daemon
level=info msg="  --enable-service-topology='false'" subsys=daemon
level=info msg="  --enable-session-affinity='false'" subsys=daemon
level=info msg="  --enable-srv6='false'" subsys=daemon
level=info msg="  --enable-stale-cilium-endpoint-cleanup='true'" subsys=daemon
level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
level=info msg="  --enable-tracing='false'" subsys=daemon
level=info msg="  --enable-unreachable-routes='false'" subsys=daemon
level=info msg="  --enable-vtep='false'" subsys=daemon
level=info msg="  --enable-well-known-identities='false'" subsys=daemon
level=info msg="  --enable-wireguard='false'" subsys=daemon
level=info msg="  --enable-wireguard-userspace-fallback='false'" subsys=daemon
level=info msg="  --enable-xdp-prefilter='false'" subsys=daemon
level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
level=info msg="  --encrypt-interface=''" subsys=daemon
level=info msg="  --encrypt-node='false'" subsys=daemon
level=info msg="  --encryption-strict-mode-allow-remote-node-identities='false'" subsys=daemon
level=info msg="  --encryption-strict-mode-cidr=''" subsys=daemon
level=info msg="  --endpoint-bpf-prog-watchdog-interval='30s'" subsys=daemon
level=info msg="  --endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg="  --endpoint-queue-size='25'" subsys=daemon
level=info msg="  --endpoint-status=''" subsys=daemon
level=info msg="  --envoy-config-timeout='2m0s'" subsys=daemon
level=info msg="  --envoy-log=''" subsys=daemon
level=info msg="  --exclude-local-address=''" subsys=daemon
level=info msg="  --external-envoy-proxy='false'" subsys=daemon
level=info msg="  --fixed-identity-mapping=''" subsys=daemon
level=info msg="  --fqdn-regex-compile-lru-size='1024'" subsys=daemon
level=info msg="  --gops-port='9890'" subsys=daemon
level=info msg="  --http-403-msg=''" subsys=daemon
level=info msg="  --http-idle-timeout='0'" subsys=daemon
level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
level=info msg="  --http-normalize-path='true'" subsys=daemon
level=info msg="  --http-request-timeout='3600'" subsys=daemon
level=info msg="  --http-retry-count='3'" subsys=daemon
level=info msg="  --http-retry-timeout='0'" subsys=daemon
level=info msg="  --hubble-disable-tls='false'" subsys=daemon
level=info msg="  --hubble-event-buffer-capacity='4095'" subsys=daemon
level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
level=info msg="  --hubble-export-allowlist=''" subsys=daemon
level=info msg="  --hubble-export-denylist=''" subsys=daemon
level=info msg="  --hubble-export-fieldmask=''" subsys=daemon
level=info msg="  --hubble-export-file-compress='false'" subsys=daemon
level=info msg="  --hubble-export-file-max-backups='5'" subsys=daemon
level=info msg="  --hubble-export-file-max-size-mb='10'" subsys=daemon
level=info msg="  --hubble-export-file-path=''" subsys=daemon
level=info msg="  --hubble-flowlogs-config-path=''" subsys=daemon
level=info msg="  --hubble-listen-address=':4244'" subsys=daemon
level=info msg="  --hubble-metrics=''" subsys=daemon
level=info msg="  --hubble-metrics-server=''" subsys=daemon
level=info msg="  --hubble-monitor-events=''" subsys=daemon
level=info msg="  --hubble-prefer-ipv6='false'" subsys=daemon
level=info msg="  --hubble-recorder-sink-queue-size='1024'" subsys=daemon
level=info msg="  --hubble-recorder-storage-path='/var/run/cilium/pcaps'" subsys=daemon
level=info msg="  --hubble-redact-enabled='false'" subsys=daemon
level=info msg="  --hubble-redact-http-headers-allow=''" subsys=daemon
level=info msg="  --hubble-redact-http-headers-deny=''" subsys=daemon
level=info msg="  --hubble-redact-http-urlquery='false'" subsys=daemon
level=info msg="  --hubble-redact-http-userinfo='true'" subsys=daemon
level=info msg="  --hubble-redact-kafka-apikey='false'" subsys=daemon
level=info msg="  --hubble-skip-unknown-cgroup-ids='true'" subsys=daemon
level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
level=info msg="  --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
level=info msg="  --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
level=info msg="  --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
level=info msg="  --identity-gc-interval='15m0s'" subsys=daemon
level=info msg="  --identity-heartbeat-timeout='30m0s'" subsys=daemon
level=info msg="  --identity-restore-grace-period='10m0s'" subsys=daemon
level=info msg="  --install-egress-gateway-routes='false'" subsys=daemon
level=info msg="  --install-iptables-rules='true'" subsys=daemon
level=info msg="  --install-no-conntrack-iptables-rules='false'" subsys=daemon
level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
level=info msg="  --ipam='cluster-pool'" subsys=daemon
level=info msg="  --ipam-cilium-node-update-rate='15s'" subsys=daemon
level=info msg="  --ipam-default-ip-pool='default'" subsys=daemon
level=info msg="  --ipam-multi-pool-pre-allocation=''" subsys=daemon
level=info msg="  --ipsec-key-file=''" subsys=daemon
level=info msg="  --ipsec-key-rotation-duration='5m0s'" subsys=daemon
level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
level=info msg="  --iptables-random-fully='false'" subsys=daemon
level=info msg="  --ipv4-native-routing-cidr=''" subsys=daemon
level=info msg="  --ipv4-node='auto'" subsys=daemon
level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
level=info msg="  --ipv4-range='auto'" subsys=daemon
level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
level=info msg="  --ipv4-service-range='auto'" subsys=daemon
level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
level=info msg="  --ipv6-mcast-device=''" subsys=daemon
level=info msg="  --ipv6-native-routing-cidr=''" subsys=daemon
level=info msg="  --ipv6-node='auto'" subsys=daemon
level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
level=info msg="  --ipv6-range='auto'" subsys=daemon
level=info msg="  --ipv6-service-range='auto'" subsys=daemon
level=info msg="  --join-cluster='false'" subsys=daemon
level=info msg="  --k8s-api-server=''" subsys=daemon
level=info msg="  --k8s-client-burst='20'" subsys=daemon
level=info msg="  --k8s-client-qps='10'" subsys=daemon
level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
level=info msg="  --keep-config='false'" subsys=daemon
level=info msg="  --kube-proxy-replacement='strict'" subsys=daemon
level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
level=info msg="  --kvstore=''" subsys=daemon
level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
level=info msg="  --kvstore-max-consecutive-quorum-errors='2'" subsys=daemon
level=info msg="  --kvstore-opt=''" subsys=daemon
level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
level=info msg="  --l2-announcements-lease-duration='3s'" subsys=daemon
level=info msg="  --l2-announcements-renew-deadline='1s'" subsys=daemon
level=info msg="  --l2-announcements-retry-period='500ms'" subsys=daemon
level=info msg="  --l2-pod-announcements-interface=''" subsys=daemon
level=info msg="  --label-prefix-file=''" subsys=daemon
level=info msg="  --labels=''" subsys=daemon
level=info msg="  --legacy-turn-off-k8s-event-handover='false'" subsys=daemon
level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
level=info msg="  --local-max-addr-scope='252'" subsys=daemon
level=info msg="  --local-router-ipv4=''" subsys=daemon
level=info msg="  --local-router-ipv6=''" subsys=daemon
level=info msg="  --log-driver=''" subsys=daemon
level=info msg="  --log-opt=''" subsys=daemon
level=info msg="  --log-system-load='false'" subsys=daemon
level=info msg="  --max-connected-clusters='255'" subsys=daemon
level=info msg="  --max-controller-interval='0'" subsys=daemon
level=info msg="  --max-internal-timer-delay='0s'" subsys=daemon
level=info msg="  --mesh-auth-enabled='true'" subsys=daemon
level=info msg="  --mesh-auth-gc-interval='5m0s'" subsys=daemon
level=info msg="  --mesh-auth-mutual-connect-timeout='5s'" subsys=daemon
level=info msg="  --mesh-auth-mutual-listener-port='0'" subsys=daemon
level=info msg="  --mesh-auth-queue-size='1024'" subsys=daemon
level=info msg="  --mesh-auth-rotated-identities-queue-size='1024'" subsys=daemon
level=info msg="  --mesh-auth-signal-backoff-duration='1s'" subsys=daemon
level=info msg="  --mesh-auth-spiffe-trust-domain='spiffe.cilium'" subsys=daemon
level=info msg="  --mesh-auth-spire-admin-socket=''" subsys=daemon
level=info msg="  --metrics=''" subsys=daemon
level=info msg="  --mke-cgroup-mount=''" subsys=daemon
level=info msg="  --monitor-aggregation='medium'" subsys=daemon
level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
level=info msg="  --monitor-queue-size='0'" subsys=daemon
level=info msg="  --mtu='0'" subsys=daemon
level=info msg="  --node-encryption-opt-out-labels='node-role.kubernetes.io/control-plane'" subsys=daemon
level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
level=info msg="  --node-port-algorithm='random'" subsys=daemon
level=info msg="  --node-port-bind-protection='true'" subsys=daemon
level=info msg="  --node-port-mode='snat'" subsys=daemon
level=info msg="  --node-port-range='30000,32767'" subsys=daemon
level=info msg="  --nodeport-addresses=''" subsys=daemon
level=info msg="  --nodes-gc-interval='5m0s'" subsys=daemon
level=info msg="  --operator-api-serve-addr='127.0.0.1:9234'" subsys=daemon
level=info msg="  --operator-prometheus-serve-addr=':9963'" subsys=daemon
level=info msg="  --policy-audit-mode='false'" subsys=daemon
level=info msg="  --policy-cidr-match-mode=''" subsys=daemon
level=info msg="  --policy-queue-size='100'" subsys=daemon
level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
level=info msg="  --pprof='false'" subsys=daemon
level=info msg="  --pprof-address='localhost'" subsys=daemon
level=info msg="  --pprof-port='6060'" subsys=daemon
level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
level=info msg="  --procfs='/host/proc'" subsys=daemon
level=info msg="  --prometheus-serve-addr=':9962'" subsys=daemon
level=info msg="  --proxy-connect-timeout='2'" subsys=daemon
level=info msg="  --proxy-gid='1337'" subsys=daemon
level=info msg="  --proxy-idle-timeout-seconds='60'" subsys=daemon
level=info msg="  --proxy-max-connection-duration-seconds='0'" subsys=daemon
level=info msg="  --proxy-max-requests-per-connection='0'" subsys=daemon
level=info msg="  --proxy-prometheus-port='9964'" subsys=daemon
level=info msg="  --read-cni-conf=''" subsys=daemon
level=info msg="  --remove-cilium-node-taints='true'" subsys=daemon
level=info msg="  --restore='true'" subsys=daemon
level=info msg="  --route-metric='0'" subsys=daemon
level=info msg="  --routing-mode='tunnel'" subsys=daemon
level=info msg="  --service-no-backend-response='reject'" subsys=daemon
level=info msg="  --set-cilium-is-up-condition='true'" subsys=daemon
level=info msg="  --set-cilium-node-taints='true'" subsys=daemon
level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
level=info msg="  --skip-cnp-status-startup-clean='false'" subsys=daemon
level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
level=info msg="  --srv6-encap-mode='reduced'" subsys=daemon
level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
level=info msg="  --synchronize-k8s-nodes='true'" subsys=daemon
level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
level=info msg="  --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
level=info msg="  --trace-payloadlen='128'" subsys=daemon
level=info msg="  --trace-sock='true'" subsys=daemon
level=info msg="  --tunnel-port='0'" subsys=daemon
level=info msg="  --tunnel-protocol='vxlan'" subsys=daemon
level=info msg="  --unmanaged-pod-watcher-interval='15'" subsys=daemon
level=info msg="  --use-cilium-internal-ip-for-ipsec='false'" subsys=daemon
level=info msg="  --version='false'" subsys=daemon
level=info msg="  --vlan-bpf-bypass=''" subsys=daemon
level=info msg="  --vtep-cidr=''" subsys=daemon
level=info msg="  --vtep-endpoint=''" subsys=daemon
level=info msg="  --vtep-mac=''" subsys=daemon
level=info msg="  --vtep-mask=''" subsys=daemon
level=info msg="  --wireguard-persistent-keepalive='0s'" subsys=daemon
level=info msg="  --write-cni-conf-when-ready='/host/etc/cni/net.d/05-cilium.conflist'" subsys=daemon
level=info msg="     _ _ _" subsys=daemon
level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
level=info msg="|  _| | | | | |     |" subsys=daemon
level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
level=info msg="Cilium 1.15.0-pre.3 ab990770 2023-12-04T12:59:37+01:00 go version go1.21.4 linux/arm64" subsys=daemon
level=info msg="clang (10.0.0) and kernel (6.5.11) versions: OK!" subsys=linux-datapath
level=warning msg="BPF system config check: NOT OK." error="CONFIG_NET_CLS_ACT kernel parameter is required (needed for: Essential eBPF infrastructure)" subsys=linux-datapath
level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
level=info msg=" - reserved:.*" subsys=labels-filter
level=info msg=" - :io\\.kubernetes\\.pod\\.namespace" subsys=labels-filter
level=info msg=" - :io\\.cilium\\.k8s\\.namespace\\.labels" subsys=labels-filter
level=info msg=" - :app\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:io\\.kubernetes" subsys=labels-filter
level=info msg=" - !:kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:statefulset\\.kubernetes\\.io/pod-name" subsys=labels-filter
level=info msg=" - !:apps\\.kubernetes\\.io/pod-index" subsys=labels-filter
level=info msg=" - !:batch\\.kubernetes\\.io/job-completion-index" subsys=labels-filter
level=info msg=" - !:.*beta\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:k8s\\.io" subsys=labels-filter
level=info msg=" - !:pod-template-generation" subsys=labels-filter
level=info msg=" - !:pod-template-hash" subsys=labels-filter
level=info msg=" - !:controller-revision-hash" subsys=labels-filter
level=info msg=" - !:annotation.*" subsys=labels-filter
level=info msg=" - !:etcd_node" subsys=labels-filter
level=info msg="Auto-disabling \"enable-bpf-clock-probe\" feature since KERNEL_HZ cannot be determined" error="open /proc/schedstat: no such file or directory" subsys=daemon
level=info msg=Invoked duration="460.125µs" function="pprof.glob..func1 (pkg/pprof/cell.go:51)" subsys=hive
level=info msg=Invoked duration="25.292µs" function="gops.registerGopsHooks (pkg/gops/cell.go:39)" subsys=hive
level=info msg=Invoked duration="554.75µs" function="metrics.glob..func1 (pkg/metrics/cell.go:11)" subsys=hive
level=info msg=Invoked duration="19.166µs" function="metricsmap.RegisterCollector (pkg/maps/metricsmap/metricsmap.go:275)" subsys=hive
level=info msg="Spire Delegate API Client is disabled as no socket path is configured" subsys=spire-delegate
level=info msg="Mutual authentication handler is disabled as no port is configured" subsys=auth
level=info msg=Invoked duration=48.14775ms function="cmd.configureAPIServer (cmd/cells.go:207)" subsys=hive
level=info msg=Invoked duration="10.292µs" function="cmd.unlockAfterAPIServer (cmd/deletion_queue.go:114)" subsys=hive
level=info msg=Invoked duration="18.375µs" function="controller.Init (pkg/controller/cell.go:67)" subsys=hive
level=info msg=Invoked duration="2.75µs" function="cmd.glob..func3 (cmd/daemon_main.go:1616)" subsys=hive
level=info msg=Invoked duration="36.167µs" function="cmd.registerEndpointBPFProgWatchdog (cmd/watchdogs.go:58)" subsys=hive
level=info msg=Invoked duration="27.917µs" function="envoy.registerEnvoyVersionCheck (pkg/envoy/cell.go:133)" subsys=hive
level=info msg=Invoked duration="5.333µs" function="utime.initUtimeSync (pkg/datapath/linux/utime/cell.go:32)" subsys=hive
level=info msg=Invoked duration="29.583µs" function="agentliveness.newAgentLivenessUpdater (pkg/datapath/agentliveness/agent_liveness.go:44)" subsys=hive
level=info msg=Invoked duration="13.375µs" function="statedb.RegisterTable[...] (pkg/statedb/db.go:121)" subsys=hive
level=info msg=Invoked duration="31.417µs" function="l2responder.NewL2ResponderReconciler (pkg/datapath/l2responder/l2responder.go:73)" subsys=hive
level=info msg=Invoked duration="32.208µs" function="garp.newGARPProcessor (pkg/datapath/garp/processor.go:27)" subsys=hive
level=info msg=Invoked duration="4.292µs" function="bigtcp.glob..func1 (pkg/datapath/linux/bigtcp/bigtcp.go:59)" subsys=hive
level=info msg=Invoked duration="3.708µs" function="linux.glob..func1 (pkg/datapath/linux/devices_controller.go:63)" subsys=hive
level=info msg=Invoked duration="23.042µs" function="ipcache.glob..func3 (pkg/datapath/ipcache/cell.go:25)" subsys=hive
level=info msg=Starting subsys=hive
level=info msg="Started gops server" address="127.0.0.1:9890" subsys=gops
level=info msg="Start hook executed" duration="229.083µs" function="gops.registerGopsHooks.func1 (pkg/gops/cell.go:44)" subsys=hive
level=info msg="Start hook executed" duration="1.334µs" function="metrics.NewRegistry.func1 (pkg/metrics/registry.go:86)" subsys=hive
level=info msg="Establishing connection to apiserver" host="https://kind-dev-cluster-control-plane:6443" subsys=k8s-client
level=info msg="Serving prometheus metrics on :9962" subsys=metrics
level=info msg="Connected to apiserver" subsys=k8s-client
level=info msg="Start hook executed" duration=3.853708ms function="client.(*compositeClientset).onStart" subsys=hive
level=info msg="Start hook executed" duration="9.5µs" function="*resource.resource[*v1.Node].Start" subsys=hive
level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.2.0.0/16
level=info msg="Opting out from node-to-node encryption on this node as per 'node-encryption-opt-out-labels' label selector" Selector=node-role.kubernetes.io/control-plane subsys=daemon
level=info msg="Start hook executed" duration="722.958µs" function="node.NewLocalNodeStore.func1 (pkg/node/local_node_store.go:96)" subsys=hive
level=info msg="Start hook executed" duration="22.25µs" function="authmap.newAuthMap.func1 (pkg/maps/authmap/cell.go:28)" subsys=hive
level=info msg="Start hook executed" duration="8.834µs" function="configmap.newMap.func1 (pkg/maps/configmap/cell.go:24)" subsys=hive
level=info msg="Start hook executed" duration="41.916µs" function="signalmap.newMap.func1 (pkg/maps/signalmap/cell.go:45)" subsys=hive
level=info msg="Start hook executed" duration="21.791µs" function="nodemap.newNodeMap.func1 (pkg/maps/nodemap/cell.go:24)" subsys=hive
level=info msg="Start hook executed" duration="26.666µs" function="eventsmap.newEventsMap.func1 (pkg/maps/eventsmap/cell.go:36)" subsys=hive
level=info msg="Start hook executed" duration="1.334µs" function="*statedb.DB.Start" subsys=hive
level=info msg="Start hook executed" duration="4.084µs" function="hive.New.func1.2 (pkg/hive/hive.go:106)" subsys=hive
level=info msg="Start hook executed" duration="2.167µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Devices changed" devices="[eth0]" subsys=devices-controller
level=info msg="Start hook executed" duration="330.917µs" function="*linux.devicesController.Start" subsys=hive
level=info msg="Node addresses updated" device=cilium_host node-addresses="10.0.0.2 (cilium_host)" subsys=node-address
level=info msg="Node addresses updated" device=eth0 node-addresses="172.20.0.4 (eth0), fc00:f853:ccd:e793::4 (eth0)" subsys=node-address
level=info msg="Start hook executed" duration="46.75µs" function="tables.(*nodeAddressController).register.func1 (pkg/datapath/tables/node_address.go:193)" subsys=hive
level=info msg="Start hook executed" duration="98.916µs" function="modules.(*Manager).Start" subsys=hive
level=info msg="Start hook executed" duration=1.216708ms function="*iptables.Manager.Start" subsys=hive
level=info msg="Start hook executed" duration="1.333µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Start hook executed" duration="6.083µs" function="endpointmanager.newDefaultEndpointManager.func1 (pkg/endpointmanager/cell.go:217)" subsys=hive
level=info msg="Start hook executed" duration="7.917µs" function="cmd.newPolicyTrifecta.func1 (cmd/policy.go:127)" subsys=hive
level=info msg="Start hook executed" duration="1.875µs" function="*resource.resource[*cilium.io/v2.CiliumEgressGatewayPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration=417ns function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Start hook executed" duration=416ns function="*resource.resource[*types.CiliumEndpoint].Start" subsys=hive
level=info msg="Start hook executed" duration="89.292µs" function="*bandwidth.manager.Start" subsys=hive
level=info msg="Restored 0 node IDs from the BPF map" subsys=linux-datapath
level=info msg="Start hook executed" duration="23.334µs" function="datapath.newDatapath.func1 (pkg/datapath/cells.go:171)" subsys=hive
level=info msg="Start hook executed" duration="5.375µs" function="*resource.resource[*v1.Service].Start" subsys=hive
level=info msg="Start hook executed" duration=100.39975ms function="*store.diffStore[*v1.Service].Start" subsys=hive
level=info msg="Start hook executed" duration="1.416µs" function="*resource.resource[*k8s.Endpoints].Start" subsys=hive
level=info msg="Using discoveryv1.EndpointSlice" subsys=k8s
level=info msg="Start hook executed" duration=200.822084ms function="*store.diffStore[*k8s.Endpoints].Start" subsys=hive
level=info msg="Start hook executed" duration="5.291µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Start hook executed" duration="2.5µs" function="*resource.resource[*v1.Pod].Start" subsys=hive
level=info msg="Start hook executed" duration="1.875µs" function="*resource.resource[*v1.Namespace].Start" subsys=hive
level=info msg="Start hook executed" duration="9.666µs" function="*resource.resource[*v1.NetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="2.042µs" function="*resource.resource[*cilium.io/v2.CiliumNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="5.417µs" function="*resource.resource[*cilium.io/v2.CiliumClusterwideNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="5.375µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumCIDRGroup].Start" subsys=hive
level=info msg="Start hook executed" duration="8.542µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Start hook executed" duration="188.084µs" function="*manager.manager.Start" subsys=hive
level=info msg="Start hook executed" duration="266.5µs" function="*cni.cniConfigManager.Start" subsys=hive
level=info msg="Start hook executed" duration="1.417µs" function="k8s.newServiceCache.func1 (pkg/k8s/service_cache.go:145)" subsys=hive
level=info msg="Generating CNI configuration file with mode none" subsys=cni-config
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Start hook executed" duration="515.666µs" function="agent.newMonitorAgent.func1 (pkg/monitor/agent/cell.go:62)" subsys=hive
level=info msg="Start hook executed" duration="9.75µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumL2AnnouncementPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="4.125µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Start hook executed" duration="18.5µs" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="72.75µs" function="envoy.newEnvoyAccessLogServer.func1 (pkg/envoy/cell.go:108)" subsys=hive
level=info msg="Start hook executed" duration="82.125µs" function="envoy.newArtifactCopier.func1 (pkg/envoy/cell.go:179)" subsys=hive
level=info msg="Envoy: Starting access log server listening on /var/run/cilium/envoy/sockets/access_log.sock" subsys=envoy-manager
level=info msg="Start hook executed" duration="199.917µs" function="envoy.newEnvoyXDSServer.func1 (pkg/envoy/cell.go:66)" subsys=hive
level=info msg="Start hook executed" duration="5.167µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/envoy/sockets/xds.sock" subsys=envoy-manager
level=info msg="Start hook executed" duration=1.54375ms function="signal.provideSignalManager.func1 (pkg/signal/cell.go:26)" subsys=hive
level=info msg="Datapath signal listener running" subsys=signal
level=info msg="Start hook executed" duration=2.36625ms function="auth.registerAuthManager.func1 (pkg/auth/cell.go:113)" subsys=hive
level=info msg="Start hook executed" duration="8.5µs" function="auth.registerGCJobs.func1 (pkg/auth/cell.go:163)" subsys=hive
level=info msg="Start hook executed" duration="16.083µs" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="4.042µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Setting IPv6 gso_max_size to 65536 and gro_max_size to 65536" device=eth0 subsys=big-tcp
level=info msg="Setting IPv4 gso_max_size to 65536 and gro_max_size to 65536" device=eth0 subsys=big-tcp
level=info msg="Start hook executed" duration="453.792µs" function="bigtcp.newBIGTCP.func1 (pkg/datapath/linux/bigtcp/bigtcp.go:241)" subsys=hive
level=info msg="Start hook executed" duration="6.917µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Start hook executed" duration="248µs" function="*ipsec.keyCustodian.Start" subsys=hive
level=info msg="Start hook executed" duration="1.625µs" function="*job.group.Start" subsys=hive
level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=172.20.0.4 mtu=65535 subsys=mtu
level=info msg="Start hook executed" duration=1.053417ms function="mtu.newForCell.func1 (pkg/mtu/cell.go:41)" subsys=hive
level=info msg="Using Managed Neighbor Kernel support" subsys=daemon
level=warning msg="Deprecated value for --kube-proxy-replacement: strict (use either \"true\", or \"false\")" subsys=daemon
level=info msg="Auto-enabling \"enable-node-port\", \"enable-external-ips\", \"bpf-lb-sock\", \"enable-host-port\", \"enable-session-affinity\" features" subsys=daemon
level=info msg="Cgroup metadata manager is enabled" subsys=cgroup-manager
level=info msg="Removed map pin at /sys/fs/bpf/tc/globals/cilium_ipcache, recreating and re-pinning map cilium_ipcache" file-path=/sys/fs/bpf/tc/globals/cilium_ipcache name=cilium_ipcache subsys=bpf
level=info msg="Removed map pin at /sys/fs/bpf/tc/globals/cilium_tunnel_map, recreating and re-pinning map cilium_tunnel_map" file-path=/sys/fs/bpf/tc/globals/cilium_tunnel_map name=cilium_tunnel_map subsys=bpf
level=info msg="Restored services from maps" failedServices=0 restoredServices=4 subsys=service
level=info msg="Restored backends from maps" failedBackends=0 restoredBackends=1 skippedBackends=0 subsys=service
level=info msg="Reading old endpoints..." subsys=daemon
level=info msg="No old endpoints found." subsys=daemon
level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
level=info msg="Creating or updating CiliumNode resource" node=kind-dev-cluster-control-plane subsys=nodediscovery
level=info msg="Retrieved node information from cilium node" nodeName=kind-dev-cluster-control-plane subsys=daemon
level=info msg="Received own node information from API server" ipAddr.ipv4=172.20.0.4 ipAddr.ipv6="<nil>" k8sNodeIP=172.20.0.4 labels="map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/os:linux kubernetes.io/arch:arm64 kubernetes.io/hostname:kind-dev-cluster-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:]" nodeName=kind-dev-cluster-control-plane subsys=daemon v4Prefix=10.0.0.0/24 v6Prefix="<nil>"
level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
level=info msg="Direct routing device detected" direct-routing-device=eth0 subsys=linux-datapath
level=info msg="BPF host routing requires enable-bpf-masquerade. Falling back to legacy host routing (enable-host-legacy-routing=true)." subsys=daemon
level=info msg="Enabling k8s event listener" subsys=k8s-watcher
level=info msg="Removing stale endpoint interfaces" subsys=daemon
level=info msg="Skipping kvstore configuration" subsys=daemon
level=info msg="Waiting until local node addressing before starting watchers depending on it" subsys=k8s-watcher
level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=10.0.0.2 ipv6="<nil>" subsys=node
level=info msg="Initializing node addressing" subsys=daemon
level=info msg="Initializing cluster-pool IPAM" subsys=ipam v4Prefix=10.0.0.0/24 v6Prefix="<nil>"
level=info msg="Restoring endpoints..." subsys=daemon
level=info msg="Endpoints restored" failed=0 restored=0 subsys=daemon
level=info msg="Addressing information:" subsys=daemon
level=info msg="  Cluster-Name: default" subsys=daemon
level=info msg="  Cluster-ID: 0" subsys=daemon
level=info msg="  Local node-name: kind-dev-cluster-control-plane" subsys=daemon
level=info msg="  Node-IPv6: <nil>" subsys=daemon
level=info msg="  External-Node IPv4: 172.20.0.4" subsys=daemon
level=info msg="  Internal-Node IPv4: 10.0.0.2" subsys=daemon
level=info msg="  IPv4 allocation prefix: 10.0.0.0/24" subsys=daemon
level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
level=info msg="  Local IPv4 addresses:" subsys=daemon
level=info msg="  - 10.0.0.2" subsys=daemon
level=info msg="  - 172.20.0.4" subsys=daemon
level=info msg="Adding local node to cluster" node=kind-dev-cluster-control-plane subsys=nodediscovery
level=info msg="Creating or updating CiliumNode resource" node=kind-dev-cluster-control-plane subsys=nodediscovery
level=info msg="Waiting until all pre-existing resources have been received" subsys=k8s-watcher
level=info msg="Initializing identity allocator" subsys=identity-cache
level=info msg="Allocating identities between range" cluster-id=0 max=65535 min=256 subsys=identity-cache
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.accept_local sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.send_redirects sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.accept_local sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.send_redirects sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_vxlan.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_vxlan.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_vxlan.accept_local sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_vxlan.send_redirects sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.core.bpf_jit_enable sysParamValue=1
level=warning msg="Unable to ensure that BPF JIT compilation is enabled. This can be ignored when Cilium is running inside non-host network namespace (e.g. with kind or minikube)" error="could not open the sysctl file /host/proc/sys/net/core/bpf_jit_enable: open /host/proc/sys/net/core/bpf_jit_enable: no such file or directory" subsys=sysctl sysParamName=net.core.bpf_jit_enable sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.fib_multipath_use_neigh sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.timer_migration sysParamValue=0
level=info msg="Updated link /sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_sendmsg for program cil_sock6_sendmsg" subsys=socketlb
level=info msg="Updated link /sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_post_bind for program cil_sock6_post_bind" subsys=socketlb
level=info msg="Updated link /sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_connect for program cil_sock4_connect" subsys=socketlb
level=info msg="Updated link /sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_sendmsg for program cil_sock4_sendmsg" subsys=socketlb
level=info msg="Updated link /sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_recvmsg for program cil_sock4_recvmsg" subsys=socketlb
level=info msg="Updated link /sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_getpeername for program cil_sock4_getpeername" subsys=socketlb
level=info msg="Updated link /sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_post_bind for program cil_sock4_post_bind" subsys=socketlb
level=info msg="Updated link /sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_connect for program cil_sock6_connect" subsys=socketlb
level=info msg="Updated link /sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_recvmsg for program cil_sock6_recvmsg" subsys=socketlb
level=info msg="Updated link /sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_getpeername for program cil_sock6_getpeername" subsys=socketlb
level=info msg="Re-pinning map with ':pending' suffix" bpfMapName=cilium_calls_overlay_2 bpfMapPath=/sys/fs/bpf/tc/globals/cilium_calls_overlay_2 subsys=bpf
level=info msg="Repinning without ':pending' suffix after failed migration" bpfMapName=cilium_calls_overlay_2 bpfMapPath=/sys/fs/bpf/tc/globals/cilium_calls_overlay_2 subsys=bpf
level=warning msg="Removed new pinned map after failed migration" bpfMapName=cilium_calls_overlay_2 bpfMapPath=/sys/fs/bpf/tc/globals/cilium_calls_overlay_2 subsys=bpf
level=fatal msg="Load overlay network failed" error="program cil_from_overlay: replacing clsact qdisc for interface cilium_vxlan: operation not supported" interface=cilium_vxlan subsys=datapath-loader

Anything else?

I am trying this on a kind cluster; here is my kind config:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kind-dev-cluster
networking:
  disableDefaultCNI: true
  kubeProxyMode: "none"
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: 127.0.0.1
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    listenAddress: 127.0.0.1
    protocol: TCP
- role: worker
- role: worker

and my Cilium Helm values:

kubeProxyReplacement: "strict"
k8sServiceHost: "kind-dev-cluster-control-plane"
k8sServicePort: 6443
l2announcements:
  enabled: true
  leaseDuration: "3s"
  leaseRenewDeadline: "1s"
  leaseRetryPeriod: "500ms"
devices: ["eth0"]
externalIPs:
  enabled: true
eni:
  enabled: false

Code of Conduct

  • I agree to follow this project's Code of Conduct
@RazaGR RazaGR added kind/bug This is a bug in the Cilium logic. kind/community-report This was reported by a user in the Cilium community, eg via Slack. needs/triage This issue requires triaging to establish severity and next steps. labels Dec 16, 2023
@RazaGR

RazaGR commented Dec 16, 2023

I tried another simple config and this is what I get:

level=fatal msg="Load overlay network failed" error="program cil_from_overlay: replacing clsact qdisc for interface cilium_vxlan: operation not supported" interface=cilium_vxlan subsys=datapath-loader

To reproduce, here is the kind cluster config:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kind-dev-cluster
networking:
  disableDefaultCNI: true
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443
nodes:
- role: control-plane
- role: worker
- role: worker

and the Cilium install:

helm repo add cilium https://helm.cilium.io/
docker pull quay.io/cilium/cilium:v1.14.5
kind load docker-image quay.io/cilium/cilium:v1.14.5 --name kind-dev-cluster
helm install cilium cilium/cilium --version 1.14.5 \
   --namespace kube-system \
   --set image.pullPolicy=IfNotPresent \
   --set ipam.mode=kubernetes

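To narrow this down, here is a minimal probe to check whether the node kernel accepts a clsact qdisc at all, which is the operation the datapath loader reports as failing. This is only a sketch: the `clsact-probe` dummy interface name is made up, and it assumes iproute2 plus root/NET_ADMIN (e.g. run it inside a kind node via `docker exec`).

```shell
# Create a throwaway dummy link and try to attach a clsact qdisc to it --
# the same kind of operation that fails on cilium_vxlan in the log above.
if ip link add name clsact-probe type dummy 2>/dev/null; then
  if tc qdisc replace dev clsact-probe clsact 2>/dev/null; then
    echo "clsact: supported"
  else
    echo "clsact: operation not supported"
  fi
  ip link del clsact-probe 2>/dev/null
else
  echo "clsact: could not create probe interface (need root / NET_ADMIN)"
fi
```

If this prints "operation not supported" inside the node, the node's kernel lacks clsact support and Cilium's tc-based datapath cannot attach its programs there.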
Complete log:

level=info msg="Memory available for map entries (0.003% of 8227328000B): 20568320B" subsys=config
level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 65536" subsys=config
level=info msg="option bpf-nat-global-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-neigh-global-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 65536" subsys=config
level=info msg="  --agent-health-port='9879'" subsys=daemon
level=info msg="  --agent-labels=''" subsys=daemon
level=info msg="  --agent-liveness-update-interval='1s'" subsys=daemon
level=info msg="  --agent-not-ready-taint-key='node.cilium.io/agent-not-ready'" subsys=daemon
level=info msg="  --allocator-list-timeout='3m0s'" subsys=daemon
level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
level=info msg="  --allow-localhost='auto'" subsys=daemon
level=info msg="  --annotate-k8s-node='false'" subsys=daemon
level=info msg="  --api-rate-limit=''" subsys=daemon
level=info msg="  --arping-refresh-period='30s'" subsys=daemon
level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
level=info msg="  --auto-direct-node-routes='false'" subsys=daemon
level=info msg="  --bgp-announce-lb-ip='false'" subsys=daemon
level=info msg="  --bgp-announce-pod-cidr='false'" subsys=daemon
level=info msg="  --bgp-config-path='/var/lib/cilium/bgp/config.yaml'" subsys=daemon
level=info msg="  --bpf-auth-map-max='524288'" subsys=daemon
level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp='2h13m20s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-tcp='2h13m20s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-tcp-grace='1m0s'" subsys=daemon
level=info msg="  --bpf-filter-priority='1'" subsys=daemon
level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
level=info msg="  --bpf-lb-affinity-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
level=info msg="  --bpf-lb-dev-ip-addr-inherit=''" subsys=daemon
level=info msg="  --bpf-lb-dsr-dispatch='opt'" subsys=daemon
level=info msg="  --bpf-lb-dsr-l4-xlate='frontend'" subsys=daemon
level=info msg="  --bpf-lb-external-clusterip='false'" subsys=daemon
level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
level=info msg="  --bpf-lb-maglev-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
level=info msg="  --bpf-lb-mode='snat'" subsys=daemon
level=info msg="  --bpf-lb-rev-nat-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-rss-ipv4-src-cidr=''" subsys=daemon
level=info msg="  --bpf-lb-rss-ipv6-src-cidr=''" subsys=daemon
level=info msg="  --bpf-lb-service-backend-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-service-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-sock='false'" subsys=daemon
level=info msg="  --bpf-lb-sock-hostns-only='false'" subsys=daemon
level=info msg="  --bpf-lb-source-range-map-max='0'" subsys=daemon
level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
level=info msg="  --bpf-map-event-buffers=''" subsys=daemon
level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
level=info msg="  --bpf-policy-map-full-reconciliation-interval='15m0s'" subsys=daemon
level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
level=info msg="  --bpf-root='/sys/fs/bpf'" subsys=daemon
level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
level=info msg="  --bypass-ip-availability-upon-restore='false'" subsys=daemon
level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
level=info msg="  --cflags=''" subsys=daemon
level=info msg="  --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
level=info msg="  --cilium-endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg="  --cluster-health-port='4240'" subsys=daemon
level=info msg="  --cluster-id='0'" subsys=daemon
level=info msg="  --cluster-name='default'" subsys=daemon
level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
level=info msg="  --clustermesh-ip-identities-sync-timeout='1m0s'" subsys=daemon
level=info msg="  --cmdref=''" subsys=daemon
level=info msg="  --cni-chaining-mode='none'" subsys=daemon
level=info msg="  --cni-chaining-target=''" subsys=daemon
level=info msg="  --cni-exclusive='true'" subsys=daemon
level=info msg="  --cni-external-routing='false'" subsys=daemon
level=info msg="  --cni-log-file='/var/run/cilium/cilium-cni.log'" subsys=daemon
level=info msg="  --config=''" subsys=daemon
level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
level=info msg="  --config-sources='config-map:kube-system/cilium-config'" subsys=daemon
level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
level=info msg="  --conntrack-gc-max-interval='0s'" subsys=daemon
level=info msg="  --controller-group-metrics=''" subsys=daemon
level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
level=info msg="  --custom-cni-conf='false'" subsys=daemon
level=info msg="  --datapath-mode='veth'" subsys=daemon
level=info msg="  --debug='false'" subsys=daemon
level=info msg="  --debug-verbose=''" subsys=daemon
level=info msg="  --derive-masquerade-ip-addr-from-device=''" subsys=daemon
level=info msg="  --devices=''" subsys=daemon
level=info msg="  --direct-routing-device=''" subsys=daemon
level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
level=info msg="  --dns-policy-unload-on-shutdown='false'" subsys=daemon
level=info msg="  --dnsproxy-concurrency-limit='0'" subsys=daemon
level=info msg="  --dnsproxy-concurrency-processing-grace-period='0s'" subsys=daemon
level=info msg="  --dnsproxy-lock-count='131'" subsys=daemon
level=info msg="  --dnsproxy-lock-timeout='500ms'" subsys=daemon
level=info msg="  --egress-gateway-policy-map-max='16384'" subsys=daemon
level=info msg="  --egress-gateway-reconciliation-trigger-interval='1s'" subsys=daemon
level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon
level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
level=info msg="  --enable-bbr='false'" subsys=daemon
level=info msg="  --enable-bgp-control-plane='false'" subsys=daemon
level=info msg="  --enable-bpf-clock-probe='false'" subsys=daemon
level=info msg="  --enable-bpf-masquerade='false'" subsys=daemon
level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
level=info msg="  --enable-cilium-api-server-access='*'" subsys=daemon
level=info msg="  --enable-cilium-endpoint-slice='false'" subsys=daemon
level=info msg="  --enable-cilium-health-api-server-access='*'" subsys=daemon
level=info msg="  --enable-custom-calls='false'" subsys=daemon
level=info msg="  --enable-encryption-strict-mode='false'" subsys=daemon
level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
level=info msg="  --enable-endpoint-routes='false'" subsys=daemon
level=info msg="  --enable-envoy-config='false'" subsys=daemon
level=info msg="  --enable-external-ips='false'" subsys=daemon
level=info msg="  --enable-health-check-loadbalancer-ip='false'" subsys=daemon
level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
level=info msg="  --enable-health-checking='true'" subsys=daemon
level=info msg="  --enable-high-scale-ipcache='false'" subsys=daemon
level=info msg="  --enable-host-firewall='false'" subsys=daemon
level=info msg="  --enable-host-legacy-routing='false'" subsys=daemon
level=info msg="  --enable-host-port='false'" subsys=daemon
level=info msg="  --enable-hubble='true'" subsys=daemon
level=info msg="  --enable-hubble-recorder-api='true'" subsys=daemon
level=info msg="  --enable-icmp-rules='true'" subsys=daemon
level=info msg="  --enable-identity-mark='true'" subsys=daemon
level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
level=info msg="  --enable-ipsec='false'" subsys=daemon
level=info msg="  --enable-ipsec-key-watcher='true'" subsys=daemon
level=info msg="  --enable-ipv4='true'" subsys=daemon
level=info msg="  --enable-ipv4-big-tcp='false'" subsys=daemon
level=info msg="  --enable-ipv4-egress-gateway='false'" subsys=daemon
level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
level=info msg="  --enable-ipv4-masquerade='true'" subsys=daemon
level=info msg="  --enable-ipv6='false'" subsys=daemon
level=info msg="  --enable-ipv6-big-tcp='false'" subsys=daemon
level=info msg="  --enable-ipv6-masquerade='true'" subsys=daemon
level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
level=info msg="  --enable-k8s='true'" subsys=daemon
level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
level=info msg="  --enable-k8s-networkpolicy='true'" subsys=daemon
level=info msg="  --enable-k8s-terminating-endpoint='true'" subsys=daemon
level=info msg="  --enable-l2-announcements='false'" subsys=daemon
level=info msg="  --enable-l2-neigh-discovery='true'" subsys=daemon
level=info msg="  --enable-l2-pod-announcements='false'" subsys=daemon
level=info msg="  --enable-l7-proxy='true'" subsys=daemon
level=info msg="  --enable-local-node-route='true'" subsys=daemon
level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
level=info msg="  --enable-masquerade-to-route-source='false'" subsys=daemon
level=info msg="  --enable-metrics='true'" subsys=daemon
level=info msg="  --enable-mke='false'" subsys=daemon
level=info msg="  --enable-monitor='true'" subsys=daemon
level=info msg="  --enable-nat46x64-gateway='false'" subsys=daemon
level=info msg="  --enable-node-port='false'" subsys=daemon
level=info msg="  --enable-pmtu-discovery='false'" subsys=daemon
level=info msg="  --enable-policy='default'" subsys=daemon
level=info msg="  --enable-recorder='false'" subsys=daemon
level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
level=info msg="  --enable-runtime-device-detection='false'" subsys=daemon
level=info msg="  --enable-sctp='false'" subsys=daemon
level=info msg="  --enable-service-topology='false'" subsys=daemon
level=info msg="  --enable-session-affinity='false'" subsys=daemon
level=info msg="  --enable-srv6='false'" subsys=daemon
level=info msg="  --enable-stale-cilium-endpoint-cleanup='true'" subsys=daemon
level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
level=info msg="  --enable-tracing='false'" subsys=daemon
level=info msg="  --enable-unreachable-routes='false'" subsys=daemon
level=info msg="  --enable-vtep='false'" subsys=daemon
level=info msg="  --enable-well-known-identities='false'" subsys=daemon
level=info msg="  --enable-wireguard='false'" subsys=daemon
level=info msg="  --enable-wireguard-userspace-fallback='false'" subsys=daemon
level=info msg="  --enable-xdp-prefilter='false'" subsys=daemon
level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
level=info msg="  --encrypt-interface=''" subsys=daemon
level=info msg="  --encrypt-node='false'" subsys=daemon
level=info msg="  --encryption-strict-mode-allow-remote-node-identities='false'" subsys=daemon
level=info msg="  --encryption-strict-mode-cidr=''" subsys=daemon
level=info msg="  --endpoint-bpf-prog-watchdog-interval='30s'" subsys=daemon
level=info msg="  --endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg="  --endpoint-queue-size='25'" subsys=daemon
level=info msg="  --endpoint-status=''" subsys=daemon
level=info msg="  --envoy-config-timeout='2m0s'" subsys=daemon
level=info msg="  --envoy-log=''" subsys=daemon
level=info msg="  --exclude-local-address=''" subsys=daemon
level=info msg="  --external-envoy-proxy='false'" subsys=daemon
level=info msg="  --fixed-identity-mapping=''" subsys=daemon
level=info msg="  --fqdn-regex-compile-lru-size='1024'" subsys=daemon
level=info msg="  --gops-port='9890'" subsys=daemon
level=info msg="  --http-403-msg=''" subsys=daemon
level=info msg="  --http-idle-timeout='0'" subsys=daemon
level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
level=info msg="  --http-normalize-path='true'" subsys=daemon
level=info msg="  --http-request-timeout='3600'" subsys=daemon
level=info msg="  --http-retry-count='3'" subsys=daemon
level=info msg="  --http-retry-timeout='0'" subsys=daemon
level=info msg="  --hubble-disable-tls='false'" subsys=daemon
level=info msg="  --hubble-event-buffer-capacity='4095'" subsys=daemon
level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
level=info msg="  --hubble-export-allowlist=''" subsys=daemon
level=info msg="  --hubble-export-denylist=''" subsys=daemon
level=info msg="  --hubble-export-fieldmask=''" subsys=daemon
level=info msg="  --hubble-export-file-compress='false'" subsys=daemon
level=info msg="  --hubble-export-file-max-backups='5'" subsys=daemon
level=info msg="  --hubble-export-file-max-size-mb='10'" subsys=daemon
level=info msg="  --hubble-export-file-path=''" subsys=daemon
level=info msg="  --hubble-flowlogs-config-path=''" subsys=daemon
level=info msg="  --hubble-listen-address=':4244'" subsys=daemon
level=info msg="  --hubble-metrics=''" subsys=daemon
level=info msg="  --hubble-metrics-server=''" subsys=daemon
level=info msg="  --hubble-monitor-events=''" subsys=daemon
level=info msg="  --hubble-prefer-ipv6='false'" subsys=daemon
level=info msg="  --hubble-recorder-sink-queue-size='1024'" subsys=daemon
level=info msg="  --hubble-recorder-storage-path='/var/run/cilium/pcaps'" subsys=daemon
level=info msg="  --hubble-redact-enabled='false'" subsys=daemon
level=info msg="  --hubble-redact-http-headers-allow=''" subsys=daemon
level=info msg="  --hubble-redact-http-headers-deny=''" subsys=daemon
level=info msg="  --hubble-redact-http-urlquery='false'" subsys=daemon
level=info msg="  --hubble-redact-http-userinfo='true'" subsys=daemon
level=info msg="  --hubble-redact-kafka-apikey='false'" subsys=daemon
level=info msg="  --hubble-skip-unknown-cgroup-ids='true'" subsys=daemon
level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
level=info msg="  --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
level=info msg="  --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
level=info msg="  --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
level=info msg="  --identity-gc-interval='15m0s'" subsys=daemon
level=info msg="  --identity-heartbeat-timeout='30m0s'" subsys=daemon
level=info msg="  --identity-restore-grace-period='10m0s'" subsys=daemon
level=info msg="  --install-egress-gateway-routes='false'" subsys=daemon
level=info msg="  --install-iptables-rules='true'" subsys=daemon
level=info msg="  --install-no-conntrack-iptables-rules='false'" subsys=daemon
level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
level=info msg="  --ipam='kubernetes'" subsys=daemon
level=info msg="  --ipam-cilium-node-update-rate='15s'" subsys=daemon
level=info msg="  --ipam-default-ip-pool='default'" subsys=daemon
level=info msg="  --ipam-multi-pool-pre-allocation=''" subsys=daemon
level=info msg="  --ipsec-key-file=''" subsys=daemon
level=info msg="  --ipsec-key-rotation-duration='5m0s'" subsys=daemon
level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
level=info msg="  --iptables-random-fully='false'" subsys=daemon
level=info msg="  --ipv4-native-routing-cidr=''" subsys=daemon
level=info msg="  --ipv4-node='auto'" subsys=daemon
level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
level=info msg="  --ipv4-range='auto'" subsys=daemon
level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
level=info msg="  --ipv4-service-range='auto'" subsys=daemon
level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
level=info msg="  --ipv6-mcast-device=''" subsys=daemon
level=info msg="  --ipv6-native-routing-cidr=''" subsys=daemon
level=info msg="  --ipv6-node='auto'" subsys=daemon
level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
level=info msg="  --ipv6-range='auto'" subsys=daemon
level=info msg="  --ipv6-service-range='auto'" subsys=daemon
level=info msg="  --join-cluster='false'" subsys=daemon
level=info msg="  --k8s-api-server=''" subsys=daemon
level=info msg="  --k8s-client-burst='20'" subsys=daemon
level=info msg="  --k8s-client-qps='10'" subsys=daemon
level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
level=info msg="  --keep-config='false'" subsys=daemon
level=info msg="  --kube-proxy-replacement='false'" subsys=daemon
level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
level=info msg="  --kvstore=''" subsys=daemon
level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
level=info msg="  --kvstore-max-consecutive-quorum-errors='2'" subsys=daemon
level=info msg="  --kvstore-opt=''" subsys=daemon
level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
level=info msg="  --l2-announcements-lease-duration='15s'" subsys=daemon
level=info msg="  --l2-announcements-renew-deadline='5s'" subsys=daemon
level=info msg="  --l2-announcements-retry-period='2s'" subsys=daemon
level=info msg="  --l2-pod-announcements-interface=''" subsys=daemon
level=info msg="  --label-prefix-file=''" subsys=daemon
level=info msg="  --labels=''" subsys=daemon
level=info msg="  --legacy-turn-off-k8s-event-handover='false'" subsys=daemon
level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
level=info msg="  --local-max-addr-scope='252'" subsys=daemon
level=info msg="  --local-router-ipv4=''" subsys=daemon
level=info msg="  --local-router-ipv6=''" subsys=daemon
level=info msg="  --log-driver=''" subsys=daemon
level=info msg="  --log-opt=''" subsys=daemon
level=info msg="  --log-system-load='false'" subsys=daemon
level=info msg="  --max-connected-clusters='255'" subsys=daemon
level=info msg="  --max-controller-interval='0'" subsys=daemon
level=info msg="  --max-internal-timer-delay='0s'" subsys=daemon
level=info msg="  --mesh-auth-enabled='true'" subsys=daemon
level=info msg="  --mesh-auth-gc-interval='5m0s'" subsys=daemon
level=info msg="  --mesh-auth-mutual-connect-timeout='5s'" subsys=daemon
level=info msg="  --mesh-auth-mutual-listener-port='0'" subsys=daemon
level=info msg="  --mesh-auth-queue-size='1024'" subsys=daemon
level=info msg="  --mesh-auth-rotated-identities-queue-size='1024'" subsys=daemon
level=info msg="  --mesh-auth-signal-backoff-duration='1s'" subsys=daemon
level=info msg="  --mesh-auth-spiffe-trust-domain='spiffe.cilium'" subsys=daemon
level=info msg="  --mesh-auth-spire-admin-socket=''" subsys=daemon
level=info msg="  --metrics=''" subsys=daemon
level=info msg="  --mke-cgroup-mount=''" subsys=daemon
level=info msg="  --monitor-aggregation='medium'" subsys=daemon
level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
level=info msg="  --monitor-queue-size='0'" subsys=daemon
level=info msg="  --mtu='0'" subsys=daemon
level=info msg="  --node-encryption-opt-out-labels='node-role.kubernetes.io/control-plane'" subsys=daemon
level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
level=info msg="  --node-port-algorithm='random'" subsys=daemon
level=info msg="  --node-port-bind-protection='true'" subsys=daemon
level=info msg="  --node-port-mode='snat'" subsys=daemon
level=info msg="  --node-port-range='30000,32767'" subsys=daemon
level=info msg="  --nodeport-addresses=''" subsys=daemon
level=info msg="  --nodes-gc-interval='5m0s'" subsys=daemon
level=info msg="  --operator-api-serve-addr='127.0.0.1:9234'" subsys=daemon
level=info msg="  --operator-prometheus-serve-addr=':9963'" subsys=daemon
level=info msg="  --policy-audit-mode='false'" subsys=daemon
level=info msg="  --policy-cidr-match-mode=''" subsys=daemon
level=info msg="  --policy-queue-size='100'" subsys=daemon
level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
level=info msg="  --pprof='false'" subsys=daemon
level=info msg="  --pprof-address='localhost'" subsys=daemon
level=info msg="  --pprof-port='6060'" subsys=daemon
level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
level=info msg="  --procfs='/host/proc'" subsys=daemon
level=info msg="  --prometheus-serve-addr=':9962'" subsys=daemon
level=info msg="  --proxy-connect-timeout='2'" subsys=daemon
level=info msg="  --proxy-gid='1337'" subsys=daemon
level=info msg="  --proxy-idle-timeout-seconds='60'" subsys=daemon
level=info msg="  --proxy-max-connection-duration-seconds='0'" subsys=daemon
level=info msg="  --proxy-max-requests-per-connection='0'" subsys=daemon
level=info msg="  --proxy-prometheus-port='9964'" subsys=daemon
level=info msg="  --read-cni-conf=''" subsys=daemon
level=info msg="  --remove-cilium-node-taints='true'" subsys=daemon
level=info msg="  --restore='true'" subsys=daemon
level=info msg="  --route-metric='0'" subsys=daemon
level=info msg="  --routing-mode='tunnel'" subsys=daemon
level=info msg="  --service-no-backend-response='reject'" subsys=daemon
level=info msg="  --set-cilium-is-up-condition='true'" subsys=daemon
level=info msg="  --set-cilium-node-taints='true'" subsys=daemon
level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
level=info msg="  --skip-cnp-status-startup-clean='false'" subsys=daemon
level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
level=info msg="  --srv6-encap-mode='reduced'" subsys=daemon
level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
level=info msg="  --synchronize-k8s-nodes='true'" subsys=daemon
level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
level=info msg="  --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
level=info msg="  --trace-payloadlen='128'" subsys=daemon
level=info msg="  --trace-sock='true'" subsys=daemon
level=info msg="  --tunnel-port='0'" subsys=daemon
level=info msg="  --tunnel-protocol='vxlan'" subsys=daemon
level=info msg="  --unmanaged-pod-watcher-interval='15'" subsys=daemon
level=info msg="  --use-cilium-internal-ip-for-ipsec='false'" subsys=daemon
level=info msg="  --version='false'" subsys=daemon
level=info msg="  --vlan-bpf-bypass=''" subsys=daemon
level=info msg="  --vtep-cidr=''" subsys=daemon
level=info msg="  --vtep-endpoint=''" subsys=daemon
level=info msg="  --vtep-mac=''" subsys=daemon
level=info msg="  --vtep-mask=''" subsys=daemon
level=info msg="  --wireguard-persistent-keepalive='0s'" subsys=daemon
level=info msg="  --write-cni-conf-when-ready='/host/etc/cni/net.d/05-cilium.conflist'" subsys=daemon
level=info msg="     _ _ _" subsys=daemon
level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
level=info msg="|  _| | | | | |     |" subsys=daemon
level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
level=info msg="Cilium 1.15.0-pre.3 ab990770 2023-12-04T12:59:37+01:00 go version go1.21.4 linux/arm64" subsys=daemon
level=info msg="clang (10.0.0) and kernel (6.5.11) versions: OK!" subsys=linux-datapath
level=warning msg="BPF system config check: NOT OK." error="CONFIG_NET_SCH_INGRESS kernel parameter or module is required (needed for: Essential eBPF infrastructure)" subsys=linux-datapath
level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
level=info msg=" - reserved:.*" subsys=labels-filter
level=info msg=" - :io\\.kubernetes\\.pod\\.namespace" subsys=labels-filter
level=info msg=" - :io\\.cilium\\.k8s\\.namespace\\.labels" subsys=labels-filter
level=info msg=" - :app\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:io\\.kubernetes" subsys=labels-filter
level=info msg=" - !:kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:statefulset\\.kubernetes\\.io/pod-name" subsys=labels-filter
level=info msg=" - !:apps\\.kubernetes\\.io/pod-index" subsys=labels-filter
level=info msg=" - !:batch\\.kubernetes\\.io/job-completion-index" subsys=labels-filter
level=info msg=" - !:.*beta\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:k8s\\.io" subsys=labels-filter
level=info msg=" - !:pod-template-generation" subsys=labels-filter
level=info msg=" - !:pod-template-hash" subsys=labels-filter
level=info msg=" - !:controller-revision-hash" subsys=labels-filter
level=info msg=" - !:annotation.*" subsys=labels-filter
level=info msg=" - !:etcd_node" subsys=labels-filter
level=info msg="Auto-disabling \"enable-bpf-clock-probe\" feature since KERNEL_HZ cannot be determined" error="open /proc/schedstat: no such file or directory" subsys=daemon
level=info msg=Invoked duration="540.542µs" function="pprof.glob..func1 (pkg/pprof/cell.go:51)" subsys=hive
level=info msg=Invoked duration="24.875µs" function="gops.registerGopsHooks (pkg/gops/cell.go:39)" subsys=hive
level=info msg=Invoked duration="617.333µs" function="metrics.glob..func1 (pkg/metrics/cell.go:11)" subsys=hive
level=info msg=Invoked duration="36.542µs" function="metricsmap.RegisterCollector (pkg/maps/metricsmap/metricsmap.go:275)" subsys=hive
level=info msg="Spire Delegate API Client is disabled as no socket path is configured" subsys=spire-delegate
level=info msg="Mutual authentication handler is disabled as no port is configured" subsys=auth
level=info msg=Invoked duration=53.348042ms function="cmd.configureAPIServer (cmd/cells.go:207)" subsys=hive
level=info msg=Invoked duration="11.333µs" function="cmd.unlockAfterAPIServer (cmd/deletion_queue.go:114)" subsys=hive
level=info msg=Invoked duration="70.666µs" function="controller.Init (pkg/controller/cell.go:67)" subsys=hive
level=info msg=Invoked duration="5.292µs" function="cmd.glob..func3 (cmd/daemon_main.go:1616)" subsys=hive
level=info msg=Invoked duration="51µs" function="cmd.registerEndpointBPFProgWatchdog (cmd/watchdogs.go:58)" subsys=hive
level=info msg=Invoked duration="32.958µs" function="envoy.registerEnvoyVersionCheck (pkg/envoy/cell.go:133)" subsys=hive
level=info msg=Invoked duration="7.667µs" function="utime.initUtimeSync (pkg/datapath/linux/utime/cell.go:32)" subsys=hive
level=info msg=Invoked duration="42.084µs" function="agentliveness.newAgentLivenessUpdater (pkg/datapath/agentliveness/agent_liveness.go:44)" subsys=hive
level=info msg=Invoked duration="16.625µs" function="statedb.RegisterTable[...] (pkg/statedb/db.go:121)" subsys=hive
level=info msg=Invoked duration="57.416µs" function="l2responder.NewL2ResponderReconciler (pkg/datapath/l2responder/l2responder.go:73)" subsys=hive
level=info msg=Invoked duration="36.167µs" function="garp.newGARPProcessor (pkg/datapath/garp/processor.go:27)" subsys=hive
level=info msg=Invoked duration="5.334µs" function="bigtcp.glob..func1 (pkg/datapath/linux/bigtcp/bigtcp.go:59)" subsys=hive
level=info msg=Invoked duration="5.583µs" function="linux.glob..func1 (pkg/datapath/linux/devices_controller.go:63)" subsys=hive
level=info msg=Invoked duration="29.625µs" function="ipcache.glob..func3 (pkg/datapath/ipcache/cell.go:25)" subsys=hive
level=info msg=Starting subsys=hive
level=info msg="Started gops server" address="127.0.0.1:9890" subsys=gops
level=info msg="Start hook executed" duration="228.792µs" function="gops.registerGopsHooks.func1 (pkg/gops/cell.go:44)" subsys=hive
level=info msg="Start hook executed" duration="1.042µs" function="metrics.NewRegistry.func1 (pkg/metrics/registry.go:86)" subsys=hive
level=info msg="Establishing connection to apiserver" host="https://10.96.0.1:443" subsys=k8s-client
level=info msg="Serving prometheus metrics on :9962" subsys=metrics
level=info msg="Connected to apiserver" subsys=k8s-client
level=info msg="Start hook executed" duration=4.978417ms function="client.(*compositeClientset).onStart" subsys=hive
level=info msg="Start hook executed" duration="7.292µs" function="*resource.resource[*v1.Node].Start" subsys=hive
level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.162.0.0/16
level=info msg="Opting out from node-to-node encryption on this node as per 'node-encryption-opt-out-labels' label selector" Selector=node-role.kubernetes.io/control-plane subsys=daemon
level=info msg="Start hook executed" duration="800.958µs" function="node.NewLocalNodeStore.func1 (pkg/node/local_node_store.go:96)" subsys=hive
level=info msg="Start hook executed" duration="25.625µs" function="authmap.newAuthMap.func1 (pkg/maps/authmap/cell.go:28)" subsys=hive
level=info msg="Start hook executed" duration="10.541µs" function="configmap.newMap.func1 (pkg/maps/configmap/cell.go:24)" subsys=hive
level=info msg="Start hook executed" duration="48.084µs" function="signalmap.newMap.func1 (pkg/maps/signalmap/cell.go:45)" subsys=hive
level=info msg="Start hook executed" duration="7.709µs" function="nodemap.newNodeMap.func1 (pkg/maps/nodemap/cell.go:24)" subsys=hive
level=info msg="Start hook executed" duration="29.25µs" function="eventsmap.newEventsMap.func1 (pkg/maps/eventsmap/cell.go:36)" subsys=hive
level=info msg="Start hook executed" duration="10.625µs" function="*statedb.DB.Start" subsys=hive
level=info msg="Start hook executed" duration="4.625µs" function="hive.New.func1.2 (pkg/hive/hive.go:106)" subsys=hive
level=info msg="Start hook executed" duration="2.417µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Devices changed" devices="[eth0]" subsys=devices-controller
level=info msg="Start hook executed" duration="444.834µs" function="*linux.devicesController.Start" subsys=hive
level=info msg="Node addresses updated" device=cilium_host node-addresses="10.244.0.162 (cilium_host)" subsys=node-address
level=info msg="Node addresses updated" device=eth0 node-addresses="172.18.0.4 (eth0), fc00:f853:ccd:e793::4 (eth0)" subsys=node-address
level=info msg="Start hook executed" duration="53.125µs" function="tables.(*nodeAddressController).register.func1 (pkg/datapath/tables/node_address.go:193)" subsys=hive
level=info msg="Start hook executed" duration="108.209µs" function="modules.(*Manager).Start" subsys=hive
level=info msg="Start hook executed" duration=1.43275ms function="*iptables.Manager.Start" subsys=hive
level=info msg="Start hook executed" duration="2.583µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Start hook executed" duration="8.958µs" function="endpointmanager.newDefaultEndpointManager.func1 (pkg/endpointmanager/cell.go:217)" subsys=hive
level=info msg="Start hook executed" duration="6.042µs" function="cmd.newPolicyTrifecta.func1 (cmd/policy.go:127)" subsys=hive
level=info msg="Start hook executed" duration=875ns function="*resource.resource[*cilium.io/v2.CiliumEgressGatewayPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration=459ns function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Start hook executed" duration=459ns function="*resource.resource[*types.CiliumEndpoint].Start" subsys=hive
level=info msg="Start hook executed" duration="101.292µs" function="*bandwidth.manager.Start" subsys=hive
level=info msg="Restored 0 node IDs from the BPF map" subsys=linux-datapath
level=info msg="Start hook executed" duration="29.292µs" function="datapath.newDatapath.func1 (pkg/datapath/cells.go:171)" subsys=hive
level=info msg="Start hook executed" duration="6.375µs" function="*resource.resource[*v1.Service].Start" subsys=hive
level=info msg="Start hook executed" duration=100.143667ms function="*store.diffStore[*v1.Service].Start" subsys=hive
level=info msg="Start hook executed" duration="6.209µs" function="*resource.resource[*k8s.Endpoints].Start" subsys=hive
level=info msg="Using discoveryv1.EndpointSlice" subsys=k8s
level=info msg="Start hook executed" duration=100.656125ms function="*store.diffStore[*k8s.Endpoints].Start" subsys=hive
level=info msg="Start hook executed" duration="2.208µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Start hook executed" duration="1.167µs" function="*resource.resource[*v1.Pod].Start" subsys=hive
level=info msg="Start hook executed" duration="1.459µs" function="*resource.resource[*v1.Namespace].Start" subsys=hive
level=info msg="Start hook executed" duration="1µs" function="*resource.resource[*v1.NetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="2.209µs" function="*resource.resource[*cilium.io/v2.CiliumNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="1.625µs" function="*resource.resource[*cilium.io/v2.CiliumClusterwideNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="2.166µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumCIDRGroup].Start" subsys=hive
level=info msg="Start hook executed" duration="6.25µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Start hook executed" duration="27.167µs" function="*manager.manager.Start" subsys=hive
level=info msg="Start hook executed" duration="100.292µs" function="*cni.cniConfigManager.Start" subsys=hive
level=info msg="Start hook executed" duration=750ns function="k8s.newServiceCache.func1 (pkg/k8s/service_cache.go:145)" subsys=hive
level=info msg="Generating CNI configuration file with mode none" subsys=cni-config
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Start hook executed" duration="268.75µs" function="agent.newMonitorAgent.func1 (pkg/monitor/agent/cell.go:62)" subsys=hive
level=info msg="Start hook executed" duration="1.583µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumL2AnnouncementPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="2.208µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Start hook executed" duration="6.042µs" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="45.833µs" function="envoy.newEnvoyAccessLogServer.func1 (pkg/envoy/cell.go:108)" subsys=hive
level=info msg="Envoy: Starting access log server listening on /var/run/cilium/envoy/sockets/access_log.sock" subsys=envoy-manager
level=info msg="Start hook executed" duration="25.458µs" function="envoy.newArtifactCopier.func1 (pkg/envoy/cell.go:179)" subsys=hive
level=info msg="Start hook executed" duration="113.791µs" function="envoy.newEnvoyXDSServer.func1 (pkg/envoy/cell.go:66)" subsys=hive
level=info msg="Start hook executed" duration="1.75µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/envoy/sockets/xds.sock" subsys=envoy-manager
level=info msg="Start hook executed" duration="379.833µs" function="signal.provideSignalManager.func1 (pkg/signal/cell.go:26)" subsys=hive
level=info msg="Datapath signal listener running" subsys=signal
level=info msg="Start hook executed" duration="589.583µs" function="auth.registerAuthManager.func1 (pkg/auth/cell.go:113)" subsys=hive
level=info msg="Start hook executed" duration="3.167µs" function="auth.registerGCJobs.func1 (pkg/auth/cell.go:163)" subsys=hive
level=info msg="Start hook executed" duration="13.833µs" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="1.084µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Setting IPv6 gso_max_size to 65536 and gro_max_size to 65536" device=eth0 subsys=big-tcp
level=info msg="Setting IPv4 gso_max_size to 65536 and gro_max_size to 65536" device=eth0 subsys=big-tcp
level=info msg="Start hook executed" duration="237.583µs" function="bigtcp.newBIGTCP.func1 (pkg/datapath/linux/bigtcp/bigtcp.go:241)" subsys=hive
level=info msg="Start hook executed" duration="8.666µs" function="hive.(*internalLifecycle).Append.func1 (pkg/hive/hive.go:182)" subsys=hive
level=info msg="Start hook executed" duration="73.916µs" function="*ipsec.keyCustodian.Start" subsys=hive
level=info msg="Start hook executed" duration=792ns function="*job.group.Start" subsys=hive
level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=172.18.0.4 mtu=65535 subsys=mtu
level=info msg="Start hook executed" duration="435.875µs" function="mtu.newForCell.func1 (pkg/mtu/cell.go:41)" subsys=hive
level=info msg="Using Managed Neighbor Kernel support" subsys=daemon
level=info msg="Removed map pin at /sys/fs/bpf/tc/globals/cilium_ipcache, recreating and re-pinning map cilium_ipcache" file-path=/sys/fs/bpf/tc/globals/cilium_ipcache name=cilium_ipcache subsys=bpf
level=info msg="Removed map pin at /sys/fs/bpf/tc/globals/cilium_tunnel_map, recreating and re-pinning map cilium_tunnel_map" file-path=/sys/fs/bpf/tc/globals/cilium_tunnel_map name=cilium_tunnel_map subsys=bpf
level=info msg="Restored services from maps" failedServices=0 restoredServices=4 subsys=service
level=info msg="Restored backends from maps" failedBackends=0 restoredBackends=1 skippedBackends=0 subsys=service
level=info msg="Reading old endpoints..." subsys=daemon
level=info msg="No old endpoints found." subsys=daemon
level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
level=info msg="Retrieved node information from kubernetes node" nodeName=pensionera-dev-cluster-control-plane subsys=daemon
level=info msg="Received own node information from API server" ipAddr.ipv4=172.18.0.4 ipAddr.ipv6="<nil>" k8sNodeIP=172.18.0.4 labels="map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/os:linux kubernetes.io/arch:arm64 kubernetes.io/hostname:pensionera-dev-cluster-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:]" nodeName=pensionera-dev-cluster-control-plane subsys=daemon v4Prefix=10.244.0.0/24 v6Prefix="<nil>"
level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
level=info msg="Enabling k8s event listener" subsys=k8s-watcher
level=info msg="Removing stale endpoint interfaces" subsys=daemon
level=info msg="Skipping kvstore configuration" subsys=daemon
level=info msg="Waiting until local node addressing before starting watchers depending on it" subsys=k8s-watcher
level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=10.244.0.162 ipv6="<nil>" subsys=node
level=info msg="Initializing node addressing" subsys=daemon
level=info msg="Initializing kubernetes IPAM" subsys=ipam v4Prefix=10.244.0.0/24 v6Prefix="<nil>"
level=info msg="Restoring endpoints..." subsys=daemon
level=info msg="Endpoints restored" failed=0 restored=0 subsys=daemon
level=info msg="Addressing information:" subsys=daemon
level=info msg="  Cluster-Name: default" subsys=daemon
level=info msg="  Cluster-ID: 0" subsys=daemon
level=info msg="  Local node-name: pensionera-dev-cluster-control-plane" subsys=daemon
level=info msg="  Node-IPv6: <nil>" subsys=daemon
level=info msg="  External-Node IPv4: 172.18.0.4" subsys=daemon
level=info msg="  Internal-Node IPv4: 10.244.0.162" subsys=daemon
level=info msg="  IPv4 allocation prefix: 10.244.0.0/24" subsys=daemon
level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
level=info msg="  Local IPv4 addresses:" subsys=daemon
level=info msg="  - 10.244.0.162" subsys=daemon
level=info msg="  - 172.18.0.4" subsys=daemon
level=info msg="Adding local node to cluster" node=pensionera-dev-cluster-control-plane subsys=nodediscovery
level=info msg="Creating or updating CiliumNode resource" node=pensionera-dev-cluster-control-plane subsys=nodediscovery
level=info msg="Waiting until all pre-existing resources have been received" subsys=k8s-watcher
level=info msg="Initializing identity allocator" subsys=identity-cache
level=info msg="Allocating identities between range" cluster-id=0 max=65535 min=256 subsys=identity-cache
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.accept_local sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.send_redirects sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.accept_local sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.send_redirects sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_vxlan.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_vxlan.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_vxlan.accept_local sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_vxlan.send_redirects sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.core.bpf_jit_enable sysParamValue=1
level=warning msg="Unable to ensure that BPF JIT compilation is enabled. This can be ignored when Cilium is running inside non-host network namespace (e.g. with kind or minikube)" error="could not open the sysctl file /host/proc/sys/net/core/bpf_jit_enable: open /host/proc/sys/net/core/bpf_jit_enable: no such file or directory" subsys=sysctl sysParamName=net.core.bpf_jit_enable sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.fib_multipath_use_neigh sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.timer_migration sysParamValue=0
level=info msg="Re-pinning map with ':pending' suffix" bpfMapName=cilium_calls_overlay_2 bpfMapPath=/sys/fs/bpf/tc/globals/cilium_calls_overlay_2 subsys=bpf
level=info msg="Repinning without ':pending' suffix after failed migration" bpfMapName=cilium_calls_overlay_2 bpfMapPath=/sys/fs/bpf/tc/globals/cilium_calls_overlay_2 subsys=bpf
level=warning msg="Removed new pinned map after failed migration" bpfMapName=cilium_calls_overlay_2 bpfMapPath=/sys/fs/bpf/tc/globals/cilium_calls_overlay_2 subsys=bpf
level=fatal msg="Load overlay network failed" error="program cil_from_overlay: replacing clsact qdisc for interface cilium_vxlan: operation not supported" interface=cilium_vxlan subsys=datapath-loader
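The fatal `replacing clsact qdisc ... operation not supported` error, together with the earlier `BPF system config check: NOT OK` warning about `CONFIG_NET_SCH_INGRESS`, points at a node kernel built without the tc ingress/clsact scheduler support Cilium's datapath loader needs. A minimal sketch of a check (the `check_config` helper and the inline demo config are illustrative, not part of Cilium):

```shell
# check_config: read a kernel config on stdin and report whether the tc
# options Cilium's datapath loader needs are built in (=y) or as modules (=m).
check_config() {
  awk '
    /^CONFIG_NET_SCH_INGRESS=[ym]/ { ingress = 1 }
    /^CONFIG_NET_CLS_ACT=[ym]/     { clsact = 1 }
    END {
      if (ingress && clsact) print "OK: sch_ingress and cls_act available"
      else                   print "NOT OK: missing sch_ingress and/or cls_act"
    }'
}

# On a real node, feed it the running kernel config (path varies by distro):
#   zcat /proc/config.gz | check_config
#   check_config < "/boot/config-$(uname -r)"

# Inline demo resembling a kernel with ingress qdisc support left out:
printf 'CONFIG_NET_CLS_ACT=y\n# CONFIG_NET_SCH_INGRESS is not set\n' | check_config
# → NOT OK: missing sch_ingress and/or cls_act
```

In this setup (kind on Docker Desktop for macOS) the kernel belongs to the Docker Desktop VM, so the check has to run against that VM's kernel (e.g. from a container on the node), not against macOS itself.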

@max-mulawa commented:
Looks similar to the problem with the recent Docker Desktop update 4.26.0-1: kubernetes/minikube#17780

@RazaGR (Author) commented Dec 24, 2023:

Correct, downgrading Docker Desktop to version 4.25.2 worked.
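For anyone scripting around this, a small helper that flags the Docker Desktop versions this thread reports as affected. Note the 4.26.0 boundary comes from the comments here, not from an official changelog, and the helper only encodes the lower bound (later releases presumably ship a fixed kernel):

```shell
# affected_docker_desktop VERSION
# Succeeds (exit 0) when VERSION is 4.26.0 or newer, i.e. in the range this
# thread reports as shipping a kernel without CONFIG_NET_SCH_INGRESS.
affected_docker_desktop() {
  v="$1"
  # v >= 4.26.0 iff 4.26.0 sorts first (or ties) under version sort
  [ "$(printf '%s\n%s\n' '4.26.0' "$v" | sort -V | head -n1)" = '4.26.0' ]
}

affected_docker_desktop 4.26.0 && echo "4.26.0: affected"
affected_docker_desktop 4.25.2 || echo "4.25.2: not affected"
```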

@RazaGR closed this as completed on Dec 24, 2023.