Cilium Operator panics when starting without Kubernetes #24767

Closed
2 tasks done
EtienneM opened this issue Apr 5, 2023 · 9 comments
Labels
kind/bug: This is a bug in the Cilium logic.
kind/community-report: This was reported by a user in the Cilium community, e.g. via Slack.
sig/agent: Cilium agent related.
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.

Comments

EtienneM commented Apr 5, 2023

Is there an existing issue for this?

  • I have searched the existing issues

What happened?

We are trying to use Cilium with Docker containers without Kubernetes. I explained our setup in this Slack message.

For now, I'm just working on a proof of concept:
On my laptop, I start a three-node etcd cluster with no TLS and no authentication.
I also start a VM (using Vagrant). In it, I run 3 Docker containers using Docker Compose: an agent, a Docker plugin, and an operator.
This is inspired by this documentation page from Cilium 1.9.

I noticed PR #21344, which adds the enable-k8s option. It gave me confidence that a working proof of concept for this setup is possible.

Docker Compose configuration file
version: '2'
services:
  cilium_agent:
    container_name: cilium-agent
    image: cilium/cilium:v${CILIUM_VERSION}
    command: cilium-agent ${CILIUM_OPTS}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/run/cilium:/var/run/cilium
      - /sys/fs/bpf:/sys/fs/bpf
      # To access Docker container netns:
      - /var/run/docker/netns:/var/run/docker/netns:rshared
      # To create named netns for cilium-health endpoint:
      - /var/run/netns:/var/run/netns:rshared
      # To have access to etcd.yml
      - /vagrant:/vagrant
    network_mode: "host"
    cap_add:
      - "NET_ADMIN"
    privileged: true

  cilium_docker:
    container_name: cilium-docker-plugin
    image: cilium/docker-plugin:v${CILIUM_VERSION}
    command: cilium-docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/run/cilium:/var/run/cilium
      - /run/docker/plugins:/run/docker/plugins
    network_mode: "host"
    cap_add:
      - "NET_ADMIN"
    privileged: true
    depends_on:
      - cilium_agent

  cilium_operator:
    container_name: cilium-operator
    image: cilium/operator-generic:v${CILIUM_VERSION}
    command: cilium-operator-generic --enable-k8s=false --enable-ipv4=true --enable-ipv6=false --ipam=cluster-pool --kvstore etcd --kvstore-opt etcd.config=/vagrant/_dev/cilium/etcd.yml
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/run/cilium:/var/run/cilium
      - /run/docker/plugins:/run/docker/plugins
      # To have access to etcd.yml
      - /vagrant:/vagrant
    network_mode: "host"
    cap_add:
      - "NET_ADMIN"
    privileged: true
    depends_on:
      - cilium_agent

The operator panics when starting:

Operator logs
level=info msg="  --alibaba-cloud-vpc-id=''" subsys=cilium-operator
level=info msg="  --aws-enable-prefix-delegation='false'" subsys=cilium-operator
level=info msg="  --aws-instance-limit-mapping=''" subsys=cilium-operator
level=info msg="  --aws-release-excess-ips='false'" subsys=cilium-operator
level=info msg="  --aws-use-primary-address='false'" subsys=cilium-operator
level=info msg="  --azure-resource-group=''" subsys=cilium-operator
level=info msg="  --azure-subscription-id=''" subsys=cilium-operator
level=info msg="  --azure-use-primary-address='false'" subsys=cilium-operator
level=info msg="  --azure-user-assigned-identity-id=''" subsys=cilium-operator
level=info msg="  --bgp-announce-lb-ip='false'" subsys=cilium-operator
level=info msg="  --bgp-config-path='/var/lib/cilium/bgp/config.yaml'" subsys=cilium-operator
level=info msg="  --ces-max-ciliumendpoints-per-ces='100'" subsys=cilium-operator
level=info msg="  --ces-slice-mode='cesSliceModeIdentity'" subsys=cilium-operator
level=info msg="  --cilium-endpoint-gc-interval='5m0s'" subsys=cilium-operator
level=info msg="  --cilium-pod-labels='k8s-app=cilium'" subsys=cilium-operator
level=info msg="  --cilium-pod-namespace=''" subsys=cilium-operator
level=info msg="  --cluster-id='0'" subsys=cilium-operator
level=info msg="  --cluster-name='default'" subsys=cilium-operator
level=info msg="  --cluster-pool-ipv4-cidr=''" subsys=cilium-operator
level=info msg="  --cluster-pool-ipv4-mask-size='24'" subsys=cilium-operator
level=info msg="  --cluster-pool-ipv6-cidr=''" subsys=cilium-operator
level=info msg="  --cluster-pool-ipv6-mask-size='112'" subsys=cilium-operator
level=info msg="  --cmdref=''" subsys=cilium-operator
level=info msg="  --cnp-node-status-gc-interval='2m0s'" subsys=cilium-operator
level=info msg="  --cnp-status-cleanup-burst='20'" subsys=cilium-operator
level=info msg="  --cnp-status-cleanup-qps='10'" subsys=cilium-operator
level=info msg="  --cnp-status-update-interval='1s'" subsys=cilium-operator
level=info msg="  --config=''" subsys=cilium-operator
level=info msg="  --config-dir=''" subsys=cilium-operator
level=info msg="  --debug='false'" subsys=cilium-operator
level=info msg="  --disable-cnp-status-updates='false'" subsys=cilium-operator
level=info msg="  --disable-endpoint-crd='false'" subsys=cilium-operator
level=info msg="  --ec2-api-endpoint=''" subsys=cilium-operator
level=info msg="  --enable-cilium-endpoint-slice='false'" subsys=cilium-operator
level=info msg="  --enable-ipv4='true'" subsys=cilium-operator
level=info msg="  --enable-ipv4-egress-gateway='false'" subsys=cilium-operator
level=info msg="  --enable-ipv6='false'" subsys=cilium-operator
level=info msg="  --enable-k8s='false'" subsys=cilium-operator
level=info msg="  --enable-k8s-api-discovery='true'" subsys=cilium-operator
level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=cilium-operator
level=info msg="  --enable-k8s-event-handover='false'" subsys=cilium-operator
level=info msg="  --enable-local-redirect-policy='false'" subsys=cilium-operator
level=info msg="  --enable-metrics='false'" subsys=cilium-operator
level=info msg="  --enable-srv6='false'" subsys=cilium-operator
level=info msg="  --eni-gc-interval='5m0s'" subsys=cilium-operator
level=info msg="  --eni-gc-tags=''" subsys=cilium-operator
level=info msg="  --eni-tags=''" subsys=cilium-operator
level=info msg="  --excess-ip-release-delay='180'" subsys=cilium-operator
level=info msg="  --gops-port='9891'" subsys=cilium-operator
level=info msg="  --identity-allocation-mode='kvstore'" subsys=cilium-operator
level=info msg="  --identity-gc-interval='15m0s'" subsys=cilium-operator
level=info msg="  --identity-gc-rate-interval='1m0s'" subsys=cilium-operator
level=info msg="  --identity-gc-rate-limit='2500'" subsys=cilium-operator
level=info msg="  --identity-heartbeat-timeout='30m0s'" subsys=cilium-operator
level=info msg="  --ingress-lb-annotation-prefixes='service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com'" subsys=cilium-operator
level=info msg="  --instance-tags-filter=''" subsys=cilium-operator
level=info msg="  --ipam='cluster-pool'" subsys=cilium-operator
level=info msg="  --k8s-api-server=''" subsys=cilium-operator
level=info msg="  --k8s-client-burst='0'" subsys=cilium-operator
level=info msg="  --k8s-client-qps='0'" subsys=cilium-operator
level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=cilium-operator
level=info msg="  --k8s-kubeconfig-path=''" subsys=cilium-operator
level=info msg="  --k8s-namespace=''" subsys=cilium-operator
level=info msg="  --k8s-service-proxy-name=''" subsys=cilium-operator
level=info msg="  --kvstore='etcd'" subsys=cilium-operator
level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=cilium-operator
level=info msg="  --kvstore-opt='etcd.config=/vagrant/_dev/cilium/etcd.yml'" subsys=cilium-operator
level=info msg="  --leader-election-lease-duration='15s'" subsys=cilium-operator
level=info msg="  --leader-election-renew-deadline='10s'" subsys=cilium-operator
level=info msg="  --leader-election-retry-period='2s'" subsys=cilium-operator
level=info msg="  --limit-ipam-api-burst='20'" subsys=cilium-operator
level=info msg="  --limit-ipam-api-qps='4'" subsys=cilium-operator
level=info msg="  --log-driver=''" subsys=cilium-operator
level=info msg="  --log-opt=''" subsys=cilium-operator
level=info msg="  --nodes-gc-interval='5m0s'" subsys=cilium-operator
level=info msg="  --operator-api-serve-addr='localhost:9234'" subsys=cilium-operator
level=info msg="  --operator-pprof='false'" subsys=cilium-operator
level=info msg="  --operator-pprof-port='6061'" subsys=cilium-operator
level=info msg="  --operator-prometheus-serve-addr=':9963'" subsys=cilium-operator
level=info msg="  --parallel-alloc-workers='50'" subsys=cilium-operator
level=info msg="  --remove-cilium-node-taints='true'" subsys=cilium-operator
level=info msg="  --set-cilium-is-up-condition='true'" subsys=cilium-operator
level=info msg="  --skip-cnp-status-startup-clean='false'" subsys=cilium-operator
level=info msg="  --skip-crd-creation='false'" subsys=cilium-operator
level=info msg="  --subnet-ids-filter=''" subsys=cilium-operator
level=info msg="  --subnet-tags-filter=''" subsys=cilium-operator
level=info msg="  --synchronize-k8s-nodes='true'" subsys=cilium-operator
level=info msg="  --synchronize-k8s-services='true'" subsys=cilium-operator
level=info msg="  --unmanaged-pod-watcher-interval='15'" subsys=cilium-operator
level=info msg="  --update-ec2-adapter-limit-via-api='false'" subsys=cilium-operator
level=info msg="  --version='false'" subsys=cilium-operator
level=info msg=Invoked duration="357.423µs" function="gops.registerGopsHooks (cell.go:39)" subsys=hive
level=info msg=Invoked duration="199.947µs" function="cmd.registerOperatorHooks (root.go:125)" subsys=hive
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1acfbe0]

goroutine 1 [running]:
github.com/cilium/cilium/pkg/k8s/client/clientset/versioned.(*Clientset).CiliumV2alpha1(0xc0004f0706?)
	/go/src/github.com/cilium/cilium/pkg/k8s/client/clientset/versioned/clientset.go:39
github.com/cilium/cilium/operator/cmd.glob..func3({0x4acbf60, 0xc0004f0030}, {0x4b7e968?, 0xc00057aa00?})
	/go/src/github.com/cilium/cilium/operator/cmd/resources.go:42 +0x44
reflect.Value.call({0x3ba0480?, 0x4600fd8?, 0x40d93f?}, {0x4473f73, 0x4}, {0xc0004f07e0, 0x2, 0x30?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x3ba0480?, 0x4600fd8?, 0x40dc87?}, {0xc0004f07e0?, 0x3e5f500?, 0xc0004f0701?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
go.uber.org/dig.defaultInvoker({0x3ba0480?, 0x4600fd8?, 0xc000792360?}, {0xc0004f07e0?, 0x2?, 0x4b38eb0?})
	/go/src/github.com/cilium/cilium/vendor/go.uber.org/dig/container.go:220 +0x28
go.uber.org/dig.(*constructorNode).Call(0xc000108f00, {0x4b38eb0, 0xc0004f8000})
	/go/src/github.com/cilium/cilium/vendor/go.uber.org/dig/constructor.go:154 +0x297
go.uber.org/dig.paramSingle.Build({{0x0, 0x0}, 0x0, {0x4b42978, 0x3d7c200}}, {0x4b38eb0, 0xc0004f80a0})
	/go/src/github.com/cilium/cilium/vendor/go.uber.org/dig/param.go:288 +0x2af
go.uber.org/dig.paramObjectField.Build(...)
	/go/src/github.com/cilium/cilium/vendor/go.uber.org/dig/param.go:485
go.uber.org/dig.paramObject.Build({{0x4b42978, 0x41602c0}, {0xc000666000, 0x7, 0x8}, {0x0, 0x0, 0x0}}, {0x4b38eb0, 0xc0004f80a0})
	/go/src/github.com/cilium/cilium/vendor/go.uber.org/dig/param.go:412 +0x269
go.uber.org/dig.paramList.BuildList({{0x4b42978, 0x3b043a0}, {0xc00031dea0, 0x1, 0x1}}, {0x4b38eb0, 0xc0004f80a0})
	/go/src/github.com/cilium/cilium/vendor/go.uber.org/dig/param.go:151 +0xb9
go.uber.org/dig.(*constructorNode).Call(0xc000108fc0, {0x4b38eb0, 0xc0004f80a0})
	/go/src/github.com/cilium/cilium/vendor/go.uber.org/dig/constructor.go:145 +0x132
go.uber.org/dig.paramSingle.Build({{0x0, 0x0}, 0x0, {0x4b42978, 0x42cddc0}}, {0x4b38eb0, 0xc0004f80a0})
	/go/src/github.com/cilium/cilium/vendor/go.uber.org/dig/param.go:288 +0x2af
go.uber.org/dig.paramList.BuildList({{0x4b42978, 0x3a6c280}, {0xc0004a0930, 0x1, 0x1}}, {0x4b38eb0, 0xc0004f80a0})
	/go/src/github.com/cilium/cilium/vendor/go.uber.org/dig/param.go:151 +0xb9
go.uber.org/dig.(*Scope).Invoke(0xc0004f80a0, {0x3a6c280?, 0x46010c8}, {0x8?, 0xc000109080?, 0xc00016ac00?})
	/go/src/github.com/cilium/cilium/vendor/go.uber.org/dig/invoke.go:85 +0x288
github.com/cilium/cilium/pkg/hive/cell.(*invoker).invoke(0xc0006dd500)
	/go/src/github.com/cilium/cilium/pkg/hive/cell/invoke.go:30 +0x20d
github.com/cilium/cilium/pkg/hive.(*Hive).populate(0xc0006eb600)
	/go/src/github.com/cilium/cilium/pkg/hive/hive.go:218 +0xe3
github.com/cilium/cilium/pkg/hive.(*Hive).Start(0xc0006eb600, {0x4b12ca0, 0xc0008059e0})
	/go/src/github.com/cilium/cilium/pkg/hive/hive.go:233 +0x45
github.com/cilium/cilium/pkg/hive.(*Hive).Run(0xc0006eb600)
	/go/src/github.com/cilium/cilium/pkg/hive/hive.go:167 +0x7c
github.com/cilium/cilium/operator/cmd.init.10.func1(0x75c4c60?, {0x44745fb?, 0x8?, 0x8?})
	/go/src/github.com/cilium/cilium/operator/cmd/root.go:171 +0x71
github.com/spf13/cobra.(*Command).execute(0x75c4c60, {0xc0000720a0, 0x8, 0x8})
	/go/src/github.com/cilium/cilium/vendor/github.com/spf13/cobra/command.go:920 +0x847
github.com/spf13/cobra.(*Command).ExecuteC(0x75c4c60)
	/go/src/github.com/cilium/cilium/vendor/github.com/spf13/cobra/command.go:1044 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
	/go/src/github.com/cilium/cilium/vendor/github.com/spf13/cobra/command.go:968
github.com/cilium/cilium/operator/cmd.Execute()
	/go/src/github.com/cilium/cilium/operator/cmd/root.go:119 +0x25
main.main()
	/go/src/github.com/cilium/cilium/operator/main.go:13 +0x17

I'm not sure I understand why there is a call to a function that seems only relevant to Kubernetes:

/go/src/github.com/cilium/cilium/pkg/k8s/client/clientset/versioned/clientset.go:39

The other two containers (agent and Docker plugin) start successfully:

Agent logs
cilium-agent  | level=info msg="Memory available for map entries (0.003% of 1024114688B): 2560286B" subsys=config
cilium-agent  | level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 131072" subsys=config
cilium-agent  | level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 65536" subsys=config
cilium-agent  | level=info msg="option bpf-nat-global-max set by dynamic sizing to 131072" subsys=config
cilium-agent  | level=info msg="option bpf-neigh-global-max set by dynamic sizing to 131072" subsys=config
cilium-agent  | level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 65536" subsys=config
cilium-agent  | level=info msg="  --agent-health-port='9879'" subsys=daemon
cilium-agent  | level=info msg="  --agent-labels=''" subsys=daemon
cilium-agent  | level=info msg="  --agent-not-ready-taint-key='node.cilium.io/agent-not-ready'" subsys=daemon
cilium-agent  | level=info msg="  --allocator-list-timeout='3m0s'" subsys=daemon
cilium-agent  | level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
cilium-agent  | level=info msg="  --allow-localhost='auto'" subsys=daemon
cilium-agent  | level=info msg="  --annotate-k8s-node='false'" subsys=daemon
cilium-agent  | level=info msg="  --api-rate-limit=''" subsys=daemon
cilium-agent  | level=info msg="  --arping-refresh-period='30s'" subsys=daemon
cilium-agent  | level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
cilium-agent  | level=info msg="  --auto-direct-node-routes='false'" subsys=daemon
cilium-agent  | level=info msg="  --bgp-announce-lb-ip='false'" subsys=daemon
cilium-agent  | level=info msg="  --bgp-announce-pod-cidr='false'" subsys=daemon
cilium-agent  | level=info msg="  --bgp-config-path='/var/lib/cilium/bgp/config.yaml'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-ct-timeout-service-tcp-grace='1m0s'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-filter-priority='1'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-affinity-map-max='0'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-dev-ip-addr-inherit=''" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-dsr-dispatch='opt'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-dsr-l4-xlate='frontend'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-external-clusterip='false'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-maglev-map-max='0'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-mode='snat'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-rev-nat-map-max='0'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-rss-ipv4-src-cidr=''" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-rss-ipv6-src-cidr=''" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-service-backend-map-max='0'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-service-map-max='0'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-sock='false'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-sock-hostns-only='false'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-lb-source-range-map-max='0'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-map-event-buffers=''" subsys=daemon
cilium-agent  | level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
cilium-agent  | level=info msg="  --bpf-root=''" subsys=daemon
cilium-agent  | level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
cilium-agent  | level=info msg="  --bypass-ip-availability-upon-restore='false'" subsys=daemon
cilium-agent  | level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
cilium-agent  | level=info msg="  --cflags=''" subsys=daemon
cilium-agent  | level=info msg="  --cgroup-root=''" subsys=daemon
cilium-agent  | level=info msg="  --cluster-health-port='4240'" subsys=daemon
cilium-agent  | level=info msg="  --cluster-id='0'" subsys=daemon
cilium-agent  | level=info msg="  --cluster-name='default'" subsys=daemon
cilium-agent  | level=info msg="  --clustermesh-config=''" subsys=daemon
cilium-agent  | level=info msg="  --cmdref=''" subsys=daemon
cilium-agent  | level=info msg="  --cni-chaining-mode=''" subsys=daemon
cilium-agent  | level=info msg="  --config=''" subsys=daemon
cilium-agent  | level=info msg="  --config-dir=''" subsys=daemon
cilium-agent  | level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
cilium-agent  | level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
cilium-agent  | level=info msg="  --datapath-mode='veth'" subsys=daemon
cilium-agent  | level=info msg="  --debug='false'" subsys=daemon
cilium-agent  | level=info msg="  --debug-verbose=''" subsys=daemon
cilium-agent  | level=info msg="  --derive-masquerade-ip-addr-from-device=''" subsys=daemon
cilium-agent  | level=info msg="  --devices=''" subsys=daemon
cilium-agent  | level=info msg="  --direct-routing-device=''" subsys=daemon
cilium-agent  | level=info msg="  --disable-cnp-status-updates='false'" subsys=daemon
cilium-agent  | level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
cilium-agent  | level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
cilium-agent  | level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
cilium-agent  | level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
cilium-agent  | level=info msg="  --dns-policy-unload-on-shutdown='false'" subsys=daemon
cilium-agent  | level=info msg="  --dnsproxy-concurrency-limit='0'" subsys=daemon
cilium-agent  | level=info msg="  --dnsproxy-concurrency-processing-grace-period='0s'" subsys=daemon
cilium-agent  | level=info msg="  --dnsproxy-lock-count='128'" subsys=daemon
cilium-agent  | level=info msg="  --dnsproxy-lock-timeout='500ms'" subsys=daemon
cilium-agent  | level=info msg="  --egress-masquerade-interfaces=''" subsys=daemon
cilium-agent  | level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-bbr='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-bgp-control-plane='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-bpf-clock-probe='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-bpf-masquerade='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-cilium-endpoint-slice='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-custom-calls='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-endpoint-routes='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-envoy-config='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-external-ips='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-health-checking='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-host-firewall='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-host-legacy-routing='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-host-port='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-hubble='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-hubble-recorder-api='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-icmp-rules='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-identity-mark='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-ipsec='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-ipv4='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-ipv4-egress-gateway='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-ipv4-masquerade='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-ipv6='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-ipv6-big-tcp='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-ipv6-masquerade='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-k8s-event-handover='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-k8s-terminating-endpoint='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-l2-neigh-discovery='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-l7-proxy='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-local-node-route='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-mke='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-monitor='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-nat46x64-gateway='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-node-port='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-pmtu-discovery='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-policy='default'" subsys=daemon
cilium-agent  | level=info msg="  --enable-recorder='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-remote-node-identity='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-runtime-device-detection='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-sctp='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-service-topology='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-session-affinity='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-srv6='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-stale-cilium-endpoint-cleanup='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-tracing='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-unreachable-routes='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-vtep='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-well-known-identities='true'" subsys=daemon
cilium-agent  | level=info msg="  --enable-wireguard='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-wireguard-userspace-fallback='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-xdp-prefilter='false'" subsys=daemon
cilium-agent  | level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
cilium-agent  | level=info msg="  --encrypt-interface=''" subsys=daemon
cilium-agent  | level=info msg="  --encrypt-node='false'" subsys=daemon
cilium-agent  | level=info msg="  --endpoint-gc-interval='5m0s'" subsys=daemon
cilium-agent  | level=info msg="  --endpoint-queue-size='25'" subsys=daemon
cilium-agent  | level=info msg="  --endpoint-status=''" subsys=daemon
cilium-agent  | level=info msg="  --envoy-config-timeout='2m0s'" subsys=daemon
cilium-agent  | level=info msg="  --envoy-log=''" subsys=daemon
cilium-agent  | level=info msg="  --exclude-local-address=''" subsys=daemon
cilium-agent  | level=info msg="  --fixed-identity-mapping=''" subsys=daemon
cilium-agent  | level=info msg="  --force-local-policy-eval-at-source='false'" subsys=daemon
cilium-agent  | level=info msg="  --fqdn-regex-compile-lru-size='1024'" subsys=daemon
cilium-agent  | level=info msg="  --gops-port='9890'" subsys=daemon
cilium-agent  | level=info msg="  --http-403-msg=''" subsys=daemon
cilium-agent  | level=info msg="  --http-idle-timeout='0'" subsys=daemon
cilium-agent  | level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
cilium-agent  | level=info msg="  --http-normalize-path='true'" subsys=daemon
cilium-agent  | level=info msg="  --http-request-timeout='3600'" subsys=daemon
cilium-agent  | level=info msg="  --http-retry-count='3'" subsys=daemon
cilium-agent  | level=info msg="  --http-retry-timeout='0'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-disable-tls='false'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-event-buffer-capacity='4095'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-export-file-compress='false'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-export-file-max-backups='5'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-export-file-max-size-mb='10'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-export-file-path=''" subsys=daemon
cilium-agent  | level=info msg="  --hubble-listen-address=''" subsys=daemon
cilium-agent  | level=info msg="  --hubble-metrics=''" subsys=daemon
cilium-agent  | level=info msg="  --hubble-metrics-server=''" subsys=daemon
cilium-agent  | level=info msg="  --hubble-prefer-ipv6='false'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-recorder-sink-queue-size='1024'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-recorder-storage-path='/var/run/cilium/pcaps'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-skip-unknown-cgroup-ids='true'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
cilium-agent  | level=info msg="  --hubble-tls-cert-file=''" subsys=daemon
cilium-agent  | level=info msg="  --hubble-tls-client-ca-files=''" subsys=daemon
cilium-agent  | level=info msg="  --hubble-tls-key-file=''" subsys=daemon
cilium-agent  | level=info msg="  --identity-allocation-mode='kvstore'" subsys=daemon
cilium-agent  | level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
cilium-agent  | level=info msg="  --identity-restore-grace-period='10m0s'" subsys=daemon
cilium-agent  | level=info msg="  --install-egress-gateway-routes='false'" subsys=daemon
cilium-agent  | level=info msg="  --install-iptables-rules='true'" subsys=daemon
cilium-agent  | level=info msg="  --install-no-conntrack-iptables-rules='false'" subsys=daemon
cilium-agent  | level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
cilium-agent  | level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
cilium-agent  | level=info msg="  --ipam='cluster-pool'" subsys=daemon
cilium-agent  | level=info msg="  --ipsec-key-file=''" subsys=daemon
cilium-agent  | level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
cilium-agent  | level=info msg="  --iptables-random-fully='false'" subsys=daemon
cilium-agent  | level=info msg="  --ipv4-native-routing-cidr='10.15.0.0/16'" subsys=daemon
cilium-agent  | level=info msg="  --ipv4-node='auto'" subsys=daemon
cilium-agent  | level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
cilium-agent  | level=info msg="  --ipv4-range='auto'" subsys=daemon
cilium-agent  | level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
cilium-agent  | level=info msg="  --ipv4-service-range='auto'" subsys=daemon
cilium-agent  | level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
cilium-agent  | level=info msg="  --ipv6-mcast-device=''" subsys=daemon
cilium-agent  | level=info msg="  --ipv6-native-routing-cidr=''" subsys=daemon
cilium-agent  | level=info msg="  --ipv6-node='auto'" subsys=daemon
cilium-agent  | level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
cilium-agent  | level=info msg="  --ipv6-range='auto'" subsys=daemon
cilium-agent  | level=info msg="  --ipv6-service-range='auto'" subsys=daemon
cilium-agent  | level=info msg="  --join-cluster='false'" subsys=daemon
cilium-agent  | level=info msg="  --k8s-api-server=''" subsys=daemon
cilium-agent  | level=info msg="  --k8s-client-burst='0'" subsys=daemon
cilium-agent  | level=info msg="  --k8s-client-qps='0'" subsys=daemon
cilium-agent  | level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
cilium-agent  | level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
cilium-agent  | level=info msg="  --k8s-namespace=''" subsys=daemon
cilium-agent  | level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
cilium-agent  | level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
cilium-agent  | level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
cilium-agent  | level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
cilium-agent  | level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
cilium-agent  | level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
cilium-agent  | level=info msg="  --keep-config='false'" subsys=daemon
cilium-agent  | level=info msg="  --kube-proxy-replacement='partial'" subsys=daemon
cilium-agent  | level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
cilium-agent  | level=info msg="  --kvstore='etcd'" subsys=daemon
cilium-agent  | level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
cilium-agent  | level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
cilium-agent  | level=info msg="  --kvstore-max-consecutive-quorum-errors='2'" subsys=daemon
cilium-agent  | level=info msg="  --kvstore-opt='etcd.config=/vagrant/_dev/cilium/etcd.yml'" subsys=daemon
cilium-agent  | level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
cilium-agent  | level=info msg="  --label-prefix-file=''" subsys=daemon
cilium-agent  | level=info msg="  --labels=''" subsys=daemon
cilium-agent  | level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
cilium-agent  | level=info msg="  --local-max-addr-scope='252'" subsys=daemon
cilium-agent  | level=info msg="  --local-router-ipv4=''" subsys=daemon
cilium-agent  | level=info msg="  --local-router-ipv6=''" subsys=daemon
cilium-agent  | level=info msg="  --log-driver=''" subsys=daemon
cilium-agent  | level=info msg="  --log-opt=''" subsys=daemon
cilium-agent  | level=info msg="  --log-system-load='false'" subsys=daemon
cilium-agent  | level=info msg="  --max-controller-interval='0'" subsys=daemon
cilium-agent  | level=info msg="  --metrics=''" subsys=daemon
cilium-agent  | level=info msg="  --mke-cgroup-mount=''" subsys=daemon
cilium-agent  | level=info msg="  --monitor-aggregation='None'" subsys=daemon
cilium-agent  | level=info msg="  --monitor-aggregation-flags='syn,fin,rst'" subsys=daemon
cilium-agent  | level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
cilium-agent  | level=info msg="  --monitor-queue-size='0'" subsys=daemon
cilium-agent  | level=info msg="  --mtu='0'" subsys=daemon
cilium-agent  | level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
cilium-agent  | level=info msg="  --node-port-algorithm='random'" subsys=daemon
cilium-agent  | level=info msg="  --node-port-bind-protection='true'" subsys=daemon
cilium-agent  | level=info msg="  --node-port-mode='snat'" subsys=daemon
cilium-agent  | level=info msg="  --node-port-range='30000,32767'" subsys=daemon
cilium-agent  | level=info msg="  --policy-audit-mode='false'" subsys=daemon
cilium-agent  | level=info msg="  --policy-queue-size='100'" subsys=daemon
cilium-agent  | level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
cilium-agent  | level=info msg="  --pprof='false'" subsys=daemon
cilium-agent  | level=info msg="  --pprof-address='localhost'" subsys=daemon
cilium-agent  | level=info msg="  --pprof-port='6060'" subsys=daemon
cilium-agent  | level=info msg="  --preallocate-bpf-maps='true'" subsys=daemon
cilium-agent  | level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
cilium-agent  | level=info msg="  --procfs='/proc'" subsys=daemon
cilium-agent  | level=info msg="  --prometheus-serve-addr=':9962'" subsys=daemon
cilium-agent  | level=info msg="  --proxy-connect-timeout='1'" subsys=daemon
cilium-agent  | level=info msg="  --proxy-gid='1337'" subsys=daemon
cilium-agent  | level=info msg="  --proxy-max-connection-duration-seconds='0'" subsys=daemon
cilium-agent  | level=info msg="  --proxy-max-requests-per-connection='0'" subsys=daemon
cilium-agent  | level=info msg="  --proxy-prometheus-port='0'" subsys=daemon
cilium-agent  | level=info msg="  --read-cni-conf=''" subsys=daemon
cilium-agent  | level=info msg="  --restore='true'" subsys=daemon
cilium-agent  | level=info msg="  --route-metric='0'" subsys=daemon
cilium-agent  | level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
cilium-agent  | level=info msg="  --single-cluster-route='false'" subsys=daemon
cilium-agent  | level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
cilium-agent  | level=info msg="  --sockops-enable='false'" subsys=daemon
cilium-agent  | level=info msg="  --srv6-encap-mode='reduced'" subsys=daemon
cilium-agent  | level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
cilium-agent  | level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
cilium-agent  | level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
cilium-agent  | level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
cilium-agent  | level=info msg="  --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
cilium-agent  | level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
cilium-agent  | level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
cilium-agent  | level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
cilium-agent  | level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
cilium-agent  | level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
cilium-agent  | level=info msg="  --trace-payloadlen='128'" subsys=daemon
cilium-agent  | level=info msg="  --trace-sock='true'" subsys=daemon
cilium-agent  | level=info msg="  --tunnel=''" subsys=daemon
cilium-agent  | level=info msg="  --tunnel-port='0'" subsys=daemon
cilium-agent  | level=info msg="  --version='false'" subsys=daemon
cilium-agent  | level=info msg="  --vlan-bpf-bypass=''" subsys=daemon
cilium-agent  | level=info msg="  --vtep-cidr=''" subsys=daemon
cilium-agent  | level=info msg="  --vtep-endpoint=''" subsys=daemon
cilium-agent  | level=info msg="  --vtep-mac=''" subsys=daemon
cilium-agent  | level=info msg="  --vtep-mask='255.255.255.0'" subsys=daemon
cilium-agent  | level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
cilium-agent  | level=info msg="     _ _ _" subsys=daemon
cilium-agent  | level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
cilium-agent  | level=info msg="|  _| | | | | |     |" subsys=daemon
cilium-agent  | level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
cilium-agent  | level=info msg="Cilium 1.13.1 a6be57eb 2023-03-15T19:39:01+01:00 go version go1.19.6 linux/amd64" subsys=daemon
cilium-agent  | level=info msg="cilium-envoy  version: 04413917ff99e4f6ab51d1c6eb424d4a055f4462/1.23.4/Distribution/RELEASE/BoringSSL" subsys=daemon
cilium-agent  | level=info msg="clang (10.0.0) and kernel (5.4.0) versions: OK!" subsys=linux-datapath
cilium-agent  | level=info msg="linking environment: OK!" subsys=linux-datapath
cilium-agent  | level=info msg="Kernel config file not found: if the agent fails to start, check the system requirements at https://docs.cilium.io/en/stable/operations/system_requirements" subsys=probes
cilium-agent  | level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
cilium-agent  | level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
cilium-agent  | level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
cilium-agent  | level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
cilium-agent  | level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
cilium-agent  | level=info msg=" - reserved:.*" subsys=labels-filter
cilium-agent  | level=info msg=" - :io\\.kubernetes\\.pod\\.namespace" subsys=labels-filter
cilium-agent  | level=info msg=" - :io\\.cilium\\.k8s\\.namespace\\.labels" subsys=labels-filter
cilium-agent  | level=info msg=" - :app\\.kubernetes\\.io" subsys=labels-filter
cilium-agent  | level=info msg=" - !:io\\.kubernetes" subsys=labels-filter
cilium-agent  | level=info msg=" - !:kubernetes\\.io" subsys=labels-filter
cilium-agent  | level=info msg=" - !:.*beta\\.kubernetes\\.io" subsys=labels-filter
cilium-agent  | level=info msg=" - !:k8s\\.io" subsys=labels-filter
cilium-agent  | level=info msg=" - !:pod-template-generation" subsys=labels-filter
cilium-agent  | level=info msg=" - !:pod-template-hash" subsys=labels-filter
cilium-agent  | level=info msg=" - !:controller-revision-hash" subsys=labels-filter
cilium-agent  | level=info msg=" - !:annotation.*" subsys=labels-filter
cilium-agent  | level=info msg=" - !:etcd_node" subsys=labels-filter
cilium-agent  | level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.15.0.0/16
cilium-agent  | level=info msg=Invoked duration=4.187683ms function="gops.registerGopsHooks (cell.go:39)" subsys=hive
cilium-agent  | level=info msg=Invoked duration=12.637194ms function="cmd.glob..func2 (daemon_main.go:1644)" subsys=hive
cilium-agent  | level=info msg="Started gops server" address="127.0.0.1:9890" subsys=gops
cilium-agent  | level=info msg="Start hook executed" duration=2.97149ms function="gops.registerGopsHooks.func1 (cell.go:44)" subsys=hive
cilium-agent  | level=info msg="Start hook executed" duration=124.633766ms function="cmd.newDatapath.func1 (daemon_main.go:1624)" subsys=hive
cilium-agent  | level=info msg="Inheriting MTU from external network interface" device=enp0s3 ipAddr=10.0.2.15 mtu=1500 subsys=mtu
cilium-agent  | level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
cilium-agent  | level=info msg="Restored 0 node IDs from the BPF map" subsys=linux-datapath
cilium-agent  | level=info msg="Restored backends from maps" failedBackends=0 restoredBackends=0 subsys=service
cilium-agent  | level=info msg="Restored services from maps" failedServices=0 restoredServices=0 subsys=service
cilium-agent  | level=info msg="Reading old endpoints..." subsys=daemon
cilium-agent  | level=info msg="Reusing previous DNS proxy port: 34867" subsys=daemon
cilium-agent  | level=info msg="Detected devices" devices="[]" subsys=linux-datapath
cilium-agent  | level=info msg="Removing stale endpoint interfaces" subsys=daemon
cilium-agent  | level=info msg="Creating etcd client" ConfigPath=/vagrant/_dev/cilium/etcd.yml KeepAliveHeartbeat=15s KeepAliveTimeout=25s RateLimit=20 subsys=kvstore
cilium-agent  | level=info msg="Connecting to etcd server..." config=/vagrant/_dev/cilium/etcd.yml endpoints="[http://192.168.56.1:22379 http://192.168.56.1:22381 http://192.168.56.1:22383]" subsys=kvstore
cilium-agent  | level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=10.15.59.110 ipv6="<nil>" subsys=node
cilium-agent  | level=info msg="Initializing node addressing" subsys=daemon
cilium-agent  | level=info msg="Initializing cluster-pool IPAM" subsys=ipam v4Prefix=10.15.0.0/16 v6Prefix="<nil>"
cilium-agent  | level=info msg="Restoring endpoints..." subsys=daemon
cilium-agent  | level=info msg="Endpoints restored" failed=0 restored=1 subsys=daemon
cilium-agent  | level=info msg="Addressing information:" subsys=daemon
cilium-agent  | level=info msg="  Cluster-Name: default" subsys=daemon
cilium-agent  | level=info msg="  Cluster-ID: 0" subsys=daemon
cilium-agent  | level=info msg="  Local node-name: ubuntu-focal" subsys=daemon
cilium-agent  | level=info msg="  Node-IPv6: <nil>" subsys=daemon
cilium-agent  | level=info msg="  External-Node IPv4: 10.0.2.15" subsys=daemon
cilium-agent  | level=info msg="  Internal-Node IPv4: 10.15.59.110" subsys=daemon
cilium-agent  | level=info msg="  IPv4 allocation prefix: 10.15.0.0/16" subsys=daemon
cilium-agent  | level=info msg="  IPv4 native routing prefix: 10.15.0.0/16" subsys=daemon
cilium-agent  | level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
cilium-agent  | level=info msg="  Local IPv4 addresses:" subsys=daemon
cilium-agent  | level=info msg="  - 10.0.2.15" subsys=daemon
cilium-agent  | level=info msg="  - 192.168.56.2" subsys=daemon
cilium-agent  | level=info msg="  - 10.15.59.110" subsys=daemon
cilium-agent  | level=info msg="Initializing identity allocator" subsys=identity-cache
cilium-agent  | level=info msg="Allocating identities between range" cluster-id=0 max=65535 min=256 subsys=identity-cache
cilium-agent  | level=info msg="Adding local node to cluster" node="{ubuntu-focal default [{InternalIP 10.0.2.15} {CiliumInternalIP 10.15.59.110}] 10.15.0.0/16 [] <nil> [] 10.15.249.189 <nil> <nil> <nil> 0 local 0 map[] map[] 1 }" subsys=nodediscovery
cilium-agent  | level=info msg="Setting up BPF datapath" bpfClockSource=ktime bpfInsnSet=v2 subsys=datapath-loader
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.core.bpf_jit_enable sysParamValue=1
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.fib_multipath_use_neigh sysParamValue=1
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.timer_migration sysParamValue=0
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.forwarding sysParamValue=1
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.rp_filter sysParamValue=0
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.accept_local sysParamValue=1
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.send_redirects sysParamValue=0
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.forwarding sysParamValue=1
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.rp_filter sysParamValue=0
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.accept_local sysParamValue=1
cilium-agent  | level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.send_redirects sysParamValue=0
cilium-agent  | level=info msg="Got lease ID 4f24874d04cffb2a and the session TTL is 15m0s" subsys=kvstore
cilium-agent  | level=info msg="Got lock lease ID 62f6874d04d0502a" subsys=kvstore
cilium-agent  | level=info msg="Initial etcd session established" config=/vagrant/_dev/cilium/etcd.yml endpoints="[http://192.168.56.1:22379 http://192.168.56.1:22381 http://192.168.56.1:22383]" subsys=kvstore
cilium-agent  | level=info msg="Successfully verified version of etcd endpoint" config=/vagrant/_dev/cilium/etcd.yml endpoints="[http://192.168.56.1:22379 http://192.168.56.1:22381 http://192.168.56.1:22383]" etcdEndpoint="http://192.168.56.1:22379" subsys=kvstore version=3.5.7
cilium-agent  | level=info msg="Successfully verified version of etcd endpoint" config=/vagrant/_dev/cilium/etcd.yml endpoints="[http://192.168.56.1:22379 http://192.168.56.1:22381 http://192.168.56.1:22383]" etcdEndpoint="http://192.168.56.1:22381" subsys=kvstore version=3.5.7
cilium-agent  | level=info msg="Successfully verified version of etcd endpoint" config=/vagrant/_dev/cilium/etcd.yml endpoints="[http://192.168.56.1:22379 http://192.168.56.1:22381 http://192.168.56.1:22383]" etcdEndpoint="http://192.168.56.1:22383" subsys=kvstore version=3.5.7
cilium-agent  | level=info msg="Iptables rules installed" subsys=iptables
cilium-agent  | level=info msg="Adding new proxy port rules for cilium-dns-egress:34867" id=cilium-dns-egress subsys=proxy
cilium-agent  | level=info msg="Iptables proxy rules installed" subsys=iptables
cilium-agent  | level=info msg="Beginning to read perf buffer" startTime="2023-04-05 16:16:33.565923585 +0000 UTC m=+7.262703669" subsys=monitor-agent
cilium-agent  | level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
cilium-agent  | level=info msg="Starting IP identity watcher" subsys=ipcache
cilium-agent  | level=info msg="Start hook executed" duration=5.429130478s function="cmd.newDaemonPromise.func1 (daemon_main.go:1677)" subsys=hive
cilium-agent  | level=info msg="Initializing daemon" subsys=daemon
cilium-agent  | level=info msg="Validating configured node address ranges" subsys=daemon
cilium-agent  | level=info msg="Starting connection tracking garbage collector" subsys=daemon
cilium-agent  | level=info msg="Datapath signal listener running" subsys=signal
cilium-agent  | level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
cilium-agent  | level=info msg="Regenerating restored endpoints" numRestored=1 subsys=daemon
cilium-agent  | level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=4014 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
cilium-agent  | level=info msg="Successfully restored endpoint. Scheduling regeneration" endpointID=4014 subsys=daemon
cilium-agent  | level=info msg="Removed endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3651 identity=4 ipv4=10.15.150.153 ipv6=10.15.150.153 k8sPodName=/ subsys=endpoint
cilium-agent  | level=info msg="Launching Cilium health daemon" subsys=daemon
cilium-agent  | level=info msg="Launching Cilium health endpoint" subsys=daemon
cilium-agent  | level=info msg="Serving prometheus metrics on :9962" subsys=daemon
cilium-agent  | level=info msg="Started healthz status API server" address="127.0.0.1:9879" subsys=daemon
cilium-agent  | level=info msg="Initializing Cilium API" subsys=daemon
cilium-agent  | level=info msg="Daemon initialization completed" bootstrapTime=7.32972851s subsys=daemon
cilium-agent  | level=info msg="Hubble server is disabled" subsys=hubble
cilium-agent  | level=info msg="Serving cilium API at unix:///var/run/cilium/cilium.sock" subsys=daemon
cilium-agent  | level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2283 ipv4= ipv6= k8sPodName=/ subsys=endpoint
cilium-agent  | level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2283 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ subsys=endpoint
cilium-agent  | level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2283 identity=4 identityLabels="reserved:health" ipv4= ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
cilium-agent  | level=info msg="Compiled new BPF template" BPFCompilationTime=3.732403104s file-path=/var/run/cilium/state/templates/f46e70b489f572c3cc372367b92508edbb48369f2104b6934355cf8cc8214cdd/bpf_host.o subsys=datapath-loader
cilium-agent  | level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=4014 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint
cilium-agent  | level=info msg="Restored endpoint" endpointID=4014 ipAddr="[ ]" subsys=endpoint
cilium-agent  | level=info msg="Finished regenerating restored endpoints" regenerated=1 subsys=daemon total=1
cilium-agent  | level=info msg="Removed stale bpf map" file-path=/sys/fs/bpf/tc/globals/cilium_lb4_source_range subsys=daemon
cilium-agent  | level=info msg="Serving cilium health API at unix:///var/run/cilium/health.sock" subsys=health-server
cilium-agent  | level=info msg="Compiled new BPF template" BPFCompilationTime=11.297336708s file-path=/var/run/cilium/state/templates/419a4d5badde2b9a33db61c617ec94c4416ce09768f116376ec1e0873e7a77f0/bpf_lxc.o subsys=datapath-loader
cilium-agent  | level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2283 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint
Docker plugin logs
cilium-docker-plugin  | level=info msg="Waiting for cilium daemon to start up..." ciliumSockPath="unix:///var/run/cilium/cilium.sock" dockerHostPath="unix:///var/run/docker.sock" subsys=cilium-docker-driver
cilium-docker-plugin  | level=info msg="Waiting for cilium daemon to start up..." ciliumSockPath="unix:///var/run/cilium/cilium.sock" dockerHostPath="unix:///var/run/docker.sock" subsys=cilium-docker-driver
cilium-docker-plugin  | level=info msg="Waiting for cilium daemon to start up..." ciliumSockPath="unix:///var/run/cilium/cilium.sock" dockerHostPath="unix:///var/run/docker.sock" subsys=cilium-docker-driver
cilium-docker-plugin  | level=info msg="Waiting for cilium daemon to start up..." ciliumSockPath="unix:///var/run/cilium/cilium.sock" dockerHostPath="unix:///var/run/docker.sock" subsys=cilium-docker-driver
cilium-docker-plugin  | level=info msg="Waiting for cilium daemon to start up..." ciliumSockPath="unix:///var/run/cilium/cilium.sock" dockerHostPath="unix:///var/run/docker.sock" subsys=cilium-docker-driver
cilium-docker-plugin  | level=info msg="Starting docker events watcher" subsys=cilium-docker-driver
cilium-docker-plugin  | level=info msg="Cilium Docker plugin ready" subsys=cilium-docker-driver
cilium-docker-plugin  | level=info msg="Listening for events from Docker" file-path=/run/docker/plugins/cilium.sock subsys=cilium-docker
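For reference, once the plugin reports ready, the (now removed) 1.9 Docker guide attached containers through a Docker network backed by the `cilium` driver. A minimal sketch of that flow — the network and container names here are hypothetical, not from this setup:

```shell
# Create a Docker network whose networking and IPAM are handled by the
# Cilium libnetwork plugin (uses the plugin socket shown in the logs above).
docker network create --driver cilium --ipam-driver cilium cilium-net

# Attach a test container; its Docker labels become Cilium identity labels.
docker run -d --name tenant-a-app --net cilium-net --label tenant=a nginx:alpine

# The agent should now list an endpoint for the container.
docker exec cilium-agent cilium endpoint list
```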
Vagrantfile
Vagrant.require_version ">= 2.3.4"

cilium_version = "1.13.1"
etcd_version = "3.5.7"

cilium_opts_default = "--enable-ipv4=true --enable-ipv6=false \
  --ipam=cluster-pool \
  --enable-ipv4-masquerade=true --ipv4-native-routing-cidr=10.15.0.0/16 \
  --kvstore etcd --kvstore-opt etcd.config=/vagrant/_dev/cilium/etcd.yml \
  "
cilium_opts = (ENV['CILIUM_OPTS'] || cilium_opts_default)

Vagrant.configure("2") do |config|
  # Ubuntu 20.04
  config.vm.box = "ubuntu/focal64"
  config.vm.box_url = "https://app.vagrantup.com/ubuntu/boxes/focal64"

  # Required to install Cilium in the bootstrap script
  config.vm.provision "docker" do |d|
    d.images = [
      "cilium/cilium:v#{cilium_version}",
      "cilium/docker-plugin:v#{cilium_version}",
      "cilium/operator-generic:v#{cilium_version}",
      "quay.io/coreos/etcd:v#{etcd_version}"
    ]
  end

  config.vm.provision :shell do |s|
    s.path = "bootstrap.sh"
    s.env = {"CILIUM_VERSION" => cilium_version, "CILIUM_OPTS" => cilium_opts}
  end

  config.vm.network "private_network", ip: "192.168.56.2"

  config.vm.provision "run", type: "shell", run: "always" do |s|
    s.path = "run.sh"
    s.env = {"CILIUM_VERSION" => cilium_version, "CILIUM_OPTS" => cilium_opts}
    s.privileged = false
  end
end
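The `--kvstore-opt etcd.config=/vagrant/_dev/cilium/etcd.yml` option above points the agent at an etcd client configuration file. A minimal sketch matching the endpoints visible in the agent logs (plain HTTP, no TLS or authentication, as described in the setup):

```yaml
# /vagrant/_dev/cilium/etcd.yml — etcd client config for the agent and
# operator; for TLS you would add trusted-ca-file, cert-file and key-file.
endpoints:
- http://192.168.56.1:22379
- http://192.168.56.1:22381
- http://192.168.56.1:22383
```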

Cilium Version

Client: 1.13.1 a6be57e 2023-03-15T19:39:01+01:00 go version go1.19.6 linux/amd64
Daemon: 1.13.1 a6be57e 2023-03-15T19:39:01+01:00 go version go1.19.6 linux/amd64

Kernel Version

Linux ubuntu-focal 5.4.0-139-generic #156-Ubuntu SMP Fri Jan 20 17:27:18 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Kubernetes Version

n/a

Sysdump

No response

I'm still working on figuring out how to make this work.

Relevant log output

No response

Anything else?

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@EtienneM EtienneM added kind/bug This is a bug in the Cilium logic. kind/community-report This was reported by a user in the Cilium community, eg via Slack. needs/triage This issue requires triaging to establish severity and next steps. labels Apr 5, 2023
@EtienneM
Author

EtienneM commented Apr 6, 2023

I can see that the code which panics was added in #21764. Is it possible that the use case without Kubernetes was not taken into account at the time?

@dylandreimerink dylandreimerink added the sig/agent Cilium agent related. label Apr 11, 2023
@dylandreimerink
Member

Given that I authored #21764, I can confidently say that this is not something I took into account. The ability of the operator to function without Kubernetes is not documented, nor very well known for that matter. Given that we implicitly accepted this requirement in the past, I think we should continue to do so unless otherwise decided. But to be honest, I don't know what the operator is supposed to do without an API server.

A few other things of note:

  • The agent can run without kubernetes, but only with a limited feature set; I would not expect a working multi cluster setup without kubernetes. See https://github.com/cilium/cilium/blob/master/test/l4lb/test.sh as an example of deploying the agent without k8s as an L4 load balancer.
  • The referenced docker guide was removed in 1.10 because it was out of date and not very useful, since it was not a multi-node setup.

It would be good to also add a test to the CI which starts the operator outside of k8s to assert that it works, so we don't regress on this again.

@dylandreimerink dylandreimerink removed the needs/triage This issue requires triaging to establish severity and next steps. label Apr 11, 2023
@EtienneM
Author

Thanks a lot for your answer :)

I would not expect a working multi cluster setup without kubernetes

Thanks for the warning. We are not interested in multi-cluster setup so that's OK for us. Our use case would be:

We have multiple customers hosting apps and databases in a single cluster. We want all apps and databases from one customer to be in some kind of private network (e.g. using VXLAN), isolated from the other customers' apps and databases. The traffic between apps and databases should be encrypted.
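As a rough sketch, per-customer isolation of this shape could be expressed in Cilium's standalone policy JSON and loaded on each node with `cilium policy import <file>.json`, without any Kubernetes involvement (the `tenant` label key and value here are hypothetical; with Docker, identity labels are derived from container labels):

```json
[{
  "labels": [{"key": "name", "value": "tenant-a-isolation"}],
  "endpointSelector": {"matchLabels": {"tenant": "a"}},
  "ingress": [{
    "fromEndpoints": [{"matchLabels": {"tenant": "a"}}]
  }]
}]
```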

The referenced docker guide was removed in 1.10

Yes, we used it as a base, but we are aware that some things may be deprecated or not relevant anymore. For us that's already a good starting point :)

@dylandreimerink
Member

dylandreimerink commented Apr 11, 2023

I would not expect a working multi cluster setup without kubernetes

Thanks for the warning. We are not interested in multi-cluster setup so that's OK for us. Our use case would be:

Sorry, I meant to say multi node setup. Without kubernetes you are missing a lot of inter-node communication which many features need. I believe the L4 LB case is the only scenario we currently support. Even for that we have had to add the ability to configure cilium via the CLI/API. So you might need to make some changes to get everything to do what you want.

@EtienneM
Author

EtienneM commented Apr 11, 2023

If that's the case, that would be really sad news. Moreover, a setup without Kubernetes is advertised on the website and in the documentation, so we expect it to work.

That being said, we still think that it may work. We found various mentions of running Cilium without Kubernetes in GitHub issues (e.g. #18334), and a comprehensive answer from @joestringer seems to indicate that it could work with the CNI plugin (#18334 (comment)).

Hence we expect that with some extra glue, we could make our setup work.

@joestringer
Member

I think that if the community sees value in Cilium use cases without k8s, then it's up to those community members to propose patches to support those cases, develop test cases to avoid regressions, and so on. Currently the core Cilium team is primarily focused on k8s environments so we can expect those cases to have the best support. That said, if there are sufficient community members interested in other platforms then the code should be (made to be) generic enough to support those platforms. I would say that the CI today doesn't provide any guarantees that Cilium works outside k8s environments, otherwise Dylan's submission would have failed that test and it would have been updated to consider this. But given that a few community members have been able to run Cilium without k8s with just a few tweaks, I think that the code is not that far from that capability. At least so far, I haven't seen any fundamental decisions that differ or maintenance burden that the core Cilium team needs to take on in order to support this, we're just relying on developers who care to submit the patches to make it work.

@joestringer
Member

One more note, while we do commonly rely on Kubernetes for defining the configuration schemas for Cilium functionality, Kubernetes is in no way a requirement for multi-node connectivity. Before Cilium supported Kubernetes, it already did a bunch of state sharing across nodes directly via etcd. We continue to support that deployment case in core Cilium (supplemented by k8s state distribution for various features like network policies).

@github-actions

This issue has been automatically marked as stale because it has not
had recent activity. It will be closed if no further activity occurs.

@github-actions github-actions bot added the stale The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale. label Jun 11, 2023
@github-actions

This issue has not seen any activity since it was marked stale.
Closing.

@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Jun 25, 2023