
kpr: Enable/fix Cilium socket based load balancing in different environments #16259

Merged

Conversation

@aditighag (Member) commented May 21, 2021

This PR aims to revisit some of the assumptions made around cgroup hierarchies in Cilium in order to enable socket-lb in different environments.

Context

Cilium attaches BPF_CGROUP_* type programs to provide socket based load-balancing. The default cgroup root in the agent is set to a custom location (/var/run/cilium/cgroupv2), where the agent tries to mount the cgroup2 filesystem. The cgroup root is then passed to init.sh in order to attach the BPF_CGROUP_* programs at the relevant hook points. While we have some extended logic in place to accommodate environments like kind, the overall logic breaks in certain scenarios. The following list is not exhaustive, but it helps identify general patterns.

Scenarios where current logic breaks

  • Virtualized cgroup root in the cgroup namespace mode (Cilium attaching to the wrong cgroup #15137)
    If the container runtime runs with cgroup v2, the Cilium agent pod is deployed in a separate cgroup namespace. For example, the Docker container runtime switched to the private cgroup namespace mode as the default when running with cgroup v2 support. Due to cgroup namespaces, the cgroup fs mounted by the Cilium pod points to a virtualized cgroup hierarchy instead of the host cgroup root. As a result, BPF programs are attached to the nested cgroup root, and socket-lb isn't effective for other pods.
    Resolution:
    Mount the cgroup fs on the host from init containers; we need to specify cgroup as the enterable namespace in the nsenter command (see the sketch after this list). The Cilium agent will auto-mount the cgroup2 fs on the underlying host if it isn't already mounted. This requires temporarily mounting the host's /proc inside an init container. As an alternative, users can disable auto-mount and specify a mount point on the host where the cgroup2 fs is already mounted. The cgroup2 fs mount point is platform dependent; hence, we introduce a new helm option for the host cgroup2 fs mount point. See this note in the cgroup man page -
    Note that on many modern systems, systemd(1) automatically mounts
    the cgroup2 filesystem at /sys/fs/cgroup/unified during the boot
    process.

  • Commit 866969e hard-coded the kubelet string, which may not work on some platforms. Depending on the value of the kubelet --cgroup-root config, this string may or may not be present. Moreover, the logic to get the cgroup root is specific to kind environments, so it doesn't take effect for minikube clusters.
    Resolution:
    The kind cgroup root detection logic can be removed, as kind nodes and the cilium agent pod are deployed in the same cgroup ns (if they aren't, the above init container fix should be sufficient).

  • Deploying a kind cluster alongside "other" BPF cgroup programs (Kind with socket-lb doesn't work in dev VM + update kind docs #16078)
    I ran into this issue while deploying a kind cluster on the dev VM. The Cilium pod inside the kind cluster fails to come up with the warning message "failed to attach program", and is stuck in a CrashLoopBackOff state. We can have the cilium/ebpf loader print better error messages/hints in such cases (I'll file an issue).
    I traced the error return code 255 (aka operation not permitted) to this check [1] in the kernel source, which disallows attaching when programs without the override/multi flags are present in a parent cgroup. After I removed the BPF programs attached in the dev VM cgroup root, I was able to create a kind cluster with socket-lb successfully.
    [1] https://elixir.bootlin.com/linux/latest/source/kernel/bpf/cgroup.c#L457
    Resolution:
    We document this case along with potential steps to resolve the issue (the sketch after this list includes a quick way to check for such conflicting attachments).
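For illustration, a rough sketch of the first and third resolutions above (the /run/cilium/cgroupv2 mount point, paths, and init-container privileges are illustrative, not the final wiring):

# Scenario 3: check whether BPF programs are already attached somewhere in the
# host cgroup hierarchy (the path depends on the distro, e.g. /sys/fs/cgroup or
# /sys/fs/cgroup/unified).
bpftool cgroup tree /sys/fs/cgroup

# Scenario 1: from a privileged init container running with hostPID, enter the
# host's mount and cgroup namespaces and mount cgroup2 there, so the agent
# attaches to the real host cgroup root instead of a virtualized one.
nsenter -t 1 -m -C -- sh -c '
  mkdir -p /run/cilium/cgroupv2 &&
  { mountpoint -q /run/cilium/cgroupv2 || mount -t cgroup2 none /run/cilium/cgroupv2; }
'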

Testing

Tested the changes on kind, GKE, and a bare-metal k8s cluster by verifying that the BPF programs are correctly attached and that socket-lb works as expected (a sketch of one way to check the attachments is shown below).
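As a rough illustration of the attachment check (the cgroup root path is an example, and the exact set of programs depends on the configuration):

# List BPF programs attached anywhere under the cgroup root the agent used.
# Expect the socket-lb programs (attach types such as connect4/connect6,
# sendmsg4/sendmsg6, recvmsg4/recvmsg6; names as printed vary with the bpftool
# version) attached at the root rather than in a nested, per-container hierarchy.
bpftool cgroup tree /run/cilium/cgroupv2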

Deferred to follow-ups -

Fixes: #16078
Fixes: #15769
Fixes: 866969e
Fixes: #15137
(Reported-by : @kkourt)

Release note

Fixes connectivity issues when kube-proxy replacement is enabled that were caused by ineffective socket-based load balancing (aka host-reachable services) in the private cgroup namespace mode of container runtimes (e.g., Docker's cgroup v2 configuration).

@aditighag aditighag requested review from a team as code owners May 21, 2021 05:30
@aditighag aditighag requested review from a team and kaworu May 21, 2021 05:30
@maintainer-s-little-helper maintainer-s-little-helper bot added the dont-merge/needs-release-note-label The author needs to describe the release impact of these changes. label May 21, 2021
@aditighag aditighag requested a review from brb May 21, 2021 05:30
@aditighag aditighag marked this pull request as draft May 21, 2021 05:30
@aditighag aditighag added the release-note/minor This PR changes functionality that users may find relevant to operating Cilium. label May 21, 2021
@maintainer-s-little-helper maintainer-s-little-helper bot removed the dont-merge/needs-release-note-label The author needs to describe the release impact of these changes. label May 21, 2021
@kkourt (Contributor) left a comment

Thanks for the patch!

The patch in its current state does not seem to work for me [1]. It's probably because cilium tries to mount in /var/run/cilium and uses this directory (which is not the global one). Passing a --cgroup-root also does not seem to work, because then the detection will not run.

I'm confident that we can get the patch to work, but I would like to propose leaving the detection logic outside of the cilium agent, and just passing a proper argument with --cgroup-root. We could even pass two options, one for the root and one for where to attach the bpf programs, if we care about this distinction. We already had a special case for Kind; now we are going to add another one, and I think we should keep this complexity out of the agent.

[1] These are the logs:

level=info msg="Mounted cgroupv2 filesystem at /var/run/cilium/cgroupv2" subsys=cgroups
level=warning msg="Failed to determine cgroup v2 hierarchy for Kind node. Socket-based LB (--enable-host-reachable-services) will not work." error="cannot find \"kubelet\" in cgroup v2 path: \"/var/run/cilium/cgroupv2\"" subsys=cgroups
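
A minimal sketch of the alternative proposed above, assuming the agent's existing --cgroup-root flag and a cgroup2 mount prepared out-of-band (the mount point and exact flags are illustrative, not the final design):

# Mount cgroup2 somewhere on the host ahead of time (node provisioning,
# a systemd unit, etc.), then point the agent at it and skip in-agent detection.
mountpoint -q /sys/fs/cgroup/unified || mount -t cgroup2 none /sys/fs/cgroup/unified
cilium-agent --cgroup-root=/sys/fs/cgroup/unified   # plus the usual agent flags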

@brb (Member) commented May 21, 2021

Cilium only needs to be able to attach the BPF cgroup programs to a cgroup root that's common for all pods (and not necessarily to the host cgroup root).

Unfortunately we need to include host itself, as otherwise an application running on a host won't be able to access a ClusterIP service via bpf_sock. So, finding a common root for all pods won't work.

@aditighag (Member, Author) commented May 21, 2021

Cilium only needs to be able to attach the BPF cgroup programs to a cgroup root that's common for all pods (and not necessarily to the host cgroup root).

Unfortunately we need to include host itself, as otherwise an application running on a host won't be able to access a ClusterIP service via bpf_sock. So, finding a common root for all pods won't work.

Yeah, it was at the back of my mind, but I wasn't really sure how it fits into the 2nd scenario above (Virtualized cgroup root). 🤔 I also would like to keep the detection logic to only the specific cases that I've called out above, and leave the default as-is.

@borkmann Do you have any explanation for the #15137 like environments where (cgroup root as seen by Cilium) != (host cgroup root) because of cgroup namespaces? We weren't quite able to figure that out in the Slack thread. It'll also be helpful to understand why the cgroup fs isn't mounted by the init container similarly to the bpf fs, and whether there are any implications of doing that.

@aditighag (Member, Author) commented May 21, 2021

Do you have any explanation for the #15137 like environments where (cgroup root as seen by Cilium) != (host cgroup root) because of cgroup namespaces?

Ah! I wonder if this unlocks the mystery of the 2nd scenario - https://docs.docker.com/config/containers/runmetrics/#running-docker-on-cgroup-v2. I discussed the overall PR with @borkmann offline; he also pointed out cgroupv1 compatibility issues.

Note that the cgroup v2 mode behaves slightly different from the cgroup v1 mode:
The default cgroup driver (dockerd --exec-opt native.cgroupdriver) is “systemd” on v2, “cgroupfs” on v1.
The default cgroup namespace mode (docker run --cgroupns) is “private” on v2, “host” on v1.
The docker run flags --oom-kill-disable and --kernel-memory are discarded on v2.

--cgroupns (API 1.41+): Cgroup namespace to use (host|private)
    'host': Run the container in the Docker host's cgroup namespace
    'private': Run the container in its own private cgroup namespace
    '': Use the cgroup namespace as configured by the default-cgroupns-mode option on the daemon (default)
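
For reference, one way to check which cgroup namespace mode a given container actually got (Docker 20.10+ / API 1.41+; the container name is just an example):

# Prints "private" or "host" for the container's cgroup namespace mode.
docker inspect -f '{{.HostConfig.CgroupnsMode}}' foobar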

@brb (Member) commented May 26, 2021

wonder if this unlocks the mystery of the 2nd scenario

@aditighag Aha, good find! I'm using Docker with the systemd cgroup driver (=cgroup v2), and I have completely disabled cgroupv1 on my host.

> docker run --cgroupns=host --name foobar --privileged -td alpine sleep 3600
cd33c0d2dfa48d976100a1fd7ceffa0d5588148f65b58ec5b2904c6ad8c3d37b
> docker exec -ti foobar /bin/sh
/ # mkdir /foo
/ # mount -t cgroup2 none /foo
/ # cd /foo/
/foo # cat cgroup.procs  | wc -l
153

> docker run --cgroupns=private --name foobar --privileged -td alpine sleep 3600
4a834a698271c9b1d9284e26a64873f0ca2203d73837aae205a04cbb5562fa51
> docker exec -ti foobar /bin/sh
/ # mkdir /foo
/ # mount -t cgroup2 none /foo
/ # cd /foo/
/foo # cat cgroup.procs  | wc -l
4

I think it might be possible to run on Kind when cgroupns=private, as otherwise the detection of common subhierarchy won't work. Anyway, let's discuss it during this week's sig-datapath.

@aditighag (Member, Author) commented May 27, 2021

Discussed the overall issue in the sig-datapath meeting (05/27); the suggestion was to also mount the host cgroup v2 fs (prior to the cilium agent running) from the init script, similar to how we mount the BPF fs.
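
Roughly, the analogy would look like this (simplified sketch; actual mount points and guards may differ):

# How the BPF fs is already mounted before the agent runs (simplified):
mountpoint -q /sys/fs/bpf || mount -t bpf bpffs /sys/fs/bpf
# The same idea applied to cgroup2, with an illustrative mount point:
mkdir -p /run/cilium/cgroupv2
mountpoint -q /run/cilium/cgroupv2 || mount -t cgroup2 none /run/cilium/cgroupv2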

@brb (Member) commented May 27, 2021

Some relevant examples:

> cat /proc/$(docker inspect kind-control-plane -f '{{.State.Pid}}')/cgroup
0::/system.slice/docker-8b5a44de9a72711ff1d0ad827a8612ff87e7c06acd81b60c31d690973b0ac392.scope/init.scope

> sudo cat /proc/$(pidof cilium-agent)/cgroup
0::/system.slice/docker-8b5a44de9a72711ff1d0ad827a8612ff87e7c06acd81b60c31d690973b0ac392.scope/kubelet/kubepods/burstable/podaf2cded2-e493-43e4-9199-dc870ac556a5/ca7fb7b008460fb77187247f772afbe9079674692513b14eaecfdaf327562a5a

> docker exec -ti kind-control-plane /bin/bash
root@kind-control-plane:/# cat /proc/self/cgroup
0::/init.scope

root@kind-control-plane:/# cat /proc/$(pidof cilium-agent)/cgroup
0::/kubelet/kubepods/burstable/podaf2cded2-e493-43e4-9199-dc870ac556a5/ca7fb7b008460fb77187247f772afbe9079674692513b14eaecfdaf327562a5a

> ks exec -ti cilium-x8c7m /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "cilium-agent" out of: cilium-agent, wait-for-node-init (init), clean-cilium-state (init)
root@kind-control-plane:/home/cilium# cat /proc/self/cgroup
0::/kubelet/kubepods/burstable/podaf2cded2-e493-43e4-9199-dc870ac556a5/ca7fb7b008460fb77187247f772afbe9079674692513b14eaecfdaf327562a5a

As you can see, the kind-control-plane Docker container and the cilium-agent pod running in it have different cgroup parents, so mounting the cgroupfs from node-init won't work in the namespaced cgroup mode.

@aditighag (Member, Author) commented May 28, 2021

@brb I don't follow your comment. The cgroup hierarchies are relative to the top level cgroup root. As long as we attach the BPF programs at every kind node's cgroup root, it should work, no?

I mounted cgroup fs as part of an init container, disabled kind detection logic, and socket lb worked fine. See details about cgroup paths - #16078 (comment).

Can you elaborate more?

@brb (Member) commented May 28, 2021

The cgroup hierarchies are relative to the top level cgroup root. As long as we attach the BPF programs at every kind node's cgroup root, it should work, no?

With cgroup namespaces on, the node-init container will mount /system.slice/docker-foobar.scope/init.scope as the cgroup root, while cilium-agent (or any other pod) will run in the /system.slice/docker-foobar.scope/kubelet/kubepods/burstable/pod-foo/bar cgroup. The bpf_sock programs will be attached to /system.slice/docker-foobar.scope/init.scope, which renders them useless for pods, as the common ancestor is /system.slice/docker-foobar.scope/ which doesn't have bpf_sock attached.

With cgroup namespaces off, the node-init container will mount the host's cgroup root. So we still would need to find an appropriate hierarchy when running on Kind for each cilium-agent.

@brb (Member) commented May 28, 2021

I think you can ignore what I wrote above, as it is no longer relevant. The latest findings are that even with cgroup NS on, on Kind we are going to run the cilium-agent pod in the same cgroup NS as the Kind node container. Meaning that if we mount the cgroupfs from node-init and then propagate the mount via the DaemonSet into the cilium-agent pod, then cilium-agent will attach the bpf_sock programs to the right cgroup root 🎉

Next step is to check in what cgroup NS the node-init is running.

@brb (Member) commented May 28, 2021

Next step is to check in what cgroup NS the node-init is running.

I've tried running the following Pod (on a regular k8s node, i.e. not on Kind), which should have the same privileges and configuration as the node-init:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    imagePullPolicy: Always
    name: busybox
    command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
    securityContext:
      privileged: true
  hostPID: true
  hostNetwork: true

Unfortunately, it's running in a different cgroup NS than the host 😞

> sudo ls -al /proc/$(pidof sleep)/ns/cgroup
lrwxrwxrwx 1 root root 0 May 28 18:29 /proc/837276/ns/cgroup -> 'cgroup:[4026533221]'
> sudo ls -al /proc/self/ns/cgroup
lrwxrwxrwx 1 root root 0 May 28 18:38 /proc/self/ns/cgroup -> 'cgroup:[4026531835]'

@aditighag (Member, Author)

Unfortunately, it's running in a different cgroup NS than the host 😞

That's expected since you are only running in the network and pid namespaces of the host.

In the node-init container, we use nsenter to enter the host namespace. I'll push the changes shortly. However, I noticed that the node-init container is run only when the nodeinit config option is enabled. That's not the case with the cilium-cli logic though. Planning to bring this up in the community meeting to get more context.

@brb (Member) commented May 31, 2021

we use nsenter to enter in the host namespace

But that's only for net and mount ns? I think the nodeinit will still run in a container's cgroup ns, which we want to avoid.

@aditighag (Member, Author)

We need to specify cgroup as the enterable namespace in the nsenter command. I tried this in the dev VM (I didn't want to mess up /sys/fs/cgroup, so I used a test dir) -

vagrant@k8s1:/sys/fs/cgroup$ cd /var/run/cilium/test-cgroupv2/
vagrant@k8s1:/var/run/cilium/test-cgroupv2$ ls
root@k8s1:/home/cilium# nsenter -t 1 -m -C -- mount -t cgroup2 none /var/run/cilium/test-cgroupv2
$ cat /sys/fs/cgroup/cgroup.procs  | wc -l
112
vagrant@k8s1:/var/run/cilium/test-cgroupv2$ cat cgroup.procs | wc -l
112

We can confirm that node-init can run in the same cgroup ns as the host (with hostPID set to true) -

root@k8s1:/home/cilium# nsenter -t 1 -C -- sleep 5000 &
[1] 12108
vagrant@k8s1:~/go/src/github.com/cilium/cilium$ sudo ls -al /proc/12108/ns/cgroup
lrwxrwxrwx 1 root root 0 Jun  1 05:06 /proc/12108/ns/cgroup -> 'cgroup:[4026531835]'
vagrant@k8s1:~/go/src/github.com/cilium/cilium$ sudo ls -al /proc/self/ns/cgroup
lrwxrwxrwx 1 root root 0 Jun  1 05:06 /proc/self/ns/cgroup -> 'cgroup:[4026531835]'

@aditighag aditighag changed the title from "kpr: Enable Cilium socket based load balancing on different platforms" to "kpr: Enable/fix Cilium socket based load balancing in different environments" on Jun 2, 2021
@maintainer-s-little-helper maintainer-s-little-helper bot moved this from "Backport pending to v1.9" to "Backport done to v1.9" in 1.9.9 on Jul 15, 2021
dghubble added a commit to poseidon/terraform-render-bootstrap that referenced this pull request Jul 23, 2021
* Add init container to auto-mount /sys/fs/cgroup cgroup2
at /run/cilium/cgroupv2 for the Cilium agent
* Enable CNI exclusive mode, to disable other configs
found in /etc/cni/net.d/
* cilium/cilium#16259
dghubble added a commit to poseidon/typhoon that referenced this pull request Jul 24, 2021
* On Fedora CoreOS, Cilium cross-node service IP load balancing
stopped working for a time (first observable as CoreDNS pods
located on worker nodes not being able to reach the kubernetes
API service 10.3.0.1). This turned out to have two parts:
* Fedora CoreOS switched to cgroups v2 by default. In our early
testing with cgroups v2, Calico (default) was used. With the
cgroups v2 change, SELinux policy denied some eBPF operations.
Since fixed in all Fedora CoreOS channels
* Cilium requires new mounts to support cgroups v2, which are
added here

* coreos/fedora-coreos-tracker#292
* coreos/fedora-coreos-tracker#881
* cilium/cilium#16259
aditighag added a commit to aditighag/cilium-cli that referenced this pull request Nov 24, 2021
We need to mount cgroup2 filesystem on the underlying host
in order to enable socket-based load-balancing in environments
with container runtime cgroupv2 configurations.

See issues for more details - cilium/cilium#16259
and cilium/cilium#16815.
aditighag added a commit to aditighag/cilium that referenced this pull request Jan 3, 2022
For kube-proxy replacement (specifically, socket-based load-balancing)
to work correctly in KIND clusters, the BPF cgroup programs need to be
attached at the correct cgroup hierarchy. For this to happen, the KIND
nodes need to have their own separate cgroup namespace.
More details in PR - cilium#16259.

While cgroup namespaces are supported across both cgroup v1 and v2 modes,
container runtimes like Docker enable private cgroup namespace mode
by default only with cgroup v2 [1]. With cgroup v1, the default is host
cgroup namespace, whereby KIND node containers (and also cilium agent pods)
are created in the same cgroup namespace as the underlying host.

[1] https://docs.docker.com/config/containers/runmetrics/#running-docker-on-cgroup-v2

Signed-off-by: Aditi Ghag <aditi@cilium.io>
Labels
ready-to-merge: This PR has passed all tests and received consensus from code owners to merge.
release-note/minor: This PR changes functionality that users may find relevant to operating Cilium.

Projects
1.10.2: Backport done to v1.10
1.9.9: Backport done to v1.9

9 participants