Performance improvements in v0.6
In v0.6, we profiled KubeArmor using pprof and made some major performance improvements.
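For reference, a Go service can expose pprof endpoints using only the standard library. The sketch below shows the general setup, not KubeArmor's actual wiring; the localhost:6060 address is just an example.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the pprof endpoints on a local port so CPU and heap profiles
	// can be pulled while the daemon is running.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... rest of the daemon ...
	select {}
}
```

A 30-second CPU profile can then be captured with go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30 and inspected interactively.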
When containerd is used as the runtime, KubeArmor uses the containerd client to monitor containers in the cluster. However, the container monitor was checking for new containers too frequently, calling a particular time-consuming method each time. Reducing this frequency saved us a lot of CPU cycles.
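Conceptually, the fix boils down to calling the containerd API less often. Below is a minimal sketch of a ticker-driven monitor loop using the official containerd Go client; the socket path, namespace, interval, and function names are illustrative assumptions, not KubeArmor's actual code.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// watchContainers is a hypothetical monitor loop: it lists containers on
// every tick, so the tick interval directly controls how often the
// expensive containerd call is made.
func watchContainers(client *containerd.Client, interval time.Duration) {
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for range ticker.C {
		containers, err := client.Containers(ctx)
		if err != nil {
			log.Println("failed to list containers:", err)
			continue
		}
		// diff against known containers and handle new/removed ones ...
		_ = containers
	}
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Polling every few seconds instead of many times per second keeps the
	// monitor responsive while saving CPU cycles.
	watchContainers(client, 5*time.Second)
}
```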
KubeArmor’s system monitor pushes events to a BPF map. The userspace code then uses Cilium eBPF’s perf reader to read these events into a ring buffer: the reader continuously polls the BPF map and copies any new events into the ring buffer. However, we discovered that too many events were getting lost. On debugging, we found that the ring buffer we had allocated was too small: while the userspace code was still processing events, the buffer filled up completely, and the reader had to drop events because no space was left. Increasing the perf buffer size solved the problem of dropped events and also helped us bring down CPU utilization.
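The relevant knob in the cilium/ebpf library is the per-CPU buffer size passed to perf.NewReader. Here is a minimal sketch; the object file name, map name, and buffer size are placeholders, not KubeArmor's actual values.

```go
package main

import (
	"errors"
	"log"
	"os"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/perf"
)

func main() {
	// Load a compiled BPF object that exposes a perf event array map;
	// "system_monitor.bpf.o" and "sys_events" are illustrative names.
	spec, err := ebpf.LoadCollectionSpec("system_monitor.bpf.o")
	if err != nil {
		log.Fatal(err)
	}
	coll, err := ebpf.NewCollection(spec)
	if err != nil {
		log.Fatal(err)
	}
	defer coll.Close()

	// The second argument to perf.NewReader is the per-CPU buffer size.
	// Giving the reader more pages of headroom makes it far less likely
	// that the kernel runs out of space while userspace is still busy
	// processing earlier events.
	perCPUBuffer := os.Getpagesize() * 64
	rd, err := perf.NewReader(coll.Maps["sys_events"], perCPUBuffer)
	if err != nil {
		log.Fatal(err)
	}
	defer rd.Close()

	for {
		record, err := rd.Read()
		if err != nil {
			if errors.Is(err, perf.ErrClosed) {
				return
			}
			log.Println("read error:", err)
			continue
		}
		// LostSamples counts events the kernel dropped because the buffer
		// was full; a non-zero value means the buffer is still too small
		// or the consumer is too slow.
		if record.LostSamples > 0 {
			log.Printf("lost %d samples", record.LostSamples)
			continue
		}
		// decode record.RawSample into an event struct here ...
		_ = record.RawSample
	}
}
```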
With the -logPath flag, you can specify an output source where you want KubeArmor to log telemetry events. Until now, KubeArmor would write these to /tmp/kubearmor.log in the host node’s filesystem by default. However, writing all the events was impacting performance because we were calling the Write syscall too frequently. So from v0.6, KubeArmor won’t write telemetry logs to /tmp/kubearmor.log by default.
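The same idea in a minimal, standalone sketch: telemetry file logging is skipped unless a path is explicitly provided, and when it is enabled, buffering batches many small writes into fewer Write syscalls. The "none" default and the use of bufio are assumptions of this sketch, not necessarily how KubeArmor implements it.

```go
package main

import (
	"bufio"
	"flag"
	"log"
	"os"
)

func main() {
	// The -logPath flag selects where telemetry events are written; treating
	// "none" as "disabled" is an assumption made for this sketch.
	logPath := flag.String("logPath", "none", "path for telemetry logs, or \"none\" to disable")
	flag.Parse()

	var out *bufio.Writer
	if *logPath != "none" {
		f, err := os.OpenFile(*logPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		// Buffering turns many small telemetry lines into far fewer
		// write(2) syscalls than writing each event directly.
		out = bufio.NewWriter(f)
		defer out.Flush()
	}

	for _, event := range []string{"event-1", "event-2"} {
		if out == nil {
			continue // file logging disabled by default
		}
		out.WriteString(event + "\n")
	}
}
```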
We realized that not all events for the open syscall accessing /proc and /sys are needed. So, we modified the system monitor to drop them in kernel space, saving processing time in userspace.
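The check itself runs inside KubeArmor's BPF program in kernel space, so these events never reach userspace at all. Expressed in Go purely for illustration, the condition amounts to a prefix match on the opened path; the Event type and its fields are hypothetical names for this sketch.

```go
package main

import (
	"fmt"
	"strings"
)

// Event is a hypothetical, simplified view of a syscall event; the real
// filter lives in kernel space inside KubeArmor's BPF system monitor.
type Event struct {
	Syscall string
	Path    string
}

// shouldDrop mirrors the kernel-space condition: open() events that touch
// /proc or /sys are dropped before they are forwarded.
func shouldDrop(e Event) bool {
	if e.Syscall != "open" {
		return false
	}
	return strings.HasPrefix(e.Path, "/proc") || strings.HasPrefix(e.Path, "/sys")
}

func main() {
	fmt.Println(shouldDrop(Event{Syscall: "open", Path: "/proc/self/status"})) // true
	fmt.Println(shouldDrop(Event{Syscall: "open", Path: "/etc/passwd"}))       // false
}
```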
With the migration to Cilium eBPF, we also decreased KubeArmor's memory usage.
So, with these and a couple of minor changes, we were able to go from this:
to this 🎉
We have further performance improvements in mind for the v0.7 release. Follow #653 for more.