[New Feature]: Falco native resource utilization metrics logs support #2222
Thanks for reporting and tracking this! For more context, Falco has two options that allow something similar to this:
The output of this will be something like this in the upcoming Falco 0.33 (last update in #2182):

```json
{
  "sample": 71,
  "k8s_audit": {
    "cur": {
      "events": 1
    },
    "delta": {
      "events": 1
    }
  },
  "syscall": {
    "cur": {
      "drop_pct": 0,
      "drops": 0,
      "events": 9525,
      "preemptions": 0
    },
    "delta": {
      "drop_pct": 0,
      "drops": 0,
      "events": 137,
      "preemptions": 0
    }
  }
}
```

So I think possible next steps for this are: […]
cc @leogr
@jasondellaluce awesome, this sounds like a lot of really great options and certainly calls for a good amount of collaboration and multiple PRs to get everything in place. Maybe it's worth creating a new umbrella issue for "stats" to have one place for tracking? Agreed that JSON is a good starting point; more can be supported later as new endpoints are added. This is exciting. Re cron syntax support: I think this would be really nice, but it can be tracked as a "nice to have"; some presets like every hour, every minute etc. would work fine at the beginning as well. To me it would be intuitive to support multiple stats categories: some end users may not need these utilization metrics and only want to log other metrics, whereas for us, for example, this is the most important measurement for deciding whether the tool meets our budgeting requirements. Certainly that's what you had in mind 🙃 with the re-design and more. I could start looking into new APIs/methods to get the metrics we don't yet have in libs, and maybe you would want to start a new metrics interface design, since you know the project so well?
Yeah definitely. Will start researching a good way to standardize metrics in our codebase after the upcoming release.
Amazing, thanks a bunch @jasondellaluce!
Hi @jasondellaluce, I'm starting to look into the parts I signed myself up for (getting the math functions in place to derive / snapshot the metrics end users can emit on a cron tab). Would it be possible to first discuss a few details here? There are multiple ways to derive CPU and memory usage; should we also feature multiple metrics in this regard?
@gnosek would it be ok to also ask you for some feedback in this regard? Would appreciate your thoughts on this a lot 🙏 .
Since you mention cron, I assume you don't want to poll them every 100 ms :) so maybe (just maybe) the prometheus exporter approach would work? Your cron command would just be a curl then. Maybe we can extend […]

As you can probably guess from the above, I'm not really a fan of adding a cron-type scheduler to falco for this. You're already running cron, no reason to reinvent the wheel IMO. To avoid reinventing the wheel further, I'd drop the CPU/memory metrics altogether (leaving only the falco-specific data); there are tons of tools to monitor these already. If we do want to do this ourselves, I'd prefer to keep the calculations simple, i.e. either: […]
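For illustration, a minimal sketch of the cron-plus-curl idea, assuming a hypothetical metrics endpoint exposed by Falco's embedded webserver (the path, port, and log destination are made up for this example, not an existing Falco interface):

```
# crontab entry: scrape a hypothetical metrics endpoint every 5 minutes
*/5 * * * * curl -s http://localhost:8765/metrics >> /var/log/falco_metrics.log
```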
I'd very much prefer not to do the delta calculation in falco (i.e. do it the […] way). For memory, I'd probably only return the raw values too (rss_kb and vsize_kb would be the two obvious ones).

tl;dr: I'd rather expose the absolute minimum of stats as a prometheus exporter and do the fancy math in promql (but I am all for adding new falco-specific metrics, e.g. in-kernel cpu overhead if we can determine it)
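To make "raw values only" concrete, here is a minimal sketch that reads the process's own CPU time and memory straight from procfs and prints them without any delta math, leaving rate calculations to the consumer. Field positions follow proc(5); the output names are illustrative, not an existing Falco schema:

```cpp
// Read raw self-metrics from /proc/self/stat: utime (field 14),
// stime (15), vsize (23), rss (24). Note: field 2 (comm) can contain
// spaces; real code should skip past the closing ')' first.
#include <fstream>
#include <iostream>
#include <string>
#include <unistd.h>

int main()
{
    std::ifstream stat("/proc/self/stat");
    std::string token;
    unsigned long utime = 0, stime = 0, vsize = 0;
    long rss_pages = 0;

    for (int field = 1; stat >> token; ++field)
    {
        if (field == 14) utime = std::stoul(token);
        else if (field == 15) stime = std::stoul(token);
        else if (field == 23) vsize = std::stoul(token);
        else if (field == 24) { rss_pages = std::stol(token); break; }
    }

    long ticks = sysconf(_SC_CLK_TCK);            // clock ticks per second
    long page_kb = sysconf(_SC_PAGESIZE) / 1024;  // page size in KiB

    std::cout << "cpu_user_sec " << static_cast<double>(utime) / ticks << "\n"
              << "cpu_sys_sec "  << static_cast<double>(stime) / ticks << "\n"
              << "vsize_kb "     << vsize / 1024 << "\n"
              << "rss_kb "       << rss_pages * page_kb << "\n";
    return 0;
}
```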
@gnosek this is excellent feedback and exactly what I was looking for to get a more concrete idea of what could be reasonable, meaning something that strikes a balance between having these metrics "natively supported" and not going crazy on the host either :). Couldn't agree more: doing the remainder of the calculation in promql, or in the SQL-like engine available in your post-processing compute platform (in case it's not prometheus), is cheap either way. Re […]

An additional question (more forward-leaning, for a v2 or v3 of such metrics once we know whether v1 is useful): as eBPF evolves, there could be interesting ways to monitor eBPF perf better, such as measuring the average time spent in each bpf program and trying to optimize for numbers reflecting "faster" as optimizations are added. At the moment bpftool is not too granular, meaning no tail-call resolution of stats, but hopefully the tool evolves too :) Could anyone think of ways we could bundle or master […]
Ah, and maybe some additional insight into the motivation that drives the "native support" of the CPU and memory performance metrics parts of this feature request could be useful for anyone reading this: for example, we are fortunate enough to have large infrastructure and SRE teams and already have proper metrics in place over prometheus. In practice, the major overhead appears to be sustaining the different data pipelines or intermediary brokers needed to forward performance metrics to wherever you (the person who deploys and maintains Falco in production) would like this data to be available or preserved for custom correlations (-> to get to the bottom of the perf overhead <-> detection capabilities tradeoff). I'm aware that this observation is based on the experience of maintaining large deployments in custom ecosystems, where a simple unified approach could be a relief, so it may not apply to everyone.
Thanks for the kind words @incertum :)
bpftool is a cli wrapper around libbpf, which IIRC we already bundle in libs and the underlying machinery seems to be fairly simple anyway:
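The snippet isn't preserved above, but for reference, a hedged sketch of that machinery: the same per-program counters bpftool reads, fetched directly via libbpf. This assumes kernel >= 5.1, root privileges, and `sysctl kernel.bpf_stats_enabled=1`; it is an illustration, not Falco code:

```cpp
// Iterate all loaded eBPF programs and print invocation counts and
// average runtime, i.e. the run_time_ns / run_cnt counters that
// `bpftool prog` shows. Build with: g++ bpf_stats.cpp -lbpf
#include <bpf/bpf.h>
#include <linux/bpf.h>
#include <cstdio>
#include <unistd.h>

int main()
{
    __u32 id = 0;
    while (bpf_prog_get_next_id(id, &id) == 0)  // walk all program IDs
    {
        int fd = bpf_prog_get_fd_by_id(id);
        if (fd < 0)
            continue;

        bpf_prog_info info = {};
        __u32 len = sizeof(info);
        if (bpf_obj_get_info_by_fd(fd, &info, &len) == 0 && info.run_cnt > 0)
        {
            printf("%-16s run_cnt=%llu avg_ns=%llu\n", info.name,
                   static_cast<unsigned long long>(info.run_cnt),
                   static_cast<unsigned long long>(info.run_time_ns / info.run_cnt));
        }
        close(fd);
    }
    return 0;
}
```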
To get more insight from these stats we'd need to split the one huge eBPF tracepoint into per-event ones (or come up with some meta-instrumentation for the eBPF probe; the in-kernel stats are pretty basic anyway).
Sure, but this feels like dragging falco into the guerrilla warfare between you and your SREs ;) If you already have an officially blessed prom exporter deployed by SREs, you can scrape it and correlate the data between it and falco, or maybe you can deploy a lightweight exporter to gather the generic stats yourself. I'm wary of implementing everything inside falco, since sooner or later it will start competing with systemd ;)
🤯 re the details around […]: you will probably laugh at hearing this, but I currently have a nice hack and "exfiltrate" those bpftool metrics every hour; by exfiltrating I mean I use some bash tricks to make the numbers appear in syscall-related data fields I can export over Falco rules, lol, just so I have only one data pipeline to worry about.

Re the CPU and memory stats metrics 🙃 yeah, it's not an easy overall story when looking at it from an ecosystem / diverse-deployments point of view ... if it's ok to export 4 more raw numbers over the new stats event, then why not? Let's think more about it and discuss further. If we need to make a tradeoff, the specialized metrics you can't easily get via alternatives should take precedence.
I'm not laughing, you have my sympathy :) At the same time, I'm not sure cron and a system stats collector are core falco features ;) As I have finally noticed in your initial comment, you want an event with these stats. In an ideal world, the non-falco stats could be provided by a plugin. Since every engine can (and will) have its own stats, we'd probably need arbitrary k/v event data (I'm mildly reluctant to just shove JSON in there), and this could be extended with plugins to measure anything you need; a sketch of that shape follows below. So, thinking somewhat longer term (not sure what timeline you have in mind), we would: […]
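To make the "arbitrary k/v event data" idea concrete, a hedged sketch of what such a stats event could carry; all names here are hypothetical, not an existing libs API:

```cpp
// Hypothetical shape for a stats snapshot event: a flat list of typed
// name/value pairs that any engine or plugin can fill, avoiding both a
// fixed per-engine struct and raw JSON blobs inside the event.
#include <cstdint>
#include <string>
#include <variant>
#include <vector>

struct stats_kv
{
    std::string key;                             // e.g. "syscall.n_evts"
    std::variant<uint64_t, double, std::string> value;
};

struct stats_event
{
    uint64_t ts_ns;                              // snapshot timestamp
    std::string source;                          // "syscall", "k8s_audit", a plugin name...
    std::vector<stats_kv> entries;               // engine/plugin-specific counters
};
```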
The extra events should probably be injected in sinsp, not scap, since scap_next is fairly limited in scope while sinsp::next already handles everything, including kitchen sinks ;) (having multiple engines in one handle is up there on the list of things I'd like to see in libs, along with LSM hooks)
;) I like those suggestions better than what I initially thought. Re timeline: I don't think an intermediary workaround is worth it just to have something faster, so let's rather do it the proper way and maybe aim for 0.35? Does it sound like a good plan to focus on extending the "concept of stats in falco and libs", and more specifically the core stats and bpf per-tracepoint overhead numbers, in the near term, and to push the other generic stats, like CPU and memory, to the longer term as a plugin option? CC @jasondellaluce. Re the cleaner stats interface: any more thoughts? As @gnosek pointed out, lots of ingredients like the event counter or the drop counter are kind of already there, but not exposed for export at a regular interval; e.g. if you don't have any drops, you don't know about n_evts at all. At minimum, being able to regularly reconstruct the event rate and correlate it with CPU overhead would be a big win. And agreed, it makes more sense to inject the extra events in sinsp.
Nice, big support for this. Is there an existing issue with an outline, or could more details be shared elsewhere (I would be interested)? As the chats about the LSM hooks are becoming more concrete now as well, it would be nice to have concrete overhead numbers rather than needing to rely on reputation. I am one of the folks who wants all the nice and correct data/features, but I am also constantly fighting against overhead budgeting constraints.
Yes, IMHO. Since we're interested in engine (bpf et al.) stats, this has to live in libscap and then the upper layers (sinsp, falco) would build on top.
Not that I'm aware of. For a fully generic solution there would be issues with e.g. the process table (each engine can supply its own, and we'd have to 1. make sense of it and ideally 2. not duplicate work if e.g. both engines scan /proc). The easy way out is to run multiple (sinsp) inspectors in parallel, but that doesn't help this particular use case. My never-ending patch series is slowly evolving to the point where scap wouldn't need to care about system state at all (it would just be an event pipe; all state would be managed by sinsp), so maybe it will be easier then.
ACK. Yes, it seems that as the project evolves, scap should indeed be reduced to a pipe. I would be very supportive of that, as it will make many future contributions that attempt to make the tool even more "intelligent" easier. Worth the refactoring trouble, I would say.
Coming late to the party, but I'm supportive of all the discussion above. Let's set a milestone for this so we don't lose track of the conversation, and eventually move it to the next closest release in which the first changes can fit. /milestone 0.34.0
Any updates on this? 🤔
Agree @leogr, let's try to prioritize this, as it seems to have become more relevant in the past few weeks. I have some cycles and can start today to get a PR open by the beginning of next week; hopefully we can collectively make some good progress before the holiday break 🙃
/remove-milestone 0.34.0
See #2333 (comment), explore:
Created a public HackMD document for additional discussions / clarifications around actual implementation details.
Additional comments around syscall counters were added in PR #2361 (comment).
Updates:
Everything will likely get refactored a bit more under the hood after landing a v1 of these new metrics for Falco 0.35.
For the most part relevant […]
Pushed to the existing Falco PR #2333 integrating the current new stats v2 metrics.
I have reduced bandwidth at the moment, so I'll try my best to fit this in for 0.35, but shortly after the next release would be the next target in case I don't make it in time.
Updated the initial comment: #2222 (comment). Closing this issue as the task is completed. Syscall counters and the prometheus exporter option planned for Falco 0.36 will be tracked in new issues as part of the new roadmap planning. Thanks everyone for the valuable input and help ❤️ !!!
Motivation
Support for "Falco native resource utilization metrics" is high up on the wish list of SREs I have the pleasure to work with.
While many end users sustaining large deployments already pull such metrics from their systems using other mechanisms, there is always some loss of information. In addition, it can be cumbersome to join information from different sources, and specialized metrics are typically not supported.
Falco could very easily emit basic aggregated resource utilization metrics scheduled on a cron tab (or a simpler alternative similar to existing methods). The additional overhead should be low, since many metrics are already available.
Finally, it would make it easier to perform ad-hoc performance studies, especially as it seems LSM hooks are favored as additional event sources for the next Falco iterations. That way, both tool developers and end users can better optimize the tool for threat detection use cases and derive SLOs (Service Level Objectives) that can serve as the basis for resource overhead budgeting. Such logs with more specialized metrics can also help disentangle factors that cause higher resource utilization but are outside of the tool developer's influence, such as hardware, kernel version, or the actual workload footprint (event rate).
Feature
[Edited May 23, 2023]:
Consider these key points about the new `metrics` feature in Falco: […]

Additional highlights: `libbpf` stats.

Navigate to the `metrics` key in https://github.com/falcosecurity/falco/blob/master/falco.yaml.

Example metrics snapshot schema for the Falco 0.35 release using the `bpf` driver: […]

Alternatives
End users can continue using their own mechanisms to pull Falco's resource utilization metrics or specialized metrics.