bugfix: matchBinaries in multiple selectors #774
Conversation
Force-pushed from 5d9b23d to ee110ae.
Nice catch, thanks!
I have not looked closely at the patch (will do so soon) but wanted to ask something first. The map seems to be per-kprobe hook, is this correct? So even if we had something like the following, things would work correctly with the approach in this patch.
Yes, that's correct. I have just checked the tracing policy that you provided and it seems to work as expected. That tracing policy works even without this patch; this patch fixes the issue where we have multiple selectors inside a single kprobe.
```diff
 op, err := SelectorOp(b.Operator)
 if err != nil {
 	return fmt.Errorf("matchBinary error: %w", err)
 }
 if op != SelectorOpIn && op != SelectorOpNotIn {
 	return fmt.Errorf("matchBinary error: Only In and NotIn operators are supported")
 }
-k.SetBinaryOp(op)
+k.SetBinaryOp(selIdx, op)
 for _, s := range b.Values {
 	if len(s) > 255 {
```
Do we still need this check? I still need to check the rest of the change, but the binary name does not go to the kernel now, right?
You mean the len(s) > 255 check, right? The names go into names_map, and this length limit is due to the map key size (struct names_map_key), as shown in:
Lines 67 to 76 in e7449cf
```c
struct names_map_key {
	char path[256];
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 256); /* maximum number of binary names for all matchBinary selectors */
	__type(key, struct names_map_key);
	__type(value, __u32);
} names_map SEC(".maps");
```
So IIUC, for each 'path' in matchBinaries the user side sets up the following tables. The execve sensor then sets the 'execve_map_value->binary' value to 'idx', and the filter code in the probe finds the execve_map_value for the current process and checks it. If that's how we match the binary, it'd be great to have some simple example of what's in the maps for a given matchBinary selector ;-)
Force-pushed from f1292f5 to e40ae39.
I have updated the commit message to include an example for that.
Force-pushed from 47bc0a2 to 986f069.
Using the following tracing policy:

```yaml
kprobes:
- call: "fd_install"
  syscall: false
  args:
  - index: 0
    type: int
  - index: 1
    type: "file"
  selectors:
  - matchBinaries:
    - operator: "In"
      values:
      - "/usr/bin/cat"
    matchArgs:
    - index: 1
      operator: "Equal"
      values:
      - "/etc/passwd"
  - matchBinaries:
    - operator: "In"
      values:
      - "/usr/bin/tail"
    matchArgs:
    - index: 1
      operator: "Equal"
      values:
      - "/etc/shadow"
```

we expect to get events if:

[ binary == /usr/bin/cat AND arg1 == /etc/passwd ] OR
[ binary == /usr/bin/tail AND arg1 == /etc/shadow ]

Using the previous tracing policy and running:

/usr/bin/cat /etc/passwd && /usr/bin/cat /etc/shadow && /usr/bin/tail /etc/passwd && /usr/bin/tail /etc/shadow

we get events for all of these, which is not the expected behavior. The issue is that in #686 we use a single map for all matchBinaries selectors. That makes the previous tracing policy behave as:

[ (binary == /usr/bin/cat OR binary == /usr/bin/tail) AND arg1 == /etc/passwd ] OR
[ (binary == /usr/bin/cat OR binary == /usr/bin/tail) AND arg1 == /etc/shadow ]

This patch fixes that issue. We move from a BPF_MAP_TYPE_HASH to a BPF_MAP_TYPE_HASH_OF_MAPS. We index the outer map based on the selector ID, and the inner map is exactly the same as the one we used in #686.

Based on the previous tracing policy, the user side will generate the names_map and sel_names_map. names_map is shared among all selectors that have matchBinaries. Each distinct binary name will appear in names_map with a unique value per entry (> 0). The contents of names_map for the previous tracing policy will be:

names_map["/usr/bin/cat"] = 1
names_map["/usr/bin/tail"] = 2

Whenever we have an execve event, we check the execve binary name; if it is inside names_map, we set execve_map_value->binary to the value of that entry. If we cannot find that entry inside names_map, we set execve_map_value->binary to 0, which means that this binary name appears nowhere in any matchBinaries.

Based on that we also generate the sel_names_map. These are one per sensor. This is a BPF_MAP_TYPE_HASH_OF_MAPS map where we index the outer map using the selector ID and the inner map using execve_map_value->binary. If that entry exists, then matchBinary is matched. The contents of sel_names_map based on the previous tracing policy will be:

sel_names_map[0 /* sel_index */] = { hash_map[1 /* value of cat in names_map */] = 1 /* always 1 for now */ }
sel_names_map[1 /* sel_index */] = { hash_map[2 /* value of tail in names_map */] = 1 /* always 1 for now */ }

Finally, if sel_names_map[sel_idx] is NULL, this means that the specific selector does not contain a matchBinaries action. If sel_names_map[sel_idx].hash_map[idx] is NULL, this means that we don't care about the specific binary.

Signed-off-by: Anastasios Papagiannis <tasos.papagiannnis@gmail.com>
Force-pushed from 986f069 to 735ed5d.