
Improvements for tracking scalars in the BPF verifier #6182

Closed

Conversation

kernel-patches-daemon-bpf[bot]

Pull request for series with
subject: Improvements for tracking scalars in the BPF verifier
version: 1
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932

@kernel-patches-daemon-bpf (Author)

Upstream branch: 9299924
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932
version: 1

@kernel-patches-daemon-bpf (Author)

Upstream branch: a4897b8
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932
version: 1

@kernel-patches-daemon-bpf (Author)

Upstream branch: 929154a
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932
version: 1

@kernel-patches-daemon-bpf (Author)

Upstream branch: b08c8fc
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932
version: 1

@kernel-patches-daemon-bpf (Author)

Upstream branch: 5abde62
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932
version: 1

@kernel-patches-daemon-bpf (Author)

Upstream branch: 9c9d9f6
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932
version: 1

@kernel-patches-daemon-bpf (Author)

Upstream branch: 2ab1efa
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932
version: 1

@kernel-patches-daemon-bpf (Author)

Upstream branch: a640de4
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932
version: 1

@kernel-patches-daemon-bpf (Author)

Upstream branch: b456005
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932
version: 1

gentoo-root and others added 15 commits January 3, 2024 13:53
The u64_offset_to_skb_data test is supposed to make a 64-bit fill, but
instead makes a 16-bit one. Fix the test according to its intention. The
16-bit fill is covered by u16_offset_to_skb_data.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
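
For reference, the intended pattern versus what the test was doing (an
illustrative sketch in pseudo-asm; the exact registers and offsets in
the selftest may differ):

        r4 = ... skb->data ...
        *(u64 *)(r10 - 8) = r4;        /* 64-bit spill */
        r4 = *(u16 *)(r10 - 8);        /* what the test did */
        r4 = *(u64 *)(r10 - 8);        /* what was intended */
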
The current infinite loop detection mechanism is speculative:
- first, a states_maybe_looping() check is done, which simply memcmp()s
  R1-R10 in the current frame;
- second, states_equal(..., exact=false) is called. With exact=false,
  states_equal() compares scalars for equality only if the scalar in
  the old state has a precision mark.

Such logic might be problematic if the compiler makes some unlucky
stack spill/fill decisions. An artificial example of a false positive
looks as follows:

        r0 = ... unknown scalar ...
        r0 &= 0xff;
        *(u64 *)(r10 - 8) = r0;
        r0 = 0;
    loop:
        r0 = *(u64 *)(r10 - 8);
        if r0 > 10 goto exit_;
        r0 += 1;
        *(u64 *)(r10 - 8) = r0;
        r0 = 0;
        goto loop;

Here the loop counter lives in the slot at r10 - 8, which carries no
precision mark, so states_equal(..., exact=false) considers successive
iterations identical and falsely reports an infinite loop.

This commit updates the call to states_equal() to use exact=true,
forcing all scalar comparisons to be exact.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
…detection

Verify that the infinite loop detection logic separates states with
identical register states but different imprecise scalars spilled to
the stack.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Adjust the check in bpf_get_spilled_reg to take into account spilled
registers narrower than 64 bits. That allows find_equal_scalars to
properly adjust the range of all spilled registers that have the same
ID. Before this change, it was possible for a register and a spilled
register to have the same IDs but different ranges if the spill was
narrower than 64 bits and a range check was performed on the register.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
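
A sketch of the pattern this fixes (illustrative pseudo-asm, not the
exact selftest code):

        r0 = ... unknown scalar ...
        *(u32 *)(r10 - 8) = r0;        /* 32-bit spill, same ID as r0 */
        if r0 > 10 goto exit;          /* range check on the register */
        r1 = *(u32 *)(r10 - 8);        /* the fill must now see the
                                          same [0; 10] range as r0 */
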
When a range check is performed on a register that was 32-bit spilled to
the stack, the IDs of the two instances of the register are the same, so
the range should also be the same.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Extract the common code that generates a register ID for src_reg
before a MOV (if one is needed) into a new function. This function
will also be used in a following commit.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
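
The pattern in question, schematically (illustrative pseudo-asm):

        r1 = ... unknown scalar ...    /* r1 has no ID yet */
        r2 = r1;                       /* an ID is assigned to r1 first,
                                          then copied to r2 */
        if r1 > 10 goto exit;          /* the learned range propagates
                                          to r2 via the shared ID */
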
Put the calculation of the register value width into a dedicated
function. This function will also be used in a following commit.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Currently, when a bounded scalar register is spilled to the stack, its
ID is preserved, but only if it was already assigned, i.e. if this
register was MOVed before.

Assign an ID on spill if none is set, so that equal scalars can be
tracked if a register is spilled to the stack and filled into another
register.

One test is adjusted to reflect the change in register IDs.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
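
Schematically, the newly supported flow (illustrative pseudo-asm):

        r0 = ... unknown scalar ...
        r0 &= 0xff;                    /* bounded, but still no ID */
        *(u64 *)(r10 - 8) = r0;        /* the spill now assigns an ID */
        r1 = *(u64 *)(r10 - 8);        /* r1 shares that ID */
        if r0 > 10 goto exit;          /* r1's range narrows as well */
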
The previous commit implemented assigning IDs to registers holding
scalars before spill. Add the test cases to check the new functionality.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Support the pattern where an unbounded scalar is spilled to the stack,
then boundary checks are performed on the src register, after which the
stack frame slot is refilled into a register.

Before this commit, the verifier didn't treat the src register and the
stack slot as related if the src register was an unbounded scalar. The
register state wasn't copied, the id wasn't preserved, and the stack
slot was marked as STACK_MISC. Subsequent boundary checks on the src
register wouldn't result in updating the boundaries of the spilled
variable on the stack.

After this commit, the verifier will preserve the bond between src and
dst even if src is unbounded, which permits doing boundary checks on
src and refilling dst later, still remembering its boundaries. Such a
pattern is sometimes generated by clang when compiling complex long
functions.

One test is adjusted to reflect the fact that an untracked register is
marked as precise at an earlier stage, and one more test is adjusted
to reflect that unbounded scalars are now tracked.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
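
The supported pattern, schematically (illustrative pseudo-asm):

        r0 = ... unbounded scalar ...
        *(u64 *)(r10 - 8) = r0;        /* state copied, ID preserved:
                                          no longer STACK_MISC */
        if r0 > 100 goto exit;         /* bounds learned on src ... */
        r1 = *(u64 *)(r10 - 8);        /* ... survive the refill: r1
                                          is known to be in [0; 100] */
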
The previous commit added tracking for unbounded scalars on spill. Add
the test case to check the new functionality.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
When the width of a fill is smaller than the width of the preceding
spill, the information about scalar boundaries can still be preserved,
as long as it's coerced to the right width (done by
coerce_reg_to_size). Furthermore, if the actual value fits into the
fill width, the ID can be preserved as well for further tracking of
equal scalars.

Implement the above improvements, which make narrowing fills behave
the same as narrowing spills and MOVs between registers.

Two tests are adjusted to accommodate endianness differences and to
take into account that it's now allowed to do a narrowing fill from
the least significant bits.

reg_bounds_sync is added to coerce_reg_to_size to correctly adjust
umin/umax boundaries after the var_off truncation. For example, a
64-bit value 0xXXXXXXXX00000000 with var_off = (0x0;
0xffffffff00000000), when read as a 32-bit value, gets umin = 0 and
umax = 0xFFFFFFFF from the coercion, while the truncated var_off says
the value is known to be zero; the bounds need to be synced down to
umax = 0, otherwise reg_bounds_sanity_check doesn't pass.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
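
Schematically (illustrative pseudo-asm, little-endian):

        r0 = ... unknown scalar ...
        r0 &= 0xff;                    /* the value fits into 8 bits */
        *(u64 *)(r10 - 8) = r0;        /* 64-bit spill */
        r1 = *(u32 *)(r10 - 8);        /* narrowing fill: bounds are
                                          coerced to 32 bits, and since
                                          the value fits, the ID is
                                          preserved as well */
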
The previous commit made it possible to preserve boundaries and track
IDs of scalars on narrowing fills. Add test cases for that pattern.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
The changes for scalar ID tracking of spilled unbound scalars lead to
a certain verification performance regression. This commit mitigates
the regression by exploiting the following properties maintained by
check_stack_read_fixed_off():
- a mix of STACK_MISC, STACK_ZERO and STACK_INVALID marks is read as
  an unbounded scalar register;
- an spi with all slots marked STACK_ZERO is read as a scalar register
  with the value zero.

This commit modifies stacksafe() to consider the situations above
equivalent.

Veristat results after this patch show significant gains:

$ ./veristat -e file,prog,states -f '!states_pct<10' -f '!states_b<10' -C not-opt after
File              Program   States (A)  States (B)  States    (DIFF)
----------------  --------  ----------  ----------  ----------------
pyperf180.bpf.o   on_event       10456        8422   -2034 (-19.45%)
pyperf600.bpf.o   on_event       37319       22519  -14800 (-39.66%)
strobemeta.bpf.o  on_event       13435        4703   -8732 (-64.99%)

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
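
Schematically, stacksafe() can now prune across states like these
(illustrative pseudo-asm; the STACK_* marks are the verifier's
internal view of the slot):

        /* old state: only half of the slot was written */
        *(u32 *)(r10 - 8) = r1;        /* 4 x STACK_MISC +
                                          4 x STACK_INVALID */
        /* cur state: full spill of an unbound scalar */
        *(u64 *)(r10 - 8) = r2;
        /* either slot is read back as an unbound scalar */
        r3 = *(u64 *)(r10 - 8);
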
Check that stacksafe() considers the following old vs cur stack spill
state combinations equivalent:
- spill of unbound scalar vs combination of STACK_{MISC,ZERO,INVALID}
- STACK_MISC vs spill of unbound scalar
- spill of scalar 0 vs STACK_ZERO
- STACK_ZERO vs spill of scalar 0

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
@kernel-patches-daemon-bpf (Author)

Upstream branch: b456005
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=811932
version: 1

@kernel-patches-daemon-bpf (Author)

At least one diff in series https://patchwork.kernel.org/project/netdevbpf/list/?series=811932 irrelevant now for [Munch({'archived': False, 'project': 399, 'delegate': 121173})] search patterns

kernel-patches-daemon-bpf (bot) deleted the series/811932=>bpf-next branch on January 6, 2024 at 05:59