Zero overhead PROBE_MEM #7227
Conversation
Currently, on x86 with SMAP enabled, when a page fault occurs in kernel mode for an access to a user address, the kernel rightly panics: no valid kernel code can cause such a fault (unless buggy).

BPF programs that may encounter user addresses when performing PROBE_MEM loads (load instructions which are allowed to read any kernel address, available only to root users) avoid such page faults by bounds checking the address. This requires the JIT to emit a conditional jump over each PROBE_MEM load instruction. We would prefer to avoid these jump instructions to improve the performance of programs that use PROBE_MEM loads pervasively.

For correct behavior, programs already rely on the kernel addresses being valid while they execute, but BPF's safety properties must still ensure kernel safety in the presence of invalid addresses. For correct programs, the bounds checking is therefore a pure cost paid to ensure kernel safety. If the do_user_addr_fault handler could instead perform fixups for the BPF program in such a case, the bounds checking could be eliminated and the load instruction emitted directly, without any checking.

Thus, when SMAP is enabled (which means the kernel traps on accessing a user address) and the faulting instruction pointer belongs to a BPF program, perform the fixup for the access by searching the exception tables. All BPF programs already execute with SMAP protection. When SMAP is not enabled, the BPF JIT continues to emit bounds checking instructions.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
The previous patch changed the do_user_addr_fault page fault handler to invoke BPF's fixup routines (by searching the exception tables and calling ex_handler_bpf). This only occurs when SMAP is enabled, so any user address access from BPF programs running in kernel mode reaches this path and invokes the fixup routines.

Relying on this behavior, disable the bounds checking instrumentation in the x86 BPF JIT when X86_FEATURE_SMAP is available. All BPF programs execute with SMAP enabled, so when this feature is available we can assume SMAP will be active during program execution at runtime. This optimizes PROBE_MEM loads down to a plain, unchecked load instruction. Any page faults for user or kernel addresses are handled by the fixup routines and the exception table entries generated for such load instructions.

All in all, this ensures that PROBE_MEM loads now incur no runtime overhead, and become practically free.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
At least one diff in series https://patchwork.kernel.org/project/netdevbpf/list/?series=863355 is no longer relevant for the tracked search patterns.
Pull request for series with
subject: Zero overhead PROBE_MEM
version: 2
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=863355