Commits on Feb 4, 2022

  1. bpf: Convert bpf_preload.ko to use light skeleton.

    The main change is a move of the single line
      #include "iterators.lskel.h"
    from iterators/iterators.c to bpf_preload_kern.c.
    This means the generated light skeleton can be used from user space, from
    a user mode driver like iterators.c, or from a kernel module.
    The direct use of the light skeleton from the kernel module simplifies the
    code, since the UMD is no longer necessary. libbpf.a can only run in user
    space, hence the UMD. CO-RE in the kernel and the generated "loader bpf
    program" used by the light skeleton are capable of performing the complex
    loading operations traditionally provided by libbpf. In addition, the UMD
    approach launched a UMD process every time bpffs had to be mounted. With
    the light skeleton in the kernel, the bpf_preload kernel module loads the
    bpf iterators once and pins them multiple times into different bpffs
    mounts.
    
    Note that the light skeleton cannot be used during early boot or from a
    kthread, since the light skeleton needs a valid mm. This limitation could
    be lifted in the future.
    
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov authored and intel-lab-lkp committed Feb 4, 2022
  2. bpf: Update iterators.lskel.h.

    Light skeleton and skel_internal.h have changed.
    Update iterators.lskel.h.
    
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov authored and intel-lab-lkp committed Feb 4, 2022
  3. bpftool: Generalize light skeleton generation.

    Generalize the light skeleton by hiding mmap details in skel_internal.h.
    In this form the generated lskel.h is usable both by user space and by
    the kernel.
    
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov authored and intel-lab-lkp committed Feb 4, 2022
  4. libbpf: Prepare light skeleton for the kernel.

    Prepare the light skeleton to be used in kernel modules as well as in
    user space.
    The look and feel of lskel.h is mostly the same, with one difference: in
    user space, skel->rodata is the same pointer before and after the
    skel_load operation, while in the kernel, skel->rodata after skel_open
    and skel->rodata after skel_load are different pointers.
    Typical usage of skeleton remains the same for kernel and user space:
    skel = my_bpf__open();
    skel->rodata->my_global_var = init_val;
    err = my_bpf__load(skel);
    err = my_bpf__attach(skel);
    // access skel->rodata->my_global_var;
    // access skel->bss->another_var;
    
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov authored and intel-lab-lkp committed Feb 4, 2022
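The differing rodata-pointer semantics can be sketched in plain C; everything below (struct names, the staging-buffer mechanics) is an illustrative stand-in for the real lskel machinery, not its actual implementation:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical mini-skeleton illustrating the kernel-mode semantics:
 * skel->rodata points at a staging buffer after "open" and at a
 * different (final) buffer after "load". Names are illustrative. */
struct my_rodata { int my_global_var; };

struct my_skel {
	struct my_rodata *rodata;  /* always access through this pointer */
	struct my_rodata staging;  /* filled in between open and load */
};

static struct my_skel *my_bpf__open(void)
{
	struct my_skel *skel = calloc(1, sizeof(*skel));

	if (skel)
		skel->rodata = &skel->staging; /* pre-load staging area */
	return skel;
}

static int my_bpf__load(struct my_skel *skel)
{
	/* "kernel mode": loading places .rodata into a fresh buffer,
	 * so the pointer changes and must be re-read by the caller */
	struct my_rodata *final_ro = malloc(sizeof(*final_ro));

	if (!final_ro)
		return -1;
	memcpy(final_ro, &skel->staging, sizeof(*final_ro));
	skel->rodata = final_ro;
	return 0;
}
```

The takeaway is that callers always go through skel->rodata rather than caching the pre-load pointer, which keeps the same source working in both environments.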
  5. bpf: Extend sys_bpf commands for bpf_syscall programs.

    bpf_syscall programs can be used directly by kernel modules
    to load programs and create maps via a kernel skeleton.
    . Export bpf_sys_bpf syscall wrapper to be used in kernel skeleton.
    . Export bpf_map_get to be used in kernel skeleton.
    . Allow prog_run cmd for bpf_syscall programs with recursion check.
    . Enable link_create and raw_tp_open cmds.
    
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov authored and intel-lab-lkp committed Feb 4, 2022
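The prog_run recursion check can be illustrated with a minimal guard; the flag and function names below are invented for this sketch (the kernel uses its own mechanism, not a global flag):

```c
/* Toy recursion guard for prog_run: refuse to re-enter while a
 * program is already running. Names are invented for this sketch. */
static int in_prog_run;

static int prog_run(int (*prog)(void))
{
	int err;

	if (in_prog_run)
		return -1; /* recursion detected, reject */
	in_prog_run = 1;
	err = prog();
	in_prog_run = 0;
	return err;
}

static int noop_prog(void) { return 0; }

/* a program that tries to invoke prog_run again from inside itself */
static int recursing_prog(void) { return prog_run(recursing_prog); }
```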
  6. selftests/bpf: Test bpf_core_types_are_compat() functionality.

    Add several tests to check bpf_core_types_are_compat() functionality:
    - candidate type name exists and types match
    - candidate type name exists but types don't match
    - nested func protos at kernel recursion limit
    - nested func protos above kernel recursion limit. Such bpf prog
      is rejected during the load.
    
    Signed-off-by: Matteo Croce <mcroce@microsoft.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220204005519.60361-3-mcroce@linux.microsoft.com
    teknoraver authored and Alexei Starovoitov committed Feb 4, 2022
  7. bpf: Implement bpf_core_types_are_compat().

    Adopt libbpf's bpf_core_types_are_compat() for kernel duty by adding
    explicit recursion limit of 2 which is enough to handle 2 levels of
    function prototypes.
    
    Signed-off-by: Matteo Croce <mcroce@microsoft.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220204005519.60361-2-mcroce@linux.microsoft.com
    teknoraver authored and Alexei Starovoitov committed Feb 4, 2022
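A depth-capped comparison of this sort can be sketched as follows; the toy types and the exact check are illustrative, and only the "limit of 2 levels of function prototypes" idea comes from the patch:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy type compatibility check with an explicit recursion cap of 2,
 * mirroring the limit described above. Structures are made up. */
#define MAX_PROTO_DEPTH 2

struct toy_type {
	int kind;                   /* 0 = scalar, 1 = func_proto */
	const struct toy_type *ret; /* for func_proto: return type */
};

static bool types_are_compat(const struct toy_type *a,
			     const struct toy_type *b, int depth)
{
	if (a->kind != b->kind)
		return false;
	if (a->kind == 1) { /* func_proto: recurse with a cap */
		if (depth >= MAX_PROTO_DEPTH)
			return false; /* nested too deeply: reject */
		return types_are_compat(a->ret, b->ret, depth + 1);
	}
	return true; /* scalars of the same kind match */
}
```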
  8. bpf, arm64: Enable kfunc call

    Since commit b2eed9b ("arm64/kernel: kaslr: reduce module
    randomization range to 2 GB"), on arm64, whether or not KASLR is
    enabled, the module is placed within 2 GB of the kernel region, so
    an s32 in bpf_kfunc_desc is sufficient to represent the offset of a
    module function relative to __bpf_call_base. The only thing needed
    is to override bpf_jit_supports_kfunc_call().
    
    Signed-off-by: Hou Tao <houtao1@huawei.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20220130092917.14544-2-hotforest@gmail.com
    Hou Tao authored and borkmann committed Feb 4, 2022
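The underlying range argument can be sketched like this (the addresses in the check below are made-up values; the real JIT computes the offset differently):

```c
#include <stdbool.h>
#include <stdint.h>

/* An offset between a module function and __bpf_call_base fits in an
 * s32 only if the two addresses are within 2 GB of each other, which
 * the reduced module randomization range guarantees on arm64. */
static bool offset_fits_s32(uint64_t func_addr, uint64_t call_base)
{
	/* unsigned subtraction wraps; the cast recovers the signed delta */
	int64_t off = (int64_t)(func_addr - call_base);

	return off >= INT32_MIN && off <= INT32_MAX;
}
```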
  9. libbpf: Deprecate forgotten btf__get_map_kv_tids()

    btf__get_map_kv_tids() is in the same group of APIs as
    btf_ext__reloc_func_info()/btf_ext__reloc_line_info(), which were only
    used by BCC. It was missed when those were marked as deprecated in [0].
    Fix that to complete [1].
    
      [0] https://patchwork.kernel.org/project/netdevbpf/patch/20220201014610.3522985-1-davemarchevsky@fb.com/
      [1] Closes: libbpf/libbpf#277
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20220203225017.1795946-1-andrii@kernel.org
    anakryiko authored and borkmann committed Feb 4, 2022

Commits on Feb 3, 2022

  1. selftests/bpf: Add a selftest for invalid func btf with btf decl_tag

    Add a selftest similar to [1], which exposed a kernel bug.
    Without the fix in the previous patch, a similar kasan error appears.
    
      [1] https://lore.kernel.org/bpf/0000000000009b6eaa05d71a8c06@google.com/
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20220203191732.742285-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Feb 3, 2022
  2. bpf: Fix a btf decl_tag bug when tagging a function

    syzbot reported a btf decl_tag bug with stack trace below:
    
      general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN
      KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
      CPU: 0 PID: 3592 Comm: syz-executor914 Not tainted 5.16.0-syzkaller-11424-gb7892f7d5cb2 #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      RIP: 0010:btf_type_vlen include/linux/btf.h:231 [inline]
      RIP: 0010:btf_decl_tag_resolve+0x83e/0xaa0 kernel/bpf/btf.c:3910
      ...
      Call Trace:
       <TASK>
       btf_resolve+0x251/0x1020 kernel/bpf/btf.c:4198
       btf_check_all_types kernel/bpf/btf.c:4239 [inline]
       btf_parse_type_sec kernel/bpf/btf.c:4280 [inline]
       btf_parse kernel/bpf/btf.c:4513 [inline]
       btf_new_fd+0x19fe/0x2370 kernel/bpf/btf.c:6047
       bpf_btf_load kernel/bpf/syscall.c:4039 [inline]
       __sys_bpf+0x1cbb/0x5970 kernel/bpf/syscall.c:4679
       __do_sys_bpf kernel/bpf/syscall.c:4738 [inline]
       __se_sys_bpf kernel/bpf/syscall.c:4736 [inline]
       __x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:4736
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae
    
    The kasan error is triggered with an illegal BTF like below:
       type 0: void
       type 1: int
       type 2: decl_tag to func type 3
       type 3: func to func_proto type 8
    The total number of types is 4, and type 3 is illegal
    since its func_proto type is out of range.
    
    Currently, the target type of a decl_tag can be a struct/union, var or
    func. Both struct/union and var implement their own 'resolve' callback
    functions and are hence handled properly in the kernel.
    But the func type doesn't have a 'resolve' callback function. When
    btf_decl_tag_resolve() tries to check a func type, it tries to get the
    vlen of its func_proto type, which triggered the above kasan error.
    
    To fix the issue, btf_decl_tag_resolve() needs to run btf_func_check()
    before trying to access the func_proto type.
    In the current implementation, func types are checked with
    btf_func_check() in the main checking function btf_check_all_types().
    To fix the above kasan issue, implement the 'resolve' callback for
    func types properly. The 'resolve' callback will then also be called
    in btf_check_all_types() for func types.
    
    Fixes: b5ea834 ("bpf: Support for new btf kind BTF_KIND_TAG")
    Reported-by: syzbot+53619be9444215e785ed@syzkaller.appspotmail.com
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20220203191727.741862-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Feb 3, 2022
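The missing validation amounts to a bounds check on a referenced type id before dereferencing it. The toy structures below only illustrate that idea, mirroring the 4-type example above; they are not the kernel's BTF layout:

```c
#include <stdbool.h>

/* Toy BTF: each type may reference another type by id. The bug was
 * the absence of exactly this kind of bounds test before following a
 * func's func_proto reference. */
struct toy_type {
	unsigned int ref_type_id; /* e.g. a func's func_proto type id */
};

struct toy_btf {
	const struct toy_type *types;
	unsigned int nr_types;
};

static bool func_ref_is_valid(const struct toy_btf *btf,
			      unsigned int func_id)
{
	if (func_id >= btf->nr_types)
		return false; /* the func id itself is out of range */
	/* the referenced func_proto must also be an in-range type */
	return btf->types[func_id].ref_type_id < btf->nr_types;
}
```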
  3. libbpf: Deprecate priv/set_priv storage

    Arbitrary storage via bpf_*__set_priv/__priv is being deprecated
    without a replacement ([1]). perf uses this capability, but most of
    that is going away with the removal of prologue generation ([2]).
    perf is already suppressing deprecation warnings, so the remaining
    cleanup will happen separately.
    
      [1]: Closes: libbpf/libbpf#294
      [2]: https://lore.kernel.org/bpf/20220123221932.537060-1-jolsa@kernel.org/
    
    Signed-off-by: Delyan Kratunov <delyank@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220203180032.1921580-1-delyank@fb.com
    BurntBrunch authored and anakryiko committed Feb 3, 2022
  4. bpf: test_run: Fix OOB access in bpf_prog_test_run_xdp

    Fix the following kasan issue reported by syzbot:
    
    BUG: KASAN: slab-out-of-bounds in __skb_frag_set_page include/linux/skbuff.h:3242 [inline]
    BUG: KASAN: slab-out-of-bounds in bpf_prog_test_run_xdp+0x10ac/0x1150 net/bpf/test_run.c:972
    Write of size 8 at addr ffff888048c75000 by task syz-executor.5/23405
    
    CPU: 1 PID: 23405 Comm: syz-executor.5 Not tainted 5.16.0-syzkaller #0
    Hardware name: Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
    Call Trace:
     <TASK>
     __dump_stack lib/dump_stack.c:88 [inline]
     dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
     print_address_description.constprop.0.cold+0x8d/0x336 mm/kasan/report.c:255
     __kasan_report mm/kasan/report.c:442 [inline]
     kasan_report.cold+0x83/0xdf mm/kasan/report.c:459
     __skb_frag_set_page include/linux/skbuff.h:3242 [inline]
     bpf_prog_test_run_xdp+0x10ac/0x1150 net/bpf/test_run.c:972
     bpf_prog_test_run kernel/bpf/syscall.c:3356 [inline]
     __sys_bpf+0x1858/0x59a0 kernel/bpf/syscall.c:4658
     __do_sys_bpf kernel/bpf/syscall.c:4744 [inline]
     __se_sys_bpf kernel/bpf/syscall.c:4742 [inline]
     __x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:4742
     do_syscall_x64 arch/x86/entry/common.c:50 [inline]
     do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
     entry_SYSCALL_64_after_hwframe+0x44/0xae
    RIP: 0033:0x7f4ea30dd059
    RSP: 002b:00007f4ea1a52168 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
    RAX: ffffffffffffffda RBX: 00007f4ea31eff60 RCX: 00007f4ea30dd059
    RDX: 0000000000000048 RSI: 0000000020000000 RDI: 000000000000000a
    RBP: 00007f4ea313708d R08: 0000000000000000 R09: 0000000000000000
    R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
    R13: 00007ffc8367c5af R14: 00007f4ea1a52300 R15: 0000000000022000
     </TASK>
    
    Allocated by task 23405:
     kasan_save_stack+0x1e/0x50 mm/kasan/common.c:38
     kasan_set_track mm/kasan/common.c:46 [inline]
     set_alloc_info mm/kasan/common.c:437 [inline]
     ____kasan_kmalloc mm/kasan/common.c:516 [inline]
     ____kasan_kmalloc mm/kasan/common.c:475 [inline]
     __kasan_kmalloc+0xa9/0xd0 mm/kasan/common.c:525
     kmalloc include/linux/slab.h:586 [inline]
     kzalloc include/linux/slab.h:715 [inline]
     bpf_test_init.isra.0+0x9f/0x150 net/bpf/test_run.c:411
     bpf_prog_test_run_xdp+0x2f8/0x1150 net/bpf/test_run.c:941
     bpf_prog_test_run kernel/bpf/syscall.c:3356 [inline]
     __sys_bpf+0x1858/0x59a0 kernel/bpf/syscall.c:4658
     __do_sys_bpf kernel/bpf/syscall.c:4744 [inline]
     __se_sys_bpf kernel/bpf/syscall.c:4742 [inline]
     __x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:4742
     do_syscall_x64 arch/x86/entry/common.c:50 [inline]
     do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
     entry_SYSCALL_64_after_hwframe+0x44/0xae
    
    The buggy address belongs to the object at ffff888048c74000
     which belongs to the cache kmalloc-4k of size 4096
    The buggy address is located 0 bytes to the right of
     4096-byte region [ffff888048c74000, ffff888048c75000)
    The buggy address belongs to the page:
    page:ffffea0001231c00 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x48c70
    head:ffffea0001231c00 order:3 compound_mapcount:0 compound_pincount:0
    flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
    raw: 00fff00000010200 dead000000000100 dead000000000122 ffff888010c42140
    raw: 0000000000000000 0000000080040004 00000001ffffffff 0000000000000000
    page dumped because: kasan: bad access detected
    page_owner tracks the page as allocated
     prep_new_page mm/page_alloc.c:2434 [inline]
     get_page_from_freelist+0xa72/0x2f50 mm/page_alloc.c:4165
     __alloc_pages+0x1b2/0x500 mm/page_alloc.c:5389
     alloc_pages+0x1aa/0x310 mm/mempolicy.c:2271
     alloc_slab_page mm/slub.c:1799 [inline]
     allocate_slab mm/slub.c:1944 [inline]
     new_slab+0x28a/0x3b0 mm/slub.c:2004
     ___slab_alloc+0x87c/0xe90 mm/slub.c:3018
     __slab_alloc.constprop.0+0x4d/0xa0 mm/slub.c:3105
     slab_alloc_node mm/slub.c:3196 [inline]
     __kmalloc_node_track_caller+0x2cb/0x360 mm/slub.c:4957
     kmalloc_reserve net/core/skbuff.c:354 [inline]
     __alloc_skb+0xde/0x340 net/core/skbuff.c:426
     alloc_skb include/linux/skbuff.h:1159 [inline]
     nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:745 [inline]
     nsim_dev_trap_report drivers/net/netdevsim/dev.c:802 [inline]
     nsim_dev_trap_report_work+0x29a/0xbc0 drivers/net/netdevsim/dev.c:843
     process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
     worker_thread+0x657/0x1110 kernel/workqueue.c:2454
     kthread+0x2e9/0x3a0 kernel/kthread.c:377
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
    page last free stack trace:
     reset_page_owner include/linux/page_owner.h:24 [inline]
     free_pages_prepare mm/page_alloc.c:1352 [inline]
     free_pcp_prepare+0x374/0x870 mm/page_alloc.c:1404
     free_unref_page_prepare mm/page_alloc.c:3325 [inline]
     free_unref_page+0x19/0x690 mm/page_alloc.c:3404
     qlink_free mm/kasan/quarantine.c:157 [inline]
     qlist_free_all+0x6d/0x160 mm/kasan/quarantine.c:176
     kasan_quarantine_reduce+0x180/0x200 mm/kasan/quarantine.c:283
     __kasan_slab_alloc+0xa2/0xc0 mm/kasan/common.c:447
     kasan_slab_alloc include/linux/kasan.h:260 [inline]
     slab_post_alloc_hook mm/slab.h:732 [inline]
     slab_alloc_node mm/slub.c:3230 [inline]
     slab_alloc mm/slub.c:3238 [inline]
     kmem_cache_alloc+0x202/0x3a0 mm/slub.c:3243
     getname_flags.part.0+0x50/0x4f0 fs/namei.c:138
     getname_flags include/linux/audit.h:323 [inline]
     getname+0x8e/0xd0 fs/namei.c:217
     do_sys_openat2+0xf5/0x4d0 fs/open.c:1208
     do_sys_open fs/open.c:1230 [inline]
     __do_sys_openat fs/open.c:1246 [inline]
     __se_sys_openat fs/open.c:1241 [inline]
     __x64_sys_openat+0x13f/0x1f0 fs/open.c:1241
     do_syscall_x64 arch/x86/entry/common.c:50 [inline]
     do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
     entry_SYSCALL_64_after_hwframe+0x44/0xae
    
    Memory state around the buggy address:
     ffff888048c74f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ffff888048c74f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
                       ^
     ffff888048c75080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
     ffff888048c75100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
    ==================================================================
    
    Fixes: 1c19499 ("bpf: introduce frags support to bpf_prog_test_run_xdp()")
    Reported-by: syzbot+6d70ca7438345077c549@syzkaller.appspotmail.com
    Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/688c26f9dd6e885e58e8e834ede3f0139bb7fa95.1643835097.git.lorenzo@kernel.org
    LorenzoBianconi authored and Alexei Starovoitov committed Feb 3, 2022
  5. bpf, docs: Better document the atomic instructions

    Use proper tables and RST markup to document the atomic instructions
    in a structured way.
    
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220131183638.3934982-6-hch@lst.de
    Christoph Hellwig authored and Alexei Starovoitov committed Feb 3, 2022
  6. bpf, docs: Better document the extended instruction format

    In addition to the normal 64-bit instruction encoding, eBPF also has
    a single instruction that uses a second 64 bits for an additional
    immediate value. Instead of only documenting this format deep down
    in the document, mention it in the instruction encoding section.
    
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220131183638.3934982-5-hch@lst.de
    Christoph Hellwig authored and Alexei Starovoitov committed Feb 3, 2022
  7. bpf, docs: Better document the legacy packet access instruction

    Use consistent terminology and structured RST elements to better document
    these two oddball instructions.
    
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220131183638.3934982-4-hch@lst.de
    Christoph Hellwig authored and Alexei Starovoitov committed Feb 3, 2022
  8. bpf, docs: Better document the regular load and store instructions

    Add a separate section and a little intro blurb for the regular load and
    store instructions.
    
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220131183638.3934982-3-hch@lst.de
    Christoph Hellwig authored and Alexei Starovoitov committed Feb 3, 2022
  9. bpf, docs: Document the byte swapping instructions

    Add a section to document the byte swapping instructions.
    
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220131183638.3934982-2-hch@lst.de
    Christoph Hellwig authored and Alexei Starovoitov committed Feb 3, 2022
  10. Merge branch 'bpf-libbpf-deprecated-cleanup'

    Andrii Nakryiko says:
    
    ====================
    Clean up remaining missed uses of deprecated libbpf APIs across samples/bpf,
    selftests/bpf, libbpf, and bpftool.
    
    Also fix uninit variable warning in bpftool.
    ====================
    
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    borkmann committed Feb 3, 2022
  11. samples/bpf: Get rid of bpf_prog_load_xattr() use

    Remove all the remaining uses of deprecated bpf_prog_load_xattr() API.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20220202225916.3313522-7-andrii@kernel.org
    anakryiko authored and borkmann committed Feb 3, 2022
  12. selftests/bpf: Redo the switch to new libbpf XDP APIs

    Switch to using new bpf_xdp_*() APIs across all selftests. Take
    advantage of a more straightforward and user-friendly semantics of
    old_prog_fd (0 means "don't care") in few places.
    
    This is a redo of 5443565 ("selftests/bpf: switch to new libbpf XDP
    APIs"), which was previously reverted to minimize conflicts during bpf
    and bpf-next tree merge.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20220202225916.3313522-6-andrii@kernel.org
    anakryiko authored and borkmann committed Feb 3, 2022
  13. selftests/bpf: Remove usage of deprecated feature probing APIs

    Switch to libbpf_probe_*() APIs instead of the deprecated ones.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20220202225916.3313522-5-andrii@kernel.org
    anakryiko authored and borkmann committed Feb 3, 2022
  14. bpftool: Fix uninit variable compilation warning

    Newer GCC complains about capturing the address of an uninitialized
    variable. While there is nothing wrong with the code (the variable is
    filled out by the kernel), initialize the variable anyway to make the
    compiler happy.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20220202225916.3313522-4-andrii@kernel.org
    anakryiko authored and borkmann committed Feb 3, 2022
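The fix pattern is simply zero-initializing an out-parameter before handing its address over; the probe and struct names below are hypothetical, not bpftool's actual code:

```c
/* Hypothetical probe: the callee fills out an out-parameter. Zero-
 * initializing it first is harmless and silences GCC's warning about
 * passing the address of a possibly uninitialized variable. */
struct feature_info {
	unsigned int flags;
};

static void query_feature(struct feature_info *info)
{
	info->flags = 0x1; /* stand-in for the kernel writing the result */
}

static unsigned int probe_flags(void)
{
	struct feature_info info = { 0 }; /* init only to appease GCC */

	query_feature(&info);
	return info.flags;
}
```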
  15. bpftool: Stop supporting BPF offload-enabled feature probing

    libbpf 1.0 is not going to support passing ifindex to BPF
    prog/map/helper feature probing APIs. Remove the support for BPF offload
    feature probing.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20220202225916.3313522-3-andrii@kernel.org
    anakryiko authored and borkmann committed Feb 3, 2022
  16. libbpf: Stop using deprecated bpf_map__is_offload_neutral()

    Open-code bpf_map__is_offload_neutral() logic in one place in
    to-be-deprecated bpf_prog_load_xattr2.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20220202225916.3313522-2-andrii@kernel.org
    anakryiko authored and borkmann committed Feb 3, 2022
  17. Merge branch 'migrate from bpf_prog_test_run{,_xattr}'

    Delyan Kratunov says:
    
    ====================
    
    Fairly straightforward mechanical transformation from bpf_prog_test_run
    and bpf_prog_test_run_xattr to the bpf_prog_test_run_opts goodness.
    
    I did a fair amount of drive-by CHECK/CHECK_ATTR cleanups as well, though
    certainly not everything possible. Primarily, I did not want to just change
    arguments to CHECK calls, though I had to do a bit more than that
    in some cases (overall, -119 CHECK calls and all CHECK_ATTR calls).
    
    v2 -> v3:
    Don't introduce CHECK_OPTS, replace CHECK/CHECK_ATTR usages we need to touch
    with ASSERT_* calls instead.
    Don't be prescriptive about the opts var name and keep old names where that would
    minimize unnecessary code churn.
    Drop _xattr-specific checks in prog_run_xattr and rename accordingly.
    
    v1 -> v2:
    Split selftests/bpf changes into two commits to appease the mailing list.
    ====================
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    anakryiko committed Feb 3, 2022
  18. libbpf: Deprecate bpf_prog_test_run_xattr and bpf_prog_test_run

    Deprecate non-extendable bpf_prog_test_run{,_xattr} in favor of
    OPTS-based bpf_prog_test_run_opts ([0]).
    
      [0] Closes: libbpf/libbpf#286
    
    Signed-off-by: Delyan Kratunov <delyank@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220202235423.1097270-5-delyank@fb.com
    BurntBrunch authored and anakryiko committed Feb 3, 2022
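The reason OPTS-based APIs are extendable is that the caller records the struct size it was compiled against, letting a newer library detect which fields the caller knows about. A minimal sketch of that pattern, with made-up names (libbpf's real OPTS macros differ):

```c
#include <stddef.h>

/* OPTS-style struct: sz is the size the caller was compiled with,
 * so a newer library can tell old callers from new ones. */
struct run_opts {
	size_t sz;     /* size of this struct as the caller sees it */
	int repeat;    /* field present since the first version */
	int new_field; /* field added in a later version */
};

static int opts_has_field(const struct run_opts *opts, size_t field_end)
{
	return opts && opts->sz >= field_end;
}

/* true if the caller's struct is large enough to contain field f */
#define HAS_FIELD(opts, f) \
	opts_has_field(opts, offsetof(struct run_opts, f) + sizeof((opts)->f))
```

An old binary passes a smaller sz, so the library simply ignores fields added after it was built instead of reading garbage, which is what made the fixed-argument bpf_prog_test_run{,_xattr} signatures a dead end.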
  19. bpftool: Migrate from bpf_prog_test_run_xattr

    bpf_prog_test_run is being deprecated in favor of the OPTS-based
    bpf_prog_test_run_opts.
    
    Signed-off-by: Delyan Kratunov <delyank@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220202235423.1097270-4-delyank@fb.com
    BurntBrunch authored and anakryiko committed Feb 3, 2022
  20. selftests/bpf: Migrate from bpf_prog_test_run_xattr

    bpf_prog_test_run_xattr is being deprecated in favor of the OPTS-based
    bpf_prog_test_run_opts.
    We end up unable to use CHECK_ATTR, so replace usages with ASSERT_* calls.
    Also, prog_run_xattr is now prog_run_opts.
    
    Signed-off-by: Delyan Kratunov <delyank@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220202235423.1097270-3-delyank@fb.com
    BurntBrunch authored and anakryiko committed Feb 3, 2022
  21. selftests/bpf: Migrate from bpf_prog_test_run

    bpf_prog_test_run is being deprecated in favor of the OPTS-based
    bpf_prog_test_run_opts.
    We end up unable to use CHECK in most cases, so replace usages with
    ASSERT_* calls.
    
    Signed-off-by: Delyan Kratunov <delyank@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220202235423.1097270-2-delyank@fb.com
    BurntBrunch authored and anakryiko committed Feb 3, 2022

Commits on Feb 2, 2022

  1. Merge branch 'bpf-btf-dwarf5'

    Nathan Chancellor says:
    
    ====================
    This series allows CONFIG_DEBUG_INFO_DWARF5 to be selected with
    CONFIG_DEBUG_INFO_BTF=y by checking the pahole version.
    
    The first four patches add CONFIG_PAHOLE_VERSION and
    scripts/pahole-version.sh to clean up all the places that pahole's
    version is transformed into a 3-digit form.
    
    The fourth patch adds a PAHOLE_VERSION dependency to DEBUG_INFO_DWARF5
    so that there are no build errors when it is selected with
    DEBUG_INFO_BTF.
    
    I build tested Fedora's aarch64 and x86_64 config with ToT clang 14.0.0
    and GCC 11 with CONFIG_DEBUG_INFO_DWARF5 enabled with both pahole 1.21
    and 1.23.
    ====================
    
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    borkmann committed Feb 2, 2022
  2. lib/Kconfig.debug: Allow BTF + DWARF5 with pahole 1.21+

    Commit 98cd6f5 ("Kconfig: allow explicit opt in to DWARF v5")
    prevented CONFIG_DEBUG_INFO_DWARF5 from being selected when
    CONFIG_DEBUG_INFO_BTF is enabled because pahole had issues with clang's
    DWARF5 info. This was resolved by [1], which is in pahole v1.21.
    
    Allow DEBUG_INFO_DWARF5 to be selected with DEBUG_INFO_BTF when using
    pahole v1.21 or newer.
    
    [1]: https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?id=7d8e829f636f47aba2e1b6eda57e74d8e31f733c
    
    Signed-off-by: Nathan Chancellor <nathan@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220201205624.652313-6-nathan@kernel.org
    nathanchance authored and borkmann committed Feb 2, 2022
  3. lib/Kconfig.debug: Use CONFIG_PAHOLE_VERSION

    Now that CONFIG_PAHOLE_VERSION exists, use it in the definition of
    CONFIG_PAHOLE_HAS_SPLIT_BTF and CONFIG_PAHOLE_HAS_BTF_TAG to reduce the
    amount of duplication across the tree.
    
    Signed-off-by: Nathan Chancellor <nathan@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220201205624.652313-5-nathan@kernel.org
    nathanchance authored and borkmann committed Feb 2, 2022
  4. scripts/pahole-flags.sh: Use pahole-version.sh

    Use pahole-version.sh to get pahole's version code to reduce the amount
    of duplication across the tree.
    
    Signed-off-by: Nathan Chancellor <nathan@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220201205624.652313-4-nathan@kernel.org
    nathanchance authored and borkmann committed Feb 2, 2022
  5. kbuild: Add CONFIG_PAHOLE_VERSION

    There are a few different places where pahole's version is turned into a
    three-digit form with the exact same command. Move this command into
    scripts/pahole-version.sh to reduce the amount of duplication across the
    tree.
    tree.
    
    Create CONFIG_PAHOLE_VERSION so the version code can be used in Kconfig
    to enable and disable configuration options based on the pahole version,
    which is already done in a couple of places.
    
    Signed-off-by: Nathan Chancellor <nathan@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220201205624.652313-3-nathan@kernel.org
    nathanchance authored and borkmann committed Feb 2, 2022
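The transformation in question maps a version string like "v1.21" to a numeric code such as 121 so it can be compared numerically; the exact formula below (major * 100 + minor) is an assumption for illustration, not a copy of pahole-version.sh:

```c
#include <stdio.h>

/* Parse pahole's "vMAJOR.MINOR" output into a three-digit version
 * code (1.21 -> 121) usable in numeric comparisons; returns 0 on a
 * string that doesn't match the expected shape. */
static int pahole_version_code(const char *ver)
{
	int major, minor;

	if (sscanf(ver, "v%d.%d", &major, &minor) != 2)
		return 0;
	return major * 100 + minor;
}
```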