Commits on Nov 12, 2021

  1. bpf: introduce btf_tracing_ids

    Similar to btf_sock_ids, btf_tracing_ids provides BTF IDs for task_struct,
    file, and vm_area_struct via an easy-to-understand format like
    btf_tracing_ids[BTF_TRACING_TYPE_[TASK|FILE|VMA]].
    
    Suggested-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Song Liu <songliubraving@fb.com>
    liu-song-6 authored and intel-lab-lkp committed Nov 12, 2021
  2. bpf: extend BTF_ID_LIST_GLOBAL with parameter for number of IDs

    syzbot reported the following BUG w/o CONFIG_DEBUG_INFO_BTF
    
    BUG: KASAN: global-out-of-bounds in task_iter_init+0x212/0x2e7 kernel/bpf/task_iter.c:661
    Read of size 4 at addr ffffffff90297404 by task swapper/0/1
    
    CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.15.0-syzkaller #0
    Hardware name: ... Google Compute Engine, BIOS Google 01/01/2011
    Call Trace:
    <TASK>
    __dump_stack lib/dump_stack.c:88 [inline]
    dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
    print_address_description.constprop.0.cold+0xf/0x309 mm/kasan/report.c:256
    __kasan_report mm/kasan/report.c:442 [inline]
    kasan_report.cold+0x83/0xdf mm/kasan/report.c:459
    task_iter_init+0x212/0x2e7 kernel/bpf/task_iter.c:661
    do_one_initcall+0x103/0x650 init/main.c:1295
    do_initcall_level init/main.c:1368 [inline]
    do_initcalls init/main.c:1384 [inline]
    do_basic_setup init/main.c:1403 [inline]
    kernel_init_freeable+0x6b1/0x73a init/main.c:1606
    kernel_init+0x1a/0x1d0 init/main.c:1497
    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
    </TASK>
    
    This is caused by the hard-coded name[1] array in BTF_ID_LIST_GLOBAL (w/o
    CONFIG_DEBUG_INFO_BTF). Fix this by adding a parameter n to
    BTF_ID_LIST_GLOBAL so the array is sized for the actual number of IDs.
    This also avoids ifdefs on CONFIG_DEBUG_INFO_BTF in btf.c and filter.c.
    
    Fixes: 7c7e3d3 ("bpf: Introduce helper bpf_find_vma")
    Reported-by: syzbot+e0d81ec552a21d9071aa@syzkaller.appspotmail.com
    Suggested-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Song Liu <songliubraving@fb.com>
    liu-song-6 authored and intel-lab-lkp committed Nov 12, 2021
  3. Merge branch 'Support BTF_KIND_TYPE_TAG for btf_type_tag attributes'

    Yonghong Song says:
    
    ====================
    
    LLVM patches ([1] for clang, [2] and [3] for BPF backend)
    added support for btf_type_tag attributes. This patch
    added support for the kernel.
    
    The main motivation for btf_type_tag is to bring kernel
    annotations such as __user and __rcu into BTF. With such information
    available in BTF, the BPF verifier can detect misuse
    and reject the program. For example, for a __user-tagged pointer,
    developers can then use the proper helper, such as bpf_probe_read_user(),
    to read the data.
    
    BTF_KIND_TYPE_TAG may also be useful for other tracing
    facilities where, instead of requiring the user to specify the
    kernel/user address type, the kernel can detect it
    by itself with BTF.
    
    Patch 1 added support in kernel, Patch 2 for libbpf and Patch 3
    for bpftool. Patches 4-9 are for bpf selftests and Patch 10
    updated docs/bpf/btf.rst file with new btf kind.
    
      [1] https://reviews.llvm.org/D111199
      [2] https://reviews.llvm.org/D113222
      [3] https://reviews.llvm.org/D113496
    
    Changelogs:
      v2 -> v3:
        - rebase to resolve merge conflicts.
      v1 -> v2:
        - add more dedup tests.
        - remove build requirement for LLVM=1.
        - remove testing macro __has_attribute in bpf programs
          as it is always defined in recent clang compilers.
    ====================
    
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov committed Nov 12, 2021
  4. docs/bpf: Update documentation for BTF_KIND_TYPE_TAG support

    Add BTF_KIND_TYPE_TAG documentation in btf.rst.
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012656.1509082-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Nov 12, 2021
  5. selftests/bpf: Clarify llvm dependency with btf_tag selftest

    btf_tag selftest needs certain llvm versions (>= llvm14).
    Make it clear in the selftests README.rst file.
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012651.1508549-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Nov 12, 2021
  6. selftests/bpf: Add a C test for btf_type_tag

    The following is the main btf_type_tag usage in the
    C test:
      #define __tag1 __attribute__((btf_type_tag("tag1")))
      #define __tag2 __attribute__((btf_type_tag("tag2")))
      struct btf_type_tag_test {
           int __tag1 * __tag1 __tag2 *p;
      } g;
    
    The bpftool raw dump with related types:
      [4] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED
      [11] STRUCT 'btf_type_tag_test' size=8 vlen=1
              'p' type_id=14 bits_offset=0
      [12] TYPE_TAG 'tag1' type_id=16
      [13] TYPE_TAG 'tag2' type_id=12
      [14] PTR '(anon)' type_id=13
      [15] TYPE_TAG 'tag1' type_id=4
      [16] PTR '(anon)' type_id=15
      [17] VAR 'g' type_id=11, linkage=global
    
    With format C dump, we have
      struct btf_type_tag_test {
            int __attribute__((btf_type_tag("tag1"))) * __attribute__((btf_type_tag("tag1"))) __attribute__((btf_type_tag("tag2"))) *p;
      };
    The resulting C code is identical to the original definition, except the macros are expanded.
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012646.1508231-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Nov 12, 2021
  7. selftests/bpf: Rename progs/tag.c to progs/btf_decl_tag.c

    Rename progs/tag.c to progs/btf_decl_tag.c so we can introduce
    progs/btf_type_tag.c in the next patch.
    
    Also create a subtest for btf_decl_tag in prog_tests/btf_tag.c
    so we can introduce btf_type_tag subtest in the next patch.
    
    I also took opportunity to remove the check whether __has_attribute
    is defined or not in progs/btf_decl_tag.c since all recent
    clangs should already support this macro.
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012641.1507144-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Nov 12, 2021
  8. selftests/bpf: Test BTF_KIND_DECL_TAG for deduplication

    Add BTF_KIND_TYPE_TAG deduplication unit tests.
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012635.1506853-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Nov 12, 2021
  9. selftests/bpf: Add BTF_KIND_TYPE_TAG unit tests

    Add BTF_KIND_TYPE_TAG unit tests.
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012630.1506095-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Nov 12, 2021
  10. selftests/bpf: Test libbpf API function btf__add_type_tag()

    Add unit tests for btf__add_type_tag().
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012625.1505748-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Nov 12, 2021
  11. bpftool: Support BTF_KIND_TYPE_TAG

    Add bpftool support for BTF_KIND_TYPE_TAG.
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012620.1505506-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Nov 12, 2021
  12. libbpf: Support BTF_KIND_TYPE_TAG

    Add libbpf support for BTF_KIND_TYPE_TAG.
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012614.1505315-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Nov 12, 2021
  13. bpf: Support BTF_KIND_TYPE_TAG for btf_type_tag attributes

    LLVM patches ([1] for clang, [2] and [3] for BPF backend)
    added support for btf_type_tag attributes. This patch
    added support for the kernel.
    
    The main motivation for btf_type_tag is to bring kernel
    annotations such as __user and __rcu into BTF. With such information
    available in BTF, the BPF verifier can detect misuse
    and reject the program. For example, for a __user-tagged pointer,
    developers can then use the proper helper, such as bpf_probe_read_user(),
    to read the data.
    
    BTF_KIND_TYPE_TAG may also be useful for other tracing
    facilities where, instead of requiring the user to specify the
    kernel/user address type, the kernel can detect it
    by itself with BTF.
    
      [1] https://reviews.llvm.org/D111199
      [2] https://reviews.llvm.org/D113222
      [3] https://reviews.llvm.org/D113496
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012609.1505032-1-yhs@fb.com
    yonghong-song authored and Alexei Starovoitov committed Nov 12, 2021
  14. Merge branch 'Future-proof more tricky libbpf APIs'

    Andrii Nakryiko says:
    
    ====================
    
    This patch set continues the work of revamping libbpf APIs that are not
    extensible, as they were added before we figured out all the intricacies of
    building APIs that can preserve ABI compatibility (both backward and forward).
    
    What makes them tricky is that (most of) these APIs are actively used by
    multiple applications, so we need to be careful about refactoring them. See
    individual patches for details, but the general approach is similar to the
    previous bpf_prog_load() API revamp. The biggest difference and complexity is
    in changing btf_dump__new(), because function overloading through macro magic
    doesn't work based on the number of arguments, as both the new and old APIs
    have 4 arguments. Because of that, another overloading approach is taken: the
    overload happens based on argument types.
    
    I've validated manually (by using a local test_progs-shared flavor that
    compiles test_progs against libbpf as a shared library) that an "old
    application" (selftests before being adapted to using the new variants of
    revamped APIs) compiles and runs successfully against the newest libbpf
    version as well as the older libbpf version (provided no new variants are
    used). All these scenarios seem to be working as expected.
    
    v1->v2:
      - add explicit printf_fn NULL check in btf_dump__new() (Alexei);
      - replaced + with || in __builtin_choose_expr() (Alexei);
      - dropped test_progs-shared flavor (Alexei).
    ====================
    
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov committed Nov 12, 2021
  15. bpftool: Update btf_dump__new() and perf_buffer__new_raw() calls

    Use v1.0-compatible variants of btf_dump and perf_buffer "constructors".
    This is also a demonstration of reusing struct perf_buffer_raw_opts as
    OPTS-style option struct for new perf_buffer__new_raw() API.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-10-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021
  16. tools/runqslower: Update perf_buffer__new() calls

    Use v1.0+ compatible variant of perf_buffer__new() call to prepare for
    deprecation.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-9-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021
  17. selftests/bpf: Update btf_dump__new() uses to v1.0+ variant

    Update to-be-deprecated forms of btf_dump__new().
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-8-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021
  18. selftests/bpf: Migrate all deprecated perf_buffer uses

    Migrate all old-style perf_buffer__new() and perf_buffer__new_raw()
    calls to new v1.0+ variants.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-7-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021
  19. libbpf: Make perf_buffer__new() use OPTS-based interface

    Add new variants of perf_buffer__new() and perf_buffer__new_raw() that
    use OPTS-based options for future extensibility ([0]). Given all the
    currently used API names are best fits, re-use them and use
    ___libbpf_override() approach and symbol versioning to preserve ABI and
    source code compatibility. struct perf_buffer_opts and struct
    perf_buffer_raw_opts are kept as well, but they are restructured such
    that they are OPTS-based when used with new APIs. For struct
    perf_buffer_raw_opts we keep a few fields intact, so we also have to
    preserve their memory locations, both when used as OPTS and with the
    legacy API variants. This is achieved with anonymous padding for the OPTS
    "incarnation" of the struct. These pads can eventually be used for new
    options.
    
      [0] Closes: libbpf/libbpf#311
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-6-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021
  20. libbpf: Ensure btf_dump__new() and btf_dump_opts are future-proof

    Change btf_dump__new() and corresponding struct btf_dump_ops structure
    to be extensible by using OPTS "framework" ([0]). Given we don't change
    the names, we use a similar approach as with bpf_prog_load(), but this
    time we ended up with two APIs with the same name and same number of
    arguments, so overloading based on number of arguments with
    ___libbpf_override() doesn't work.
    
    Instead, use "overloading" based on types. In this particular case, the
    print callback has to be specified, so we detect which argument is
    the callback. If it's the 4th (last) argument, the old implementation of the
    API is being used by user code. If not, it must be the 2nd, and thus the new
    implementation is selected. The rest is handled by the same symbol versioning
    approach.
    
    btf_ext argument is dropped as it was never used and isn't necessary
    either. If in the future we'll need btf_ext, that will be added into
    OPTS-based struct btf_dump_opts.
    
    struct btf_dump_opts is reused for both the old and new APIs. The ctx field
    is marked deprecated in v0.7+ and is put at the same memory location
    as OPTS's sz field. Any user of the new-style btf_dump__new() will have to
    set the sz field and doesn't/shouldn't use ctx, as ctx is now passed along
    to the callback as a mandatory input argument, following the other APIs in
    libbpf that accept callbacks consistently.
    
    Again, this is quite ugly in implementation, but is done in the name of
    backwards compatibility and uniform and extensible future APIs (at the
    same time, sigh). And it will be gone in libbpf 1.0.
    
      [0] Closes: libbpf/libbpf#283
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-5-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021
  21. libbpf: Turn btf_dedup_opts into OPTS-based struct

    btf__dedup() and struct btf_dedup_opts were added before we figured out
    OPTS mechanism. As such, btf_dedup_opts is non-extensible without
    breaking an ABI and potentially crashing user application.
    
    Unfortunately, btf__dedup() and btf_dedup_opts are short and succinct
    names that would be great to preserve and use going forward. So we use
    ___libbpf_override() macro approach, used previously for bpf_prog_load()
    API, to define a new btf__dedup() variant that accepts only struct btf *
    and struct btf_dedup_opts * arguments, and rename the old btf__dedup()
    implementation into btf__dedup_deprecated(). This keeps both source and
    binary compatibility with old and new applications.
    
    The biggest problem was struct btf_dedup_opts, which wasn't OPTS-based,
    and as such doesn't have `size_t sz;` as a first field. But btf__dedup()
    is a pretty rarely used API and I believe that the only currently known
    users (besides selftests) are libbpf's own bpf_linker and pahole.
    Neither use case actually uses options and just passes NULL. So instead
    of doing extra hacks, just rewrite struct btf_dedup_opts into OPTS-based
    one, move btf_ext argument into those opts (only bpf_linker needs to
    dedup btf_ext, so it's not a typical thing to specify), and drop the never
    used `dont_resolve_fwds` option (it was never used anywhere, AFAIK, and it
    makes BTF dedup much less useful and efficient).
    
    Just in case, for the old implementation, btf__dedup_deprecated(), detect
    non-NULL options and error out with a helpful message, to help users
    migrate, if there are any users playing with btf__dedup().
    
    The last remaining piece is dedup_table_size, which is another
    anachronism from very early days of BTF dedup. Since then it has been
    reduced to the only valid value, 1, to request forced hash collisions.
    This is only used during testing. So instead introduce a bool flag to
    force collisions explicitly.
    
    This patch also adapts selftests to new btf__dedup() and btf_dedup_opts
    use to avoid selftests breakage.
    
      [0] Closes: libbpf/libbpf#281
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-4-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021
  22. selftests/bpf: Minor cleanups and normalization of Makefile

    A few clean-ups and single-line simplifications. Also split the CLEAN command
    into multiple $(RM) invocations as it gets dangerously close to a too-long
    argument list. Make sure that -o <output.o> is always used as the last
    argument for saner verbose make output.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-3-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021
  23. bpftool: Normalize compile rules to specify output file last

    When dealing with verbose Makefile output, it's extremely confusing when
    compiler invocation commands don't specify -o <output.o> as the last
    argument. Normalize bpftool's Makefile to do just that, as most other
    BPF-related Makefiles are already doing that.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-2-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021
  24. Merge branch 'selftests/bpf: fix test_progs' log_level logic'

    Andrii Nakryiko says:
    
    ====================
    
    Fix the ability to request verbose (log_level=1) or very verbose (log_level=2)
    logs with test_progs's -vv or -vvv parameters. This ability regressed during
    recent bpf_prog_load() API refactoring. Also add
    bpf_program__set_extra_flags() API to allow setting extra testing flags
    (BPF_F_TEST_RND_HI32), which was also dropped during recent changes.
    ====================
    
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov committed Nov 12, 2021
  25. selftests/bpf: Fix bpf_prog_test_load() logic to pass extra log level

    After recent refactoring, bpf_prog_test_load(), used across multiple
    selftests, lost the ability to specify extra log_level 1 or 2 (for -vv and
    -vvv, respectively). Fix that problem by using the bpf_object__load_xattr()
    API that supports extra log_level flags. Also restore the
    BPF_F_TEST_RND_HI32 prog_flags by utilizing the new bpf_program__set_extra_flags()
    API.
    
    Fixes: f87c193 ("selftests/bpf: Merge test_stub.c into testing_helpers.c")
    Reported-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111051758.92283-3-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021
  26. libbpf: Add ability to get/set per-program load flags

    Add bpf_program__flags() API to retrieve prog_flags that will be (or
    were) supplied to BPF_PROG_LOAD command.
    
    Also add the bpf_program__set_extra_flags() API to allow setting *extra*
    flags, in addition to those determined by the program's SEC() definition.
    Such flags are logically OR'ed with libbpf-derived flags.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111051758.92283-2-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Nov 12, 2021

Commits on Nov 11, 2021

  1. Merge branch 'Get ingress_ifindex in BPF_SK_LOOKUP prog type'

    Mark Pashmfouroush says:
    
    ====================
    
    BPF_SK_LOOKUP users may want to have access to the ifindex of the skb
    which triggered the socket lookup. This may be useful for selectively
    applying programmable socket lookup logic to packets that arrive on a
    specific interface, or excluding packets from an interface.
    
    v3:
    - Rename ifindex field to ingress_ifindex for consistency. (Yonghong)
    
    v2:
    - Fix inaccurate comment (Alexei)
    - Add more details to commit messages (John)
    ====================
    
    Reviewed-by: Lorenz Bauer <lmb@cloudflare.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov committed Nov 11, 2021
  2. selftests/bpf: Add tests for accessing ingress_ifindex in bpf_sk_lookup

    A new field was added to the bpf_sk_lookup data that users can access.
    Add tests that validate that the new ingress_ifindex field contains the
    right data.
    
    Signed-off-by: Mark Pashmfouroush <markpash@cloudflare.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211110111016.5670-3-markpash@cloudflare.com
    markpash authored and Alexei Starovoitov committed Nov 11, 2021
  3. bpf: Add ingress_ifindex to bpf_sk_lookup

    It may be helpful to have access to the ifindex during bpf socket
    lookup. An example may be to scope certain socket lookup logic to
    specific interfaces, i.e. an interface may be made exempt from custom
    lookup code.
    
    Add the ifindex of the arriving connection to the bpf_sk_lookup API.
    
    Signed-off-by: Mark Pashmfouroush <markpash@cloudflare.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211110111016.5670-2-markpash@cloudflare.com
    markpash authored and Alexei Starovoitov committed Nov 11, 2021

Commits on Nov 10, 2021

  1. bpftool: Fix SPDX tag for Makefiles and .gitignore

    Bpftool is dual-licensed under GPLv2 and BSD-2-Clause. In commit
    907b223 ("tools: bpftool: dual license all files") we made sure
    that all its source files were indeed covered by the two licenses, and
    that they had the correct SPDX tags.
    
    However, bpftool's Makefile, the Makefile for its documentation, and the
    .gitignore file were skipped at the time (their GPL-2.0-only tag was
    added later). Let's update the tags.
    
    Signed-off-by: Quentin Monnet <quentin@isovalent.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Tobias Klauser <tklauser@distanz.ch>
    Acked-by: Joe Stringer <joe@cilium.io>
    Acked-by: Song Liu <songliubraving@fb.com>
    Acked-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
    Acked-by: Jakub Kicinski <kuba@kernel.org>
    Link: https://lore.kernel.org/bpf/20211105221904.3536-1-quentin@isovalent.com
    qmonnet authored and Alexei Starovoitov committed Nov 10, 2021

Commits on Nov 9, 2021

  1. libbpf: Compile using -std=gnu89

    The minimum supported C standard version is C89, with use of GNU
    extensions, hence make sure to catch any instances that would break
    the build for this mode by passing -std=gnu89.
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211105234243.390179-4-memxor@gmail.com
    kkdwivedi authored and Alexei Starovoitov committed Nov 9, 2021

Commits on Nov 8, 2021

  1. selftests/bpf: Add exception handling selftests for tp_bpf program

    Exception handling is triggered in BPF tracing programs when a NULL pointer
    is dereferenced; the exception handler zeroes the target register and
    execution of the BPF program progresses.
    
    To test exception handling then, we need to trigger a NULL pointer dereference
    for a field which should never be zero; if it is, the only explanation is that
    the exception handler ran. task->task_works is the NULL pointer chosen (for a
    new task from fork() no work is associated), and the task_works->func field
    should not be zero if task_works is non-NULL. The test verifies that
    task_works and task_works->func are 0.
    
    Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/1636131046-5982-3-git-send-email-alan.maguire@oracle.com
    alan-maguire authored and borkmann committed Nov 8, 2021
  2. arm64/bpf: Remove 128MB limit for BPF JIT programs

    Commit 91fc957 ("arm64/bpf: don't allocate BPF JIT programs in module
    memory") restricts BPF JIT program allocation to a 128MB region to ensure
    BPF programs are still in branching range of each other. However this
    restriction should not apply to the aarch64 JIT, since BPF_JMP | BPF_CALL
    are implemented as a 64-bit move into a register and then a BLR instruction -
    which has the effect of being able to call anything without proximity
    limitation.
    
    The practical reason to relax this restriction on JIT memory is that 128MB of
    JIT memory can be quickly exhausted, especially where PAGE_SIZE is 64KB - one
    page is needed per program. In cases where seccomp filters are applied to
    multiple VMs on VM launch - such filters are classic BPF but converted to
    eBPF - this can severely limit the number of VMs that can be launched. In a
    world where we support BPF JIT always on, turning off the JIT isn't always an
    option either.
    
    Fixes: 91fc957 ("arm64/bpf: don't allocate BPF JIT programs in module memory")
    Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Russell King <russell.king@oracle.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Tested-by: Alan Maguire <alan.maguire@oracle.com>
    Link: https://lore.kernel.org/bpf/1636131046-5982-2-git-send-email-alan.maguire@oracle.com
    rmk92 authored and borkmann committed Nov 8, 2021

Commits on Nov 7, 2021

  1. Merge branch 'introduce bpf_find_vma'

    Song Liu says:
    
    ====================
    
    Changes v4 => v5:
    1. Clean up and style change in 2/2. (Andrii)
    
    Changes v3 => v4:
    1. Move mmap_unlock_work to task_iter.c to fix the build for .config without
       CONFIG_PERF_EVENTS. (kernel test robot <lkp@intel.com>)
    
    Changes v2 => v3:
    1. Avoid using x86 only function in selftests. (Yonghong)
    2. Add struct file and struct vm_area_struct to btf_task_struct_ids, and
       use it in bpf_find_vma and stackmap.c. (Yonghong)
    3. Fix inaccurate comments. (Yonghong)
    
    Changes v1 => v2:
    1. Share irq_work with stackmap.c. (Daniel)
    2. Add tests for illegal writes to task/vma from the callback function.
       (Daniel)
    3. Other small fixes.
    
    Add helper bpf_find_vma. This can be used in some profiling use cases. It
    might also be useful for LSM.
    ====================
    
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov committed Nov 7, 2021
  2. selftests/bpf: Add tests for bpf_find_vma

    Add tests for bpf_find_vma in perf_event program and kprobe program. The
    perf_event program is triggered from NMI context, so the second call of
    bpf_find_vma() will return -EBUSY (irq_work busy). The kprobe program,
    on the other hand, does not have this constraint.
    
    Also add tests for illegal writes to task or vma from the callback
    function. The verifier should reject both cases.
    
    Signed-off-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20211105232330.1936330-3-songliubraving@fb.com
    liu-song-6 authored and Alexei Starovoitov committed Nov 7, 2021