
Commits on Dec 10, 2021

  1. selftests/bpf: Add test for unstable CT lookup API

    This tests that we return errors as documented, and also that the kfunc
    calls work from both XDP and TC hooks.
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    kkdwivedi authored and intel-lab-lkp committed Dec 10, 2021
  2. selftests/bpf: Extend kfunc selftests

    Use the prog_test kfuncs to test the referenced PTR_TO_BTF_ID kfunc
    support, and PTR_TO_CTX, PTR_TO_MEM argument passing support. Also
    test the various failure cases.
    
    The failure selftests will test the following cases for kfunc:
    kfunc_call_test_fail1 - Argument struct type has non-scalar member
    kfunc_call_test_fail2 - Nesting depth of type > 8
    kfunc_call_test_fail3 - Struct type has trailing zero-sized FAM
    kfunc_call_test_fail4 - Trying to pass reg->type != PTR_TO_CTX when
    			argument struct type is a ctx type
    kfunc_call_test_fail5 - void * not part of mem, len pair
    kfunc_call_test_fail6 - u64 * not part of mem, len pair
    kfunc_call_test_fail7 - mark_btf_ld_reg copies ref_obj_id
    kfunc_call_test_fail8 - Same type btf_struct_walk reference copy handled
    			correctly during release (i.e. only parent
    			object can be released)
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    kkdwivedi authored and intel-lab-lkp committed Dec 10, 2021
  3. net/netfilter: Add unstable CT lookup helpers for XDP and TC-BPF

    This change adds conntrack lookup helpers using the unstable kfunc call
    interface for the XDP and TC-BPF hooks. The primary usecase is
    implementing a synproxy in XDP, see Maxim's patchset at [0].
    
    Also add acquire/release functions (randomly returning NULL), and
    exercise the PTR_TO_BTF_ID_OR_NULL path for the TC hook, so that the
    BPF program caller has to check for NULL before dereferencing the
    pointer. Introduce kfuncs that take various argument types (for
    PTR_TO_MEM) that will pass and fail the verifier checks. These will
    be used in selftests.
    
    Export get_net_ns_by_id as nf_conntrack needs to call it.
    
    Note that we search for acquire, release, and NULL-returning kfuncs
    in the intersection of those sets and the main set.

    This implies that the kfunc_btf_id_list acq_set, rel_set, and
    null_set may contain BTF IDs not in the main set. This is explicitly
    allowed and recommended (to save on defining more and more sets),
    since the check_kfunc_call verifier operation filters out invalid
    BTF IDs fairly early, so the later checks for acquire, release, and
    ret_type_null kfuncs only consider BTF IDs that are allowed in the
    main set for that program. This is why the nf_conntrack_acq_ids set
    has BTF IDs for both the xdp and tc hook kfuncs.
    
      [0]: https://lore.kernel.org/bpf/20211019144655.3483197-1-maximmi@nvidia.com
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    kkdwivedi authored and intel-lab-lkp committed Dec 10, 2021
  4. bpf: Track provenance for pointers formed from referenced PTR_TO_BTF_ID

    In the previous commit, we implemented support in the verifier for
    working with referenced PTR_TO_BTF_ID registers. These are invalidated
    when their corresponding release function is called.
    
    However, PTR_TO_BTF_ID is a bit special, in that distinct PTR_TO_BTF_ID
    can be formed by walking pointers in the struct represented by the BTF
    ID. mark_btf_ld_reg will copy the relevant register state to the
    destination register.
    
    However, we cannot simply copy ref_obj_id (such that
    release_reg_references will match and invalidate all pointers formed by
    pointer walking), as we obtain the same BTF ID in the destination
    register, leading to confusion during release. An example is shown below:
    
    For a type like so:
    struct foo { struct foo *next; };
    
    r1 = acquire(...); // BTF ID of struct foo
    if (r1) {
    	r2 = r1->next; // BTF ID of struct foo, and we copied ref_obj_id in
    		       // mark_btf_ld_reg.
    	release(r2);
    }
    
    With this logic, the above snippet would incorrectly succeed. Hence,
    we need to distinguish the canonical reference from the pointers
    formed by walking it.
    
    We introduce a 'parent_ref_obj_id' member in bpf_reg_state, for a
    referenced register, only one of ref_obj_id or parent_ref_obj_id may be
    set, i.e. either a register holds a canonical reference, or it is
    related to a canonical reference for invalidation purposes (contains an
    edge pointing to it by way of having the same ref_obj_id in
    parent_ref_obj_id, in the graph of objects).
    
    When releasing a reference, we ensure that both are not set at once,
    and then release if either of them matches the requested ref_obj_id.
    This ensures that the example given above will not succeed.
    A test to this end has been added in later patches.
    
    Typically, kernel objects have a nested object lifetime (where the
    parent object 'owns' the objects it holds references to). However, this
    is not always true. For now, we don't need support to hold on to
    references to objects obtained from a refcounted PTR_TO_BTF_ID after its
    release, but this can be relaxed on a case by case basis (i.e. based on
    the BTF ID and program type/attach type) in the future.
    
    The safest assumption for the verifier to make in absence of any other
    hints, is that all such pointers formed from refcounted PTR_TO_BTF_ID
    shall be invalidated.
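    The canonical-vs-derived distinction above can be sketched in plain C.
    This is a simplified, hypothetical model (the real logic lives in the
    kernel's bpf_reg_state and release_reference code); field names mirror
    the commit's description only.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified register state; not the kernel's
 * struct bpf_reg_state. */
struct reg_state {
	int ref_obj_id;        /* set if reg holds the canonical reference */
	int parent_ref_obj_id; /* set if reg was formed by walking that ref */
	bool valid;
};

/* Only a canonical reference (ref_obj_id set) may be released. */
static bool can_release(const struct reg_state *reg, int id)
{
	return reg->valid && reg->ref_obj_id == id;
}

/* Invalidate every register that is, or was derived from, ref 'id'. */
static void release_reference(struct reg_state *regs, int n, int id)
{
	for (int i = 0; i < n; i++) {
		/* At most one of the two fields may be set. */
		assert(!(regs[i].ref_obj_id && regs[i].parent_ref_obj_id));
		if (regs[i].ref_obj_id == id ||
		    regs[i].parent_ref_obj_id == id)
			regs[i].valid = false;
	}
}
```

    With this model, release(r2) from the snippet above is rejected while
    release(r1) invalidates both registers.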
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    kkdwivedi authored and intel-lab-lkp committed Dec 10, 2021
  5. bpf: Add reference tracking support to kfunc

    This patch adds verifier support for PTR_TO_BTF_ID return type of kfunc
    to be a reference, by reusing acquire_reference_state/release_reference
    support for existing in-kernel bpf helpers.
    
    Verifier ops struct is extended with three callbacks:
    
    - is_acquire_kfunc
      Return true if kfunc_btf_id, module pair is an acquire kfunc.  This
      will acquire_reference_state for the returned PTR_TO_BTF_ID (this is
      the only allowed return value). Note that an acquire kfunc must always
      return a PTR_TO_BTF_ID{_OR_NULL}, otherwise the program is rejected.
    
    - is_release_kfunc
      Return true if kfunc_btf_id, module pair is a release kfunc.  This
      will release the reference to the passed in PTR_TO_BTF_ID which has a
      reference state (from earlier acquire kfunc).
      The btf_check_func_arg_match returns the regno (of argument register,
      hence > 0) if the kfunc is a release kfunc, and a proper referenced
      PTR_TO_BTF_ID is being passed to it.
      This is similar to how helper call check uses bpf_call_arg_meta to
      store the ref_obj_id that is later used to release the reference.
      Similar to in-kernel helpers, we only allow passing one referenced
      PTR_TO_BTF_ID as an argument. It can also be passed to a normal
      kfunc, but in the case of a release kfunc there must always be
      exactly one referenced PTR_TO_BTF_ID argument.
    
    - is_kfunc_ret_type_null
      For kfunc returning PTR_TO_BTF_ID, tells if it can be NULL, hence
      force caller to mark the pointer not null (using check) before
      accessing it. Note that taking into account the case fixed by commit
      93c230e, we assign a non-zero id for mark_ptr_or_null_reg logic.
      Later, if more return types are supported by kfunc, which have a
      _OR_NULL variant, it might be better to move this id generation under
      a common reg_type_may_be_null check, similar to the case in the
      commit.
    
    Later patches will implement these callbacks.
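    The three callbacks can be sketched as an ops struct of function
    pointers (a hypothetical toy version; the real kernel callbacks take
    a BTF ID and a struct module owner, and the sets here are hard-coded
    for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct verifier_ops {
	bool (*is_acquire_kfunc)(unsigned int btf_id, const void *owner);
	bool (*is_release_kfunc)(unsigned int btf_id, const void *owner);
	bool (*is_kfunc_ret_type_null)(unsigned int btf_id, const void *owner);
};

enum { ACQ_KFUNC = 10, REL_KFUNC = 11 };

static bool is_acq(unsigned int id, const void *o) { (void)o; return id == ACQ_KFUNC; }
static bool is_rel(unsigned int id, const void *o) { (void)o; return id == REL_KFUNC; }
static bool ret_null(unsigned int id, const void *o) { (void)o; return id == ACQ_KFUNC; }

static const struct verifier_ops ops = {
	.is_acquire_kfunc       = is_acq,
	.is_release_kfunc       = is_rel,
	.is_kfunc_ret_type_null = ret_null,
};

static int next_ref_id = 1;

/* Returns the id assigned to the new reference (0 if the call does not
 * acquire one). The non-zero id also serves the mark_ptr_or_null_reg
 * logic when the return value may be NULL. */
static int check_kfunc_call(unsigned int btf_id)
{
	if (!ops.is_acquire_kfunc(btf_id, NULL))
		return 0;
	/* An acquire kfunc must return PTR_TO_BTF_ID{_OR_NULL}. */
	return next_ref_id++;
}
```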
    
    Referenced PTR_TO_BTF_ID is currently only limited to kfunc, but can be
    extended in the future to other BPF helpers as well.  For now, we can
    rely on the btf_struct_ids_match check to ensure we get the pointer to
    the expected struct type. In the future, care needs to be taken to
    avoid ambiguity for a referenced PTR_TO_BTF_ID passed to a release
    function, in case multiple candidates can release the same BTF ID.
    
    e.g. there might be two release kfuncs (or kfunc and helper):
    
    foo(struct abc *p);
    bar(struct abc *p);
    
    ... such that both release a PTR_TO_BTF_ID with btf_id of struct abc. In
    this case we would need to track the acquire function corresponding to
    the release function to avoid type confusion, and store this information
    in the register state so that an incorrect program can be rejected. This
    is not a problem right now, hence it is left as an exercise for the
    future patch introducing such a case in the kernel.
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    kkdwivedi authored and intel-lab-lkp committed Dec 10, 2021
  6. bpf: Introduce mem, size argument pair support for kfunc

    BPF helpers can associate two adjacent arguments together to pass memory
    of certain size, using ARG_PTR_TO_MEM and ARG_CONST_SIZE arguments.
    Since we don't use bpf_func_proto for kfunc, we need to leverage BTF to
    implement similar support.
    
    The ARG_CONST_SIZE processing for helpers is refactored into a common
    check_mem_size_reg helper that is shared with kfunc as well. kfunc
    ptr_to_mem support follows logic similar to global functions, where
    verification is done as if the pointer is non-NULL, even when it may
    be NULL.
    
    This leads to a simple to follow rule for writing kfunc: always check
    the argument pointer for NULL, except when it is PTR_TO_CTX.
    
    Currently, we require the size argument to be prefixed with "len__" in
    the parameter name. This information is then recorded in kernel BTF and
    verified during function argument checking. In the future we can use BTF
    tagging instead, and modify the kernel function definitions. This will
    be a purely kernel-side change.
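    The naming convention can be sketched as a prefix check on the
    (hypothetical) BTF parameter name; the helper name below is invented
    for illustration, not the kernel's actual function:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* A size argument is recognized by a "len__" prefix on the parameter
 * name recorded in BTF, pairing it with the preceding mem pointer. */
static bool is_kfunc_arg_mem_size(const char *param_name)
{
	return strncmp(param_name, "len__", strlen("len__")) == 0;
}
```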
    
    This allows us to have some form of backwards compatibility for
    structures that are passed to the kernel function with their size,
    and allows variable length structures to be passed in if they are
    accompanied by a size parameter.
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    kkdwivedi authored and intel-lab-lkp committed Dec 10, 2021
  7. bpf: Extend kfunc with PTR_TO_CTX, PTR_TO_MEM argument support

    Allow passing PTR_TO_CTX, if the kfunc expects a matching struct type,
    and punt to PTR_TO_MEM block if reg->type does not fall in one of
    PTR_TO_BTF_ID or PTR_TO_SOCK* types. This will be used by future commits
    to get access to XDP and TC PTR_TO_CTX, and pass various data (flags,
    tuple, netns_id, etc.) encoded in opts struct as a pointer to the kfunc.
    
    For PTR_TO_MEM support, arguments are currently limited to pointer to
    scalar, or pointer to struct composed of scalars. This is done so that
    unsafe scenarios (like passing PTR_TO_MEM where PTR_TO_BTF_ID of
    in-kernel valid structure is expected, which may have pointers) are
    avoided. kfunc argument checking is based on the passed in register type
    and limited argument type matching, hence this limitation is imposed. In
    the future, support for PTR_TO_MEM for kfunc can be extended to serve
    other usecases. A struct may have at most 8 levels of nested
    structs, all recursively composed of scalars or structs of scalars.
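    The scalars-only restriction with a nesting limit can be sketched as
    a recursive walk over a toy type descriptor (hypothetical; the real
    code walks struct btf_type records):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy type descriptor standing in for BTF: a member is either a scalar
 * or a nested struct. */
struct toy_type {
	bool is_struct;
	const struct toy_type *members; /* valid when is_struct */
	int nr_members;
};

/* Accept only scalars, or structs recursively composed of scalars,
 * nested at most 'depth' levels deep (8 in the rule described above). */
static bool toy_type_is_safe_mem(const struct toy_type *t, int depth)
{
	if (!t->is_struct)
		return true;   /* scalar member */
	if (depth <= 0)
		return false;  /* nesting too deep */
	for (int i = 0; i < t->nr_members; i++)
		if (!toy_type_is_safe_mem(&t->members[i], depth - 1))
			return false;
	return true;
}
```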
    
    Future commits will add negative tests that check that these
    restrictions imposed on kfunc arguments are duly rejected by the BPF
    verifier.
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    kkdwivedi authored and intel-lab-lkp committed Dec 10, 2021
  8. bpf: Remove DEFINE_KFUNC_BTF_ID_SET

    The only reason to keep it was to initialize the list head, but future
    commits will introduce more members that need to be set, which is more
    convenient to do using a designated initializer.

    Hence, remove the macro, convert users, and initialize the list head
    inside register_kfunc_btf_id_set.
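    The shape of the change can be sketched as follows (simplified list
    head and struct layout, not the kernel's actual definitions): users
    fill members with a designated initializer, and the list head is set
    up at registration time.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal doubly-linked list head, standing in for the kernel's
 * struct list_head. */
struct list_head { struct list_head *next, *prev; };

struct btf_id_set; /* opaque for this sketch */

struct kfunc_btf_id_set {
	struct list_head list;
	const struct btf_id_set *set; /* future commits add more members */
	void *owner;
};

static void register_kfunc_btf_id_set(struct list_head *l,
				      struct kfunc_btf_id_set *s)
{
	/* Initialize the list head here instead of in a DEFINE_ macro,
	 * then splice the node in after the list's head. */
	s->list.next = l->next;
	s->list.prev = l;
	l->next->prev = &s->list;
	l->next = &s->list;
}
```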
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    kkdwivedi authored and intel-lab-lkp committed Dec 10, 2021
  9. bpf: Refactor bpf_check_mod_kfunc_call

    Future commits adding more callbacks will implement the same pattern of
    matching module owner of kfunc_btf_id_set, and then operating on more
    sets inside the struct.
    
    While the btf_id_set for check_kfunc_call wouldn't have been NULL so
    far, future commits introduce sets that are optional, hence the common
    code also checks whether the pointer is valid.
    
    Note that we must continue the search when the owner matches but
    btf_id_set_contains returns false, since more entries may have the
    same owner (which can be NULL for built-in modules). To clarify this
    case, a comment is added, so that future commits don't regress the
    search.
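    The continue-on-owner-match rule can be sketched over a hypothetical
    flattened view of the list (the real code walks kfunc_btf_id_set
    nodes; the struct and function names here are invented):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct entry {
	const void *owner;
	const unsigned int *ids; /* may be NULL: the set is optional */
	int nr_ids;
};

static bool set_contains(const struct entry *e, unsigned int id)
{
	if (!e->ids) /* optional set not provided: treat as empty */
		return false;
	for (int i = 0; i < e->nr_ids; i++)
		if (e->ids[i] == id)
			return true;
	return false;
}

static bool lookup(const struct entry *es, int n, const void *owner,
		   unsigned int id)
{
	for (int i = 0; i < n; i++) {
		if (es[i].owner != owner)
			continue;
		/* Owner matched but id not found: keep searching, since
		 * multiple entries may share the same owner. */
		if (set_contains(&es[i], id))
			return true;
	}
	return false;
}
```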
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    kkdwivedi authored and intel-lab-lkp committed Dec 10, 2021
  10. libbpf: Fix typo in btf__dedup@LIBBPF_0.0.2 definition

    The btf__dedup_deprecated name was misspelled in the definition of the
    compat symbol for btf__dedup. This leads it to be missing from the
    shared library.
    
    This fixes it.
    
    Fixes: 957d350 ("libbpf: Turn btf_dedup_opts into OPTS-based struct")
    Signed-off-by: Vincent Minet <vincent@vincent-minet.net>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211210063112.80047-1-vincent@vincent-minet.net
    vminet authored and anakryiko committed Dec 10, 2021

Commits on Dec 9, 2021

  1. Merge branch 'Enhance and rework logging controls in libbpf'

    Andrii Nakryiko says:
    
    ====================
    
    Add new open options and per-program setters to control BTF and program
    loading log verboseness and allow providing custom log buffers to capture logs
    of interest. Note how custom log_buf and log_level are orthogonal, which
    matches previous (alas less customizable) behavior of libbpf, even though it
    sort of worked by accident: if someone specified log_level = 1 in
    bpf_object__load_xattr(), the first attempt to load any BPF program
    resulted in a wasted bpf() syscall with -EINVAL due to !!log_buf !=
    !!log_level. Then on retry libbpf would allocate a log buffer and
    try again, after which prog loading would succeed and libbpf would
    print the verbose program loading log
    through its print callback.
    
    This behavior is now documented and made more efficient, not wasting
    an unnecessary syscall. Additionally, log_level can be controlled globally on
    a per-bpf_object level through bpf_object_open_opts, as well as on
    a per-program basis with bpf_program__set_log_buf() and
    bpf_program__set_log_level() APIs.
    
    Now that we have a more future-proof way to set log_level, deprecate
    bpf_object__load_xattr().
    
    v2->v3:
      - added log_buf selftests for bpf_prog_load() and bpf_btf_load();
      - fix !log_buf in bpf_prog_load (John);
      - fix log_level==0 in bpf_btf_load (thanks selftest!);
    
    v1->v2:
      - fix log_level == 0 handling of bpf_prog_load, add as patch #1 (Alexei);
      - add comments explaining log_buf_size overflow prevention (Alexei).
    ====================
    
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Alexei Starovoitov committed Dec 9, 2021
  2. bpftool: Switch bpf_object__load_xattr() to bpf_object__load()

    Switch all the uses of to-be-deprecated bpf_object__load_xattr() into
    a simple bpf_object__load() calls with optional log_level passed through
    open_opts.kernel_log_level, if -d option is specified.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-13-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  3. selftests/bpf: Remove the only use of deprecated bpf_object__load_xattr()
    
    Switch from bpf_object__load_xattr() to bpf_object__load() and
    kernel_log_level in bpf_object_open_opts.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-12-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  4. selftests/bpf: Add test for libbpf's custom log_buf behavior

    Add a selftest that validates that per-program and per-object log_buf
    overrides work as expected. Also test same logic for low-level
    bpf_prog_load() and bpf_btf_load() APIs.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-11-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  5. selftests/bpf: Replace all uses of bpf_load_btf() with bpf_btf_load()

    Switch all selftests uses of to-be-deprecated bpf_load_btf() with
    equivalent bpf_btf_load() calls.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-10-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  6. libbpf: Deprecate bpf_object__load_xattr()

    Deprecate non-extensible bpf_object__load_xattr() in v0.8 ([0]).
    
    With log_level control through bpf_object_open_opts or
    bpf_program__set_log_level(), we are finally at the point where
    bpf_object__load_xattr() doesn't provide any functionality that can't be
    accessed through other (better) ways. The other feature,
    target_btf_path, is also controllable through bpf_object_open_opts.
    
      [0] Closes: libbpf/libbpf#289
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-9-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  7. libbpf: Add per-program log buffer setter and getter

    Allow setting a user-provided log buffer on a per-program basis ([0]).
    This gives a great deal of flexibility in terms of which programs are loaded
    with logging enabled and where corresponding logs go.
    
    Log buffer set with bpf_program__set_log_buf() overrides kernel_log_buf
    and kernel_log_size settings set at bpf_object open time through
    bpf_object_open_opts, if any.
    
    Adjust bpf_object_load_prog_instance() logic to not perform its own log
    buf allocation and load retry if a custom log buffer is provided by the user.
    
      [0] Closes: libbpf/libbpf#418
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-8-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  8. libbpf: Preserve kernel error code and remove kprobe prog type guessing

    Instead of rewriting the error code returned by the kernel during prog
    load with libbpf-specific variants, pass through the original error.
    
    There is now also no need to have a backup generic -LIBBPF_ERRNO__LOAD
    fallback error as bpf_prog_load() guarantees that errno will be properly
    set no matter what.
    
    Also drop the completely outdated and pretty useless BPF_PROG_TYPE_KPROBE
    guessing logic. It's not necessary, nor is it helpful in modern BPF
    applications.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-7-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  9. libbpf: Improve logging around BPF program loading

    Add missing "prog '%s': " prefixes in a few places and use consistent
    markers for the beginning and end of program load logs. Here's an example of
    log output:
    
    libbpf: prog 'handler': BPF program load failed: Permission denied
    libbpf: -- BEGIN PROG LOAD LOG ---
    arg#0 reference type('UNKNOWN ') size cannot be determined: -22
    ; out1 = in1;
    0: (18) r1 = 0xffffc9000cdcc000
    2: (61) r1 = *(u32 *)(r1 +0)
    
    ...
    
    81: (63) *(u32 *)(r4 +0) = r5
     R1_w=map_value(id=0,off=16,ks=4,vs=20,imm=0) R4=map_value(id=0,off=400,ks=4,vs=16,imm=0)
    invalid access to map value, value_size=16 off=400 size=4
    R4 min value is outside of the allowed memory range
    processed 63 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
     -- END PROG LOAD LOG --
    libbpf: failed to load program 'handler'
    libbpf: failed to load object 'test_skeleton'
    
    The entire verifier log, including the BEGIN and END markers, is now
    always output during a single print callback call. This should make it
    much easier to post-process or parse, if necessary. It's not an explicit
    API guarantee, but it can be reasonably expected to stay that way.
    
    Also, __bpf_object__open is renamed to bpf_object_open() as it's always
    an adventure to find the exact function that implements bpf_object's
    open phase, so drop the double underscores and use the internal libbpf
    naming convention.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-6-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  10. libbpf: Allow passing user log setting through bpf_object_open_opts

    Allow users to provide their own custom log_buf, log_size, and log_level
    at the bpf_object level through bpf_object_open_opts. This log_buf will be
    used during BTF loading. A subsequent patch will use the same log_buf
    during BPF program loading, unless overridden at the per-bpf_program level.
    
    When such a custom log_buf is provided, libbpf won't attempt to retry
    loading of BTF with its own log buffer to capture the kernel's error
    log output. The user is responsible for providing a big enough buffer,
    otherwise they run the risk of getting an -ENOSPC error from the
    bpf() syscall.
    
    See also comments in bpf_object_open_opts regarding log_level and
    log_buf interactions.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-5-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  11. libbpf: Allow passing preallocated log_buf when loading BTF into kernel

    Add a libbpf-internal btf_load_into_kernel() that allows a
    preallocated log_buf and a custom log_level to be passed into the
    kernel during the BPF_BTF_LOAD call. When a custom log_buf is
    provided, btf_load_into_kernel() won't attempt a retry with an
    automatically allocated internal temporary buffer to capture the BTF
    validation log.
    
    It's important to note the relation between log_buf and log_level,
    which slightly deviates from the stricter kernel logic. From the
    kernel's POV, if log_buf is specified, log_level has to be > 0, and
    vice versa. While the kernel has good reasons to request such
    "sanity", this is, in practice, a bit inconvenient and restrictive
    for libbpf's high-level bpf_object APIs.
    
    So libbpf will allow setting a non-NULL log_buf with log_level == 0.
    This is fine and means: attempt to load BTF with no logging
    requested, but if that fails, retry the load with the custom log_buf
    and log_level 1. Similar logic will be implemented for program
    loading. In practice this means that users can provide a custom log
    buffer just in case an error happens, without requesting slower
    verbose logging all the time. This is also consistent with libbpf's
    behavior when a custom log_buf is not set: libbpf first tries to
    load everything with log_level=0, and only if an error happens does
    it allocate an internal log buffer and retry with log_level=1.
    
    Also, while at it, make the BTF validation log more obvious and follow
    the log pattern libbpf uses for dumping the BPF verifier log during
    BPF_PROG_LOAD. BTF loading resulting in an error will look like this:
    
    libbpf: BTF loading error: -22
    libbpf: -- BEGIN BTF LOAD LOG ---
    magic: 0xeb9f
    version: 1
    flags: 0x0
    hdr_len: 24
    type_off: 0
    type_len: 1040
    str_off: 1040
    str_len: 2063598257
    btf_total_size: 1753
    Total section length too long
    -- END BTF LOAD LOG --
    libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring.
    
    This makes it much easier to find relevant parts in libbpf log output.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-4-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  12. libbpf: Add OPTS-based bpf_btf_load() API

    Similar to the previous bpf_prog_load() and bpf_map_create() APIs, add
    a bpf_btf_load() API that takes an optional OPTS struct. Schedule
    bpf_load_btf() for deprecation in v0.8 ([0]).

    This makes naming consistent with the BPF_BTF_LOAD command, sets up an
    API for future extensibility, moves optional parameters (log-related
    fields) into optional opts, and also allows passing log_level
    directly.
    
    It also removes log buffer auto-allocation logic from low-level API
    (consistent with bpf_prog_load() behavior), but preserves a special
    treatment of log_level == 0 with non-NULL log_buf, which matches
    low-level bpf_prog_load() and high-level libbpf APIs for BTF and program
    loading behaviors.
    
      [0] Closes: libbpf/libbpf#419
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-3-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  13. libbpf: Fix bpf_prog_load() log_buf logic for log_level 0

    To unify libbpf APIs behavior w.r.t. log_buf and log_level, fix
    bpf_prog_load() to follow the same logic as bpf_btf_load() and
    high-level bpf_object__load() API will follow in the subsequent patches:
      - if log_level is 0 and non-NULL log_buf is provided by a user, attempt
        load operation initially with no log_buf and log_level set;
      - if successful, we are done, return new FD;
      - on error, retry the load operation with log_level bumped to 1 and
        log_buf set; this way verbose logging will be requested only when we
        are sure that there is a failure, but will be fast in the
        common/expected success case.
    
    Of course, user can still specify log_level > 0 from the very beginning
    to force log collection.
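    The retry logic above can be sketched with a mocked syscall (a toy
    model: mock_sys_load and load_with_retry are invented names, and the
    mock enforces the kernel's !!log_buf == !!log_level rule while always
    rejecting the program so the retry path is exercised):

```c
#include <assert.h>
#include <stddef.h>

static int attempts;

/* Mock of the kernel side: fail with -EINVAL unless !!log_buf ==
 * !!log_level; otherwise reject the program itself with -EACCES. */
static int mock_sys_load(char *log_buf, int log_level)
{
	attempts++;
	if (!!log_buf != !!log_level)
		return -22; /* -EINVAL */
	return -13;         /* -EACCES: program rejected by verifier */
}

/* With log_level 0 and a user log_buf, first try without logging; only
 * on failure retry with log_level bumped to 1 and the buffer attached. */
static int load_with_retry(char *log_buf, int log_level)
{
	int fd = mock_sys_load(log_level ? log_buf : NULL, log_level);

	if (fd < 0 && log_buf && !log_level)
		fd = mock_sys_load(log_buf, 1);
	return fd;
}
```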
    
    Suggested-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-2-andrii@kernel.org
    anakryiko authored and Alexei Starovoitov committed Dec 9, 2021
  14. samples/bpf: xdpsock: Fix swap.cocci warning

    Fix following swap.cocci warning:
    ./samples/bpf/xdpsock_user.c:528:22-23:
    WARNING opportunity for swap()
    
    Signed-off-by: Yihao Han <hanyihao@vivo.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: John Fastabend <john.fastabend@gmail.com>
    Link: https://lore.kernel.org/bpf/20211209092250.56430-1-hanyihao@vivo.com
    Yihao Han authored and anakryiko committed Dec 9, 2021
  15. samples/bpf: Remove unneeded variable

    Return the value directly instead of storing it in a redundant intermediate variable.
    
    Reported-by: Zeal Robot <zealci@zte.com.cm>
    Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209080051.421844-1-chi.minghao@zte.com.cn
    Minghao Chi authored and anakryiko committed Dec 9, 2021
  16. bpf: Remove redundant assignment to pointer t

    The pointer t is being initialized with a value that is never read. The
    pointer is re-assigned a value a little later on, hence the initialization
    is redundant and can be removed.
    
    Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211207224718.59593-1-colin.i.king@gmail.com
    ColinIanKing authored and anakryiko committed Dec 9, 2021
  17. selftests/bpf: Fix a compilation warning

    The following warning is triggered when using the clang compiler
    to build the selftests.
    
      /.../prog_tests/btf_dedup_split.c:368:6: warning: variable 'btf2' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
            if (!ASSERT_OK(err, "btf_dedup"))
                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
      /.../prog_tests/btf_dedup_split.c:424:12: note: uninitialized use occurs here
            btf__free(btf2);
                      ^~~~
      /.../prog_tests/btf_dedup_split.c:368:2: note: remove the 'if' if its condition is always false
            if (!ASSERT_OK(err, "btf_dedup"))
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      /.../prog_tests/btf_dedup_split.c:343:25: note: initialize the variable 'btf2' to silence this warning
            struct btf *btf1, *btf2;
                                   ^
                                    = NULL
    
    Initialize local variable btf2 = NULL and the warning is gone.
    
    Fixes: 9a49afe ("selftests/bpf: Add btf_dedup case with duplicated structs within CU")
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209050403.1770836-1-yhs@fb.com
    yonghong-song authored and anakryiko committed Dec 9, 2021

Commits on Dec 8, 2021

  1. perf/bpf_counter: Use bpf_map_create instead of bpf_create_map

    bpf_create_map is deprecated. Replace it with bpf_map_create. Also add a
    __weak bpf_map_create() so that when an older version of libbpf is linked
    as a shared library, it falls back to bpf_create_map().
    
    Fixes: 992c422 ("libbpf: Unify low-level map creation APIs w/ new bpf_map_create()")
    Signed-off-by: Song Liu <song@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211207232340.2561471-1-song@kernel.org
    Song Liu authored and anakryiko committed Dec 8, 2021

Commits on Dec 7, 2021

  1. Merge branch 'samples: bpf: fix build issues with Clang/LLVM'

    Alexander Lobakin says:
    
    ====================
    
    Samples, at least the XDP ones, can be built only with the compiler
    used to build the kernel itself.
    However, the XDP sample infra introduced in Aug'21 was probably tested
    with GCC/Binutils only, as it currently does not compile with
    Clang/LLVM.
    These two trivial fixes address this.
    ====================
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    anakryiko committed Dec 7, 2021
  2. samples: bpf: Fix 'unknown warning group' build warning on Clang

    Clang doesn't have 'stringop-truncation' group like GCC does, and
    complains about it when building samples which use xdp_sample_user
    infra:
    
     samples/bpf/xdp_sample_user.h:48:32: warning: unknown warning group '-Wstringop-truncation', ignored [-Wunknown-warning-option]
     #pragma GCC diagnostic ignored "-Wstringop-truncation"
                                    ^
    [ repeat ]
    
    These warnings are harmless, but avoidable by guarding the pragma with
    an ifdef. I could guard push/pop as well, but this would add more ifdef
    cruft around a single line, which I don't think is reasonable.
    
    Fixes: 156f886 ("samples: bpf: Add basic infrastructure for XDP samples")
    Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Link: https://lore.kernel.org/bpf/20211203195004.5803-3-alexandr.lobakin@intel.com
    alobakin authored and anakryiko committed Dec 7, 2021
  3. samples: bpf: Fix xdp_sample_user.o linking with Clang

    Clang (13) doesn't get the joke of specifying libraries to link in the
    ccflags of individual .o objects:
    
    clang-13: warning: -lm: 'linker' input unused [-Wunused-command-line-argument]
    [ ... ]
      LD  samples/bpf/xdp_redirect_cpu
      LD  samples/bpf/xdp_redirect_map_multi
      LD  samples/bpf/xdp_redirect_map
      LD  samples/bpf/xdp_redirect
      LD  samples/bpf/xdp_monitor
    /usr/bin/ld: samples/bpf/xdp_sample_user.o: in function `sample_summary_print':
    xdp_sample_user.c:(.text+0x84c): undefined reference to `floor'
    /usr/bin/ld: xdp_sample_user.c:(.text+0x870): undefined reference to `ceil'
    /usr/bin/ld: xdp_sample_user.c:(.text+0x8cf): undefined reference to `floor'
    /usr/bin/ld: xdp_sample_user.c:(.text+0x8f3): undefined reference to `ceil'
    [ more ]
    
    Specify '-lm' in ldflags for all xdp_sample_user.o users in the main
    Makefile and remove it from xdp_sample_user.o's ccflags in
    Makefile.target -- just like it's done for all other samples. This
    works with all compilers.
    
    Fixes: 6e1051a ("samples: bpf: Convert xdp_monitor to XDP samples helper")
    Fixes: b926c55 ("samples: bpf: Convert xdp_redirect to XDP samples helper")
    Fixes: e531a22 ("samples: bpf: Convert xdp_redirect_cpu to XDP samples helper")
    Fixes: bbe6586 ("samples: bpf: Convert xdp_redirect_map to XDP samples helper")
    Fixes: 594a116 ("samples: bpf: Convert xdp_redirect_map_multi to XDP samples helper")
    Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Link: https://lore.kernel.org/bpf/20211203195004.5803-2-alexandr.lobakin@intel.com
    alobakin authored and anakryiko committed Dec 7, 2021
  4. bpf: Silence purge_cand_cache build warning.

    When CONFIG_DEBUG_INFO_BTF_MODULES is not set,
    the following warning can be seen:
    kernel/bpf/btf.c:6588:13: warning: 'purge_cand_cache' defined but not used [-Wunused-function]
    Fix it.
    
    Fixes: 1e89106 ("bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn().")
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211207014839.6976-1-alexei.starovoitov@gmail.com
    Alexei Starovoitov authored and anakryiko committed Dec 7, 2021

Commits on Dec 6, 2021

  1. libbpf: Add doc comments in libbpf.h

    This adds comments above functions in libbpf.h that document
    their use. The comments are in a format that doxygen and sphinx
    can pick up and render, and they are published at libbpf.readthedocs.org.
    
    These doc comments are for:
    
    - bpf_object__open_file()
    - bpf_object__open_mem()
    - bpf_program__attach_uprobe()
    - bpf_program__attach_uprobe_opts()
    
    Signed-off-by: Grant Seltzer <grantseltzer@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211206203709.332530-1-grantseltzer@gmail.com
    grantseltzer authored and anakryiko committed Dec 6, 2021
  2. libbpf: Fix trivial typo

    Fix typo in comment from 'bpf_skeleton_map' to 'bpf_map_skeleton'
    and from 'bpf_skeleton_prog' to 'bpf_prog_skeleton'.
    
    Signed-off-by: huangxuesen <huangxuesen@kuaishou.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/1638755236-3851199-1-git-send-email-hxseverything@gmail.com
    huangxuesen authored and anakryiko committed Dec 6, 2021
  3. bpf: Remove config check to enable bpf support for branch records

    Branch data available to BPF programs can be very useful to get stack traces
    out of userspace applications.
    
    Commit fff7b64 ("bpf: Add bpf_read_branch_records() helper") added BPF
    support for capturing branch records on x86. Enable this feature for other
    architectures as well by removing the x86-specific checks.
    
    If an architecture doesn't support branch records, bpf_read_branch_records()
    still has appropriate checks and will return -EINVAL in that scenario.
    Per the UAPI helper doc in include/uapi/linux/bpf.h, unsupported
    architectures should return -ENOENT in that case. Hence, update the
    check to return -ENOENT instead.
    
    Selftest 'perf_branches' result on a power9 machine with branch stack
    support:
    
     - Before this patch:
    
      [command]# ./test_progs -t perf_branches
       #88/1 perf_branches/perf_branches_hw:FAIL
       #88/2 perf_branches/perf_branches_no_hw:OK
       #88 perf_branches:FAIL
      Summary: 0/1 PASSED, 0 SKIPPED, 1 FAILED
    
     - After this patch:
    
      [command]# ./test_progs -t perf_branches
       #88/1 perf_branches/perf_branches_hw:OK
       #88/2 perf_branches/perf_branches_no_hw:OK
       #88 perf_branches:OK
      Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED
    
    Selftest 'perf_branches' result on a power9 machine without branch stack
    support:
    
     - After this patch:
    
      [command]# ./test_progs -t perf_branches
       #88/1 perf_branches/perf_branches_hw:SKIP
       #88/2 perf_branches/perf_branches_no_hw:OK
       #88 perf_branches:OK
      Summary: 1/1 PASSED, 1 SKIPPED, 0 FAILED
    
    Fixes: fff7b64 ("bpf: Add bpf_read_branch_records() helper")
    Suggested-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20211206073315.77432-1-kjain@linux.ibm.com
    kjain101 authored and borkmann committed Dec 6, 2021