Commits on Oct 15, 2021

  1. arm64: kprobes: Detect error of kretprobe return address fixup

    Add kretprobe_next_ret_addr(), which can detect errors in
    the given parameter or the kretprobe_instance list, and call
    it from the arm64 stacktrace code.
    
    This kretprobe_next_ret_addr() returns the following errors
    when it detects a problem:
    
     - -EINVAL if @cur is NULL (a caller issue)
     - -ENOENT if there is no next correct return address
       (either a kprobes or a caller issue)
     - -EILSEQ if the next correct return address is there
       but doesn't match the frame pointer (maybe a caller issue)
    
    Thus the caller must check the error and handle it. On arm64,
    this handles the errors and reports them in the log.
    
    Suggested-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    mhiramat authored and intel-lab-lkp committed Oct 15, 2021
  2. ARM: Recover kretprobe modified return address in stacktrace

    Since the kretprobe replaces the function return address with
    the kretprobe_trampoline on the stack, the arm unwinder shows it
    instead of the correct return address.
    
    This finds the correct return address from the per-task
    kretprobe_instances list and verifies that it lies between the
    caller fp and the callee fp.
    
    Note that this supports both GCC and clang if CONFIG_FRAME_POINTER=y
    and CONFIG_ARM_UNWIND=n. With the ARM unwinder, this still does
    not work correctly.
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    mhiramat authored and intel-lab-lkp committed Oct 15, 2021
  3. ARM: kprobes: Make a frame pointer on __kretprobe_trampoline

    Currently, kretprobe on ARM just fills r0-r11 of pt_regs, but
    that is not enough for the stacktrace. Moreover, a stacktrace
    taken from the user kretprobe handler needs a frame pointer on
    __kretprobe_trampoline.
    
    This adds a frame pointer on __kretprobe_trampoline for both the
    gcc and clang cases. They make different frame pointers, so we
    need different but similar stacks on pt_regs.
    
    Gcc makes the frame pointer (fp) point to the 'pc' address of
    {fp, ip (=sp), lr, pc}, which means {r11, r13, r14, r15}.
    Thus, if we save r11 (fp) in pt_regs->r12, we can form this
    set at the end of pt_regs.
    
    On the other hand, clang makes the frame pointer point to the
    'fp' address of the {fp, lr} pair on the stack. Since pt_regs->sp
    sits next to pt_regs->lr, I reused the pair of pt_regs->fp
    and pt_regs->ip. So this stores 'lr' in pt_regs->ip and makes
    fp point to pt_regs->fp.
    
    In both cases, this saves the __kretprobe_trampoline address in
    pt_regs->lr, so that the stack tracer can identify that this
    frame pointer was made by __kretprobe_trampoline.
    
    Note that if CONFIG_FRAME_POINTER is not set, this keeps fp
    as-is.
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
    mhiramat authored and intel-lab-lkp committed Oct 15, 2021
  4. ARM: clang: Do not rely on lr register for stacktrace

    Currently the stacktrace on a clang-compiled arm kernel uses the 'lr'
    register to find the first frame address from pt_regs. However, that
    is wrong after calling another function, because the 'lr' register
    is clobbered by the 'bl' instruction and never recovered.
    
    As with the gcc-compiled arm kernel, directly use the frame
    pointer (r11) from pt_regs to find the first frame address.
    
    Note that this fixes the kretprobe stacktrace issue only with
    CONFIG_UNWINDER_FRAME_POINTER=y. For CONFIG_UNWINDER_ARM,
    we need another fix.
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
    mhiramat authored and intel-lab-lkp committed Oct 15, 2021
  5. arm64: Recover kretprobe modified return address in stacktrace

    Since the kretprobe replaces the function return address with
    the kretprobe_trampoline on the stack, the stack unwinder shows it
    instead of the correct return address.
    
    This checks whether the next return address is
    __kretprobe_trampoline(), and if so, tries to find the correct
    return address from the kretprobe instance list. For this purpose,
    this adds a 'kr_cur' loop cursor to remember the current kretprobe
    instance.
    
    With this fix, now arm64 can enable
    CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE, and pass the
    kprobe self tests.
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    mhiramat authored and intel-lab-lkp committed Oct 15, 2021
  6. arm64: kprobes: Make a frame pointer on __kretprobe_trampoline

    Make a frame pointer (make the x29 register point to the
    address of pt_regs->regs[29]) on __kretprobe_trampoline.
    
    This frame pointer will be used by the stacktracer when it is
    called from the kretprobe handlers. In this case, the stack
    tracer will unwind the stack to trampoline_probe_handler() and
    find the next frame pointer in the stack frame of
    __kretprobe_trampoline().
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Acked-by: Will Deacon <will@kernel.org>
    mhiramat authored and intel-lab-lkp committed Oct 15, 2021
  7. arm64: kprobes: Record frame pointer with kretprobe instance

    Record the frame pointer instead of the stack address with the
    kretprobe instance as the identifier on the instance list.
    Since arm64 always enables CONFIG_FRAME_POINTER, we can use the
    actual frame pointer (x29).
    
    This will allow the stacktrace code to find the original return
    address from the FP alone.
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Acked-by: Will Deacon <will@kernel.org>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    mhiramat authored and intel-lab-lkp committed Oct 15, 2021
  8. x86/unwind: Compile kretprobe fixup code only if CONFIG_KRETPROBES=y

    Compile the kretprobe-related stacktrace entry recovery code and
    the unwind_state::kr_cur field only when CONFIG_KRETPROBES=y.
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    mhiramat authored and intel-lab-lkp committed Oct 15, 2021
  9. kprobes: Add a test case for stacktrace from kretprobe handler

    Add a test case for stacktrace from kretprobe handler and
    nested kretprobe handlers.
    
    This test checks both the stack trace inside a kretprobe handler
    and the stack trace from pt_regs. Those stack traces must include
    the actual function return address instead of the kretprobe
    trampoline. The nested kretprobe stacktrace test checks whether
    the unwinder can correctly unwind the call frame on the stack
    which has been modified by the kretprobe.
    
    Since the stacktrace on kretprobe is correctly fixed only on x86,
    this introduces a meta kconfig ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
    which tells the user whether the stacktrace on kretprobe is
    correct.
    
    The test results will be shown like below:
    
     TAP version 14
     1..1
         # Subtest: kprobes_test
         1..6
         ok 1 - test_kprobe
         ok 2 - test_kprobes
         ok 3 - test_kretprobe
         ok 4 - test_kretprobes
         ok 5 - test_stacktrace_on_kretprobe
         ok 6 - test_stacktrace_on_nested_kretprobe
     # kprobes_test: pass:6 fail:0 skip:0 total:6
     # Totals: pass:6 fail:0 skip:0 total:6
     ok 1 - kprobes_test
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    mhiramat authored and intel-lab-lkp committed Oct 15, 2021
  10. kprobes: convert tests to kunit

    This converts the kprobes testcases to use the kunit framework.
    It adds a dependency on CONFIG_KUNIT, and the output will change
    to TAP:
    
    TAP version 14
    1..1
        # Subtest: kprobes_test
        1..4
    random: crng init done
        ok 1 - test_kprobe
        ok 2 - test_kprobes
        ok 3 - test_kretprobe
        ok 4 - test_kretprobes
    ok 1 - kprobes_test
    
    Note that the kprobes testcases are no longer run immediately after
    kprobes initialization, but as a late initcall when kunit is
    initialized. kprobes itself is initialized with an early initcall,
    so the order is still correct.
    
    Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    svens-s390 authored and intel-lab-lkp committed Oct 15, 2021

Commits on Oct 5, 2021

  1. tracing: Create a sparse bitmask for pid filtering

    When the trace_pid_list was created, the default pid max was 32768.
    Creating a bitmask that can hold one bit for all 32768 pids took up
    4096 bytes (one page). Having a one-page bitmask was not much of a
    problem, and that was used for mapping pids. But today, systems are
    bigger and can run more tasks, and now the default pid_max is usually
    set to 4194304, which means handling that many pids requires
    524288 bytes. Worse yet, the pid_max can be set to 2^30 (1073741824,
    or 1G), which would take 134217728 bytes (128M) of memory to store
    this array.
    
    Since the pid_list array is very sparsely populated, it is a huge waste of
    memory to store all possible bits for each pid when most will not be set.
    
    Instead, use a page table scheme to store the array, and allow
    it to handle up to 30-bit pids.
    
    The pid_mask will start out with 256 entries for the first 8 MSB
    bits. This will cost 1K for 32-bit architectures and 2K for 64-bit.
    Each of these will have a 256-entry array to store the next 8 bits
    of the pid (another 1 or 2K). These will hold a 2K-byte bitmask
    (which covers the LSB 14 bits, or 16384 pids).
    
    When the trace_pid_list is allocated, it will have the 1K/2K
    upper-bits table allocated, and then it will allocate a cache for the
    next upper chunks and the lower chunks (default 6 of each). Then when
    a bit is "set", these chunks will be pulled from the free list and
    added to the array. If the free list gets down to a given level
    (default 2), it will trigger an irqwork that will refill the cache.
    
    On clearing a bit, if the clear causes the bitmask to be zero, that chunk
    will then be placed back into the free cache for later use, keeping the
    need to allocate more down to a minimum.
    
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    rostedt committed Oct 5, 2021
  2. tracing: Place trace_pid_list logic into abstract functions

    Instead of having the logic that implements trace_pid_list open-coded,
    wrap it in abstract functions. This will allow a rewrite of the logic
    that implements the trace_pid_list without affecting its users.
    
    Note, this causes a change in behavior. Every time a pid is written into
    the set_*_pid file, it creates a new list and uses RCU to update it. If
    pid_max is lowered, but there was a pid currently in the list that was
    higher than pid_max, those pids will now be removed on updating the list.
    The old behavior kept that from happening.
    
    The rewrite of the pid_list logic will no longer depend on pid_max,
    and will restore the old behavior.
    
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    rostedt committed Oct 5, 2021

Commits on Oct 1, 2021

  1. x86/kprobes: Fixup return address in generic trampoline handler

    In x86, the fake return address on the stack saved by
    __kretprobe_trampoline() will be replaced with the real return
    address after returning from trampoline_handler(). Before fixing
    the return address, the real return address can be found in the
    'current->kretprobe_instances'.
    
    However, since there is a window between updating
    'current->kretprobe_instances' and fixing the address on the stack,
    if an interrupt happens in that window and the interrupt handler
    takes a stacktrace, the unwind may fail because it can not get
    the correct return address from 'current->kretprobe_instances'.
    
    This eliminates that window by fixing the return address
    right before updating 'current->kretprobe_instances'.
    
    Link: https://lkml.kernel.org/r/163163057094.489837.9044470370440745866.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  2. tracing: Show kretprobe unknown indicator only for kretprobe_trampoline

    ftrace shows the "[unknown/kretprobe'd]" indicator for all addresses
    in the kretprobe_trampoline, but the address modified by kretprobe
    can only be kretprobe_trampoline+0.
    
    Link: https://lkml.kernel.org/r/163163056044.489837.794883849706638013.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  3. x86/unwind: Recover kretprobe trampoline entry

    Since the kretprobe replaces the function return address with
    the kretprobe_trampoline on the stack, x86 unwinders can not
    continue the stack unwinding at that point, or they record
    kretprobe_trampoline instead of the correct return address.
    
    To fix this issue, find the correct return address from the task's
    kretprobe_instances, as the function-graph tracer does.
    
    With this fix, the unwinder can correctly unwind the stack
    from a kretprobe event on x86, as below.
    
               <...>-135     [003] ...1     6.722338: r_full_proxy_read_0: (vfs_read+0xab/0x1a0 <- full_proxy_read)
               <...>-135     [003] ...1     6.722377: <stack trace>
     => kretprobe_trace_func+0x209/0x2f0
     => kretprobe_dispatcher+0x4a/0x70
     => __kretprobe_trampoline_handler+0xca/0x150
     => trampoline_handler+0x44/0x70
     => kretprobe_trampoline+0x2a/0x50
     => vfs_read+0xab/0x1a0
     => ksys_read+0x5f/0xe0
     => do_syscall_64+0x33/0x40
     => entry_SYSCALL_64_after_hwframe+0x44/0xae
    
    Link: https://lkml.kernel.org/r/163163055130.489837.5161749078833497255.stgit@devnote2
    
    Reported-by: Daniel Xu <dxu@dxuuu.xyz>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  4. x86/kprobes: Push a fake return address at kretprobe_trampoline

    Change __kretprobe_trampoline() to push the address of
    __kretprobe_trampoline() itself as a fake return address at the
    bottom of the stack frame. This fake return address will be replaced
    with the correct return address in trampoline_handler().
    
    With this change, the ORC unwinder can check whether the return
    address is modified by kretprobes or not.
    
    Link: https://lkml.kernel.org/r/163163054185.489837.14338744048957727386.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  5. kprobes: Enable stacktrace from pt_regs in kretprobe handler

    Since the ORC unwinder starting from pt_regs requires regs->ip to
    be set up correctly, set the correct return address in regs->ip
    before calling the user kretprobe handler.
    
    This allows the kretprobe handler to trace the stack from the
    kretprobe's pt_regs with stack_trace_save_regs() (eBPF will do
    this), instead of stack tracing from the handler context with
    stack_trace_save() (ftrace will do this).
    
    Link: https://lkml.kernel.org/r/163163053237.489837.4272653874525136832.stgit@devnote2
    
    Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  6. arm: kprobes: Make space for instruction pointer on stack

    Since arm's __kretprobe_trampoline() saves a partial 'pt_regs' on the
    stack, 'regs->ARM_pc' (the instruction pointer) is not accessible from
    the kretprobe handler. This means that if instruction_pointer_set() is
    used from a kretprobe handler, it will corrupt the data on the stack.
    
    To fix this, make space for the instruction pointer (ARM_pc) on
    the stack in __kretprobe_trampoline().
    
    Link: https://lkml.kernel.org/r/163163052262.489837.10327621053231461255.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  7. ia64: Add instruction_pointer_set() API

    Add instruction_pointer_set() API for ia64.
    
    Link: https://lkml.kernel.org/r/163163051195.489837.1039597819838213481.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  8. ARC: Add instruction_pointer_set() API

    Add instruction_pointer_set() API for arc.
    
    Link: https://lkml.kernel.org/r/163163050148.489837.15187799269793560256.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  9. x86/kprobes: Add UNWIND_HINT_FUNC on kretprobe_trampoline()

    Add UNWIND_HINT_FUNC on the __kretprobe_trampoline() code so that
    ORC information is generated for __kretprobe_trampoline() correctly.
    Also, this uses STACK_FRAME_NON_STANDARD_FP(), the
    CONFIG_FRAME_POINTER-specific version of STACK_FRAME_NON_STANDARD().
    
    Link: https://lkml.kernel.org/r/163163049242.489837.11970969750993364293.stgit@devnote2
    
    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    jpoimboe authored and rostedt committed Oct 1, 2021
  10. objtool: Ignore unwind hints for ignored functions

    If a function is ignored, also ignore its hints.  This is useful for the
    case where the function ignore is conditional on frame pointers, e.g.
    STACK_FRAME_NON_STANDARD_FP().
    
    Link: https://lkml.kernel.org/r/163163048317.489837.10988954983369863209.stgit@devnote2
    
    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    jpoimboe authored and rostedt committed Oct 1, 2021
  11. objtool: Add frame-pointer-specific function ignore

    Add a CONFIG_FRAME_POINTER-specific version of
    STACK_FRAME_NON_STANDARD() for the case where a function is
    intentionally missing frame pointer setup, but otherwise needs
    objtool/ORC coverage when frame pointers are disabled.
    
    Link: https://lkml.kernel.org/r/163163047364.489837.17377799909553689661.stgit@devnote2
    
    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    jpoimboe authored and rostedt committed Oct 1, 2021
  12. kprobes: Add kretprobe_find_ret_addr() for searching return address

    Introduce kretprobe_find_ret_addr() and is_kretprobe_trampoline().
    These APIs will be used by the ORC stack unwinder and ftrace, so that
    they can check whether a given address points to the kretprobe
    trampoline code and, in that case, query the correct return address.
    
    Link: https://lkml.kernel.org/r/163163046461.489837.1044778356430293962.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  13. kprobes: treewide: Make it harder to refer kretprobe_trampoline directly

    Since there is now kretprobe_trampoline_addr() for referring to the
    address of the kretprobe trampoline code, we don't need to access
    kretprobe_trampoline directly.
    
    Make it harder to refer to directly by renaming it to
    __kretprobe_trampoline().
    
    Link: https://lkml.kernel.org/r/163163045446.489837.14510577516938803097.stgit@devnote2
    
    Suggested-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  14. kprobes: treewide: Remove trampoline_address from kretprobe_trampoline_handler()
    
    The __kretprobe_trampoline_handler() callback, called from low level
    arch kprobes methods, has the 'trampoline_address' parameter, which is
    entirely superfluous as it basically just replicates:
    
      dereference_kernel_function_descriptor(kretprobe_trampoline)
    
    In fact we had bugs in arch code where it wasn't replicated correctly.
    
    So remove this superfluous parameter and use kretprobe_trampoline_addr()
    instead.
    
    Link: https://lkml.kernel.org/r/163163044546.489837.13505751885476015002.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  15. kprobes: treewide: Replace arch_deref_entry_point() with dereference_symbol_descriptor()
    
    ~15 years ago kprobes grew the 'arch_deref_entry_point()' __weak function:
    
      3d7e338: ("jprobes: make jprobes a little safer for users")
    
    But this is just open-coded dereference_symbol_descriptor() in essence, and
    its obscure nature was causing bugs.
    
    Just use the real thing and remove arch_deref_entry_point().
    
    Link: https://lkml.kernel.org/r/163163043630.489837.7924988885652708696.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  16. ia64: kprobes: Fix to pass correct trampoline address to the handler

    The following commit:
    
       Commit e792ff8 ("ia64: kprobes: Use generic kretprobe trampoline handler")
    
    Passed the wrong trampoline address to __kretprobe_trampoline_handler():
    it passed the descriptor address instead of the function entry address.
    
    Pass the right parameter.
    
    Also use the correct symbol dereference function to get the function
    address from 'kretprobe_trampoline' - an IA64 special.
    
    Link: https://lkml.kernel.org/r/163163042696.489837.12551102356265354730.stgit@devnote2
    
    Fixes: e792ff8 ("ia64: kprobes: Use generic kretprobe trampoline handler")
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: X86 ML <x86@kernel.org>
    Cc: Daniel Xu <dxu@dxuuu.xyz>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Abhishek Sagar <sagar.abhishek@gmail.com>
    Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
    Cc: Paul McKenney <paulmck@kernel.org>
    Cc: stable@vger.kernel.org
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  17. kprobes: Use bool type for functions which returns boolean value

    Use the 'bool' type instead of 'int' for functions which return
    a boolean value, because this makes it clear that those
    functions don't return an error code.
    
    Link: https://lkml.kernel.org/r/163163041649.489837.17311187321419747536.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  18. kprobes: treewide: Use 'kprobe_opcode_t *' for the code address in get_optimized_kprobe()
    
    Since get_optimized_kprobe() is only used inside kprobes,
    it doesn't need to use the 'unsigned long' type for the 'addr'
    parameter. Make it use 'kprobe_opcode_t *' for 'addr', and the
    subsequent call to arch_within_optimized_kprobe() should also use
    'kprobe_opcode_t *'.
    
    Note that MAX_OPTIMIZED_LENGTH and RELATIVEJUMP_SIZE are defined
    in bytes, but the size of 'kprobe_opcode_t' depends on the
    architecture. Therefore, we must be careful when calculating
    addresses using those macros.
    
    Link: https://lkml.kernel.org/r/163163040680.489837.12133032364499833736.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  19. kprobes: Add assertions for required lock

    Add assertions for required locks instead of commenting on them,
    so that lockdep can check the locks automatically.
    
    Link: https://lkml.kernel.org/r/163163039572.489837.18011973177537476885.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  20. kprobes: Use IS_ENABLED() instead of kprobes_built_in()

    Use IS_ENABLED(CONFIG_KPROBES) instead of kprobes_built_in().
    This inline function was introduced only to avoid #ifdefs.
    Since we now have IS_ENABLED(), it is no longer needed.
    
    Link: https://lkml.kernel.org/r/163163038581.489837.2805250706507372658.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  21. kprobes: Fix coding style issues

    Fix coding style issues reported by checkpatch.pl and update
    comments to quote variable names and add "()" to function
    names.
    One TODO comment in __disarm_kprobe() is removed because
    it has been addressed by a following commit.
    
    Link: https://lkml.kernel.org/r/163163037468.489837.4282347782492003960.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  22. kprobes: treewide: Cleanup the error messages for kprobes

    This cleans up the error/notification messages in kprobes-related
    code. Basically, this defines 'pr_fmt()' macros for each file and
    updates the messages to describe
    
     - what happened,
     - what the kernel is going to do or not do,
     - whether the kernel is fine,
     - what the user can do about it.
    
    Also, if a message is not needed (e.g. the function returns a unique
    error code, or another error message is already shown), remove it,
    and replace the message with WARN_*() macros where suitable.
    
    Link: https://lkml.kernel.org/r/163163036568.489837.14085396178727185469.stgit@devnote2
    
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    mhiramat authored and rostedt committed Oct 1, 2021
  23. kprobes: Make arch_check_ftrace_location static

    arch_check_ftrace_location() was introduced as a weak function in
    commit f7f242f ("kprobes: introduce weak
    arch_check_ftrace_location() helper function") to allow architectures
    to handle kprobes call sites on their own.
    
    Recently, the only architecture (csky) to implement
    arch_check_ftrace_location() was migrated to using the common
    version.
    
    As a result, further clean up the code by dropping the weak
    attribute and renaming the function, removing the
    architecture-specific implementation.
    
    Link: https://lkml.kernel.org/r/163163035673.489837.2367816318195254104.stgit@devnote2
    
    Signed-off-by: Punit Agrawal <punitagrawal@gmail.com>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    punitagrawal authored and rostedt committed Oct 1, 2021