Remove preempt_enable_no_resched() from powerpc kprobes code #440

Closed
npiggin opened this issue Oct 18, 2022 · 4 comments

npiggin commented Oct 18, 2022

It looks like preempt_enable_no_resched() has proliferated through the powerpc kprobes code via copy and paste, and its original purpose has long been forgotten.

x86 cleaned out their preempt_enable_no_resched calls from kprobes with the likes of commit 2e62024c265aa6 ("kprobes/x86: Use preempt_enable() in optimized_callback()") and commit 2bbda764d720aa ("kprobes/x86: Do not disable preempt on int3 path").

powerpc might be able to do the same. preempt_enable_no_resched() is not required when called with irqs or preemption already disabled, because preempt_enable() will do the right thing when returning to a non-preemptible context. And if we were in a preemptible context, it is unclear why interrupt-based preemption would be okay but an explicit schedule from preempt_enable() would be a problem.
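
For reference, the difference between the two helpers is small. A rough paraphrase of the CONFIG_PREEMPTION definitions in include/linux/preempt.h (a sketch, not the exact kernel source) is:

/* Paraphrased sketch of include/linux/preempt.h under CONFIG_PREEMPTION */
#define preempt_enable() \
do { \
	barrier(); \
	if (unlikely(preempt_count_dec_and_test())) \
		__preempt_schedule();		/* reschedule point */ \
} while (0)

#define preempt_enable_no_resched() \
do { \
	barrier(); \
	preempt_count_dec();			/* no reschedule point */ \
} while (0)

A caller of the no_resched variant is therefore responsible for making sure a reschedule check happens some other way, which is exactly what the copied powerpc call sites never document.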

rnav commented Oct 19, 2022

You're right. We should be able to convert all of those to preempt_enable().

I wonder if we should also follow x86 and keep interrupts disabled while single stepping. Can we rely on the task not being scheduled out if we are returning to an irq disabled context for single stepping?

Something like the below - not sure if we need to also save/restore the irq_soft_mask:

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 6667fb2f69b491..23ccf2297b8b3d 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -218,6 +218,7 @@ static nokprobe_inline void prepare_singlestep(struct kprobe *p, struct pt_regs
         * if the trap is taken or not
         */
        regs_set_return_ip(regs, (unsigned long)p->ainsn.insn);
+       irq_soft_mask_regs_set_state(regs, IRQS_DISABLED);
 }
 
 static nokprobe_inline void save_previous_kprobe(struct kprobe_ctlblk *kcb)
@@ -306,11 +307,6 @@ int kprobe_handler(struct pt_regs *regs)
            (!(regs->msr & MSR_IR) || !(regs->msr & MSR_DR)))
                return 0;
 
-       /*
-        * We don't want to be preempted for the entire
-        * duration of kprobe processing
-        */
-       preempt_disable();
        kcb = get_kprobe_ctlblk();
 
        p = get_kprobe(addr);
@@ -367,7 +363,6 @@ int kprobe_handler(struct pt_regs *regs)
 
                        if (ret > 0) {
                                restore_previous_kprobe(kcb);
-                               preempt_enable_no_resched();
                                return 1;
                        }
                }
@@ -380,7 +375,6 @@ int kprobe_handler(struct pt_regs *regs)
        if (p->pre_handler && p->pre_handler(p, regs)) {
                /* handler changed execution path, so skip ss setup */
                reset_current_kprobe();
-               preempt_enable_no_resched();
                return 1;
        }
 
@@ -393,7 +387,6 @@ int kprobe_handler(struct pt_regs *regs)
 
                        kcb->kprobe_status = KPROBE_HIT_SSDONE;
                        reset_current_kprobe();
-                       preempt_enable_no_resched();
                        return 1;
                }
        }
@@ -402,7 +395,6 @@ int kprobe_handler(struct pt_regs *regs)
        return 1;
 
 no_kprobe:
-       preempt_enable_no_resched();
        return ret;
 }
 NOKPROBE_SYMBOL(kprobe_handler);
@@ -488,8 +480,6 @@ int kprobe_post_handler(struct pt_regs *regs)
        }
        reset_current_kprobe();
 out:
-       preempt_enable_no_resched();
-
        /*
         * if somebody else is singlestepping across a probe point, msr
         * will have DE/SE set, in which case, continue the remaining processing
@@ -527,7 +517,6 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
                        restore_previous_kprobe(kcb);
                else
                        reset_current_kprobe();
-               preempt_enable_no_resched();
                break;
        case KPROBE_HIT_ACTIVE:
        case KPROBE_HIT_SSDONE:

npiggin commented Oct 20, 2022

Yes, we can rely on the task not being scheduled if it had interrupts disabled. We might need something slightly more than that: as you say, we'd at least have to save the previous soft mask and restore it when single stepping ends, and we might want to disable MSR[EE] as well during that period so we don't have to worry about pending interrupts that have to be replayed.
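
A rough illustration of what that could look like, building on the diff above. This is a hypothetical sketch only: the saved_softe field does not exist in struct kprobe_ctlblk, it assumes 64-bit (regs->softe), and whether hard-disabling MSR[EE] across the step is acceptable is exactly what is being discussed here:

/* Hypothetical sketch only, not an actual patch */
static nokprobe_inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
{
	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

	/* ... existing single-step setup elided ... */
	regs_set_return_ip(regs, (unsigned long)p->ainsn.insn);

	kcb->saved_softe = regs->softe;			/* hypothetical field: remember soft-mask state */
	irq_soft_mask_regs_set_state(regs, IRQS_DISABLED);
	regs_set_return_msr(regs, regs->msr & ~MSR_EE);	/* no pending interrupts needing replay */
}

kprobe_post_handler() would then need to restore both the soft-mask state and MSR[EE] from the saved values before returning to the interrupted context.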

So what is the problem with single-stepping with interrupts enabled now? We return to an instruction with MSR[SE]=1 and MSR[EE]=1, and we can take an asynchronous interrupt at that point, before executing the instruction? I guess we could suppress that, but then the execution might no longer be architecturally correct. Is it also possible to take, say, a kernel page fault on the instruction (for example, if we traced a copy_from_user() load)? Taking another kind of fault with interrupts disabled here could be a problem (although even having preempt disabled there is unusual too).

So the reason for disabling preemption is that after you prepare single-stepping for a context, you don't want it to switch to another context until you take the trace interrupt? What happens if we did? Could kprobes get itself confused?

rnav commented Oct 20, 2022

We save the kprobe structure being handled in a per-cpu variable to detect if we recurse. This requires us to prevent the task from being scheduled out, so we keep preemption disabled across the return from the initial program check through single stepping the original instruction.
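
For context, the per-CPU state in question looks roughly like this (paraphrased from arch/powerpc/kernel/kprobes.c and include/linux/kprobes.h; treat the details as approximate):

/* Paraphrased sketch of the per-CPU kprobe bookkeeping */
DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);

static inline struct kprobe *kprobe_running(void)
{
	return __this_cpu_read(current_kprobe);
}

static inline struct kprobe_ctlblk *get_kprobe_ctlblk(void)
{
	return this_cpu_ptr(&kprobe_ctlblk);
}

If the task could be scheduled out (and possibly migrated) between the initial program check and the single-step trap, this per-CPU state would describe the wrong probe, which is why preemption has to stay disabled for the whole window.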

Looking at the git history, it looks like x86_64 always disabled interrupts when going back for single stepping, while also disabling preemption (485a76815bd661 ("[PATCH] kprobes: kprobes ported to x86_64")). Commit 2bbda764d720aa ("kprobes/x86: Do not disable preempt on int3 path") removed preempt disable code from that path.

On powerpc, we only ever disabled preemption while single stepping (5bee076d0208a4 ("[PATCH] ppc64: kprobes implementation")).

Today, we don't disable MSR[EE], so we should be OK to take asynchronous interrupts, I think. Page faults are also fine; they are caught via ___do_page_fault()->kprobe_page_fault().
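
The generic hook referred to here is roughly the following (paraphrased from include/linux/kprobes.h; the exact guards have varied between kernel versions):

/* Paraphrased sketch of the generic page-fault hook for kprobes */
static nokprobe_inline bool kprobe_page_fault(struct pt_regs *regs, unsigned int trap)
{
	/* Only intercept kernel-mode faults taken while a probe is active */
	if (!IS_ENABLED(CONFIG_KPROBES) || user_mode(regs) || !kprobe_running())
		return false;

	return kprobe_fault_handler(regs, trap);
}

That lands in the powerpc kprobe_fault_handler() touched by the diff above, which unwinds the in-progress probe when the single-stepped instruction faults.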

npiggin commented Mar 27, 2023

Fixed:

torvalds/linux@2fa9482334b05 ("powerpc/kprobes: Remove preempt disable around call to get_kprobe() in arch_prepare_kprobe()")
torvalds/linux@266b1991a433c ("powerpc/kprobes: Use preempt_enable() rather than the no_resched variant")

arch/powerpc no longer contains any preempt_enable_no_resched() calls that are not commented to explain why they are needed and where/how the subsequent resched check is made.

npiggin closed this as completed Mar 27, 2023
mpe modified the milestones: v6.3, v6.2 Mar 28, 2023