Remove preempt_enable_no_resched() from powerpc kprobes code #440
Comments
You're right, we should be able to convert all of those to preempt_enable(). I wonder if we should also follow x86 and keep interrupts disabled while single stepping. Can we rely on the task not being scheduled out if we are returning to an irq-disabled context for single stepping? Something like the below - not sure if we need to also save/restore the irq_soft_mask:
Yes, we can rely on the task not being scheduled out if it had interrupts disabled. We might need something slightly more than that (as you say, we'd at least have to save the previous soft mask and restore it when single stepping ends, and we might want to disable MSR[EE] as well during that period so we don't have to worry about pending interrupts that have to be replayed).

So what is the problem with single stepping with interrupts enabled now? We return to an instruction with MSR[SE]=1 and MSR[EE]=1, and we can take an asynchronous interrupt at that point, before executing the instruction? I guess we could suppress that, but then the execution might no longer be architecturally correct. Is it possible to also, say, take a kernel page fault on the instruction (for example, if we traced a copy_from_user() load)? Taking another kind of fault with interrupts disabled here could be a problem (although even having preempt disabled there is unusual too).

So the reason for disabling preemption is that after you prepare the single step for a context, you don't want it to change to another context before you take the trace interrupt? What happens if we did? Kprobes could get itself confused?
We save the kprobe structure being handled in a per-cpu variable to detect if we recurse. This requires us to prevent the task from being scheduled out, so we keep preemption disabled across the return from the initial program check through single stepping the original instruction.

Looking at the git history, x86_64 always disabled interrupts when going back for single stepping, while also disabling preemption (485a76815bd661 ("[PATCH] kprobes: kprobes ported to x86_64")). Commit 2bbda764d720aa ("kprobes/x86: Do not disable preempt on int3 path") removed the preempt disable code from that path. On powerpc, we only ever disabled preemption while single stepping (5bee076d0208a4 ("[PATCH] ppc64: kprobes implementation")).

Today, we don't disable MSR[EE], so we should be ok to take asynchronous interrupts, I think. Page faults are also fine: they are caught via ___do_page_fault()->kprobe_page_fault().
Fixed: torvalds/linux@2fa9482334b05 ("powerpc/kprobes: Remove preempt disable around call to get_kprobe() in arch_"). arch/powerpc contains no more preempt_enable_no_resched() calls that are not commented as to why they are needed and where/how the subsequent resched check is made.
It looks like this has proliferated via copy and paste, and its original purpose has long been forgotten.
x86 cleaned out their preempt_enable_no_resched() calls from kprobes with the likes of commit 2e62024c265aa6 ("kprobes/x86: Use preempt_enable() in optimized_callback()") and commit 2bbda764d720aa ("kprobes/x86: Do not disable preempt on int3 path").
powerpc might be able to do the same. preempt_enable_no_resched() is not required when called with irqs or preemption already disabled, because preempt_enable() will do the right thing if returning to a non-preemptible context. If we were in a preemptible context, it is unclear why interrupt-based preemption would be okay but an explicit schedule in preempt_enable() would be a problem.