GH-126910: Add gdb support for unwinding JIT frames #146071

Open
diegorusso wants to merge 9 commits into python:main from diegorusso:add-gdb-support

Conversation

@diegorusso
Contributor

@diegorusso diegorusso commented Mar 17, 2026

This PR adds support to GDB for unwinding JIT frames by emitting eh frames.
It reuses part of the existing perf_jit infrastructure from @pablogsal.

This is part of the overall plan laid out here: #126910 (comment)

The output in GDB looks like:

Program received signal SIGINT, Interrupt.
0x0000fffff7fb50f8 in py::jit_executor:<jit> ()
(gdb) bt
#0  0x0000fffff7fb50f8 in py::jit_executor:<jit> ()
#1  0x0000fffff7fb4050 in py::jit_shim:<jit> ()
#2  0x0000aaaaaad5e314 in _PyEval_EvalFrameDefault (tstate=0xfffff7fb80f0, frame=0xfffff774bab0, throwflag=6, throwflag@entry=0)
    at ../../Python/generated_cases.c.h:5711
#3  0x0000aaaaaad61350 in _PyEval_EvalFrame (tstate=0xaaaaab1d57b0 <_PyRuntime+344632>, frame=0xfffff7fb8020, throwflag=0)
    at ../../Include/internal/pycore_ceval.h:122
...

@diegorusso diegorusso added the 🔨 test-with-buildbots Test PR w/ buildbots; report in status section label Mar 18, 2026
@bedevere-bot

🤖 New build scheduled with the buildbot fleet by @diegorusso for commit ac018d6 🤖

Results will be shown at:

https://buildbot.python.org/all/#/grid?branch=refs%2Fpull%2F146071%2Fmerge

If you want to schedule another build, you need to add the 🔨 test-with-buildbots label again.

@bedevere-bot bedevere-bot removed the 🔨 test-with-buildbots Test PR w/ buildbots; report in status section label Mar 18, 2026
@pablogsal
Member

I have some questions about the EH frame generation and how it applies to the different code regions.

Looking at jit_record_code, it's called in two places:

  1. For jit_shim (line 811): the entry shim compiled from Tools/jit/shim.c
  2. For jit_executor (line 757): the full executor code region (code_size + state.trampolines.size)

Both end up calling _PyJitUnwind_GdbRegisterCode, which builds the same EH frame via _PyJitUnwind_BuildEhFrame.

The EH frame in elf_init_ehframe describes a specific prologue/epilogue sequence. On x86_64 for example:

push %rbp          (1 byte)
mov %rsp, %rbp     (3 bytes)
call *%rcx         (2 bytes)
pop %rbp           (1 byte)
ret

I understand how this is correct for jit_shim. Looking at Tools/jit/shim.c, it's a normal C function that calls into the executor:

_Py_CODEUNIT *
_JIT_ENTRY(...) {
    jit_func_preserve_none jitted = (jit_func_preserve_none)exec->jit_code;
    return jitted(exec, frame, stack_pointer, tstate, ...);
}

The compiler will emit exactly the prologue/epilogue the EH frame describes.

But I don't understand how the same EH frame is correct for jit_executor. The executor code region is a concatenation of many stencils, each compiled from Tools/jit/template.c with __attribute__((preserve_none)), chaining together via __attribute__((musttail)) tail calls. These stencils don't have the push rbp / mov rsp,rbp prologue that the EH frame describes. They use a completely different calling convention.

The FDE covers the full code_size + trampolines.size range but the CFI instructions only describe ~7 bytes of prologue/epilogue. DWARF will apply the last rule (CFA = RSP + 8 on x86_64) to all remaining addresses in the range. I don't understand why that rule would be correct at arbitrary points within the stencil code. Is it guaranteed that preserve_none stencils never modify RSP? Or is there something else going on that makes this work?
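
To make the "last rule wins" behaviour concrete, here is a toy sketch (plain Python, invented offsets loosely matching the prologue above, not CPython code) of how an unwinder picks the CFI row for a PC: the final rule really does cover every remaining address in the FDE's range.

```python
# Hypothetical CFI rows: (pc_offset_where_rule_starts, cfa_rule).
# Offsets are invented, loosely matching the x86_64 prologue above.
ROWS = [
    (0, "CFA = RSP + 8"),    # at function entry
    (1, "CFA = RSP + 16"),   # after push %rbp
    (4, "CFA = RBP + 16"),   # after mov %rsp, %rbp
    (7, "CFA = RSP + 8"),    # after pop %rbp (epilogue)
]

def rule_for(pc_offset):
    """The unwinder uses the last row whose start is <= pc_offset,
    so the final rule covers *all* remaining addresses in the FDE."""
    applicable = [rule for start, rule in ROWS if start <= pc_offset]
    return applicable[-1]
```

So a PC thousands of bytes into the stencil region still gets the epilogue's `CFA = RSP + 8` rule.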

The test (test_jit.py) sets a breakpoint at id(42) which hits in the interpreter, not in the middle of a stencil. So the test verifies that the symbols appear in GDB's backtrace, but I don't think it exercises unwinding from an arbitrary point within the executor code region. Could we add a test that triggers unwinding from inside JIT code (e.g., via a signal or Ctrl+C while executing JIT code)?

Am I missing something about how the stencils interact with the stack, or is the EH frame intentionally approximate for the executor region?

Member

@pablogsal pablogsal left a comment


A bunch of questions I have from reading the code so far

struct jit_code_entry *first_entry;
};

static volatile struct jit_descriptor __jit_debug_descriptor = {
Member


Should these be non-static? The GDB JIT interface spec says GDB locates __jit_debug_descriptor and __jit_debug_register_code by name in the symbol table. With static linkage they would be invisible in .dynsym on stripped builds and when CPython is loaded as a shared library via dlopen. Am I missing something, or would this silently break in release/packaged builds where .symtab is stripped?

Maybe also worth adding __attribute__((used)) to prevent the linker from eliding them?
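
As a quick sanity check for the visibility concern, here is a small sketch (plain Python, assuming Linux dlopen semantics; dlsym only sees .dynsym, which approximates the stripped-build case):

```python
import ctypes

def has_dynamic_symbol(name):
    """Probe the global dynamic namespace (dlopen(NULL)) for a symbol.
    A static symbol, or one stripped from .dynsym, will not resolve here."""
    try:
        ctypes.CDLL(None)[name]  # dlsym() through the global handle
        return True
    except (AttributeError, OSError):
        return False
```

With the symbols exported, `has_dynamic_symbol("__jit_debug_descriptor")` should come back True on a JIT-enabled build.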

Contributor Author


Yes, you are right. Instead of removing the static, I've exported them with the Py_EXPORTED_SYMBOL macro.

id(42)
return

warming_up = True
Member


Could this loop hang? When warming_up=True, the call passes warming_up_caller=True which returns immediately at line 8, so the recursive body never actually executes. If the JIT does not activate via some other path, would this not spin forever until the timeout kills it? Should there be a max iteration count as a safety net?

Also, line 16 uses bitwise & instead of and. Was that intentional? It means is_active() is always evaluated even when is_enabled() is False.
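
A minimal illustration of that difference (hypothetical stand-ins for is_enabled/is_active):

```python
calls = []

def is_enabled():
    calls.append("enabled")
    return False

def is_active():
    calls.append("active")
    return True

# `and` short-circuits: when is_enabled() is False, is_active() never runs.
is_enabled() and is_active()
short_circuit_calls = list(calls)

# `&` is bitwise: both operands are always evaluated before combining.
calls.clear()
is_enabled() & is_active()
bitwise_calls = list(calls)
```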

Contributor Author


I've simplified the test; the loop is now more controlled and deterministic.

Python/jit.c Outdated
return;
}
_PyJitUnwind_GdbRegisterCode(
code_addr, (unsigned int)code_size, entry, filename);
Member


code_size comes in as size_t but gets cast to unsigned int here. I know JIT regions will not be 4GB, but should the API just take size_t throughout for consistency?

Contributor Author


This is now done.

@diegorusso
Contributor Author

What this change synthesises for jit_executor is one unwind description for the executor as a whole, not compiler-emitted per-stencil CFI. Because the stencils are musttail-chained, the jumps between stencils do not add extra native call frames. The unwind job here is just to recover the caller of the executor frame. We don't want to describe each stencil as its own frame.

When GDB stops at a PC inside py::jit_executor:<jit>:

  • it finds the FDE whose range covers that PC
  • takes the CFI row for that PC,
  • computes the CFA from that row
  • uses the CFA rules to recover the caller registers and return PC.

On AArch64, for most of the covered executor range, the synthetic CFI says:

  • CFA = x29 + 16
  • saved x29 at CFA - 16
  • saved x30 at CFA - 8.
    That is enough for GDB to recover the caller frame in py::jit_shim:<jit>, and then continue unwinding into _PyEval_*.
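
For reference, those three rules compress into a handful of CFI opcodes. A minimal sketch (standard DWARF constants; the factored offsets assume the usual AArch64 data alignment factor of -8 — this is illustrative, not the PR's actual emitter):

```python
DW_CFA_def_cfa = 0x0C  # operands: ULEB128 register, ULEB128 offset
DW_CFA_offset  = 0x80  # low 6 bits: register; operand: ULEB128 factored offset

def uleb128(value):
    """Minimal ULEB128 encoder (every value here fits in one byte)."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

X29, X30 = 29, 30
cfi = (
    bytes([DW_CFA_def_cfa]) + uleb128(X29) + uleb128(16)  # CFA = x29 + 16
    + bytes([DW_CFA_offset | X29]) + uleb128(2)           # x29 saved at CFA - 2*8
    + bytes([DW_CFA_offset | X30]) + uleb128(1)           # x30 saved at CFA - 1*8
)
```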

Good catch on the testing gap. I've now added a new test that breaks inside the JIT executor. It still breaks at builtin_id, but GDB then finishes out through the C helper frames until the selected frame is py::jit_executor:<jit> (thanks to some GDB-Python scripting), single-steps twice inside the executor, and only then runs bt.
The backtrace is now taken with the current PC in executor code itself, and it unwinds through py::jit_shim:<jit> and then back into _PyEval_*.

@Fidget-Spinner
Member

Fidget-Spinner commented Mar 25, 2026

@diegorusso @pablogsal I think I may have come up with a solution that works.

EDIT: I think GDB doesn't only use backtrace. So we're still stuck. Sorry for the noise!

Background info (skip if not interested):

  1. glibc seems to call out to libgcc on Linux when it needs to unwind in backtrace from execinfo.h.
  2. libgcc does not seem to implement proper frame-pointer backchaining for x86_64 and AArch64, only PPC.
  3. So it seems we need eh_frames for backtrace.

The current issue:

  1. In DWARF, you can specify in the eh_frame the canonical frame address (CFA). Traditionally, it's defined as rsp + offset.
  2. The problem with the current PR, however, is that each stencil changes rsp. That means the CFA is usually wrong for the executor.
  3. A possible solution is to generate DWARF opcodes for each stencil that say "bump rsp", but that's slow and complicated.

The solution:

  1. Notice: we already have frame pointers in the prologue and preserve them! That means you can just tie the CFA to rbp at a fixed offset (rbp + 16) all the time, instead of tying it to an rsp that changes. I got this idea and also checked with Cranelift's Chris Fallin, who said they do the same. Thanks a lot Chris!
  2. So that means, with frame pointers, eh frame creation is a lot simpler. You can have just one eh frame for the whole JIT code still, we just need the eh_frame to point to rbp + 16.
  3. This ensures correctness while also allowing for a simple implementation.

This should work with backtrace from execinfo.h without issues. It should even work with gdb step debugging/bt, except that it might be broken at the function prologue. However, the most important thing is that we unbreak all C extension code that uses backtrace! Also, backtrace should be fast, as our DWARF would be tiny and simple.

TLDR: frame pointers = eh_frame is simple.
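
The scheme above makes unwinding a simple linked-list walk: at each frame, [rbp] holds the caller's rbp and [rbp + 8] the return address, so CFA = rbp + 16. A toy model (invented addresses, 8-byte words):

```python
WORD = 8
# Toy stack memory: address -> 8-byte word (all values invented).
memory = {
    # executor frame: rbp = 0x7000
    0x7000: 0x7100,      # saved caller rbp (the shim's)
    0x7008: 0x40F050,    # return address into jit_shim
    # shim frame: rbp = 0x7100
    0x7100: 0x7200,      # saved caller rbp (the interpreter's)
    0x7108: 0x40A314,    # return address into _PyEval_EvalFrameDefault
    0x7200: 0x0,         # sentinel: end of chain
    0x7208: 0x0,
}

def walk(rbp):
    """Collect return addresses by following the rbp backchain.
    CFA for each frame is rbp + 16; the return address sits at CFA - 8."""
    pcs = []
    while rbp and memory.get(rbp + WORD):
        pcs.append(memory[rbp + WORD])
        rbp = memory[rbp]
    return pcs
```

Starting from the executor's rbp, the walk recovers the shim's return address and then the interpreter's, which is exactly the backtrace we want.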

@pablogsal
Member

pablogsal commented Mar 25, 2026

@diegorusso I have to say that I am tremendously confused here.

If GDB or backtrace() stops at an arbitrary PC inside py::jit_executor:<jit>, the unwind info for that exact PC should let the unwinder reconstruct the caller frame (py::jit_shim:<jit>) and then continue into _PyEval_*.

So the real question is not “does the FDE cover the address range?” and it is not “do the stencils form one logical frame?”. The real question is: does the CFI row that applies at that PC actually describe the machine state there?

That is the part I do not think has been explained.

I agree with the narrow musttail point: tail-chaining the stencils means you do not accumulate one native call frame per stencil. Fine. But that only tells us that we want to unwind the executor as one logical frame. It does not tell us that one fixed synthetic unwind recipe is valid everywhere inside the executor blob.

And that is exactly where I think the argument goes off the rails.

jit_executor is not one ordinary C function with one stable prologue/epilogue. It is a concatenation of many preserve_none stencils, glued together with musttail. For a single synthetic FDE to be correct across the whole region, there has to be some invariant that says “for any PC in executor code, the CFA and saved return state look like this”. I do not see that invariant stated anywhere, and the current explanation seems to jump from “musttail” straight to “the unwind is correct”, which are not the same thing.

A concrete x86_64 example of why this seems wrong to me:

with the same sort of flags used for executor stencils, a preserve_none + musttail function can compile to something as trivial as

jmp callee

or, if it needs temporary stack space / spills, something more like

subq $24, %rsp
...
addq $24, %rsp
jmp callee

In the first case there is no %rbp frame at all. In the second case the CFA is temporarily %rsp-relative and changes inside the body. So I do not understand how one synthetic %rbp-based description for the entire covered executor range is supposed to be generally correct.

For jit_shim I can at least see the intended story, because it is one ordinary non-tail C function that calls into JIT code. For jit_executor, I still do not see what makes the unwind recipe valid for arbitrary PCs inside the blob.

Also, I rebuilt the branch locally and tried the exact “finish to py::jit_executor:<jit>, step twice, then bt” flow. On x86_64 I still get:

#0  py::jit_executor:<jit> ()
#1  ?? ()
...
Backtrace stopped: previous frame inner to this frame (corrupt stack?)

So this is not just a theoretical concern for me. I still do not understand why the model being described here is supposed to work. I am of course not objecting to the goal. I am saying I still do not see the correctness argument. If the claim is that this is actually a correct unwind description for jit_executor as a whole, then I think what is missing from the discussion is the key invariant: what exactly is guaranteed to be true about the CFA / saved FP / saved return address at an arbitrary PC inside executor code that makes this one synthetic FDE valid?
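
To make the failure mode concrete, here is a toy model (invented addresses) of what a synthetic rsp-relative rule reads before and after a stencil adjusts %rsp:

```python
WORD = 8
# Toy stack at the moment the shim called into the executor: the real
# return address was pushed at the then-current rsp (values invented).
memory = {0x7FF0: 0x40F050}      # return address into jit_shim
rsp_at_entry = 0x7FF0

def cfa_rule_return_addr(rsp):
    """Synthetic rule from the FDE: CFA = rsp + 8, return addr at CFA - 8."""
    cfa = rsp + WORD
    return memory.get(cfa - WORD)  # i.e. memory[rsp]

# At entry the rule happens to be right...
at_entry = cfa_rule_return_addr(rsp_at_entry)
# ...but after a stencil does `subq $24, %rsp`, the same rule reads a slot
# 24 bytes below the real one and finds garbage (here: nothing at all).
after_sub = cfa_rule_return_addr(rsp_at_entry - 24)
```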

@diegorusso
Contributor Author

Thanks for the comment. I regenerated the x86_64 and AArch64 stencils after the recent frame-pointer changes. What we have today is that the shim gets a real frame-pointer prologue, but the executor stencils are still not uniformly rbp/x29-framed, so I don't think the current generated code is enough to justify a single executor-wide CFA = rbp + 16 / x29 + const rule for arbitrary PCs in the blob.
If we want to go in that direction, we would need to force frame pointers for the executor stencils too, not just for the shim (which doesn't make much sense, as we moved away from them!).

The current implementation is still one synthetic executor-wide FDE. The unwinder uses the current PC to select that FDE and apply its CFI to recover the caller frame. That works where the actual machine state matches the synthetic rule at the stop PC, but it is still approximate executor-wide unwind metadata, not exact per-stencil CFI.

Separately, once this PR lands, wiring up libgcc-backed backtrace should be fairly easy. We already synthesise .eh_frame; the remaining work is to call the appropriate __register_frame* and deregistration API for that blob so the unwinder can see it.

@diegorusso
Contributor Author

Ok, I think now I understand. After re-checking the generated stencils I agree the current explanation was too bold.

musttail only establishes the narrow point that stencil-to-stencil transitions do not accumulate one native call frame per stencil; it does not by itself establish the stronger property needed for unwinding: that for an arbitrary PC inside jit_executor, the CFA and saved return state always have a shape described by one executor-wide FDE.

That stronger property is the missing invariant here.

After looking again at the regenerated x86_64 and AArch64 stencils, I don't think we have that invariant today:

  • only jit_shim gets a guaranteed frame-pointer-based prologue
  • executor stencils are not uniform
    • on x86_64, many executor stencils are frameless and/or adjust rsp
    • on AArch64, many executor stencils just save x30 and adjust sp, without establishing x29
    • only a small subset of executor stencils actually materialise a conventional rbp/x29 frame

I cannot justify the current synthetic executor-wide FDE as being correct for arbitrary PCs in the executor blob. The new test I added is still useful, but it proves something narrower: that the synthetic FDE works for the exercised in-executor stop. It does not prove that the same CFI is exact for every interior PC in the region (as you showed in your example).

I think the real options are:

  1. Make the invariant true in codegen/stencils by forcing all executor stencils to follow one documented frame-layout rule, so one FDE is actually justified. For example:
     • x86_64: every executor stencil establishes the same rbp-based frame shape
     • AArch64: every executor stencil establishes the same x29/x30-based frame shape, or at least guarantees that x29 is stable and the original return-to-shim state is always recoverable in one fixed way
  2. Emit finer-grained unwind metadata: keep the current mixed stencil shapes, but stop pretending one unwind recipe covers the whole executor. That means multiple FDEs or per-range metadata.
  3. Narrow the claim: keep executor symbolisation, but do not claim a correct executor-wide unwind description until we have either (1) or (2).
  4. Something else?

The current implementation does not yet have the invariant needed to justify one executor-wide FDE for jit_executor, but at the same time I don't really like the suggestions above.

Let me think about it

@Fidget-Spinner
Copy link
Member

the executor stencils still are not uniformly rbp/x29-framed,

The current generation reserves rbp, so all current stencils assume an rbp. Do you think it would fix it if we emitted our own prologue for the very first JIT executor uop, i.e. (push %rbp; movq %rsp, %rbp), and a teardown (popq %rbp) at all rets? I have a working branch that does that. FWIW, it can be done quite easily using the assembly manipulator we have in the JIT. Will that make it appropriately rbp/x29-framed?

call the appropriate __register_frame* and deregistration API for that blob so the unwinder can see it.

Unfortunately, it seems you're right here. I dug around libgcc a little more and that's the only interface I see that intercepts _Unwind_Find_FDE. The function is public but undocumented, which is annoying. I'm just shocked that libgcc does not seem to use frame pointers as a fallback on x86_64 or AArch64.

@diegorusso
Copy link
Contributor Author

The current generation reserve the rbp. So all current stencils assume an rbp.

Not all of them. See the _SET_IP family, but you can see others as well. On AArch64, if we reserve the frame pointer, it will barely be touched (just a few uops set it). If we don't reserve it, then we get the standard prologue/epilogue for the majority of the uops.

I'm not entirely sure your statement is true.

@Fidget-Spinner
Copy link
Member

_SET_IP

Huh that's surprising! On x86_64, the current main produces code that doesn't touch rbp at all (from manual inspection at least). I wonder why it's different on AArch64, thanks for reporting back.

@Fidget-Spinner
Copy link
Member

Oh sorry I'm wrong, not main, my branch, I had to pass the usual:

            "-fno-omit-frame-pointer",
            "-mno-omit-leaf-frame-pointer",

to it to get things like that.

@diegorusso
Copy link
Contributor Author

Also reserving rbp/x29 by itself is not enough. What we need is an actual executor-wide frame record.

If your branch emits one prologue at executor entry and one matching epilogue on every path that leaves the executor, then yes, I think that gives the invariant we are missing. It means that for any PC in executor code after the prologue, the caller state is recoverable from one fixed frame layout.

The tricky point is that we need to apply the epilogue to all executor entry/exit paths (normal return, error/cold exits, trampolines, deopt paths, etc.). Can we do that?

@Fidget-Spinner
Copy link
Member

Fidget-Spinner commented Mar 26, 2026

If your branch emits one prologue at executor entry and one matching epilogue on every path that leaves the executor, then yes, I think that gives the invariant we are missing. It means that for any PC in executor code after the prologue, the caller state is recoverable from one fixed frame layout.

Yes it does that. The key is to use the assembly optimizer to rewrite all retq to pop %rbp; retq and remove the per-function prologue; we then introduce a single prologue uop, say _EXECUTOR_PROLOGUE, that does the push %rbp dance for the entire executor.

Tier2-Tier2 transitions can also be handled: since it's treated as a normal jmp, we just skip the prologue by holding a pointer to the executor without it.
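A rough sketch of the rewrite described above (the list-of-instruction-strings representation is a hypothetical stand-in for the real Tools/jit assembly pass): strip any per-stencil rbp prologue, turn every retq into popq %rbp; retq, and put one shared prologue at executor entry:

```python
# Illustrative only: model stencils as lists of instruction strings and
# apply the rewrite that gives the executor one frame record.

EXECUTOR_PROLOGUE = ["pushq %rbp", "movq %rsp, %rbp"]

def rewrite_stencil(lines):
    """Drop a leading rbp prologue (if any) and pair every ret with a
    matching `popq %rbp`, so the executor-wide frame is torn down on
    every exit path."""
    start = 2 if lines[:2] == EXECUTOR_PROLOGUE else 0
    out = []
    for line in lines[start:]:
        if line == "retq":
            out.append("popq %rbp")  # tear down the executor frame
        out.append(line)
    return out

def build_executor(stencils):
    """One prologue for the whole executor, rewritten stencils after it."""
    body = []
    for stencil in stencils:
        body += rewrite_stencil(stencil)
    return EXECUTOR_PROLOGUE + body

framed = ["pushq %rbp", "movq %rsp, %rbp", "addq $1, %rax", "retq"]
frameless = ["subq $2, %rax", "retq"]
code = build_executor([framed, frameless])

# Exactly one prologue at entry, and every ret is preceded by popq %rbp.
assert code[:2] == EXECUTOR_PROLOGUE
assert all(code[i - 1] == "popq %rbp"
           for i, ins in enumerate(code) if ins == "retq")
```

Tier2-to-Tier2 jumps would enter past the shared prologue, matching the "skip the prologue by holding a pointer to the executor without it" idea above.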

return NULL;
}

int ret = PyUnstable_WritePerfMapEntry(code_addr, code_size, entry_name);
Copy link
Member


Silence 64-bit Windows warnings

'function': conversion from 'size_t' to 'unsigned int', possible loss of data 
Suggested change
int ret = PyUnstable_WritePerfMapEntry(
code_addr, (unsigned int)code_size, entry_name);

Copy link
Contributor Author


Thanks, it is a leftover. I'll fix it.

@diegorusso
Copy link
Contributor Author

diegorusso commented Mar 26, 2026

Today @Fidget-Spinner, @markshannon and I had a call, and we think we have a way forward to a good story around the invariant: the key point is that the stencils must not touch the frame pointer at all, except for the shim of course.

In order to get there we need to implement a couple of things:

  • Reserve the frame pointer (x29) on AArch64 as well. This is enough to keep it out of the stencils' prologues/epilogues.
  • Change _Py_get_machine_stack_pointer not to use __builtin_frame_address

With these 2 changes, no x29/rbp is ever touched (written or read) by any stencil.

@Fidget-Spinner on my machines I see no differences between AArch64 and x86_64 stencils in terms of x29/rbp being touched. Can you please double-check that this is the case?
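One low-tech way to double-check this is to grep the stencils' disassembly for any mention of the frame-pointer registers. A sketch (the disassembly sample below is made up; in practice you would feed in `objdump -d` output for the generated stencils):

```python
# Scan objdump-style disassembly text for instructions that read or
# write the frame pointer (rbp on x86_64, x29 on AArch64).
import re

FP_REGS = re.compile(r"%rbp|\brbp\b|\bx29\b|\bw29\b")

def frame_pointer_uses(disasm):
    """Return the disassembly lines that mention rbp/x29."""
    return [line for line in disasm.splitlines() if FP_REGS.search(line)]

sample = """\
   0: 48 83 ec 20   subq $0x20, %rsp
   4: 55            pushq %rbp
   5: 48 89 e5      movq %rsp, %rbp
   8: c3            retq
"""

hits = frame_pointer_uses(sample)
assert len(hits) == 2            # only the push and the mov touch %rbp
assert all("%rbp" in h for h in hits)
```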



6 participants