JIT hang in xml.etree.iterparse #149481
Labels: 3.15 (new features, bugs and security fixes), interpreter-core (Objects, Python, Grammar, and Parser dirs), topic-JIT, type-bug (an unexpected behavior, bug, or error)
It looks like there's a JIT hang that's been blocking my nightly benchmark runs across all machines for the past two nights (see here and here), as well as @diegorusso's runs on his runner (see here and here).
These runs always hang at the bm_xml_etree.iterparse benchmark on JIT builds.
A minimal reproducer:
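The original snippet didn't survive extraction; as a sketch, the kind of loop that triggers the hang is just iterating `xml.etree.ElementTree.iterparse` to exhaustion on a JIT build (the document sizes and event set here are illustrative, not the benchmark's actual parameters):

```python
# Minimal loop over xml.etree.ElementTree.iterparse; on an affected JIT
# build, the benchmark's equivalent of this loop never terminates.
import io
import xml.etree.ElementTree as ET

# Illustrative input; large enough that the inner loop gets hot.
XML = "<root>" + "<item>x</item>" * 100 + "</root>"

def run():
    tags = []
    # iterparse returns an iterator whose __next__ is defined at the
    # Python level, i.e. the tp_iternext == slot_tp_iternext case.
    for event, elem in ET.iterparse(io.StringIO(XML), events=("end",)):
        tags.append(elem.tag)
    return tags

if __name__ == "__main__":
    print(len(run()))  # 100 <item> end events plus one for <root>
```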
After bisecting, it looks like this was introduced by #148745. I'm not sure about the exact mechanism, but two observations that may or may not be related:
- `xml.etree.iterparse` has `tp_iternext == slot_tp_iternext`.
- The change special-cases `PyGen_Type` but doesn't have a corresponding exclusion for the `slot_tp_iternext` case.

Empirically, also excluding the case where `tp_iternext == slot_tp_iternext` makes the hang go away in my local builds, but I don't know whether that's the real fix or just an incidental one that happens to side-step the trigger. Would appreciate a closer look from someone who knows the trace semantics here.
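For anyone reproducing this, the first observation can be seen from pure Python: `iterparse` builds a throwaway class whose `__next__` is a Python-level class attribute (a bound generator method), so the type's `tp_iternext` slot is filled by `slot_tp_iternext` even though a generator does the actual work. A sketch (relying on the current `ElementTree.iterparse` implementation detail, so treat it as illustrative):

```python
# Illustrative: shows from pure Python why iterparse's iterator is the
# slot_tp_iternext case rather than PyGen_Type itself.
import io
import types
import xml.etree.ElementTree as ET

it = ET.iterparse(io.StringIO("<root/>"))

# __next__ lives in the class dict, so tp_iternext is slot_tp_iternext,
# not a C-level fast path on the type.
print("__next__" in vars(type(it)))         # True

# The iterator object is not a generator, even though its __next__ is
# backed by one, so a PyGen_Type check alone won't match it.
print(isinstance(it, types.GeneratorType))  # False
```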
cc: @markshannon @NekoAsakura