ACPICA: Events: Add parallel GPE handling support to fix potential redundant _Exx evaluations
There is a risk that a GPE method/handler may be invoked twice. Let's
consider a case where both GPE0 (RAW_HANDLER) and GPE1 (_Exx) are triggered.
  ========================================+=============================
  IRQ handler (top-half)                  |IRQ polling
  ========================================+=============================
  acpi_ev_detect_gpe()                    |
    LOCK()                                |
    READ (GPE0-7 enable/status registers) |
    ^^^^^^^^^^^^ROOT CAUSE^^^^^^^^^^^^^^^ |
    Walk GPE0                             |
      UNLOCK()                            |
      Invoke GPE0 RAW_HANDLER             |READ (GPE1 enable/status bit)
      LOCK()                              |acpi_ev_gpe_dispatch()
    Walk GPE1                             |  CLEAR (GPE1 enable bit)
      acpi_ev_gpe_dispatch()              |  CLEAR (GPE1 status bit)
        CLEAR (GPE1 enable bit)           |  Evaluate GPE1 _Exx
        CLEAR (GPE1 status bit)           |
        Evaluate GPE1 _Exx                |
    fi                                    |
    UNLOCK()                              |
  ========================================+=============================
If acpi_ev_detect_gpe() is only invoked from the IRQ context, this situation
may not be triggered, as the IRQ chip/driver normally serializes the IRQ
handlers. But it becomes a real problem when _Exx is evaluated from the task
context due to "polling after enabling GPEs". The figure above uses
edge-triggered GPEs to demonstrate such an issue.
In conclusion, there is now an increased chance of evaluating _Lxx/_Exx more
than once for a single status bit flagging. This is not a problem if the
_Lxx/_Exx checks the underlying hardware IRQ reason and thus turns the second
and follow-up evaluations into no-ops. Note that _Lxx should always be
written this way, as a level-triggered GPE could have its status wrongly
duplicated by the underlying IRQ delivery mechanisms. But _Exx quality varies
from BIOS to BIOS, and a low-quality one can trigger issues such as
duplicated button notifications.
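
To illustrate what such an idempotent method/handler looks like in practice,
here is a minimal C sketch; the device registers and helper names below are
hypothetical examples, not ACPICA or real driver APIs:

  #include <stdint.h>

  /* Hypothetical device-side helpers; not part of ACPICA. */
  extern uint32_t dev_read_irq_reason(void);      /* device's own IRQ reason bits */
  extern void dev_ack_irq_reason(uint32_t bits);  /* W1C acknowledge at the device */
  extern void dev_notify_button_press(void);

  /* Body of a well-written GPE event method/handler. */
  void example_gpe_event_handler(void)
  {
      uint32_t reason = dev_read_irq_reason();

      if (!(reason & 0x1)) {
          /*
           * No pending work at the device level: a second or later
           * invocation for the same status bit flagging is a no-op.
           */
          return;
      }

      dev_ack_irq_reason(0x1);      /* acknowledge before acting */
      dev_notify_button_press();    /* exactly one button notification */
  }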
To solve this issue, we need to stop reading a batch of enable/status
register bits and instead read only one GPE's enable/status bit. The
write-1-to-clear (W1C) nature of the GPE status register ensures that
acknowledging one GPE won't affect another GPE's handling process, so the
hardware GPE architecture has already provided us with the mechanism for
implementing such parallelism. We can therefore lock around only one GPE's
handling to parallelize the GPE handling processes:
1. If we can incorporate the GPE enable bit check into detection and ensure
   the atomicity of the following process (top-half IRQ handler):
     READ (enable/status bit)
     if (enabled && raised)
       CLEAR (enable bit)
   and handle the GPE after this process, we can ensure that the GPE handler
   is invoked only once for one status bit flagging.
2. In addition, for edge-triggered GPEs, if we can ensure the atomicity of
   the following process (top-half IRQ handler):
     READ (enable/status bit)
     if (enabled && raised)
       CLEAR (enable bit)
       CLEAR (status bit)
   and handle the GPE after this process, we can ensure that the GPE handler
   is invoked only once for one status bit flagging (see the sketch after
   this list).
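
As a rough illustration of both variants, here is a minimal C sketch of such
a per-GPE top-half sequence; the lock and register helpers are simplified
stand-ins with made-up names, not the actual ACPICA implementation:

  #include <stdbool.h>
  #include <stdint.h>

  /* Simplified stand-ins for ACPICA internals; all names are illustrative. */
  extern void lock_gpe(void);                   /* e.g. take acpi_gbl_gpe_lock */
  extern void unlock_gpe(void);
  extern bool read_gpe_enable(uint32_t gpe);    /* one GPE's enable bit */
  extern bool read_gpe_status(uint32_t gpe);    /* one GPE's status bit */
  extern void clear_gpe_enable(uint32_t gpe);
  extern void clear_gpe_status(uint32_t gpe);   /* W1C: only this GPE's bit */
  extern void dispatch_gpe(uint32_t gpe);       /* handler or _Exx/_Lxx */

  void detect_one_gpe(uint32_t gpe, bool edge_triggered)
  {
      bool pending = false;

      /* Top-half: atomic check-and-acknowledge of a single GPE. */
      lock_gpe();
      if (read_gpe_enable(gpe) && read_gpe_status(gpe)) {
          clear_gpe_enable(gpe);           /* case 1: stop further detections */
          if (edge_triggered)
              clear_gpe_status(gpe);       /* case 2: W1C ack, this GPE only */
          pending = true;
      }
      unlock_gpe();

      /*
       * Handling happens outside the atomic section, so the lock may be
       * released and re-acquired freely there; the cleared enable bit
       * guarantees at most one dispatch per status bit flagging.
       */
      if (pending)
          dispatch_gpe(gpe);
  }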
By doing the cleanup this way, we can remove the duplicate GPE handling code
and collect all of the logic in one function. That function is then safe for
both IRQ interrupt and IRQ polling, and it is safe to release and re-acquire
acpi_gbl_gpe_lock at any time outside of the above "top-half IRQ handler"
process.

Lv Zheng.
Signed-off-by: Lv Zheng <lv.zheng@intel.com>