
Commit 53df288

Dapeng Mi authored and Greg Kroah-Hartman (gregkh) committed
perf/x86/intel: Disable PMI for self-reloaded ACR events
commit 1271aec upstream.

On platforms with Auto Counter Reload (ACR) support, such as NVL, a
"NMI received for unknown reason 30" warning is observed when running
multiple events in a group with ACR enabled:

  $ perf record -e '{instructions/period=20000,acr_mask=0x2/u,\
                     cycles/period=40000,acr_mask=0x3/u}' ./test

The warning occurs because the Performance Monitoring Interrupt (PMI)
is enabled for the self-reloaded event (the cycles event in this case).
According to the Intel SDM, the overflow bit
(IA32_PERF_GLOBAL_STATUS.PMCn_OVF) is never set for self-reloaded
events. Since the bit is not set, the perf NMI handler cannot identify
the source of the interrupt, leading to the "unknown reason" message.

Furthermore, enabling PMI for self-reloaded events is unnecessary and
can lead to extraneous records that pollute the user's requested data.

Disable the interrupt bit for all events configured with ACR
self-reload.

Fixes: ec980e4 ("perf/x86/intel: Support auto counter reload")
Reported-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://patch.msgid.link/20260430002558.712334-4-dapeng1.mi@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent 6761bc1 commit 53df288

2 files changed: 23 additions & 4 deletions


arch/x86/events/intel/core.c (13 additions & 4 deletions)

@@ -3118,11 +3118,11 @@ static void intel_pmu_enable_fixed(struct perf_event *event)
 	intel_set_masks(event, idx);
 
 	/*
-	 * Enable IRQ generation (0x8), if not PEBS,
-	 * and enable ring-3 counting (0x2) and ring-0 counting (0x1)
-	 * if requested:
+	 * Enable IRQ generation (0x8), if not PEBS or self-reloaded
+	 * ACR event, and enable ring-3 counting (0x2) and ring-0
+	 * counting (0x1) if requested:
 	 */
-	if (!event->attr.precise_ip)
+	if (!event->attr.precise_ip && !is_acr_self_reload_event(event))
 		bits |= INTEL_FIXED_0_ENABLE_PMI;
 	if (hwc->config & ARCH_PERFMON_EVENTSEL_USR)
 		bits |= INTEL_FIXED_0_USER;
@@ -3306,6 +3306,15 @@ static void intel_pmu_enable_event(struct perf_event *event)
 		intel_set_masks(event, idx);
 		static_call_cond(intel_pmu_enable_acr_event)(event);
 		static_call_cond(intel_pmu_enable_event_ext)(event);
+		/*
+		 * For self-reloaded ACR event, don't enable PMI since
+		 * HW won't set overflow bit in GLOBAL_STATUS. Otherwise,
+		 * the PMI would be recognized as a suspicious NMI.
+		 */
+		if (is_acr_self_reload_event(event))
+			hwc->config &= ~ARCH_PERFMON_EVENTSEL_INT;
+		else if (!event->attr.precise_ip)
+			hwc->config |= ARCH_PERFMON_EVENTSEL_INT;
 		__x86_pmu_enable_event(hwc, enable_mask);
 		break;
 	case INTEL_PMC_IDX_FIXED ... INTEL_PMC_IDX_FIXED_BTS - 1:

arch/x86/events/perf_event.h (10 additions & 0 deletions)

@@ -137,6 +137,16 @@ static inline bool is_acr_event_group(struct perf_event *event)
 	return check_leader_group(event->group_leader, PERF_X86_EVENT_ACR);
 }
 
+static inline bool is_acr_self_reload_event(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+
+	if (hwc->idx < 0)
+		return false;
+
+	return test_bit(hwc->idx, (unsigned long *)&hwc->config1);
+}
+
 struct amd_nb {
 	int nb_id;	/* NorthBridge id */
 	int refcnt;	/* reference count */
