Commit 7a277c2
KVM: x86/pmu: Get eventsel for fixed counters from perf
Get the event selectors used to effectively request fixed counters for perf
events from perf itself instead of hardcoding them in KVM and hoping that they
match the underlying hardware. While fixed counters 0 and 1 use architectural
events, as of ffbe4ab ("perf/x86/intel: Extend the ref-cycles event to GP
counters") fixed counter 2 (reference TSC cycles) may use a software-defined
pseudo-encoding or a real hardware-defined encoding.

Reported-by: Kan Liang <kan.liang@linux.intel.com>
Closes: https://lkml.kernel.org/r/4281eee7-6423-4ec8-bb18-c6aeee1faf2c%40linux.intel.com
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20240109230250.424295-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
1 parent 61bb2ad commit 7a277c2
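For context on why hardcoding is fragile: the removed table built each fixed
counter's eventsel by packing a unit mask into bits 15:8 above an event select
in bits 7:0, so fixed counter 2's reference-cycles value came out as 0x0300, a
perf-defined pseudo-encoding rather than an encoding guaranteed to exist in
hardware. Below is a minimal, standalone sketch of that old computation (plain
C outside the kernel; the struct and function names are local to the example,
only the constants come from the removed code):

#include <stdint.h>
#include <stdio.h>

/* Mirror of the table this commit removes: event select and unit mask for
 * the general purpose equivalent of each fixed counter. */
struct fixed_pmc_event {
        uint8_t event;
        uint8_t unit_mask;
};

static const struct fixed_pmc_event fixed_pmc_events[] = {
        { 0xc0, 0x00 },  /* 0: Instructions Retired */
        { 0x3c, 0x00 },  /* 1: Unhalted Core Cycles */
        { 0x00, 0x03 },  /* 2: Reference Cycles, perf pseudo-encoding */
};

/* The old KVM helper packed unit_mask into bits 15:8 and event into 7:0. */
static uint64_t old_fixed_pmc_eventsel(unsigned int index)
{
        return ((uint64_t)fixed_pmc_events[index].unit_mask << 8) |
               fixed_pmc_events[index].event;
}

int main(void)
{
        /* Prints 0xc0, 0x3c, 0x300; the last is not guaranteed to match
         * what the underlying hardware actually uses. */
        for (unsigned int i = 0; i < 3; i++)
                printf("fixed counter %u -> eventsel 0x%llx\n", i,
                       (unsigned long long)old_fixed_pmc_eventsel(i));
        return 0;
}

With this commit, KVM instead asks perf for the encoding via
perf_get_hw_event_config(), so the value tracks whatever perf actually
programs on the underlying hardware.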

1 file changed: 17 additions, 13 deletions


arch/x86/kvm/vmx/pmu_intel.c

Lines changed: 17 additions & 13 deletions
@@ -404,24 +404,28 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
  * result is the same (ignoring the fact that using a general purpose counter
  * will likely exacerbate counter contention).
  *
- * Note, reference cycles is counted using a perf-defined "psuedo-encoding",
- * as there is no architectural general purpose encoding for reference cycles.
+ * Forcibly inlined to allow asserting on @index at build time, and there should
+ * never be more than one user.
  */
-static u64 intel_get_fixed_pmc_eventsel(int index)
+static __always_inline u64 intel_get_fixed_pmc_eventsel(unsigned int index)
 {
-        const struct {
-                u8 event;
-                u8 unit_mask;
-        } fixed_pmc_events[] = {
-                [0] = { 0xc0, 0x00 }, /* Instruction Retired / PERF_COUNT_HW_INSTRUCTIONS. */
-                [1] = { 0x3c, 0x00 }, /* CPU Cycles/ PERF_COUNT_HW_CPU_CYCLES. */
-                [2] = { 0x00, 0x03 }, /* Reference Cycles / PERF_COUNT_HW_REF_CPU_CYCLES*/
+        const enum perf_hw_id fixed_pmc_perf_ids[] = {
+                [0] = PERF_COUNT_HW_INSTRUCTIONS,
+                [1] = PERF_COUNT_HW_CPU_CYCLES,
+                [2] = PERF_COUNT_HW_REF_CPU_CYCLES,
         };
+        u64 eventsel;
 
-        BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_events) != KVM_PMC_MAX_FIXED);
+        BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_perf_ids) != KVM_PMC_MAX_FIXED);
+        BUILD_BUG_ON(index >= KVM_PMC_MAX_FIXED);
 
-        return (fixed_pmc_events[index].unit_mask << 8) |
-               fixed_pmc_events[index].event;
+        /*
+         * Yell if perf reports support for a fixed counter but perf doesn't
+         * have a known encoding for the associated general purpose event.
+         */
+        eventsel = perf_get_hw_event_config(fixed_pmc_perf_ids[index]);
+        WARN_ON_ONCE(!eventsel && index < kvm_pmu_cap.num_counters_fixed);
+        return eventsel;
 }
 
 static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
