x86/tdx: Use direct paravirt call for wrmsrl
TDX normally handles MSR writes through the #VE exception, or directly
for some special MSRs. But at least one performance-critical MSR still
triggers a #VE: the TSC deadline MSR. It gets reprogrammed on every
timer interrupt and on every idle exit. Relying on #VE for this causes
a noticeable slowdown, since each #VE requires at least 3 exits to the
TDX module, and that overhead adds up.
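
(For context, a rough sketch of the existing #VE slow path a single
WRMSR currently takes. This paraphrases the write_msr() #VE handler
already in tdx.c, so exact details may differ; it is not part of this
patch.)

/*
 * Sketch of the current slow path (paraphrased, not part of this
 * patch): a WRMSR to an emulated MSR raises #VE, the handler fetches
 * the exit information from the TDX module and forwards the write as
 * an MSR-write TDVMCALL. That is at least 3 trips to the TDX module
 * per write: #VE delivery, fetching the #VE info, and the hypercall.
 */
static int write_msr_via_ve(struct pt_regs *regs, struct ve_info *ve)
{
        struct tdx_hypercall_args args = {
                .r10 = TDX_HYPERCALL_STANDARD,
                .r11 = hcall_func(EXIT_REASON_MSR_WRITE),
                .r12 = regs->cx,                        /* MSR index */
                .r13 = (u64)regs->dx << 32 | regs->ax,  /* MSR value */
        };

        if (__tdx_hypercall(&args, 0))
                return -EIO;

        return ve_instr_len(ve);        /* skip the trapped WRMSR */
}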

Use a direct paravirt call for MSR writes instead. This is only used
for wrmsrl(); some of the other MSR write paths still use #VE, but
those don't appear to be performance critical, so it shouldn't matter.
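
(A sketch of how the hook takes effect, assuming CONFIG_PARAVIRT_XXL
provides pv_ops.cpu.write_msr; the helper below is made up purely for
illustration and is not part of this patch.)

/*
 * Illustrative only: with CONFIG_PARAVIRT_XXL, wrmsrl() dispatches
 * through pv_ops.cpu.write_msr, which this patch points at
 * tdx_write_msr(). Roughly equivalent to:
 */
static inline void example_pv_wrmsrl(unsigned int msr, u64 val)
{
        /* lands in tdx_write_msr(msr, low, high) on a TDX guest */
        pv_ops.cpu.write_msr(msr, (u32)val, (u32)(val >> 32));
}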

There is one complication: TDX has both context-switched MSRs (which
always need to use WRMSR) and host-supported MSRs (which need to use a
TDCALL). Unfortunately both lists are quite long, and it would be
difficult to maintain a switch statement distinguishing them. For most
MSRs an extra #VE doesn't really matter because they are not
performance critical. But it does matter for a few critical ones like
TSC_DEADLINE, which needs to use the TDCALL. So enable the TDCALL fast
path only for TSC_DEADLINE and keep using WRMSR for all the others,
which may or may not result in an extra #VE. If other performance
critical host-controlled MSRs show up, they can be added to the switch
statement later.
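
(For illustration, a simplified paraphrase of the kind of hot path that
benefits; the helper below is not real kernel code, it just mirrors
what the APIC TSC-deadline clockevent does on every timer interrupt.)

/*
 * Simplified sketch (not part of this patch): arming the TSC deadline
 * timer writes MSR_IA32_TSC_DEADLINE on every timer interrupt and idle
 * exit. With this patch the wrmsrl() below goes through tdx_write_msr()
 * and takes the single-TDCALL fast path instead of a #VE.
 */
static void example_arm_tsc_deadline(u64 delta_tsc)
{
        u64 deadline = rdtsc() + delta_tsc;

        wrmsrl(MSR_IA32_TSC_DEADLINE, deadline);
}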

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Andi Kleen authored and virtuoso committed Jul 6, 2022
1 parent 4a852dc commit d1a7216
Showing 1 changed file with 43 additions and 0 deletions.
arch/x86/coco/tdx/tdx.c
@@ -268,6 +268,47 @@ static int write_msr(struct pt_regs *regs, struct ve_info *ve)
return ve_instr_len(ve);
}

/*
 * TDX has context switched MSRs and emulated MSRs. The emulated MSRs
 * normally trigger a #VE, but that is expensive and can be avoided by
 * doing a direct TDCALL. Unfortunately this cannot be done for all MSRs
 * because some of them are "context switched" and need WRMSR.
 *
 * The list for this is unfortunately quite long. To avoid maintaining
 * a very long switch statement, just do a fast path for the few
 * critical MSRs that need TDCALL, currently only TSC_DEADLINE.
 *
 * More can be added as needed.
 *
 * The others will be handled by the #VE handler as needed.
 * See 18.1 "MSR virtualization" in the TDX Module EAS.
 */
static bool tdx_fast_tdcall_path_msr(unsigned int msr)
{
        switch (msr) {
        case MSR_IA32_TSC_DEADLINE:
                return true;
        default:
                return false;
        }
}

void notrace tdx_write_msr(unsigned int msr, u32 low, u32 high)
{
        struct tdx_hypercall_args args = {
                .r10 = TDX_HYPERCALL_STANDARD,
                .r11 = hcall_func(EXIT_REASON_MSR_WRITE),
                .r12 = msr,
                .r13 = (u64)high << 32 | low,
        };

        if (tdx_fast_tdcall_path_msr(msr))
                __tdx_hypercall(&args, 0);
        else
                native_write_msr(msr, low, high);
}

static int handle_cpuid(struct pt_regs *regs, struct ve_info *ve)
{
        struct tdx_hypercall_args args = {
@@ -784,6 +825,8 @@ void __init tdx_early_init(void)
        /* Set restricted memory access for virtio. */
        platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);

        pv_ops.cpu.write_msr = tdx_write_msr;

        cc_set_vendor(CC_VENDOR_INTEL);
        cc_mask = get_cc_mask();
        cc_set_mask(cc_mask);
