
Commit d1bc004

KVM: x86: Use "checked" versions of get_user() and put_user()
Use the normal, checked versions of get_user() and put_user() instead of the
double-underscore versions that omit range checks, as the checked versions are
actually measurably faster on modern CPUs (12%+ on Intel, 25%+ on AMD).

The performance hit on the unchecked versions is almost entirely due to the
added LFENCE on CPUs where LFENCE is serializing (which is effectively all
modern CPUs), which was added by commit 304ec1b ("x86/uaccess: Use
__uaccess_begin_nospec() and uaccess_try_nospec"). The small optimizations
done by commit b19b74b ("x86/mm: Rework address range check in get_user() and
put_user()") likely shave a few cycles off, but the bulk of the extra latency
comes from the LFENCE.

Don't bother trying to open-code an equivalent for performance reasons, as the
loss of inlining (e.g. see commit ea6f043 ("x86: Make __get_user() generate an
out-of-line call")) is largely a non-factor (ignoring setups where RET is
something entirely different).

As measured across tens of millions of calls of guest PTE reads in
FNAME(walk_addr_generic):

                __get_user()  get_user()  open-coded  open-coded, no LFENCE
  Intel (EMR)       75.1         67.6        75.3             65.5
  AMD (Turin)       68.1         51.1        67.5             49.3

Note, Hyper-V MSR emulation is not a remotely hot path, but convert it anyway
for consistency, and because there is a general desire to remove
__{get,put}_user() entirely.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Closes: https://lore.kernel.org/all/CAHk-=wimh_3jM9Xe8Zx0rpuf8CPDu6DkRCGb44azk0Sz5yqSnw@mail.gmail.com
Cc: Borislav Petkov <bp@alien8.de>
Link: https://patch.msgid.link/20251106210206.221558-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
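[Editor's note: a simplified sketch of the distinction the message relies on.
This is not the actual arch/x86 implementation, which is hand-written assembly
(arch/x86/lib/getuser.S, putuser.S); the sketch_*() helpers are hypothetical
stand-ins for the real mechanisms.]

  /*
   * Simplified sketch only.  sketch_valid_user_range(), sketch_lfence() and
   * sketch_raw_read() are hypothetical helpers used for illustration.
   */

  /* get_user(): validate the address up front; no speculation barrier is
   * needed because an out-of-range pointer never reaches the access. */
  #define sketch_get_user(x, ptr)                                       \
  ({                                                                    \
          long __ret = -EFAULT;                                         \
          if (sketch_valid_user_range(ptr, sizeof(*(ptr))))             \
                  __ret = sketch_raw_read(&(x), ptr);                   \
          __ret;                                                        \
  })

  /* __get_user(): skip the range check, but emit a serializing LFENCE so
   * that speculation cannot dereference an unvalidated pointer (Spectre v1).
   * On modern CPUs that fence is what makes this the slower variant. */
  #define sketch___get_user(x, ptr)                                     \
  ({                                                                    \
          sketch_lfence();                                              \
          sketch_raw_read(&(x), ptr);                                   \
  })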
1 parent 7df3021 commit d1bc004

File tree: 2 files changed (+2 / -2 lines)

arch/x86/kvm/hyperv.c

Lines changed: 1 addition & 1 deletion
@@ -1568,7 +1568,7 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
 		 * only, there can be valuable data in the rest which needs
 		 * to be preserved e.g. on migration.
 		 */
-		if (__put_user(0, (u32 __user *)addr))
+		if (put_user(0, (u32 __user *)addr))
 			return 1;
 		hv_vcpu->hv_vapic = data;
 		kvm_vcpu_mark_page_dirty(vcpu, gfn);

arch/x86/kvm/mmu/paging_tmpl.h

Lines changed: 1 addition & 1 deletion
@@ -402,7 +402,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 			goto error;
 
 		ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
-		if (unlikely(__get_user(pte, ptep_user)))
+		if (unlikely(get_user(pte, ptep_user)))
 			goto error;
 		walker->ptep_user[walker->level - 1] = ptep_user;
 
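[Editor's note: the commit message reports per-call cycle counts but does not
include the instrumentation used to gather them. One plausible way to sample
the converted read in FNAME(walk_addr_generic), assuming a TSC-based approach
with hypothetical per-CPU counters (pte_read_cycles, pte_read_calls), is
sketched below; it is not part of this commit.]

  /* Hypothetical instrumentation sketch, not part of this commit. */
  static DEFINE_PER_CPU(u64, pte_read_cycles);
  static DEFINE_PER_CPU(u64, pte_read_calls);

          u64 start = rdtsc_ordered();    /* ordered TSC read */

          if (unlikely(get_user(pte, ptep_user)))
                  goto error;             /* failed reads are not sampled */

          this_cpu_add(pte_read_cycles, rdtsc_ordered() - start);
          this_cpu_inc(pte_read_calls);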
