KVM: x86: Drop dedicated logic for direct MMUs in reexecute_instruction()

Now that KVM doesn't pointlessly acquire mmu_lock for direct MMUs, drop
the dedicated path entirely and always query indirect_shadow_pages when
deciding whether or not to try unprotecting the gfn.  For indirect, a.k.a.
shadow MMUs, checking indirect_shadow_pages is harmless; unless *every*
shadow page was somehow zapped while KVM was attempting to emulate the
instruction, indirect_shadow_pages is guaranteed to be non-zero.

Well, unless the instruction used a direct hugepage with 2-level paging
for its code page, but in that case, there's obviously nothing to
unprotect.  And in the extremely unlikely case all shadow pages were
zapped, there's again obviously nothing to unprotect.

Link: https://lore.kernel.org/r/20240203002343.383056-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
sean-jc committed Feb 23, 2024 (commit 515c18a, 1 parent 474b99e)
Showing 1 changed file with 16 additions and 16 deletions.
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8787,27 +8787,27 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 
 	kvm_release_pfn_clean(pfn);
 
-	/* The instructions are well-emulated on direct mmu. */
-	if (vcpu->arch.mmu->root_role.direct) {
-		if (vcpu->kvm->arch.indirect_shadow_pages)
-			kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
-
-		return true;
-	}
-
 	/*
-	 * if emulation was due to access to shadowed page table
-	 * and it failed try to unshadow page and re-enter the
-	 * guest to let CPU execute the instruction.
+	 * If emulation may have been triggered by a write to a shadowed page
+	 * table, unprotect the gfn (zap any relevant SPTEs) and re-enter the
+	 * guest to let the CPU re-execute the instruction in the hope that the
+	 * CPU can cleanly execute the instruction that KVM failed to emulate.
 	 */
-	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
+	if (vcpu->kvm->arch.indirect_shadow_pages)
+		kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
 
 	/*
-	 * If the access faults on its page table, it can not
-	 * be fixed by unprotecting shadow page and it should
-	 * be reported to userspace.
+	 * If the failed instruction faulted on an access to page tables that
+	 * are used to translate any part of the instruction, KVM can't resolve
+	 * the issue by unprotecting the gfn, as zapping the shadow page will
+	 * result in the instruction taking a !PRESENT page fault and thus put
+	 * the vCPU into an infinite loop of page faults.  E.g. KVM will create
+	 * a SPTE and write-protect the gfn to resolve the !PRESENT fault, and
+	 * then zap the SPTE to unprotect the gfn, and then do it all over
+	 * again.  Report the error to userspace.
 	 */
-	return !(emulation_type & EMULTYPE_WRITE_PF_TO_SP);
+	return vcpu->arch.mmu->root_role.direct ||
+	       !(emulation_type & EMULTYPE_WRITE_PF_TO_SP);
 }
 
 static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
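For readability, here is how the tail of reexecute_instruction() reads after this commit. This is reconstructed from the hunk above rather than copied from arch/x86/kvm/x86.c, so the earlier parts of the function are elided:

	kvm_release_pfn_clean(pfn);

	/*
	 * If emulation may have been triggered by a write to a shadowed page
	 * table, unprotect the gfn (zap any relevant SPTEs) and re-enter the
	 * guest to let the CPU re-execute the instruction in the hope that the
	 * CPU can cleanly execute the instruction that KVM failed to emulate.
	 */
	if (vcpu->kvm->arch.indirect_shadow_pages)
		kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));

	/* See the full comment in the hunk above for why this can't loop. */
	return vcpu->arch.mmu->root_role.direct ||
	       !(emulation_type & EMULTYPE_WRITE_PF_TO_SP);
}

Note that folding the root_role.direct check into the return statement preserves the old behavior for direct MMUs: reexecute_instruction() still returns true (retry the instruction instead of reporting an emulation failure to userspace); it just no longer takes a dedicated early-return path to get there.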
