KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
[ Upstream commit 33a3164 ]

Prevent the TDP MMU from yielding when zapping a gfn range during NX
page recovery.  If a flush is pending from a previous invocation of the
zapping helper, either in the TDP MMU or the legacy MMU, but the TDP MMU
has not accumulated a flush for the current invocation, then yielding
will release mmu_lock with stale TLB entries.
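
To make the hazard concrete, here is a condensed sketch of the caller's flush-accumulation pattern in kvm_recover_nx_lpages(). The function name nx_recovery_loop_sketch and the fixed batch size are invented for illustration, and the ratio/batching logic is omitted; the helpers (kvm_tdp_mmu_zap_sp(), kvm_mmu_prepare_zap_page(), kvm_mmu_remote_flush_or_zap()) follow the surrounding kernel code. The point is that "flush" can already be true when the TDP MMU helper is entered, so the helper must not drop mmu_lock without flushing first.

	/* Illustrative sketch only, not the verbatim recovery worker. */
	static void nx_recovery_loop_sketch(struct kvm *kvm, ulong to_zap)
	{
		struct kvm_mmu_page *sp;
		LIST_HEAD(invalid_list);
		bool flush = false;

		spin_lock(&kvm->mmu_lock);

		while (to_zap-- && !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
			sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
					      struct kvm_mmu_page,
					      lpage_disallowed_link);
			if (sp->tdp_mmu_page)
				/* Must not yield: "flush" may already be pending. */
				flush = kvm_tdp_mmu_zap_sp(kvm, sp);
			else
				kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);

			if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
				/* Flush/zap *before* mmu_lock is dropped. */
				kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
				flush = false;
				cond_resched_lock(&kvm->mmu_lock);
			}
		}
		kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);

		spin_unlock(&kvm->mmu_lock);
	}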

That being said, this isn't technically a bug fix in the current code, as
the TDP MMU will never yield in this case.  tdp_mmu_iter_cond_resched()
will yield if and only if it has made forward progress, as defined by the
current gfn vs. the last yielded (or starting) gfn.  Because zapping a
single shadow page is guaranteed to (a) find that page and (b) step
sideways at the level of the shadow page, the TDP iter will break its loop
before getting a chance to yield.
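
For reference, the forward-progress gate described above has roughly the following shape. This is a paraphrase of tdp_mmu_iter_cond_resched(), with approximate field names (next_last_level_gfn, yielded_gfn) and the iterator-restart step elided, not a quote of the source.

	/* Approximate shape of the yield check; names are paraphrased. */
	static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
						     struct tdp_iter *iter, bool flush)
	{
		/*
		 * Only yield once the walk has advanced past the gfn at which
		 * it last yielded (or started).  Zapping a single shadow page
		 * finds the page and steps sideways at its level, terminating
		 * the walk before a yield with forward progress can occur.
		 */
		if (iter->next_last_level_gfn == iter->yielded_gfn)
			return false;

		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
			if (flush)
				kvm_flush_remote_tlbs(kvm);

			cond_resched_lock(&kvm->mmu_lock);
			/* ...restart the walk at the current gfn... */
			return true;
		}

		return false;
	}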

But that is all very, very subtle, and will break at the slightest sneeze,
e.g. zapping while holding mmu_lock for read would break as the TDP MMU
wouldn't be guaranteed to see the present shadow page, and thus could step
sideways at a lower level.

Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210325200119.1359384-4-seanjc@google.com>
[Add lockdep assertion. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
sean-jc authored and gregkh committed Apr 14, 2021
Parent: be2c527 · Commit: 25fc773
Showing 3 changed files with 22 additions and 7 deletions.
arch/x86/kvm/mmu/mmu.c (6 changes: 2 additions & 4 deletions)

@@ -5973,7 +5973,6 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 	unsigned int ratio;
 	LIST_HEAD(invalid_list);
 	bool flush = false;
-	gfn_t gfn_end;
 	ulong to_zap;
 
 	rcu_idx = srcu_read_lock(&kvm->srcu);
@@ -5994,9 +5993,8 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 				      struct kvm_mmu_page,
 				      lpage_disallowed_link);
 		WARN_ON_ONCE(!sp->lpage_disallowed);
-		if (sp->tdp_mmu_page) {
-			gfn_end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
-			flush = kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, gfn_end);
+		if (sp->tdp_mmu_page) {
+			flush = kvm_tdp_mmu_zap_sp(kvm, sp);
 		} else {
 			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
 			WARN_ON_ONCE(sp->lpage_disallowed);
arch/x86/kvm/mmu/tdp_mmu.c (5 changes: 3 additions & 2 deletions)

@@ -495,13 +495,14 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
  * SPTEs have been cleared and a TLB flush is needed before releasing the
  * MMU lock.
  */
-bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
+bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end,
+				 bool can_yield)
 {
 	struct kvm_mmu_page *root;
 	bool flush = false;
 
 	for_each_tdp_mmu_root_yield_safe(kvm, root)
-		flush = zap_gfn_range(kvm, root, start, end, true, flush);
+		flush = zap_gfn_range(kvm, root, start, end, can_yield, flush);
 
 	return flush;
 }
arch/x86/kvm/mmu/tdp_mmu.h (18 changes: 17 additions & 1 deletion)

@@ -12,7 +12,23 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t root);
 hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
 void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root);
 
-bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end);
+bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end,
+				 bool can_yield);
+static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start,
+					     gfn_t end)
+{
+	return __kvm_tdp_mmu_zap_gfn_range(kvm, start, end, true);
+}
+static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
+
+	/*
+	 * Don't allow yielding, as the caller may have pending pages to zap
+	 * on the shadow MMU.
+	 */
+	return __kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, end, false);
+}
 void kvm_tdp_mmu_zap_all(struct kvm *kvm);
 
 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
