
Commit 68c35f8

Maxim Levitsky authored and sean-jc committed
KVM: x86: Fix a semi-theoretical bug in kvm_arch_async_page_present_queued()
Fix a semi-theoretical race condition related to a lack of memory barriers when dealing with vcpu->arch.apf.pageready_pending. In theory, the "ready" side could see a stale pageready_pending and neglect to kick the vCPU, and thus allow the vCPU to enter the guest with a pending KVM_REQ_APF_READY and no kick/IPI on the way, in which case KVM would fail to deliver a completed async #PF event to the guest in a timely manner, as the request would be recognized only on the next (coincidental) VM-Exit.

kvm_arch_async_page_present_queued() running in workqueue context:

	kvm_make_request(KVM_REQ_APF_READY, vcpu);
	/* memory barrier is missing here */
	if (!vcpu->arch.apf.pageready_pending)
		kvm_vcpu_kick(vcpu);

kvm_set_msr_common() running in task context:

	vcpu->arch.apf.pageready_pending = false;
	/* memory barrier is missing here */

And later, vcpu_enter_guest() running in task context:

	if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
		kvm_check_async_pf_completion(vcpu);

Add the missing full memory barriers in both cases to avoid the theoretical case of not kicking the vCPU thread.

Note that the bug is mostly theoretical because kvm_make_request() uses an atomic operation, which is always serializing on x86, so the smp_mb__after_atomic() after it is needed only for documentation purposes (smp_mb__after_atomic() is a NOP on x86).

The second missing barrier, between kvm_set_msr_common() and vcpu_enter_guest(), isn't strictly needed because KVM executes several barriers in between calling these functions, however it still makes sense to have an explicit barrier to be on the safe side and to document the ordering dependencies.

Finally, also use READ_ONCE/WRITE_ONCE.

Thanks a lot to Paolo for the help with this patch.

Link: https://lore.kernel.org/all/7c7a5a75-a786-4a05-a836-4368582ca4c2@redhat.com
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://patch.msgid.link/20251015033258.50974-3-mlevitsk@redhat.com
[sean: explain the race and its impact in more detail]
Signed-off-by: Sean Christopherson <seanjc@google.com>
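For readers who want to experiment with the ordering requirement outside the kernel, the sketch below is a minimal userspace analogue of the same handshake, using C11 atomics in place of kvm_make_request()/smp_mb__after_atomic() on one side and smp_store_mb() on the other. All names here (apf_ready, pageready_pending, present_queued(), ack_pageready()) are made up for illustration; this is not KVM code, only a model of why each side needs a full barrier between its store and its subsequent load.

	/* Minimal userspace analogue of the handshake; names are illustrative only. */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_bool apf_ready;            /* stands in for KVM_REQ_APF_READY        */
	static atomic_bool pageready_pending;    /* stands in for apf.pageready_pending    */

	/* "Workqueue" side: post the request, then decide whether a kick is needed. */
	static void present_queued(void)
	{
		atomic_store_explicit(&apf_ready, true, memory_order_relaxed);

		/* Full barrier: the request must be visible before the flag is read,
		 * mirroring smp_mb__after_atomic() after kvm_make_request(). */
		atomic_thread_fence(memory_order_seq_cst);

		if (!atomic_load_explicit(&pageready_pending, memory_order_relaxed))
			printf("kick vCPU\n");
	}

	/* "vCPU" side: clear the flag, then look for outstanding requests, mirroring
	 * smp_store_mb() in kvm_set_msr_common() followed by the KVM_REQ_APF_READY
	 * check in vcpu_enter_guest(). */
	static void ack_pageready(void)
	{
		atomic_store_explicit(&pageready_pending, false, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);

		if (atomic_load_explicit(&apf_ready, memory_order_relaxed))
			printf("handle completed async #PF\n");
	}

	int main(void)
	{
		/* Single-threaded demo of the two paths; the barriers matter when the
		 * two functions run concurrently on different CPUs. */
		present_queued();
		ack_pageready();
		return 0;
	}

With the fences in place, at least one side is guaranteed to observe the other's store: either the workqueue side sees pageready_pending clear and kicks, or the vCPU side sees the pending request and handles it.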
1 parent 65a7016 commit 68c35f8


arch/x86/kvm/x86.c

Lines changed: 12 additions & 3 deletions
@@ -4183,7 +4183,12 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
 			return 1;
 		if (data & 0x1) {
-			vcpu->arch.apf.pageready_pending = false;
+			/*
+			 * Pairs with the smp_mb__after_atomic() in
+			 * kvm_arch_async_page_present_queued().
+			 */
+			smp_store_mb(vcpu->arch.apf.pageready_pending, false);
+
 			kvm_check_async_pf_completion(vcpu);
 		}
 		break;
@@ -13890,7 +13895,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
 	if ((work->wakeup_all || work->notpresent_injected) &&
 	    kvm_pv_async_pf_enabled(vcpu) &&
 	    !apf_put_user_ready(vcpu, work->arch.token)) {
-		vcpu->arch.apf.pageready_pending = true;
+		WRITE_ONCE(vcpu->arch.apf.pageready_pending, true);
 		kvm_apic_set_irq(vcpu, &irq, NULL);
 	}
 
@@ -13901,7 +13906,11 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
 void kvm_arch_async_page_present_queued(struct kvm_vcpu *vcpu)
 {
 	kvm_make_request(KVM_REQ_APF_READY, vcpu);
-	if (!vcpu->arch.apf.pageready_pending)
+
+	/* Pairs with smp_store_mb() in kvm_set_msr_common(). */
+	smp_mb__after_atomic();
+
+	if (!READ_ONCE(vcpu->arch.apf.pageready_pending))
 		kvm_vcpu_kick(vcpu);
 }
 
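As a side note, smp_store_mb() pairs a plain store with a full memory barrier; conceptually it behaves roughly like the sketch below. The real macro is architecture-specific (e.g. implemented as a single XCHG on x86), so treat this only as an approximation, not the kernel's actual definition.

	/* Rough conceptual model of smp_store_mb(); not the kernel's real,
	 * per-architecture definition. */
	#define smp_store_mb_sketch(var, value)		\
	do {						\
		WRITE_ONCE(var, value);			\
		smp_mb();				\
	} while (0)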
