Commit 0fe3e8d

KVM: x86: Move INIT_RECEIVED vs. INIT/SIPI blocked check to KVM_RUN
Check for the should-be-impossible scenario of a vCPU being in Wait-For-SIPI with INIT/SIPI blocked during KVM_RUN instead of trying to detect and prevent illegal combinations in every ioctl that sets relevant state. Attempting to handle every possible "set" path is a losing game of whack-a-mole, and risks breaking userspace. E.g. INIT/SIPI are blocked on Intel if the vCPU is in VMX Root mode (post-VMXON), and on AMD if GIF=0. Handling those scenarios would require potentially breaking changes to {vmx,svm}_set_nested_state().

Moving the check to KVM_RUN fixes a syzkaller-induced splat due to the aforementioned VMXON case, and in theory should close the hole once and for all.

Note, kvm_x86_vcpu_pre_run() already handles SIPI_RECEIVED, only the WFS case needs additional attention.

Reported-by: syzbot+c1cbaedc2613058d5194@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?id=490ae63d8d89cb82c5d462d16962cf371df0e476
Link: https://lore.kernel.org/r/20250605195018.539901-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
1 parent 16777eb commit 0fe3e8d

1 file changed: +8, −16 lines

arch/x86/kvm/x86.c

Lines changed: 8 additions & 16 deletions

@@ -5487,12 +5487,6 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	    (events->exception.nr > 31 || events->exception.nr == NMI_VECTOR))
 		return -EINVAL;
 
-	/* INITs are latched while in SMM */
-	if (events->flags & KVM_VCPUEVENT_VALID_SMM &&
-	    (events->smi.smm || events->smi.pending) &&
-	    vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED)
-		return -EINVAL;
-
 	process_nmi(vcpu);
 
 	/*
@@ -11579,6 +11573,14 @@ static int kvm_x86_vcpu_pre_run(struct kvm_vcpu *vcpu)
 	if (WARN_ON_ONCE(vcpu->arch.mp_state == KVM_MP_STATE_SIPI_RECEIVED))
 		return -EINVAL;
 
+	/*
+	 * Disallow running the vCPU if userspace forced it into an impossible
+	 * MP_STATE, e.g. if the vCPU is in WFS but SIPI is blocked.
+	 */
+	if (vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED &&
+	    !kvm_apic_init_sipi_allowed(vcpu))
+		return -EINVAL;
+
 	return kvm_x86_call(vcpu_pre_run)(vcpu);
 }
 
@@ -11927,16 +11929,6 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 		goto out;
 	}
 
-	/*
-	 * Pending INITs are reported using KVM_SET_VCPU_EVENTS, disallow
-	 * forcing the guest into INIT/SIPI if those events are supposed to be
-	 * blocked.
-	 */
-	if (!kvm_apic_init_sipi_allowed(vcpu) &&
-	    (mp_state->mp_state == KVM_MP_STATE_SIPI_RECEIVED ||
-	     mp_state->mp_state == KVM_MP_STATE_INIT_RECEIVED))
-		goto out;
-
 	if (mp_state->mp_state == KVM_MP_STATE_SIPI_RECEIVED) {
 		kvm_set_mp_state(vcpu, KVM_MP_STATE_INIT_RECEIVED);
 		set_bit(KVM_APIC_SIPI, &vcpu->arch.apic->pending_events);
