Kechen-Lu/KVM-…

Commits on Dec 14, 2021

  1. KVM: x86: add kvm per-vCPU exits disable capability

    Introduce a new bit, KVM_X86_DISABLE_EXITS_PER_VCPU, and use the second
    argument of the KVM_CAP_X86_DISABLE_EXITS capability as a vCPU mask for
    disabling exits, enabling finer-grained VM-exit disabling on a per-vCPU
    basis instead of for the whole guest. The exits_disable_vcpu_mask
    defaults to 0, i.e. exits are disabled on all vCPUs; if it is e.g. 0x5,
    exits stay enabled on vCPU0 and vCPU2 and are disabled on all other
    vCPUs. This patch only wires up the per-vCPU disabling for HLT VM-exits.
    
    In use cases such as a Windows guest running heavy CPU-bound workloads,
    disabling HLT VM-exits can mitigate host scheduler context-switch
    overhead. Simply disabling HLT exits on all vCPUs can bring performance
    benefits, but if no pCPUs are reserved for host threads, it can lead to
    forced preemption, as the host has no natural point at which to schedule
    other host threads that want to run. With this patch, HLT exits can be
    disabled on only a subset of a guest's vCPUs, which keeps the
    performance benefits while remaining resilient to host workloads
    running at the same time.
    
    In an experiment running a host stressing workload alongside a Windows
    guest with heavy CPU-bound workloads, this approach showed good
    resiliency and a ~3% performance improvement.
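As a rough sketch of the mask semantics described above (hlt_exit_disabled() is a hypothetical helper for illustration, not a function from the patch):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * A set bit in exits_disable_vcpu_mask keeps exits ENABLED on that vCPU;
 * a clear bit (including the all-zero default) means exits are disabled.
 */
static bool hlt_exit_disabled(uint64_t exits_disable_vcpu_mask,
                              unsigned int vcpu_id)
{
    return !(exits_disable_vcpu_mask & (1ULL << vcpu_id));
}
```

With a mask of 0x5, HLT exits stay enabled on vCPU0 and vCPU2 and are disabled everywhere else, matching the example in the commit message.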
    
    Signed-off-by: Kechen Lu <kechenl@nvidia.com>
    Kechen Lu authored and intel-lab-lkp committed Dec 14, 2021

Commits on Dec 10, 2021

  1. KVM: SVM: Nullify vcpu_(un)blocking() hooks if AVIC is disabled

    Nullify svm_x86_ops.vcpu_(un)blocking if AVIC/APICv is disabled as the
    hooks are necessary only to clear the vCPU's IsRunning entry in the
    Physical APIC and to update IRTE entries if the VM has a pass-through
    device attached.
    
    Opportunistically rename the helpers to clarify their AVIC relationship.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20211208015236.1616697-24-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  2. KVM: SVM: Move svm_hardware_setup() and its helpers below svm_x86_ops

    Move svm_hardware_setup() below svm_x86_ops so that KVM can modify ops
    during setup, e.g. the vcpu_(un)blocking hooks can be nullified if AVIC
    is disabled or unsupported.
    
    No functional change intended.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20211208015236.1616697-23-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  3. KVM: SVM: Drop AVIC's intermediate avic_set_running() helper

    Drop avic_set_running() in favor of calling avic_vcpu_{load,put}()
    directly, and modify the block+put path to use preempt_disable/enable()
    instead of get/put_cpu(), as it doesn't actually care about the current
    pCPU associated with the vCPU.  Opportunistically add lockdep assertions
    as being preempted in avic_vcpu_put() would lead to consuming stale data,
    even though doing so _in the current code base_ would not be fatal.
    
    Add a much needed comment explaining why svm_vcpu_blocking() needs to
    unload the AVIC and update the IRTE _before_ the vCPU starts blocking.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20211208015236.1616697-22-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  4. KVM: VMX: Don't do full kick when handling posted interrupt wakeup

    When waking vCPUs in the posted interrupt wakeup handling, do exactly
    that and no more.  There is no need to kick the vCPU as the wakeup
    handler just needs to get the vCPU task running, and if it's in the guest
    then it's definitely running.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-21-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  5. KVM: VMX: Fold fallback path into triggering posted IRQ helper

    Move the fallback "wake_up" path into the helper that triggers a posted
    interrupt, now that the nested and non-nested paths are identical.
    
    No functional change intended.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-20-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  6. KVM: VMX: Pass desired vector instead of bool for triggering posted IRQ

    Refactor the posted interrupt helper to take the desired notification
    vector instead of a bool so that the callers are self-documenting.
    
    No functional change intended.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-19-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  7. KVM: VMX: Wake vCPU when delivering posted IRQ even if vCPU == this vCPU

    Drop a check that guards triggering a posted interrupt on the currently
    running vCPU, and more importantly guards waking the target vCPU if
    triggering a posted interrupt fails because the vCPU isn't IN_GUEST_MODE.
    The "do nothing" logic when "vcpu == running_vcpu" works only because KVM
    doesn't have a path to ->deliver_posted_interrupt() from asynchronous
    context, e.g. if apic_timer_expired() were changed to always go down the
    posted interrupt path for APICv, or if the IN_GUEST_MODE check in
    kvm_use_posted_timer_interrupt() were dropped, and the hrtimer fired in
    kvm_vcpu_block() after the final kvm_vcpu_check_block() check, the vCPU
    would be scheduled out without being awakened, i.e. would "miss" the
    timer interrupt.
    
    One could argue that invoking kvm_apic_local_deliver() from (soft) IRQ
    context for the currently running vCPU should be illegal, but nothing in
    KVM actually enforces that rule.  There's also no strong obvious benefit
    to making such behavior illegal, e.g. checking IN_GUEST_MODE and calling
    kvm_vcpu_wake_up() is at worst marginally more costly than querying the
    currently running vCPU.
    
    Lastly, this aligns the non-nested and nested usage of triggering posted
    interrupts, and will allow for additional cleanups.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-18-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  8. KVM: VMX: Don't do full kick when triggering posted interrupt "fails"

    Replace the full "kick" with just the "wake" in the fallback path when
    triggering a virtual interrupt via a posted interrupt fails because the
    guest is not IN_GUEST_MODE.  If the guest transitions into guest mode
    between the check and the kick, then it's guaranteed to see the pending
    interrupt as KVM syncs the PIR to IRR (and onto GUEST_RVI) after setting
    IN_GUEST_MODE.  Kicking the guest in this case is nothing more than an
    unnecessary VM-Exit (and host IRQ).
    
    Opportunistically update comments to explain the various ordering rules
    and barriers at play.
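The wake-only fallback can be modeled as a pure predicate (toy enum and helper with assumed names, not the KVM code):

```c
/*
 * Toy model of the fallback described above: if the vCPU is not
 * IN_GUEST_MODE when the posted-interrupt notification is attempted,
 * only a wakeup is needed; a vCPU entering the guest afterwards syncs
 * PIR to IRR after setting IN_GUEST_MODE, so a forced VM-exit (kick)
 * is never required for correctness.
 */
enum toy_vcpu_mode { TOY_OUTSIDE_GUEST_MODE, TOY_IN_GUEST_MODE };

static int pi_fallback_needs_wake(enum toy_vcpu_mode mode)
{
    return mode != TOY_IN_GUEST_MODE;
}
```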
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20211208015236.1616697-17-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  9. KVM: SVM: Skip AVIC and IRTE updates when loading blocking vCPU

    Don't bother updating the Physical APIC table or IRTE when loading a vCPU
    that is blocking, i.e. won't be marked IsRun{ning}=1, as the pCPU is
    queried if and only if IsRunning is '1'.  If the vCPU was migrated, the
    new pCPU will be picked up when avic_vcpu_load() is called by
    svm_vcpu_unblocking().
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20211208015236.1616697-15-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  10. KVM: SVM: Use kvm_vcpu_is_blocking() in AVIC load to handle preemption

    Use kvm_vcpu_is_blocking() to determine whether or not the vCPU should be
    marked running during avic_vcpu_load().  Drop avic_is_running, which
    really should have been named "vcpu_is_not_blocking", as it tracked if
    the vCPU was blocking, not if it was actually running, e.g. it was set
    during svm_create_vcpu() when the vCPU was obviously not running.
    
    This is technically a teeny tiny functional change, as the vCPU will be
    marked IsRunning=1 on being reloaded if the vCPU is preempted between
    svm_vcpu_blocking() and prepare_to_rcuwait().  But that's a benign change
    as the vCPU will be marked IsRunning=0 when KVM voluntarily schedules out
    the vCPU.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20211208015236.1616697-14-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  11. KVM: SVM: Remove unnecessary APICv/AVIC update in vCPU unblocking path

    Remove handling of KVM_REQ_APICV_UPDATE from svm_vcpu_unblocking(), it's
    no longer needed as it was made obsolete by commit df7e482 ("KVM:
    SVM: call avic_vcpu_load/avic_vcpu_put when enabling/disabling AVIC").
    Prior to that commit, the manual check was necessary to ensure the AVIC
    stuff was updated by avic_set_running() when a request to enable APICv
    became pending while the vCPU was blocking, as the request handling
    itself would not do the update.  But, as evidenced by the commit, that
    logic was flawed and subject to various races.
    
    Now that svm_refresh_apicv_exec_ctrl() does avic_vcpu_load/put() in
    response to an APICv status change, drop the manual check in the
    unblocking path.
    
    Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20211208015236.1616697-13-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  12. KVM: SVM: Don't bother checking for "running" AVIC when kicking for IPIs

    Drop the avic_vcpu_is_running() check when waking vCPUs in response to a
    VM-Exit due to incomplete IPI delivery.  The check isn't wrong per se, but
    it's not 100% accurate in the sense that it doesn't guarantee that the vCPU
    was one of the vCPUs that didn't receive the IPI.
    
    The check isn't required for correctness as blocking == !running in this
    context.
    
    From a performance perspective, waking a live task is not expensive as the
    only moderately costly operation is a locked operation to temporarily
    disable preemption.  And if that is indeed a performance issue,
    kvm_vcpu_is_blocking() would be a better check than poking into the AVIC.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-12-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  13. KVM: SVM: Signal AVIC doorbell iff vCPU is in guest mode

    Signal the AVIC doorbell iff the vCPU is running in the guest.  If the vCPU
    is not IN_GUEST_MODE, it's guaranteed to pick up any pending IRQs on the
    next VMRUN, which unconditionally processes the vIRR.
    
    Add comments to document the logic.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20211208015236.1616697-11-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  14. KVM: x86: Remove defunct pre_block/post_block kvm_x86_ops hooks

    Drop kvm_x86_ops' pre/post_block() now that all implementations are nops.
    
    No functional change intended.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-10-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  15. KVM: x86: Unexport LAPIC's switch_to_{hv,sw}_timer() helpers

    Unexport switch_to_{hv,sw}_timer() now that common x86 handles the
    transitions.
    
    No functional change intended.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-9-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  16. KVM: VMX: Move preemption timer <=> hrtimer dance to common x86

    Handle the switch to/from the hypervisor/software timer when a vCPU is
    blocking in common x86 instead of in VMX.  Even though VMX is the only
    user of a hypervisor timer, the logic and all functions involved are
    generic x86 (unless future CPUs do something completely different and
    implement a hypervisor timer that runs regardless of mode).
    
    Handling the switch in common x86 will allow for the elimination of the
    pre/post_blocks hooks, and also lets KVM switch back to the hypervisor
    timer if and only if it was in use (without additional params).  Add a
    comment explaining why the switch cannot be deferred to kvm_sched_out()
    or kvm_vcpu_block().
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-8-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  17. KVM: Move x86 VMX's posted interrupt list_head to vcpu_vmx

    Move the seemingly generic block_vcpu_list from kvm_vcpu to vcpu_vmx, and
    rename the list and all associated variables to clarify that it tracks
    the set of vCPUs that need to be poked on a posted interrupt to the wakeup
    vector.  The list is not used to track _all_ vCPUs that are blocking, and
    the term "blocked" can be misleading as it may refer to a blocking
    condition in the host or the guest, whereas the PI wakeup case is
    specifically for the vCPUs that are actively blocking from within the
    guest.
    
    No functional change intended.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-7-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  18. KVM: Drop unused kvm_vcpu.pre_pcpu field

    Remove kvm_vcpu.pre_pcpu as it no longer has any users.  No functional
    change intended.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-6-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  19. KVM: VMX: Handle PI descriptor updates during vcpu_put/load

    Move the posted interrupt pre/post_block logic into vcpu_put/load
    respectively, using kvm_vcpu_is_blocking() to determine whether or
    not the wakeup handler needs to be set (and unset).  This avoids updating
    the PI descriptor if halt-polling is successful, reduces the number of
    touchpoints for updating the descriptor, and eliminates the confusing
    behavior of intentionally leaving a "stale" PI.NDST when a blocking vCPU
    is scheduled back in after preemption.
    
    The downside is that KVM will do the PID update twice if the vCPU is
    preempted after prepare_to_rcuwait() but before schedule(), but that's a
    rare case (and non-existent on !PREEMPT kernels).
    
    The notable wart is the need to send a self-IPI on the wakeup vector if
    an outstanding notification is pending after configuring the wakeup
    vector.  Ideally, KVM would just do a kvm_vcpu_wake_up() in this case,
    but the scheduler doesn't support waking a task from its preemption
    notifier callback, i.e. while the task is right in the middle of
    being scheduled out.
    
    Note, setting the wakeup vector before halt-polling is not necessary:
    once a pending IRQ is recorded in the PIR, kvm_vcpu_has_events()
    will detect this (via kvm_cpu_get_interrupt(), kvm_apic_get_interrupt(),
    apic_has_interrupt_for_ppr() and finally vmx_sync_pir_to_irr()) and
    terminate the polling.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-Id: <20211208015236.1616697-5-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 10, 2021
  20. KVM: x86/pmu: Reuse pmc_perf_hw_id() and drop find_fixed_event()

    Since the same semantic event value is set for a fixed counter in
    pmc->eventsel, returning the perf_hw_id for a fixed counter via
    find_fixed_event() can be painlessly replaced by pmc_perf_hw_id()
    with the help of a pmc_is_fixed() check.
    
    Signed-off-by: Like Xu <likexu@tencent.com>
    Message-Id: <20211130074221.93635-4-likexu@tencent.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Like Xu authored and bonzini committed Dec 10, 2021
  21. KVM: x86/pmu: Refactor find_arch_event() to pmc_perf_hw_id()

    find_arch_event() returns an "unsigned int" value, which is used by
    pmc_reprogram_counter() to program a PERF_TYPE_HARDWARE type
    perf_event.
    
    The returned value is actually the kernel-defined generic perf_hw_id;
    rename the helper to pmc_perf_hw_id() and simplify its incoming
    parameters so it is self-explanatory.
    
    Signed-off-by: Like Xu <likexu@tencent.com>
    Message-Id: <20211130074221.93635-3-likexu@tencent.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Like Xu authored and bonzini committed Dec 10, 2021
  22. KVM: x86/pmu: Setup pmc->eventsel for fixed PMCs

    The current pmc->eventsel for fixed counters is underutilized.
    pmc->eventsel can be set up for all known available fixed counters,
    since there is a mapping between the fixed pmc index and the
    intel_arch_events array.
    
    Whether for a gp or a fixed counter, this will simplify later
    consistency checks between eventsel and perf_hw_id.
    
    Signed-off-by: Like Xu <likexu@tencent.com>
    Message-Id: <20211130074221.93635-2-likexu@tencent.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Like Xu authored and bonzini committed Dec 10, 2021
  23. KVM: x86: avoid out of bounds indices for fixed performance counters

    Because IceLake has 4 fixed performance counters but KVM only
    supports 3, it is possible for reprogram_fixed_counters to pass
    to reprogram_fixed_counter an index that is out of bounds for the
    fixed_pmc_events array.
    
    Ultimately intel_find_fixed_event, which is the only place that uses
    fixed_pmc_events, handles this correctly because it checks against the
    size of fixed_pmc_events anyway.  Every other place operates on the
    fixed_counters[] array which is sized according to INTEL_PMC_MAX_FIXED.
    However, it is cleaner if the unsupported performance counters are culled
    early on in reprogram_fixed_counters.
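The early culling can be sketched as masking off control bits for fixed counters beyond what KVM supports (toy constant and helper; the 4-bit-per-counter layout follows IA32_FIXED_CTR_CTRL):

```c
#include <stdint.h>

/* KVM supports 3 fixed counters even if the CPU (e.g. Ice Lake) has 4. */
#define TOY_KVM_NR_FIXED 3u

/*
 * Keep only the control fields of supported fixed counters; each fixed
 * counter owns a 4-bit field in FIXED_CTR_CTRL, so culling counter i >= n
 * means clearing bits [4*i, 4*i+3].
 */
static uint64_t toy_cull_fixed_ctrl(uint64_t fixed_ctr_ctrl,
                                    unsigned int hw_nr_fixed)
{
    unsigned int n = hw_nr_fixed < TOY_KVM_NR_FIXED ? hw_nr_fixed
                                                    : TOY_KVM_NR_FIXED;

    return fixed_ctr_ctrl & ((1ULL << (4 * n)) - 1);
}
```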
    
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    bonzini committed Dec 10, 2021
  24. selftests: KVM: sev_migrate_tests: Add mirror command tests

    Add tests to confirm that mirror VMs can only run the correct subset of
    commands.
    
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Sean Christopherson <seanjc@google.com>
    Cc: Marc Orr <marcorr@google.com>
    Signed-off-by: Peter Gonda <pgonda@google.com>
    Message-Id: <20211208191642.3792819-4-pgonda@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    pgonda authored and bonzini committed Dec 10, 2021
  25. selftests: KVM: sev_migrate_tests: Fix sev_ioctl()

    The TEST_ASSERT in the SEV ioctl wrapper was letting errors through
    because it checked that the return value was good OR that the FW error
    code was OK; it should require that both (i.e. AND) are OK. Also remove
    the LAUNCH_START call from the mirror VM, as that call correctly fails
    because mirror VMs cannot issue this command. Currently, issues in the
    PSP driver mean the firmware error is not always reset to
    SEV_RET_SUCCESS when a call succeeds; notably, sev_platform_init() does
    not correctly set the fw error if the platform has already been
    initialized.
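The assertion fix amounts to requiring both conditions (toy predicate with an assumed success value of 0, not the selftest code itself):

```c
#include <stdbool.h>

#define TOY_SEV_RET_SUCCESS 0

/*
 * A SEV ioctl is only OK when the syscall return value is good AND the
 * firmware error is SEV_RET_SUCCESS. The old check used OR, so a call
 * with ret == 0 but a stale or failed firmware error slipped through.
 */
static bool toy_sev_ioctl_ok(int ret, int fw_error)
{
    return ret == 0 && fw_error == TOY_SEV_RET_SUCCESS;
}
```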
    
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Sean Christopherson <seanjc@google.com>
    Cc: Marc Orr <marcorr@google.com>
    Signed-off-by: Peter Gonda <pgonda@google.com>
    Message-Id: <20211208191642.3792819-3-pgonda@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    pgonda authored and bonzini committed Dec 10, 2021
  26. selftests: KVM: sev_migrate_tests: Fix test_sev_mirror()

    Mirrors should not be able to call LAUNCH_START. Remove the call on the
    mirror to correct the test before fixing sev_ioctl() to correctly assert
    on this failed ioctl.
    
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Sean Christopherson <seanjc@google.com>
    Cc: Marc Orr <marcorr@google.com>
    Signed-off-by: Peter Gonda <pgonda@google.com>
    Message-Id: <20211208191642.3792819-2-pgonda@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    pgonda authored and bonzini committed Dec 10, 2021

Commits on Dec 9, 2021

  1. KVM: arm64: Use Makefile.kvm for common files

    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Acked-by: Marc Zyngier <maz@kernel.org>
    Message-Id: <20211121125451.9489-8-dwmw2@infradead.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    dwmw2 authored and bonzini committed Dec 9, 2021
  2. KVM: powerpc: Use Makefile.kvm for common files

    It's all fairly baroque, but in the end I don't think there's any
    reason for $(KVM)/irqchip.o to have been handled differently, as
    everything ends up in $(kvm-y) anyway, regardless of whether it gets
    there via $(common-objs-y) or the CPU-specific object lists.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
    Message-Id: <20211121125451.9489-7-dwmw2@infradead.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    dwmw2 authored and bonzini committed Dec 9, 2021
  3. KVM: RISC-V: Use Makefile.kvm for common files

    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Message-Id: <20211121125451.9489-6-dwmw2@infradead.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    dwmw2 authored and bonzini committed Dec 9, 2021
  4. KVM: mips: Use Makefile.kvm for common files

    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Message-Id: <20211121125451.9489-5-dwmw2@infradead.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    dwmw2 authored and bonzini committed Dec 9, 2021
  5. KVM: s390: Use Makefile.kvm for common files

    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Message-Id: <20211121125451.9489-4-dwmw2@infradead.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    dwmw2 authored and bonzini committed Dec 9, 2021
  6. KVM: Add Makefile.kvm for common files, use it for x86

    Splitting kvm_main.c out into smaller and better-organized files is
    slightly non-trivial when it involves editing a bunch of per-arch
    KVM makefiles. Provide virt/kvm/Makefile.kvm for them to include.
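A minimal sketch of the shape this takes (illustrative object names; the real Makefile.kvm lists the actual common files):

```make
# virt/kvm/Makefile.kvm -- common objects shared by all architectures
kvm-y += $(KVM)/kvm_main.o $(KVM)/eventfd.o $(KVM)/binary_stats.o

# arch/<arch>/kvm/Makefile -- each arch makefile then just includes it
include $(srctree)/virt/kvm/Makefile.kvm
```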
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Acked-by: Marc Zyngier <maz@kernel.org>
    Message-Id: <20211121125451.9489-3-dwmw2@infradead.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    dwmw2 authored and bonzini committed Dec 9, 2021
  7. KVM: Introduce CONFIG_HAVE_KVM_DIRTY_RING

    I'd like to make the build include dirty_ring.c based on whether the
    arch wants it or not. That's a whole lot simpler if there's a config
    symbol instead of doing it implicitly on KVM_DIRTY_LOG_PAGE_OFFSET
    being set to something non-zero.
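With the config symbol in place, the conditional build reduces to the usual Kconfig-driven one-liner (a sketch, assuming the standard kbuild idiom):

```make
# included from Makefile.kvm: built only when the arch selects the symbol
kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
```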
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Message-Id: <20211121125451.9489-2-dwmw2@infradead.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    dwmw2 authored and bonzini committed Dec 9, 2021
  8. KVM: VMX: Clean up PI pre/post-block WARNs

    Move the WARN sanity checks out of the PI descriptor update loop so as
    not to spam the kernel log if the condition is violated and the update
    takes multiple attempts due to another writer.  This also eliminates a
    few extra uops from the retry path.
    
    Technically, not checking every attempt could mean KVM will now fail to
    WARN in a scenario that would have failed before, but any such failure
    would be inherently racy as some other agent (CPU or device) would have
    to concurrently modify the PI descriptor.
    
    Add a helper to handle the actual write and more importantly to document
    why the write may need to be retried.
    
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20211208015236.1616697-4-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Dec 9, 2021