
Commit ce7b569

yanzhao56 authored and sean-jc committed
KVM: TDX: Drop superfluous page pinning in S-EPT management
Don't explicitly pin pages when mapping them into the S-EPT. guest_memfd doesn't support page migration in any capacity, i.e. there are no migrate callbacks because guest_memfd pages *can't* be migrated; see the WARN in kvm_gmem_migrate_folio(). Eliminating TDX's explicit pinning will also enable guest_memfd to support in-place conversion between shared and private memory[1][2].

Because KVM cannot distinguish between speculative/transient refcounts and the intentional refcount TDX holds on private pages[3], failing to release a private page's refcount in TDX could cause guest_memfd to wait indefinitely for the refcount to drop during splitting.

Under normal conditions, not holding an extra page refcount in TDX is safe because guest_memfd ensures pages are retained until its invalidation notification to the KVM MMU completes. However, if there are bugs in KVM or the TDX module, not holding an extra refcount while a page is mapped in the S-EPT could result in a page being released from guest_memfd while still mapped in the S-EPT. But doing work to make a fatal error slightly less fatal is a net negative when that extra work adds complexity and confusion.

Several approaches were considered to address the refcount issue, including:

 - Attempting to modify the KVM unmap operation to return a failure, which was deemed too complex and potentially incorrect[4].
 - Increasing the folio reference count only upon S-EPT zapping failure[5].
 - Using page flags or page_ext to indicate that a page is still used by TDX[6], which does not work with HVO (HugeTLB Vmemmap Optimization).
 - Setting the HWPOISON bit or leveraging folio_set_hugetlb_hwpoison()[7].

Due to the complexity or inappropriateness of these approaches, and because S-EPT zapping failure is currently only possible when there are bugs in KVM or the TDX module, which is very rare in a production kernel, the straightforward approach of simply not holding a page reference count in TDX was chosen[8].

When an S-EPT zapping error occurs, KVM_BUG_ON() is invoked to kick all vCPUs out of the guest and mark the VM as dead. Although there is a potential window in which a private page still mapped in the S-EPT could be reallocated and used outside the VM, the loud warning from KVM_BUG_ON() should provide sufficient debug information. To be robust against such bugs, the user can enable panic_on_warn as usual.
Link: https://lore.kernel.org/all/cover.1747264138.git.ackerleytng@google.com [1]
Link: https://youtu.be/UnBKahkAon4 [2]
Link: https://lore.kernel.org/all/CAGtprH_ypohFy9TOJ8Emm_roT4XbQUtLKZNFcM6Fr+fhTFkE0Q@mail.gmail.com [3]
Link: https://lore.kernel.org/all/aEEEJbTzlncbRaRA@yzhao56-desk.sh.intel.com [4]
Link: https://lore.kernel.org/all/aE%2Fq9VKkmaCcuwpU@yzhao56-desk.sh.intel.com [5]
Link: https://lore.kernel.org/all/aFkeBtuNBN1RrDAJ@yzhao56-desk.sh.intel.com [6]
Link: https://lore.kernel.org/all/diqzy0tikran.fsf@ackerleytng-ctop.c.googlers.com [7]
Link: https://lore.kernel.org/all/53ea5239f8ef9d8df9af593647243c10435fd219.camel@intel.com [8]
Suggested-by: Vishal Annapurve <vannapurve@google.com>
Suggested-by: Ackerley Tng <ackerleytng@google.com>
Suggested-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
[sean: extract out of hugepage series, massage changelog accordingly]
Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Yan Zhao <yan.y.zhao@intel.com>
Tested-by: Kai Huang <kai.huang@intel.com>
Link: https://patch.msgid.link/20251030200951.3402865-9-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
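
For context on the final paragraph of the changelog: "kick all vCPUs and mark the VM as dead" is the standard KVM_BUG_ON() behavior. Below is a paraphrased sketch of the relevant helpers from include/linux/kvm_host.h, not a verbatim copy; exact details vary across kernel versions.

/*
 * Paraphrased sketch: KVM_BUG_ON() warns once, then marks the VM as
 * bugged/dead and sends KVM_REQ_VM_DEAD to every vCPU, so no further
 * entries into the (TD) guest occur after a fatal S-EPT error.
 */
#define KVM_BUG_ON(cond, kvm)                                   \
({                                                              \
        bool __ret = !!(cond);                                  \
                                                                \
        if (WARN_ON_ONCE(__ret && !(kvm)->vm_bugged))           \
                kvm_vm_bugged(kvm);                             \
        __ret;                                                  \
})

static inline void kvm_vm_dead(struct kvm *kvm)
{
        kvm->vm_dead = true;
        kvm_make_all_cpus_request(kvm, KVM_REQ_VM_DEAD);
}

static inline void kvm_vm_bugged(struct kvm *kvm)
{
        kvm->vm_bugged = true;
        kvm_vm_dead(kvm);
}

Once the VM is marked dead, subsequent VM/vCPU ioctls are rejected and vCPUs are kicked out of the guest, which bounds the damage from a failed S-EPT zap to a loud warning and a dead VM.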
1 parent 6de2fb0 commit ce7b569

File tree

1 file changed (+4, −24 lines)


arch/x86/kvm/vmx/tdx.c

Lines changed: 4 additions & 24 deletions
@@ -1583,29 +1583,22 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
         td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa);
 }
 
-static void tdx_unpin(struct kvm *kvm, struct page *page)
-{
-        put_page(page);
-}
-
 static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
-                            enum pg_level level, struct page *page)
+                            enum pg_level level, kvm_pfn_t pfn)
 {
         int tdx_level = pg_level_to_tdx_sept_level(level);
         struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+        struct page *page = pfn_to_page(pfn);
         gpa_t gpa = gfn_to_gpa(gfn);
         u64 entry, level_state;
         u64 err;
 
         err = tdh_mem_page_aug(&kvm_tdx->td, gpa, tdx_level, page, &entry, &level_state);
-        if (unlikely(tdx_operand_busy(err))) {
-                tdx_unpin(kvm, page);
+        if (unlikely(tdx_operand_busy(err)))
                 return -EBUSY;
-        }
 
         if (KVM_BUG_ON(err, kvm)) {
                 pr_tdx_error_2(TDH_MEM_PAGE_AUG, err, entry, level_state);
-                tdx_unpin(kvm, page);
                 return -EIO;
         }
 
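For readability, here is the post-patch shape of tdx_mem_page_aug() reconstructed from the hunk above: it now takes a kvm_pfn_t and no longer touches the page refcount on any failure path. The function's tail lies outside the hunk and is elided; the error-path comments are annotations added here, not part of the source.

static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
                            enum pg_level level, kvm_pfn_t pfn)
{
        int tdx_level = pg_level_to_tdx_sept_level(level);
        struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
        struct page *page = pfn_to_page(pfn);
        gpa_t gpa = gfn_to_gpa(gfn);
        u64 entry, level_state;
        u64 err;

        err = tdh_mem_page_aug(&kvm_tdx->td, gpa, tdx_level, page, &entry, &level_state);
        if (unlikely(tdx_operand_busy(err)))    /* S-EPT contention, retryable */
                return -EBUSY;

        if (KVM_BUG_ON(err, kvm)) {             /* unexpected SEAMCALL failure */
                pr_tdx_error_2(TDH_MEM_PAGE_AUG, err, entry, level_state);
                return -EIO;
        }
        /* ... remainder of the function is unchanged and outside this hunk ... */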
@@ -1639,29 +1632,18 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
                                      enum pg_level level, kvm_pfn_t pfn)
 {
         struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
-        struct page *page = pfn_to_page(pfn);
 
         /* TODO: handle large pages. */
         if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
                 return -EINVAL;
 
-        /*
-         * Because guest_memfd doesn't support page migration with
-         * a_ops->migrate_folio (yet), no callback is triggered for KVM on page
-         * migration.  Until guest_memfd supports page migration, prevent page
-         * migration.
-         * TODO: Once guest_memfd introduces callback on page migration,
-         * implement it and remove get_page/put_page().
-         */
-        get_page(page);
-
         /*
          * Read 'pre_fault_allowed' before 'kvm_tdx->state'; see matching
          * barrier in tdx_td_finalize().
          */
         smp_rmb();
         if (likely(kvm_tdx->state == TD_STATE_RUNNABLE))
-                return tdx_mem_page_aug(kvm, gfn, level, page);
+                return tdx_mem_page_aug(kvm, gfn, level, pfn);
 
         return tdx_mem_page_record_premap_cnt(kvm, gfn, level, pfn);
 }
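Likewise, the post-patch tdx_sept_set_private_spte() as reconstructed from the hunk above: the pinning comment and get_page() are gone, and the pfn flows straight through to tdx_mem_page_aug().

static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
                                     enum pg_level level, kvm_pfn_t pfn)
{
        struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);

        /* TODO: handle large pages. */
        if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
                return -EINVAL;

        /*
         * Read 'pre_fault_allowed' before 'kvm_tdx->state'; see matching
         * barrier in tdx_td_finalize().
         */
        smp_rmb();
        if (likely(kvm_tdx->state == TD_STATE_RUNNABLE))
                return tdx_mem_page_aug(kvm, gfn, level, pfn);

        return tdx_mem_page_record_premap_cnt(kvm, gfn, level, pfn);
}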
@@ -1712,7 +1694,6 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
                 return -EIO;
         }
         tdx_quirk_reset_page(page);
-        tdx_unpin(kvm, page);
         return 0;
 }
 
@@ -1792,7 +1773,6 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
         if (tdx_is_sept_zap_err_due_to_premap(kvm_tdx, err, entry, level) &&
             !KVM_BUG_ON(!atomic64_read(&kvm_tdx->nr_premapped), kvm)) {
                 atomic64_dec(&kvm_tdx->nr_premapped);
-                tdx_unpin(kvm, page);
                 return 0;
         }
 
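After this last hunk, the premap special case in tdx_sept_zap_private_spte() only adjusts the premap accounting. Reconstructed from the hunk, with an annotation comment added here; reading nr_premapped as "pages recorded by tdx_mem_page_record_premap_cnt() but not yet AUGed" is inferred from the function names, not stated in the diff.

        /*
         * The GFN was recorded as premapped (see
         * tdx_mem_page_record_premap_cnt() above) but never AUGed, so a zap
         * failure here is expected: undo the accounting and report success.
         * No put_page() is needed, as TDX no longer holds a reference of
         * its own.
         */
        if (tdx_is_sept_zap_err_due_to_premap(kvm_tdx, err, entry, level) &&
            !KVM_BUG_ON(!atomic64_read(&kvm_tdx->nr_premapped), kvm)) {
                atomic64_dec(&kvm_tdx->nr_premapped);
                return 0;
        }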