
Commit ba0e586

eugkoira authored and gregkh committed
iommu/vt-d: Fix __domain_mapping()'s usage of switch_to_super_page()
commit dce043c upstream.

switch_to_super_page() assumes the memory range it's working on is
aligned to the target large page level. Unfortunately,
__domain_mapping() doesn't take this into account when using it, and
will pass unaligned ranges ultimately freeing a PTE range larger than
expected.

Take for example a mapping with the following iov_pfn range
[0x3fe400, 0x4c0600), which should be backed by the following mappings:

   iov_pfn [0x3fe400, 0x3fffff] covered by 2MiB pages
   iov_pfn [0x400000, 0x4bffff] covered by 1GiB pages
   iov_pfn [0x4c0000, 0x4c05ff] covered by 2MiB pages

Under this circumstance, __domain_mapping() will pass [0x400000,
0x4c05ff] to switch_to_super_page() at a 1GiB granularity, which will
in turn free PTEs all the way to iov_pfn 0x4fffff.

Mitigate this by rounding down the iov_pfn range passed to
switch_to_super_page() in __domain_mapping() to the target large page
level. Additionally add range alignment checks to switch_to_super_page.

Fixes: 9906b93 ("iommu/vt-d: Avoid duplicate removing in __domain_mapping()")
Signed-off-by: Eugene Koira <eugkoira@amazon.com>
Cc: stable@vger.kernel.org
Reviewed-by: Nicolas Saenz Julienne <nsaenz@amazon.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lore.kernel.org/r/20250826143816.38686-1-eugkoira@amazon.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent 2605cf8 commit ba0e586

File tree

1 file changed: +6 −1 lines changed


drivers/iommu/intel/iommu.c

Lines changed: 6 additions & 1 deletion
@@ -2205,6 +2205,10 @@ static void switch_to_super_page(struct dmar_domain *domain,
 	struct dma_pte *pte = NULL;
 	unsigned long i;
 
+	if (WARN_ON(!IS_ALIGNED(start_pfn, lvl_pages) ||
+		    !IS_ALIGNED(end_pfn + 1, lvl_pages)))
+		return;
+
 	while (start_pfn <= end_pfn) {
 		if (!pte)
 			pte = pfn_to_dma_pte(domain, start_pfn, &level);
@@ -2272,7 +2276,8 @@ __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
 	unsigned long pages_to_remove;
 
 	pteval |= DMA_PTE_LARGE_PAGE;
-	pages_to_remove = min_t(unsigned long, nr_pages,
+	pages_to_remove = min_t(unsigned long,
+				round_down(nr_pages, lvl_pages),
 				nr_pte_to_next_page(pte) * lvl_pages);
 	end_pfn = iov_pfn + pages_to_remove - 1;
 	switch_to_super_page(domain, iov_pfn, end_pfn, largepage_lvl);
