Conversation

@jserv jserv commented Oct 31, 2025

This optimizes RFENCE.VMA to use range-based cache invalidation instead of unconditionally flushing all MMU caches. It adds mmu_invalidate_range, which invalidates only the cache entries that fall within the specified virtual address range, reducing cache flushes by 75-100% for single-page operations.

SBI compliance: size == 0 and size == -1 both trigger a full flush, per the specification.
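
A minimal sketch of what such a routine might look like, assuming a flat array of page-granular cache entries per hart; hart_t, mmu_cache_entry_t, CACHE_SIZE, and PAGE_SHIFT are illustrative names rather than the emulator's actual structures:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define CACHE_SIZE 64 /* hypothetical number of cached translations */

typedef struct {
    bool valid;
    uint32_t vaddr; /* page-aligned virtual address of the entry */
} mmu_cache_entry_t;

typedef struct {
    mmu_cache_entry_t cache[CACHE_SIZE];
} hart_t;

/* Invalidate only the cached translations whose page falls inside
 * [start_addr, start_addr + size); the caller handles the SBI
 * "flush everything" encodings separately. */
void mmu_invalidate_range(hart_t *hart, uint32_t start_addr, uint32_t size)
{
    /* Compute the exclusive end with 64-bit arithmetic and clamp it to
     * the RV32 address space so start_addr + size cannot wrap around. */
    uint64_t end = (uint64_t) start_addr + (uint64_t) size;
    if (end > (uint64_t) UINT32_MAX + 1)
        end = (uint64_t) UINT32_MAX + 1;

    uint32_t start_page = start_addr >> PAGE_SHIFT;
    uint64_t end_page = (end + ((1u << PAGE_SHIFT) - 1)) >> PAGE_SHIFT;

    for (int i = 0; i < CACHE_SIZE; i++) {
        mmu_cache_entry_t *e = &hart->cache[i];
        uint32_t page = e->vaddr >> PAGE_SHIFT;
        if (e->valid && page >= start_page && page < end_page)
            e->valid = false;
    }
}
```

Clamping the 64-bit end to the top of the RV32 address space means a range that would run past 0xFFFFFFFF degrades into invalidating up to the end of memory instead of silently wrapping around.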


Summary by cubic

Scope RFENCE.VMA cache invalidation to the requested virtual address range to reduce unnecessary flushes and speed up single-page operations. A full flush still occurs when size is 0 or -1, per the SBI specification; the change reduces cache flushes by 75-100% in single-page cases.

  • New Features
    • Added mmu_invalidate_range(hart, start_addr, size) and used it for RFENCE.VMA/VMA.ASID with hart mask selection (a dispatch sketch follows this list).
    • Computed the end of the range with 64-bit arithmetic and clamped it to the RV32 address space to avoid overflow.
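
A hedged sketch of that dispatch path, reusing the hart_t and mmu_invalidate_range from the sketch above; NUM_HARTS, harts[], and mmu_invalidate_all are hypothetical placeholders for the emulator's actual hart bookkeeping and full-flush routine:

```c
#define NUM_HARTS 4 /* hypothetical hart count */

void mmu_invalidate_all(hart_t *hart); /* hypothetical full-flush routine */

static void rfence_sfence_vma(hart_t *harts, uint32_t hart_mask,
                              uint32_t hart_mask_base,
                              uint32_t start_addr, uint32_t size)
{
    for (uint32_t i = 0; i < NUM_HARTS; i++) {
        /* hart_mask_base == -1 selects all harts per the SBI hart-mask
         * convention; otherwise hart i is selected when bit
         * (i - hart_mask_base) of hart_mask is set. */
        bool selected =
            hart_mask_base == (uint32_t) -1 ||
            (i >= hart_mask_base && i - hart_mask_base < 32 &&
             ((hart_mask >> (i - hart_mask_base)) & 1u));
        if (!selected)
            continue;

        /* SBI compliance: size == 0 and size == -1 both mean the whole
         * address space, so fall back to a full flush. */
        if (size == 0 || size == (uint32_t) -1)
            mmu_invalidate_all(&harts[i]);
        else
            mmu_invalidate_range(&harts[i], start_addr, size);
    }
}
```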

Written for commit fb11de8. Summary will update automatically on new commits.

@jserv jserv merged commit 40f7e60 into master Nov 2, 2025
11 checks passed
@jserv jserv deleted the rfence branch November 2, 2025 15:37