Conversation

assembler-0 (Owner) commented Aug 9, 2025

Summary by CodeRabbit

  • New Features

    • Introduced a memory pool allocator for efficient fixed-size memory allocations.
    • Added support for huge page allocation and mapping, enabling improved memory management and performance.
    • Implemented guarded memory allocations with guard pages to detect memory corruption.
    • Enhanced memory statistics reporting with detailed fragmentation and allocation metrics.
  • Improvements

    • Improved memory layout validation and reporting with new diagnostic outputs and warnings.
    • Enhanced kernel initialization to provide more comprehensive memory statistics and health checks.
    • Refined scheduler parameters for more responsive and aggressive process scheduling.
    • Updated stack allocation for secure processes to use guarded virtual memory allocations.
  • Bug Fixes

    • Fixed missing newline in virtual file system mount log message.
  • Chores

    • Updated build configuration to include the new memory pool allocator source file.
    • Updated memory allocation in FAT12 filesystem to use the new pool allocator.

coderabbitai bot commented Aug 9, 2025

Walkthrough

This update introduces a memory pool allocator subsystem, enhances memory and virtual memory management with huge page and guard page support, and refactors kernel initialization for improved memory diagnostics and safety. It also updates scheduler tuning parameters for the MLFQ, integrates the new memory pool into the build system, and applies minor logging and allocation improvements.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| **Memory Pool Allocator Introduction**<br>`kernel/memory/MemPool.c`, `kernel/memory/MemPool.h`, `meson.build` | Implements a memory pool allocator with fixed-size block pools, allocation/free routines, and default pool initialization; integrates the new source file into the build. |
| **Kernel Memory Management and Initialization**<br>`kernel/core/Kernel.c`, `kernel/memory/Memory.c`, `kernel/memory/Memory.h` | Refactors bootstrap page table allocation; adds huge page support checks, detailed memory stats, fragmentation reporting, and a new memory stats struct/type. Adds huge page allocation logic and updates stats tracking. |
| **Virtual Memory Enhancements**<br>`kernel/memory/VMem.c`, `kernel/memory/VMem.h` | Adds support for huge page mappings and guarded allocations with guard pages, including new allocation/free routines and macros/constants for huge page and guard logic. |
| **MLFQ Scheduler Tuning**<br>`kernel/process/Process.h` | Revises scheduler macros for priority levels, quantum scaling, preemption, fairness, aging, load balancing, and process classification to make scheduling more responsive and aggressive. |
| **Memory Allocation Improvements in Subsystems**<br>`fs/FAT12.c`, `kernel/process/Process.c`, `kernel/memory/KernelHeap.c` | Replaces sector buffer allocation in FAT12 with the pool allocator; changes secure process stack allocation to guarded virtual memory allocation; adds the MemPool.h include in KernelHeap.c with minor formatting. |
| **Logging and Minor Fixes**<br>`fs/VFS.c` | Appends a newline to the root mount log message for formatting consistency. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Kernel as KernelMainHigherHalf
    participant Mem as Memory.c
    participant Pool as MemPool.c
    participant VMem as VMem.c

    Kernel->>Mem: GetDetailedMemoryStats()
    Mem-->>Kernel: MemoryStats struct

    Kernel->>Pool: InitDefaultPools()
    Pool-->>Kernel: (pools initialized)

    Kernel->>Mem: AllocHugePages(num_pages)
    Mem-->>Kernel: huge page address or NULL

    Kernel->>VMem: VMemMapHuge(vaddr, paddr, flags)
    VMem-->>Kernel: success/failure

    Kernel->>VMem: VMemAllocWithGuards(size)
    VMem-->>Kernel: pointer with guard pages
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • Development #52: Also modifies Kernel.c with changes to page table and memory mapping logic, indicating overlapping concerns in kernel memory bootstrap and mapping.

Poem

In the warren of memory, pools now abound,
Huge pages and guards keep our bytes safe and sound.
The scheduler’s sharper, priorities tuned,
With stats and with warnings, no bug goes marooned.
A hop and a skip—this kernel’s more bright!
🐇✨
Reviewers, prepare for a memory delight!



coderabbitai bot left a comment

Actionable comments posted: 9

🔭 Outside diff range comments (5)
kernel/memory/VMem.h (1)

36-52: Virtual-space base re-introduces the 128 TB offset bug

VIRT_ADDR_SPACE_START is still 0xFFFF800000000000ULL; a previous incident (#53) showed this causes the heap to sit far from the kernel, leading to page faults.
Switch back to 0xFFFFFFFF80000000ULL (same as KERNEL_VIRTUAL_BASE) or adjust the mapping logic.
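
For reference, a one-line sketch of the suggested change (the values are taken from the incident report above):

```c
/* Keep the heap near the kernel mapping, per incident #53. */
#define VIRT_ADDR_SPACE_START 0xFFFFFFFF80000000ULL  /* same as KERNEL_VIRTUAL_BASE */
```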

kernel/memory/VMem.c (2)

264-268: vmem_allocations is never incremented.

VMemAlloc updates kernel_space fields but leaves the global counter unused, so PrintVMemStats() always reports zero allocations. Increment it once the allocation succeeds.


320-324: vmem_frees counter also never updated.

Mirror the allocation counter in VMemFree after the final physical page is released to keep statistics meaningful.

kernel/memory/Memory.c (2)

177-234: Update allocation statistics inside AllocPage.

allocation_count is declared but never incremented. Increment it immediately after a page is successfully marked used to keep the new metrics accurate.


273-312: Increment free_count inside FreePage.

Without this the detailed stats under-report frees, breaking any leak detection that relies on the counters.

🧹 Nitpick comments (8)
kernel/memory/VMem.h (2)

36-43: Confirm 2 MiB alignment helpers

Macros look correct, but watch for unsigned wrap-around when addr is near UINT64_MAX; prefer inline functions if you want the overflow handled explicitly.
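
A minimal sketch of the wrap-around concern, assuming 2 MiB huge pages (the macro and function names here are illustrative, not necessarily those in VMem.h):

```c
#include <stdint.h>

#define HUGE_PAGE_SIZE (2ULL * 1024 * 1024)  /* 2 MiB */

/* Classic align-up macro: silently wraps to 0 when addr is within
 * (HUGE_PAGE_SIZE - 1) of UINT64_MAX. */
#define HUGE_PAGE_ALIGN_UP(addr) \
    (((addr) + HUGE_PAGE_SIZE - 1) & ~(HUGE_PAGE_SIZE - 1))

/* Inline variant that reports the wrap instead of hiding it. */
static inline int HugePageAlignUpChecked(uint64_t addr, uint64_t* out) {
    if (addr > UINT64_MAX - (HUGE_PAGE_SIZE - 1))
        return -1;  /* alignment would wrap past the top of the address space */
    *out = (addr + HUGE_PAGE_SIZE - 1) & ~(HUGE_PAGE_SIZE - 1);
    return 0;
}
```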


110-112: New APIs declared – remember visibility

Add the prototypes to the public header only if you truly expect external callers; otherwise mark them static inline in the C file to keep surface minimal.

kernel/memory/MemPool.c (2)

56-63: Missing double-free / bounds checks

MemPoolFree() blindly pushes the block back; a rogue pointer can corrupt the free list.
Add basic validation (range check against pool pages, duplicate detection under DEBUG).
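
A sketch of the kind of validation meant here, assuming a pool descriptor that records its base, block size, and block count, with the free list threaded through the blocks (field and function names are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct MemPool {
    uint8_t* base;        /* start of the pool's backing pages          */
    size_t   block_size;  /* fixed size of each block                   */
    size_t   block_count; /* number of blocks in the pool               */
    void*    free_list;   /* singly linked list threaded through blocks */
} MemPool;

/* Returns 0 on success, -1 if ptr cannot belong to this pool. */
int MemPoolFreeChecked(MemPool* pool, void* ptr) {
    uint8_t* p = (uint8_t*)ptr;
    /* Range check: must lie inside the pool's pages... */
    if (p < pool->base || p >= pool->base + pool->block_size * pool->block_count)
        return -1;
    /* ...and sit on a block boundary. */
    if ((size_t)(p - pool->base) % pool->block_size != 0)
        return -1;
#ifdef DEBUG
    /* Duplicate detection: walk the free list (O(n), debug builds only). */
    for (void* it = pool->free_list; it; it = *(void**)it)
        if (it == ptr)
            return -1;  /* double free */
#endif
    *(void**)ptr = pool->free_list;  /* push back onto the free list */
    pool->free_list = ptr;
    return 0;
}
```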


65-75: Default pools initialised without failure logging

If CreateMemPool returns NULL the loop silently skips that pool; emit at least a kernel log so low-memory situations aren’t hidden.
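
A minimal sketch of the suggested logging; CreateMemPool is named above, while the pool sizes, its signature, and the KernelLog routine are assumptions for illustration:

```c
#include <stddef.h>

extern void* CreateMemPool(size_t block_size);  /* signature assumed */
extern void  KernelLog(const char* fmt, ...);   /* placeholder logger */

static const size_t kDefaultPoolSizes[] = { 64, 256, 512, 4096 };

void InitDefaultPoolsSketch(void) {
    for (size_t i = 0; i < sizeof(kDefaultPoolSizes) / sizeof(kDefaultPoolSizes[0]); i++) {
        if (CreateMemPool(kDefaultPoolSizes[i]) == NULL) {
            /* Surface the failure instead of silently skipping the pool. */
            KernelLog("MemPool: failed to create %u-byte pool\n",
                      (unsigned)kDefaultPoolSizes[i]);
        }
    }
}
```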

kernel/memory/Memory.h (3)

6-16: Good stats struct – but keep it kernel-internal

If the struct crosses ABI boundaries, remember to freeze its layout; otherwise mark it internal to avoid future compatibility pain.


19-20: Duplicate PAGE_SIZE macro

PAGE_SIZE is already defined in VMem.h. Redefining it (even to the same value) can trigger warnings.
Move common constants to a single header and include that instead.


26-28: Huge-page API: clarify units

Parameters use “pages” but the size refers to 2 MiB huge pages, not 4 KiB pages.
Document this explicitly to avoid misuse.
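
For instance, stating the unit in the prototype's comment removes the ambiguity (a sketch; the exact signature is assumed):

```c
#include <stdint.h>

/*
 * AllocHugePages -- num_pages counts 2 MiB huge pages, NOT 4 KiB pages.
 * AllocHugePages(4) therefore reserves 8 MiB, not 16 KiB.
 * Returns the base address of the run, or NULL on failure.
 */
void* AllocHugePages(uint64_t num_pages);
```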

kernel/memory/MemPool.h (1)

35-42: Document pool thread-safety expectations.

The header exposes FastAlloc/FastFree but doesn’t state whether they are safe under concurrent callers. Clarifying this now avoids misuse later.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a170ff7 and 889aac8.

📒 Files selected for processing (10)
  • fs/VFS.c (1 hunks)
  • kernel/core/Kernel.c (10 hunks)
  • kernel/memory/MemPool.c (1 hunks)
  • kernel/memory/MemPool.h (1 hunks)
  • kernel/memory/Memory.c (4 hunks)
  • kernel/memory/Memory.h (1 hunks)
  • kernel/memory/VMem.c (4 hunks)
  • kernel/memory/VMem.h (3 hunks)
  • kernel/process/Process.h (1 hunks)
  • meson.build (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: assembler-0
PR: assembler-0/VoidFrame#53
File: kernel/core/Kernel.c:256-259
Timestamp: 2025-08-09T04:47:56.871Z
Learning: The VoidFrame kernel memory crash occurs when VIRT_ADDR_SPACE_START is set to 0xFFFF800000000000ULL instead of 0xFFFFFFFF80000000ULL, causing the heap to allocate 128TB away from where the kernel is mapped, resulting in page faults when accessing heap memory.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#47
File: kernel/core/Kernel.c:434-434
Timestamp: 2025-07-16T01:04:23.790Z
Learning: VMemAlloc() in VoidFrame kernel fails with error code -2 (VMEM_ERROR_INVALID_ADDR) due to virtual address space layout conflict where the heap start address (_bss_end + KERNEL_VIRTUAL_OFFSET + 1MB margin) exceeds VIRT_ADDR_SPACE_END (0xFFFFFFFF80000000ULL), causing VMemMap validation to fail.
📚 Learning: 2025-07-16T01:04:23.790Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#47
File: kernel/core/Kernel.c:434-434
Timestamp: 2025-07-16T01:04:23.790Z
Learning: VMemAlloc() in VoidFrame kernel fails with error code -2 (VMEM_ERROR_INVALID_ADDR) due to virtual address space layout conflict where the heap start address (_bss_end + KERNEL_VIRTUAL_OFFSET + 1MB margin) exceeds VIRT_ADDR_SPACE_END (0xFFFFFFFF80000000ULL), causing VMemMap validation to fail.

Applied to files:

  • kernel/memory/VMem.h
  • kernel/memory/Memory.h
  • kernel/memory/VMem.c
  • kernel/memory/Memory.c
  • kernel/core/Kernel.c
📚 Learning: 2025-08-09T04:47:56.871Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#53
File: kernel/core/Kernel.c:256-259
Timestamp: 2025-08-09T04:47:56.871Z
Learning: The VoidFrame kernel memory crash occurs when VIRT_ADDR_SPACE_START is set to 0xFFFF800000000000ULL instead of 0xFFFFFFFF80000000ULL, causing the heap to allocate 128TB away from where the kernel is mapped, resulting in page faults when accessing heap memory.

Applied to files:

  • kernel/memory/VMem.h
  • kernel/memory/VMem.c
  • kernel/memory/Memory.c
  • kernel/core/Kernel.c
🧬 Code Graph Analysis (5)
fs/VFS.c (1)
drivers/Serial.c (1)
  • SerialWrite (31-35)
kernel/memory/VMem.h (1)
kernel/memory/VMem.c (2)
  • VMemAllocWithGuards (332-352)
  • VMemFreeWithGuards (354-368)
kernel/memory/Memory.h (1)
kernel/memory/Memory.c (5)
  • MemoryInit (72-174)
  • FreePage (273-311)
  • AllocHugePages (235-270)
  • GetDetailedMemoryStats (317-367)
  • IsPageFree (58-64)
kernel/memory/Memory.c (1)
kernel/atomic/Spinlock.h (2)
  • SpinLockIrqSave (19-26)
  • SpinUnlockIrqRestore (28-31)
kernel/memory/MemPool.c (1)
kernel/memory/VMem.c (2)
  • VMemAlloc (215-273)
  • VMemFree (275-330)
🔇 Additional comments (6)
fs/VFS.c (1)

57-57: Nice touch – missing newline fixed

The added “\n” keeps log output on separate lines and matches surrounding calls.

meson.build (1)

75-75: Build list updated correctly

MemPool.c is now compiled; no other flags required.

kernel/memory/Memory.h (1)

29-32: Expose internal allocator helpers cautiously

IsPageFree is now public; confirm no caller relies on internal bitmap representation.

kernel/memory/Memory.c (1)

58-64: Inline/extern mismatch for IsPageFree.

The function is now a global symbol here, but the header likely still declares it static inline. That combination yields multiple definitions or silent ODR violations. Declare it once: either static inline in the header, or extern in the header with the single definition kept here, not both.

kernel/core/Kernel.c (1)

458-460: Virtual-space overlap check may fire erroneously.

VIRT_ADDR_SPACE_START >= KERNEL_SPACE_START compares unrelated address spaces (user-vs-kernel). If both macros are large constants the test can trip even on valid configs. Clarify intent or drop the check.

kernel/process/Process.h (1)

80-85: Confirm REGULAR_QUEUE_MASK width with new priority levels.

With MAX_PRIORITY_LEVELS reduced to 5 and RT_PRIORITY_THRESHOLD now 2, REGULAR_QUEUE_MASK expands to all-ones above bit 1. Make sure scheduling code masks with the correct bit-width (e.g. & (MAX_PRIORITY_LEVELS-1)) to avoid undefined behaviour on 32-bit math.
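
Since MAX_PRIORITY_LEVELS is not a power of two, a plain `& (MAX_PRIORITY_LEVELS - 1)` would alias levels; a hedged sketch of a safe alternative (the macro values follow this review, everything else is illustrative):

```c
#define MAX_PRIORITY_LEVELS   5   /* values cited in the review above */
#define RT_PRIORITY_THRESHOLD 2

/* ~0u << 2 is all-ones above bit 1; the shift must stay below the width
 * of the queue word. The real definition in Process.h may differ. */
#define REGULAR_QUEUE_MASK (~0u << RT_PRIORITY_THRESHOLD)

/* With 5 levels, masking with (MAX_PRIORITY_LEVELS - 1) == 4 would map
 * priorities 1..3 to 0, so clamp instead of masking before indexing. */
static inline unsigned ClampPriority(unsigned prio) {
    return (prio >= MAX_PRIORITY_LEVELS) ? (MAX_PRIORITY_LEVELS - 1u) : prio;
}
```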

coderabbitai bot left a comment
Actionable comments posted: 1

🔭 Outside diff range comments (2)
fs/FAT12.c (1)

33-60: Mismatch between alloc / free APIs – risk of heap corruption

sector_buffer is obtained with FastAlloc but later released with KernelFree.
Using different allocators for the same block is undefined behaviour: it can leak memory or corrupt the pool/heap.

```diff
     sector_buffer = FastAlloc(POOL_SIZE_512);
 ...
-        KernelFree(sector_buffer);
+        FastFree(sector_buffer);
```

Audit every KernelFree(sector_buffer) in this file.

kernel/process/Process.c (1)

938-946: Stack is freed with the wrong API

VMemAllocWithGuards pairs with VMemFreeWithGuards, but cleanup still calls FreePage.
Guard pages stay mapped and the real stack page is leaked.

Proposed minimal fix:

```diff
         if (proc->stack) {
-            FreePage(proc->stack);
+            VMemFreeWithGuards(proc->stack, STACK_SIZE);
```

Search for any other FreePage(proc->stack) occurrences.

♻️ Duplicate comments (2)
kernel/memory/VMem.c (1)

336-356: Guard pages still writable – remap fails
The VMemMap calls return VMEM_ERROR_ALREADY_MAPPED; the error is ignored, so guard pages keep PAGE_WRITABLE.

See previous comment – use a flag-update helper (VMemProtect) instead of re-mapping.
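
A sketch of the flag-update approach; VMemProtect is the helper name suggested above, while VMemGetPTE and the body are illustrative assumptions:

```c
#include <stdint.h>

#define PAGE_PRESENT            0x1ULL  /* value assumed; defined in VMem.h */
#define VMEM_SUCCESS            0       /* assumed */
#define VMEM_ERROR_INVALID_ADDR (-2)    /* matches the error code cited in #47 */

extern void      VMemFlushTLBSingle(uint64_t vaddr);  /* existing helper */
extern uint64_t* VMemGetPTE(uint64_t vaddr);          /* hypothetical PTE lookup */

int VMemProtect(uint64_t vaddr, uint64_t new_flags) {
    uint64_t* pte = VMemGetPTE(vaddr);
    if (!pte || !(*pte & PAGE_PRESENT))
        return VMEM_ERROR_INVALID_ADDR;      /* nothing mapped to update */
    uint64_t frame = *pte & ~0xFFFULL;       /* keep the physical frame bits */
    *pte = frame | new_flags | PAGE_PRESENT; /* rewrite flags in place */
    VMemFlushTLBSingle(vaddr);               /* drop the stale TLB entry */
    return VMEM_SUCCESS;
}

/* In VMemAllocWithGuards, after mapping each guard page:
 *     VMemProtect(guard_vaddr, 0);   // present, but PAGE_WRITABLE cleared
 */
```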

kernel/memory/Memory.c (1)

235-272: Update statistics while the lock is held

huge_pages_allocated++ and allocation_failures++ are executed after releasing memory_lock, risking lost updates under contention.

```diff
-    SpinUnlockIrqRestore(&memory_lock, flags);
-    ++huge_pages_allocated;
+    ++huge_pages_allocated;
+    SpinUnlockIrqRestore(&memory_lock, flags);
```

Do the same for the failure path.

🧹 Nitpick comments (4)
kernel/memory/Memory.c (1)

319-369: Long critical section – consider snapshot instead

GetDetailedMemoryStats scans the full page bitmap while holding memory_lock, blocking all alloc/free callers for several milliseconds on large RAM.
Snapshot the needed variables, release the lock, then compute fragmentation.
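
A sketch of the snapshot pattern; the lock and spinlock helpers appear in the files above, while the counter names and stats fields are assumptions:

```c
#include <stdint.h>
#include "kernel/atomic/Spinlock.h"  /* irq_flags_t, SpinLockIrqSave/Restore */
#include "kernel/memory/Memory.h"    /* MemoryStats */

/* Globals assumed for illustration; the real counter names may differ.
 * memory_lock is assumed visible here (declared in Memory.c). */
extern uint64_t total_pages, free_page_count, allocation_count;

void GetDetailedMemoryStatsSketch(MemoryStats* out) {
    /* Copy only the hot counters under the lock... */
    irq_flags_t flags = SpinLockIrqSave(&memory_lock);
    uint64_t total  = total_pages;
    uint64_t free_  = free_page_count;
    uint64_t allocs = allocation_count;
    SpinUnlockIrqRestore(&memory_lock, flags);

    /* ...then compute fragmentation/percentages on the snapshot, so
     * concurrent alloc/free callers are not blocked by the scan. */
    out->total_pages      = total;
    out->free_pages       = free_;
    out->allocation_count = allocs;
}
```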

kernel/core/Kernel.c (3)

64-84: Unused parameter – minor cleanup

table_name is never used after (void)table_name;.
If you don’t plan to log it, drop the parameter to avoid confusion.


97-161: Progress logging extremely chatty

Printing every 64 MB/256 MB during the identity map floods serial consoles (>16 000 chars).
Consider gating it behind VERBOSE_BOOT or increasing the interval.


452-461: Overlap check is informative but non-fatal

Currently this only logs; overlapping regions can still crash later.
Consider PANIC() if the check trips.
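
A minimal sketch, assuming the kernel's PANIC() macro and the comparison already used by the check:

```c
/* Fail fast instead of logging and continuing into a later crash. */
if (VIRT_ADDR_SPACE_START >= KERNEL_SPACE_START) {
    PANIC("memory layout: virtual address space overlaps kernel space");
}
```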

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 889aac8 and 7efc122.

📒 Files selected for processing (6)
  • fs/FAT12.c (2 hunks)
  • kernel/core/Kernel.c (10 hunks)
  • kernel/memory/KernelHeap.c (2 hunks)
  • kernel/memory/Memory.c (4 hunks)
  • kernel/memory/VMem.c (4 hunks)
  • kernel/process/Process.c (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • kernel/memory/KernelHeap.c
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: assembler-0
PR: assembler-0/VoidFrame#53
File: kernel/core/Kernel.c:256-259
Timestamp: 2025-08-09T04:47:56.871Z
Learning: The VoidFrame kernel memory crash occurs when VIRT_ADDR_SPACE_START is set to 0xFFFF800000000000ULL instead of 0xFFFFFFFF80000000ULL, causing the heap to allocate 128TB away from where the kernel is mapped, resulting in page faults when accessing heap memory.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#47
File: kernel/core/Kernel.c:434-434
Timestamp: 2025-07-16T01:04:23.790Z
Learning: VMemAlloc() in VoidFrame kernel fails with error code -2 (VMEM_ERROR_INVALID_ADDR) due to virtual address space layout conflict where the heap start address (_bss_end + KERNEL_VIRTUAL_OFFSET + 1MB margin) exceeds VIRT_ADDR_SPACE_END (0xFFFFFFFF80000000ULL), causing VMemMap validation to fail.
📚 Learning: 2025-07-16T01:04:23.790Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#47
File: kernel/core/Kernel.c:434-434
Timestamp: 2025-07-16T01:04:23.790Z
Learning: VMemAlloc() in VoidFrame kernel fails with error code -2 (VMEM_ERROR_INVALID_ADDR) due to virtual address space layout conflict where the heap start address (_bss_end + KERNEL_VIRTUAL_OFFSET + 1MB margin) exceeds VIRT_ADDR_SPACE_END (0xFFFFFFFF80000000ULL), causing VMemMap validation to fail.

Applied to files:

  • fs/FAT12.c
  • kernel/process/Process.c
  • kernel/memory/VMem.c
  • kernel/memory/Memory.c
  • kernel/core/Kernel.c
📚 Learning: 2025-08-09T09:29:00.591Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#57
File: kernel/memory/VMem.h:64-66
Timestamp: 2025-08-09T09:29:00.591Z
Learning: In the VoidFrame kernel, removing locks from KernelMemoryAlloc/KernelMemoryFree causes heap corruption where shell command strings (like "clear" = 0x61656C63) overwrite heap block magic values due to race conditions between allocation and free operations.

Applied to files:

  • kernel/process/Process.c
  • kernel/memory/VMem.c
  • kernel/memory/Memory.c
  • kernel/core/Kernel.c
📚 Learning: 2025-08-09T04:47:56.871Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#53
File: kernel/core/Kernel.c:256-259
Timestamp: 2025-08-09T04:47:56.871Z
Learning: The VoidFrame kernel memory crash occurs when VIRT_ADDR_SPACE_START is set to 0xFFFF800000000000ULL instead of 0xFFFFFFFF80000000ULL, causing the heap to allocate 128TB away from where the kernel is mapped, resulting in page faults when accessing heap memory.

Applied to files:

  • kernel/process/Process.c
  • kernel/memory/VMem.c
  • kernel/memory/Memory.c
  • kernel/core/Kernel.c
🧬 Code Graph Analysis (3)
fs/FAT12.c (1)
kernel/memory/MemPool.c (1)
  • FastAlloc (77-87)
kernel/process/Process.c (1)
kernel/memory/VMem.c (1)
  • VMemAllocWithGuards (336-356)
kernel/memory/Memory.c (1)
kernel/atomic/Spinlock.h (2)
  • SpinLockIrqSave (19-26)
  • SpinUnlockIrqRestore (28-31)
🔇 Additional comments (10)
fs/FAT12.c (1)

6-6: Header inclusion looks correct.

Nothing to flag for the new MemPool.h include.

kernel/process/Process.c (1)

12-12: Include addition is fine.

kernel/memory/VMem.c (1)

358-374: Free-with-guards implementation looks good.

kernel/memory/Memory.c (2)

18-21: Counters added – no issues.


58-64: Public IsPageFree – confirm header exposure

Exporting this previously-private helper is fine, just ensure the prototype is in Memory.h.

kernel/core/Kernel.c (5)

25-25: MemPool include acknowledged.


253-267: Early pool init makes sense – good addition.


340-345: Detected huge pages but not enabled

CheckHugePageSupport only reports capability. If huge pages are desired, follow up by actually mapping eligible ranges with VMemMapHuge here.


350-356: Health check logic OK.


466-493: Unsafe bitmap walk

IsPageFree is not thread-safe; call it under memory_lock or snapshot the bitmap before counting.
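
A hedged sketch of the locked walk; IsPageFree, memory_lock, and the spinlock helpers appear in the files above, while the loop shape and IsPageFree's signature are assumptions:

```c
#include <stdint.h>
#include "kernel/atomic/Spinlock.h"  /* irq_flags_t, SpinLockIrqSave/Restore */

extern int IsPageFree(uint64_t page_index);  /* prototype assumed */
/* memory_lock is assumed visible here (declared in Memory.c). */

uint64_t CountFreePagesLocked(uint64_t total_pages) {
    uint64_t free_count = 0;
    /* Hold memory_lock for the whole walk so alloc/free cannot mutate
     * the bitmap mid-count. */
    irq_flags_t flags = SpinLockIrqSave(&memory_lock);
    for (uint64_t i = 0; i < total_pages; i++) {
        if (IsPageFree(i))
            free_count++;
    }
    SpinUnlockIrqRestore(&memory_lock, flags);
    return free_count;
}
```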

Comment on lines +173 to +217

```c
// Map huge page in VMem.c
int VMemMapHuge(uint64_t vaddr, uint64_t paddr, uint64_t flags) {
    if (!IS_HUGE_PAGE_ALIGNED(vaddr) || !IS_HUGE_PAGE_ALIGNED(paddr)) {
        return VMEM_ERROR_ALIGN;
    }

    irq_flags_t irq_flags = SpinLockIrqSave(&vmem_lock);

    if (!is_valid_phys_addr(paddr)) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_INVALID_ADDR;
    }

    // Get PDP table
    uint64_t pdp_phys = VMemGetPageTablePhys((uint64_t)kernel_space.pml4, vaddr, 0, 1);
    if (!pdp_phys) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_NOMEM;
    }

    // Get PD table
    uint64_t pd_phys = VMemGetPageTablePhys(pdp_phys, vaddr, 1, 1);
    if (!pd_phys) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_NOMEM;
    }

    // Access PD through identity mapping
    uint64_t* pd_virt = (pd_phys < IDENTITY_MAP_SIZE) ?
        (uint64_t*)pd_phys : (uint64_t*)PHYS_TO_VIRT(pd_phys);

    int pd_index = (vaddr >> PD_SHIFT) & PT_INDEX_MASK;

    // Check if already mapped
    if (pd_virt[pd_index] & PAGE_PRESENT) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_ALREADY_MAPPED;
    }

    // Set huge page mapping (PS bit = 1 for 2MB pages)
    pd_virt[pd_index] = paddr | flags | PAGE_PRESENT | PAGE_LARGE;

    VMemFlushTLBSingle(vaddr);
    SpinUnlockIrqRestore(&vmem_lock, irq_flags);
    return VMEM_SUCCESS;
}
```

⚠️ Potential issue

Validate virtual address in VMemMapHuge

Unlike VMemMap, the huge-page variant never checks that vaddr lies inside [VIRT_ADDR_SPACE_START, VIRT_ADDR_SPACE_END).
Passing an out-of-range address will silently corrupt page tables.

```diff
+    if (vaddr < VIRT_ADDR_SPACE_START || vaddr >= VIRT_ADDR_SPACE_END) {
+        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
+        return VMEM_ERROR_INVALID_ADDR;
+    }
```

(Optionally update kernel_space.used_pages/total_mapped for bookkeeping.)

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
     irq_flags_t irq_flags = SpinLockIrqSave(&vmem_lock);

+    // Validate virtual address range
+    if (vaddr < VIRT_ADDR_SPACE_START || vaddr >= VIRT_ADDR_SPACE_END) {
+        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
+        return VMEM_ERROR_INVALID_ADDR;
+    }
+
     if (!is_valid_phys_addr(paddr)) {
         SpinUnlockIrqRestore(&vmem_lock, irq_flags);
         return VMEM_ERROR_INVALID_ADDR;
     }
```
🤖 Prompt for AI Agents
In kernel/memory/VMem.c around lines 173 to 217, the function VMemMapHuge lacks
validation to ensure the virtual address vaddr is within the valid range
[VIRT_ADDR_SPACE_START, VIRT_ADDR_SPACE_END). Add a check at the start of
VMemMapHuge to verify that vaddr is within this range and return an appropriate
error code if not. Optionally, update kernel_space.used_pages and total_mapped
counters after a successful mapping for bookkeeping.

assembler-0 merged commit 6a39224 into main Aug 9, 2025
1 check passed
This was referenced Aug 13, 2025
This was referenced Sep 11, 2025