Development #57
Conversation
Walkthrough

This update introduces a memory pool allocator subsystem, enhances memory and virtual memory management with huge page and guard page support, and refactors kernel initialization for improved memory diagnostics and safety. It also updates scheduler tuning parameters for the MLFQ, integrates the new memory pool into the build system, and applies minor logging and allocation improvements.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Kernel as KernelMainHigherHalf
    participant Mem as Memory.c
    participant Pool as MemPool.c
    participant VMem as VMem.c
    Kernel->>Mem: GetDetailedMemoryStats()
    Mem-->>Kernel: MemoryStats struct
    Kernel->>Pool: InitDefaultPools()
    Pool-->>Kernel: (pools initialized)
    Kernel->>Mem: AllocHugePages(num_pages)
    Mem-->>Kernel: huge page address or NULL
    Kernel->>VMem: VMemMapHuge(vaddr, paddr, flags)
    VMem-->>Kernel: success/failure
    Kernel->>VMem: VMemAllocWithGuards(size)
    VMem-->>Kernel: pointer with guard pages
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Actionable comments posted: 9
🔭 Outside diff range comments (5)
kernel/memory/VMem.h (1)

36-52: Virtual-space base re-introduces the 128 TB offset bug

VIRT_ADDR_SPACE_START is still 0xFFFF800000000000ULL; a previous incident (#53) showed this causes the heap to sit far from the kernel, leading to page faults. Switch back to 0xFFFFFFFF80000000ULL (same as KERNEL_VIRTUAL_BASE) or adjust the mapping logic.

kernel/memory/VMem.c (2)
264-268: vmem_allocations is never incremented

VMemAlloc updates kernel_space fields but leaves the global counter unused, so PrintVMemStats() always reports zero allocations. Increment it once the allocation succeeds.

320-324: vmem_frees counter also never updated

Mirror the allocation counter in VMemFree after the final physical page is released to keep the statistics meaningful.
kernel/memory/Memory.c (2)

177-234: Update allocation statistics inside AllocPage

allocation_count is declared but never incremented. Add the counter immediately after a page is successfully marked used to keep the new metrics accurate.

273-312: Increment free_count inside FreePage

Without this the detailed stats under-report frees, breaking any leak detection that relies on the counters.
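A minimal sketch of both counter updates; the counter and lock names come from this review, while the bitmap helpers (BitmapFindAndMarkUsed, BitmapMarkFree) are hypothetical stand-ins for Memory.c's real internals:

```c
void* AllocPage(void) {
    irq_flags_t flags = SpinLockIrqSave(&memory_lock);
    void* page = BitmapFindAndMarkUsed();   // hypothetical stand-in for the bitmap scan
    if (page)
        allocation_count++;                 // count while the lock is still held
    else
        allocation_failures++;
    SpinUnlockIrqRestore(&memory_lock, flags);
    return page;
}

void FreePage(void* page) {
    irq_flags_t flags = SpinLockIrqSave(&memory_lock);
    BitmapMarkFree(page);                   // hypothetical stand-in
    free_count++;                           // mirrors the allocation counter
    SpinUnlockIrqRestore(&memory_lock, flags);
}
```

The same pattern applies to the vmem_allocations/vmem_frees counters in VMemAlloc/VMemFree noted above.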
🧹 Nitpick comments (8)
kernel/memory/VMem.h (2)

36-43: Confirm 2 MiB alignment helpers

The macros look correct, but watch for unsigned overflow when addr is near UINT64_MAX; prefer inline functions if you later rely on defined overflow behaviour.
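A sketch of what the inline variants could look like; HUGE_PAGE_SIZE and the helper names are illustrative, not VMem.h's actual definitions:

```c
#include <stdbool.h>
#include <stdint.h>

#define HUGE_PAGE_SIZE (2ULL * 1024 * 1024)

static inline bool HugePageAligned(uint64_t addr) {
    return (addr & (HUGE_PAGE_SIZE - 1)) == 0;
}

/* Align up, reporting failure instead of silently wrapping past UINT64_MAX. */
static inline bool HugePageAlignUp(uint64_t addr, uint64_t* out) {
    const uint64_t mask = HUGE_PAGE_SIZE - 1;
    if (addr > UINT64_MAX - mask)
        return false;               /* would overflow */
    *out = (addr + mask) & ~mask;
    return true;
}
```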
110-112: New APIs declared – remember visibility

Add the prototypes to the public header only if you truly expect external callers; otherwise mark them static inline in the C file to keep the surface minimal.

kernel/memory/MemPool.c (2)
56-63: Missing double-free / bounds checks

MemPoolFree() blindly pushes the block back; a rogue pointer can corrupt the free list. Add basic validation (range check against pool pages, duplicate detection under DEBUG).
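A sketch of the suggested checks; the MemPool layout (base, block_size, block_count, free_list) is assumed for illustration and should be adapted to the real struct:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct MemPool {
    uint8_t* base;        /* start of the pool's backing pages */
    size_t   block_size;
    size_t   block_count;
    void*    free_list;   /* singly linked list threaded through free blocks */
} MemPool;

static bool MemPoolOwns(const MemPool* pool, const void* ptr) {
    uintptr_t p  = (uintptr_t)ptr;
    uintptr_t lo = (uintptr_t)pool->base;
    uintptr_t hi = lo + pool->block_size * pool->block_count;
    /* In range and aligned to a block boundary. */
    return p >= lo && p < hi && ((p - lo) % pool->block_size) == 0;
}

void MemPoolFree(MemPool* pool, void* ptr) {
    if (!pool || !MemPoolOwns(pool, ptr))
        return;                         /* reject foreign or misaligned pointers */
#ifdef DEBUG
    for (void* it = pool->free_list; it; it = *(void**)it)
        if (it == ptr)
            return;                     /* double free: block already on the list */
#endif
    *(void**)ptr = pool->free_list;     /* push the block back */
    pool->free_list = ptr;
}
```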
65-75: Default pools initialised without failure logging

If CreateMemPool returns NULL the loop silently skips that pool; emit at least a kernel log so low-memory situations aren't hidden.

kernel/memory/Memory.h (3)
6-16: Good stats struct – but keep it kernel-internal

If the struct crosses ABI boundaries, remember to freeze its layout; otherwise mark it internal to avoid future compatibility pain.

19-20: Duplicate PAGE_SIZE macro

PAGE_SIZE is already defined in VMem.h. Redefining it (even to the same value) can trigger warnings. Move common constants to a single header and include that instead.

26-28: Huge-page API: clarify units

Parameters use "pages" but the size refers to 2 MiB huge pages, not 4 KiB pages. Document this explicitly to avoid misuse.

kernel/memory/MemPool.h (1)
35-42: Document pool thread-safety expectations

The header exposes FastAlloc/FastFree but doesn't state whether they are safe under concurrent callers. Clarifying this now avoids misuse later.
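One shape such a note could take; the signatures and the caller-locks rule below are assumptions for illustration, not MemPool.h's actual contract:

```c
/*
 * Thread-safety (sketch): FastAlloc()/FastFree() are assumed NOT to be
 * internally synchronized here; a pool shared across contexts must be
 * protected by the caller, or replaced with per-CPU pools. Verify against
 * MemPool.c before relying on either reading.
 */
void* FastAlloc(size_t size_class);   /* e.g. POOL_SIZE_512 */
void  FastFree(void* ptr);
```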
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- fs/VFS.c (1 hunks)
- kernel/core/Kernel.c (10 hunks)
- kernel/memory/MemPool.c (1 hunks)
- kernel/memory/MemPool.h (1 hunks)
- kernel/memory/Memory.c (4 hunks)
- kernel/memory/Memory.h (1 hunks)
- kernel/memory/VMem.c (4 hunks)
- kernel/memory/VMem.h (3 hunks)
- kernel/process/Process.h (1 hunks)
- meson.build (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: assembler-0
PR: assembler-0/VoidFrame#53
File: kernel/core/Kernel.c:256-259
Timestamp: 2025-08-09T04:47:56.871Z
Learning: The VoidFrame kernel memory crash occurs when VIRT_ADDR_SPACE_START is set to 0xFFFF800000000000ULL instead of 0xFFFFFFFF80000000ULL, causing the heap to allocate 128TB away from where the kernel is mapped, resulting in page faults when accessing heap memory.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#47
File: kernel/core/Kernel.c:434-434
Timestamp: 2025-07-16T01:04:23.790Z
Learning: VMemAlloc() in VoidFrame kernel fails with error code -2 (VMEM_ERROR_INVALID_ADDR) due to virtual address space layout conflict where the heap start address (_bss_end + KERNEL_VIRTUAL_OFFSET + 1MB margin) exceeds VIRT_ADDR_SPACE_END (0xFFFFFFFF80000000ULL), causing VMemMap validation to fail.
📚 Learning: 2025-07-16T01:04:23.790Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#47
File: kernel/core/Kernel.c:434-434
Timestamp: 2025-07-16T01:04:23.790Z
Learning: VMemAlloc() in VoidFrame kernel fails with error code -2 (VMEM_ERROR_INVALID_ADDR) due to virtual address space layout conflict where the heap start address (_bss_end + KERNEL_VIRTUAL_OFFSET + 1MB margin) exceeds VIRT_ADDR_SPACE_END (0xFFFFFFFF80000000ULL), causing VMemMap validation to fail.
Applied to files:
kernel/memory/VMem.h
kernel/memory/Memory.h
kernel/memory/VMem.c
kernel/memory/Memory.c
kernel/core/Kernel.c
📚 Learning: 2025-08-09T04:47:56.871Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#53
File: kernel/core/Kernel.c:256-259
Timestamp: 2025-08-09T04:47:56.871Z
Learning: The VoidFrame kernel memory crash occurs when VIRT_ADDR_SPACE_START is set to 0xFFFF800000000000ULL instead of 0xFFFFFFFF80000000ULL, causing the heap to allocate 128TB away from where the kernel is mapped, resulting in page faults when accessing heap memory.
Applied to files:
kernel/memory/VMem.h
kernel/memory/VMem.c
kernel/memory/Memory.c
kernel/core/Kernel.c
🧬 Code Graph Analysis (5)

fs/VFS.c (1)
- drivers/Serial.c (1): SerialWrite (31-35)

kernel/memory/VMem.h (1)
- kernel/memory/VMem.c (2): VMemAllocWithGuards (332-352), VMemFreeWithGuards (354-368)

kernel/memory/Memory.h (1)
- kernel/memory/Memory.c (5): MemoryInit (72-174), FreePage (273-311), AllocHugePages (235-270), GetDetailedMemoryStats (317-367), IsPageFree (58-64)

kernel/memory/Memory.c (1)
- kernel/atomic/Spinlock.h (2): SpinLockIrqSave (19-26), SpinUnlockIrqRestore (28-31)

kernel/memory/MemPool.c (1)
- kernel/memory/VMem.c (2): VMemAlloc (215-273), VMemFree (275-330)
🔇 Additional comments (6)

fs/VFS.c (1)

57-57: Nice touch – missing newline fixed

The added "\n" keeps log output on separate lines and matches the surrounding calls.

meson.build (1)

75-75: Build list updated correctly

MemPool.c is now compiled; no other flags required.

kernel/memory/Memory.h (1)

29-32: Expose internal allocator helpers cautiously

IsPageFree is now public; confirm no caller relies on the internal bitmap representation.

kernel/memory/Memory.c (1)

58-64: Inline/extern mismatch for IsPageFree

The function is now a global symbol here, but headers likely still declare it static inline, which leads to multiple definitions or silent ODR violations. Declare it once in the header as static inline, and provide an extern definition only if you need a fallback object-file version – not both.

kernel/core/Kernel.c (1)

458-460: Virtual-space overlap check may fire erroneously

VIRT_ADDR_SPACE_START >= KERNEL_SPACE_START compares unrelated address spaces (user vs. kernel). If both macros are large constants the test can trip even on valid configs. Clarify the intent or drop the check.

kernel/process/Process.h (1)

80-85: Confirm REGULAR_QUEUE_MASK width with new priority levels

With MAX_PRIORITY_LEVELS reduced to 5 and RT_PRIORITY_THRESHOLD now 2, REGULAR_QUEUE_MASK expands to all-ones above bit 1. Make sure the scheduling code masks with the correct bit width (e.g. & (MAX_PRIORITY_LEVELS-1)) to avoid undefined behaviour on 32-bit math.
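A sketch of the clamp, using the values the comment cites (5 levels, threshold 2); the mask and helper definitions are illustrative, not Process.h's actual code:

```c
#include <stdint.h>

#define MAX_PRIORITY_LEVELS   5
#define RT_PRIORITY_THRESHOLD 2
/* Bits for the non-realtime queues: everything from the threshold upward. */
#define REGULAR_QUEUE_MASK    (~0u << RT_PRIORITY_THRESHOLD)

static inline uint32_t RegularQueueBits(uint32_t ready_mask) {
    /* Clamp to bits 0..MAX_PRIORITY_LEVELS-1 so the all-ones expansion of
     * REGULAR_QUEUE_MASK never selects queues that don't exist. */
    const uint32_t valid = (1u << MAX_PRIORITY_LEVELS) - 1;
    return ready_mask & REGULAR_QUEUE_MASK & valid;
}
```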
Actionable comments posted: 1
🔭 Outside diff range comments (2)
fs/FAT12.c (1)

33-60: Mismatch between alloc / free APIs – risk of heap corruption

sector_buffer is obtained with FastAlloc but later released with KernelFree. Using different allocators for the same block is undefined – it will either leak memory or corrupt the pool/heap.

```diff
  sector_buffer = FastAlloc(POOL_SIZE_512);
  ...
- KernelFree(sector_buffer);
+ FastFree(sector_buffer);
```

Audit every KernelFree(sector_buffer) in this file.

kernel/process/Process.c (1)
938-946: Stack is freed with the wrong API

VMemAllocWithGuards pairs with VMemFreeWithGuards, but cleanup still calls FreePage. Guard pages stay mapped and the real stack page is leaked.

Proposed minimal fix:

```diff
  if (proc->stack) {
-     FreePage(proc->stack);
+     VMemFreeWithGuards(proc->stack, STACK_SIZE);
```

Search for any other FreePage(proc->stack) occurrences.
♻️ Duplicate comments (2)

kernel/memory/VMem.c (1)

336-356: Guard pages still writable – remap fails

The VMemMap calls return VMEM_ERROR_ALREADY_MAPPED; the error is ignored, so guard pages keep PAGE_WRITABLE. See the previous comment – use a flag-update helper (VMemProtect) instead of re-mapping.
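A minimal sketch of such a helper; VMemProtect and VMemGetPTE are hypothetical names, while the lock and TLB calls mirror the VMemMapHuge listing later in this thread:

```c
int VMemProtect(uint64_t vaddr, uint64_t new_flags) {
    irq_flags_t irq_flags = SpinLockIrqSave(&vmem_lock);
    uint64_t* pte = VMemGetPTE(vaddr);          // assumed PTE-lookup helper
    if (!pte || !(*pte & PAGE_PRESENT)) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_INVALID_ADDR;         // nothing mapped at vaddr
    }
    uint64_t frame = *pte & ~0xFFFULL;          // keep the physical frame
    *pte = frame | new_flags | PAGE_PRESENT;    // e.g. clear PAGE_WRITABLE
    VMemFlushTLBSingle(vaddr);                  // stale permissions may be cached
    SpinUnlockIrqRestore(&vmem_lock, irq_flags);
    return VMEM_SUCCESS;
}
```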
kernel/memory/Memory.c (1)

235-272: Update statistics while the lock is held

huge_pages_allocated++ and allocation_failures++ are executed after releasing memory_lock, risking lost updates under contention.

```diff
- SpinUnlockIrqRestore(&memory_lock, flags);
- ++huge_pages_allocated;
+ ++huge_pages_allocated;
+ SpinUnlockIrqRestore(&memory_lock, flags);
```

Do the same for the failure path.
🧹 Nitpick comments (4)

kernel/memory/Memory.c (1)

319-369: Long critical section – consider snapshot instead

GetDetailedMemoryStats scans the full page bitmap while holding memory_lock, blocking all alloc/free callers for several ms on large RAM. Snapshot the needed variables, release the lock, then compute fragmentation.
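A sketch of the snapshot pattern; the MemoryStats field names and the unlocked fragmentation helper are assumptions, not Memory.c's real layout:

```c
void GetDetailedMemoryStats(MemoryStats* stats) {
    irq_flags_t flags = SpinLockIrqSave(&memory_lock);
    /* Copy the cheap scalars while the lock is held... */
    uint64_t total  = total_pages;
    uint64_t frees  = free_count;
    uint64_t allocs = allocation_count;
    SpinUnlockIrqRestore(&memory_lock, flags);

    /* ...then run the expensive bitmap scan without blocking every
     * AllocPage/FreePage caller; a slightly stale view is acceptable
     * for diagnostics. */
    stats->total_pages      = total;
    stats->free_count       = frees;
    stats->allocation_count = allocs;
    stats->fragmentation    = ScanBitmapForFragmentation(); // hypothetical helper
}
```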
kernel/core/Kernel.c (3)

64-84: Unused parameter – minor cleanup

table_name is never used after (void)table_name;. If you don't plan to log it, drop the parameter to avoid confusion.
97-161: Progress logging extremely chatty

Printing every 64 MB/256 MB during the identity map floods serial consoles (>16 000 chars). Consider gating it behind VERBOSE_BOOT or increasing the interval.
452-461: Overlap check is informative but non-fatal

Currently it only logs; overlapping regions can still crash later. Consider PANIC() if the check trips.
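A sketch of the fail-fast variant, reusing the condition quoted elsewhere in this review (whether that condition is itself correct is the subject of the earlier overlap-check comment):

```c
if (VIRT_ADDR_SPACE_START >= KERNEL_SPACE_START) {
    /* PANIC() is the kernel's existing facility per this review;
     * the message text is illustrative. */
    PANIC("virtual address space layout overlaps kernel space");
}
```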
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- fs/FAT12.c (2 hunks)
- kernel/core/Kernel.c (10 hunks)
- kernel/memory/KernelHeap.c (2 hunks)
- kernel/memory/Memory.c (4 hunks)
- kernel/memory/VMem.c (4 hunks)
- kernel/process/Process.c (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- kernel/memory/KernelHeap.c
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: assembler-0
PR: assembler-0/VoidFrame#53
File: kernel/core/Kernel.c:256-259
Timestamp: 2025-08-09T04:47:56.871Z
Learning: The VoidFrame kernel memory crash occurs when VIRT_ADDR_SPACE_START is set to 0xFFFF800000000000ULL instead of 0xFFFFFFFF80000000ULL, causing the heap to allocate 128TB away from where the kernel is mapped, resulting in page faults when accessing heap memory.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#47
File: kernel/core/Kernel.c:434-434
Timestamp: 2025-07-16T01:04:23.790Z
Learning: VMemAlloc() in VoidFrame kernel fails with error code -2 (VMEM_ERROR_INVALID_ADDR) due to virtual address space layout conflict where the heap start address (_bss_end + KERNEL_VIRTUAL_OFFSET + 1MB margin) exceeds VIRT_ADDR_SPACE_END (0xFFFFFFFF80000000ULL), causing VMemMap validation to fail.
📚 Learning: 2025-07-16T01:04:23.790Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#47
File: kernel/core/Kernel.c:434-434
Timestamp: 2025-07-16T01:04:23.790Z
Learning: VMemAlloc() in VoidFrame kernel fails with error code -2 (VMEM_ERROR_INVALID_ADDR) due to virtual address space layout conflict where the heap start address (_bss_end + KERNEL_VIRTUAL_OFFSET + 1MB margin) exceeds VIRT_ADDR_SPACE_END (0xFFFFFFFF80000000ULL), causing VMemMap validation to fail.
Applied to files:
fs/FAT12.c
kernel/process/Process.c
kernel/memory/VMem.c
kernel/memory/Memory.c
kernel/core/Kernel.c
📚 Learning: 2025-08-09T09:29:00.591Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#57
File: kernel/memory/VMem.h:64-66
Timestamp: 2025-08-09T09:29:00.591Z
Learning: In the VoidFrame kernel, removing locks from KernelMemoryAlloc/KernelMemoryFree causes heap corruption where shell command strings (like "clear" = 0x61656C63) overwrite heap block magic values due to race conditions between allocation and free operations.
Applied to files:
kernel/process/Process.c
kernel/memory/VMem.c
kernel/memory/Memory.c
kernel/core/Kernel.c
📚 Learning: 2025-08-09T04:47:56.871Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#53
File: kernel/core/Kernel.c:256-259
Timestamp: 2025-08-09T04:47:56.871Z
Learning: The VoidFrame kernel memory crash occurs when VIRT_ADDR_SPACE_START is set to 0xFFFF800000000000ULL instead of 0xFFFFFFFF80000000ULL, causing the heap to allocate 128TB away from where the kernel is mapped, resulting in page faults when accessing heap memory.
Applied to files:
kernel/process/Process.c
kernel/memory/VMem.c
kernel/memory/Memory.c
kernel/core/Kernel.c
🧬 Code Graph Analysis (3)

fs/FAT12.c (1)
- kernel/memory/MemPool.c (1): FastAlloc (77-87)

kernel/process/Process.c (1)
- kernel/memory/VMem.c (1): VMemAllocWithGuards (336-356)

kernel/memory/Memory.c (1)
- kernel/atomic/Spinlock.h (2): SpinLockIrqSave (19-26), SpinUnlockIrqRestore (28-31)
🔇 Additional comments (10)

fs/FAT12.c (1)

6-6: Header inclusion looks correct

Nothing to flag for the new MemPool.h include.

kernel/process/Process.c (1)

12-12: Include addition is fine.

kernel/memory/VMem.c (1)

358-374: Free-with-guards implementation looks good.

kernel/memory/Memory.c (2)

18-21: Counters added – no issues.

58-64: Public IsPageFree – confirm header exposure

Exporting this previously private helper is fine; just ensure the prototype is in Memory.h.

kernel/core/Kernel.c (5)

25-25: MemPool include acknowledged.

253-267: Early pool init makes sense – good addition.

340-345: Detected huge pages but not enabled

CheckHugePageSupport only reports capability. If huge pages are desired, call VMemMapHuge or set CR4.PSE here.

350-356: Health check logic OK.

466-493: Unsafe bitmap walk

IsPageFree is not thread-safe; call it under memory_lock or snapshot the bitmap before counting.
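A sketch of the locked walk; CountFreePages is a hypothetical helper, and IsPageFree is assumed to take a page-frame index (check Memory.h):

```c
static uint64_t CountFreePages(uint64_t total_pages) {
    uint64_t free = 0;
    irq_flags_t flags = SpinLockIrqSave(&memory_lock);
    for (uint64_t pfn = 0; pfn < total_pages; pfn++) {
        if (IsPageFree(pfn))    // bitmap cannot change while the lock is held
            free++;
    }
    SpinUnlockIrqRestore(&memory_lock, flags);
    return free;
}
```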
```c
// Map huge page in VMem.c
int VMemMapHuge(uint64_t vaddr, uint64_t paddr, uint64_t flags) {
    if (!IS_HUGE_PAGE_ALIGNED(vaddr) || !IS_HUGE_PAGE_ALIGNED(paddr)) {
        return VMEM_ERROR_ALIGN;
    }

    irq_flags_t irq_flags = SpinLockIrqSave(&vmem_lock);

    if (!is_valid_phys_addr(paddr)) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_INVALID_ADDR;
    }

    // Get PDP table
    uint64_t pdp_phys = VMemGetPageTablePhys((uint64_t)kernel_space.pml4, vaddr, 0, 1);
    if (!pdp_phys) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_NOMEM;
    }

    // Get PD table
    uint64_t pd_phys = VMemGetPageTablePhys(pdp_phys, vaddr, 1, 1);
    if (!pd_phys) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_NOMEM;
    }

    // Access PD through identity mapping
    uint64_t* pd_virt = (pd_phys < IDENTITY_MAP_SIZE) ?
        (uint64_t*)pd_phys : (uint64_t*)PHYS_TO_VIRT(pd_phys);

    int pd_index = (vaddr >> PD_SHIFT) & PT_INDEX_MASK;

    // Check if already mapped
    if (pd_virt[pd_index] & PAGE_PRESENT) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_ALREADY_MAPPED;
    }

    // Set huge page mapping (PS bit = 1 for 2MB pages)
    pd_virt[pd_index] = paddr | flags | PAGE_PRESENT | PAGE_LARGE;

    VMemFlushTLBSingle(vaddr);
    SpinUnlockIrqRestore(&vmem_lock, irq_flags);
    return VMEM_SUCCESS;
}
```
Validate virtual address in VMemMapHuge

Unlike VMemMap, the huge-page variant never checks that vaddr lies inside [VIRT_ADDR_SPACE_START, VIRT_ADDR_SPACE_END). Passing an out-of-range address will silently corrupt page tables.

```diff
+ if (vaddr < VIRT_ADDR_SPACE_START || vaddr >= VIRT_ADDR_SPACE_END) {
+     SpinUnlockIrqRestore(&vmem_lock, irq_flags);
+     return VMEM_ERROR_INVALID_ADDR;
+ }
```

(Optionally update kernel_space.used_pages/total_mapped for bookkeeping.)
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test and benchmark the code to ensure it meets the requirements.
```c
// Map huge page in VMem.c
int VMemMapHuge(uint64_t vaddr, uint64_t paddr, uint64_t flags) {
    if (!IS_HUGE_PAGE_ALIGNED(vaddr) || !IS_HUGE_PAGE_ALIGNED(paddr)) {
        return VMEM_ERROR_ALIGN;
    }

    irq_flags_t irq_flags = SpinLockIrqSave(&vmem_lock);

    // Validate virtual address range
    if (vaddr < VIRT_ADDR_SPACE_START || vaddr >= VIRT_ADDR_SPACE_END) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_INVALID_ADDR;
    }

    if (!is_valid_phys_addr(paddr)) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_INVALID_ADDR;
    }

    // Get PDP table
    uint64_t pdp_phys = VMemGetPageTablePhys((uint64_t)kernel_space.pml4, vaddr, 0, 1);
    if (!pdp_phys) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_NOMEM;
    }

    // Get PD table
    uint64_t pd_phys = VMemGetPageTablePhys(pdp_phys, vaddr, 1, 1);
    if (!pd_phys) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_NOMEM;
    }

    // Access PD through identity mapping
    uint64_t* pd_virt = (pd_phys < IDENTITY_MAP_SIZE) ?
        (uint64_t*)pd_phys : (uint64_t*)PHYS_TO_VIRT(pd_phys);

    int pd_index = (vaddr >> PD_SHIFT) & PT_INDEX_MASK;

    // Check if already mapped
    if (pd_virt[pd_index] & PAGE_PRESENT) {
        SpinUnlockIrqRestore(&vmem_lock, irq_flags);
        return VMEM_ERROR_ALREADY_MAPPED;
    }

    // Set huge page mapping (PS bit = 1 for 2MB pages)
    pd_virt[pd_index] = paddr | flags | PAGE_PRESENT | PAGE_LARGE;

    VMemFlushTLBSingle(vaddr);
    SpinUnlockIrqRestore(&vmem_lock, irq_flags);
    return VMEM_SUCCESS;
}
```
🤖 Prompt for AI Agents
In kernel/memory/VMem.c around lines 173 to 217, the function VMemMapHuge lacks
validation to ensure the virtual address vaddr is within the valid range
[VIRT_ADDR_SPACE_START, VIRT_ADDR_SPACE_END). Add a check at the start of
VMemMapHuge to verify that vaddr is within this range and return an appropriate
error code if not. Optionally, update kernel_space.used_pages and total_mapped
counters after a successful mapping for bookkeeping.
Summary by CodeRabbit

- New Features
- Improvements
- Bug Fixes
- Chores