Development #127
Conversation
Walkthrough
Edits touch the filesystem, virtual memory, heap allocator, IDE, and shell messages: Iso9660 name matching and buffer reuse are updated; VFRFS prints a newline for empty directories and handles names more safely; KernelHeap adds fast-cache linkage, cache-aware allocation and coalescing, and a tuning API; VMem gains 2MB huge-page support and removes VMemMapKernel.
Sequence Diagram(s)
sequenceDiagram
autonumber
actor Caller
participant HeapAlloc as KernelHeapAlloc
participant FastCache as FastCache
participant HeapMgr as HeapManager
participant Coalescer as Coalescer
Caller->>HeapAlloc: allocate(size)
alt small-class and cache enabled
HeapAlloc->>FastCache: FastCachePop(class)
alt cache hit
FastCache-->>HeapAlloc: block (cache_next)
HeapAlloc-->>Caller: return block
else cache miss
HeapAlloc->>HeapMgr: FindFreeBlock(size) (skip in_cache)
HeapMgr-->>HeapAlloc: free block
HeapAlloc-->>Caller: return block
end
else large/non-cached
HeapAlloc->>HeapMgr: FindFreeBlock(size) (skip in_cache)
HeapMgr-->>HeapAlloc: free block
HeapAlloc-->>Caller: return block
end
Caller->>HeapAlloc: free(ptr)
HeapAlloc->>Coalescer: CoalesceWithAdjacent(block)
note right of Coalescer #DDEFEF: CacheRemove neighbors if in_cache\nVerify physical adjacency via AreAdjacent
Coalescer-->>HeapAlloc: merged/free block
HeapAlloc->>FastCache: FastCachePush(class, block) (respect capacity)
sequenceDiagram
autonumber
actor Caller
participant VMem as VMem
participant PageTable as PageTable
Caller->>VMem: map 2MB via VMemMapHuge(vaddr,paddr,flags)
VMem->>PageTable: set PDE with PAGE_PRESENT|PAGE_LARGE
PageTable-->>VMem: PDE updated
VMem->>VMem: flush TLB (range)
VMem-->>Caller: return success
Caller->>VMem: VMemGetPhysAddr(vaddr)
VMem->>PageTable: read PDE
alt PDE has PAGE_LARGE
VMem-->>Caller: return 2MB base + offset
else fallback
VMem->>PageTable: walk PT for 4KB page
VMem-->>Caller: return 4KB base + offset
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes
Possibly related PRs
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
fs/VFRFS.c (2)
230-239: Fix read-size underflow when position > size.
If position is past EOF, node->size - position underflows (uint64_t), yielding a huge readable_size. Return 0 early.
- uint64_t readable_size = node->size - handle->position;
- size_t bytes_to_read = (size > readable_size) ? readable_size : size;
+ if (handle->position >= node->size) return 0;
+ uint64_t readable_size = node->size - handle->position;
+ size_t bytes_to_read = (size > readable_size) ? (size_t)readable_size : size;
249-259: Handle size overflow and zero-fill sparse gap on writes past EOF.
- Overflow: handle->position + size can wrap.
- Hole writes: when appending beyond EOF, the gap remains uninitialized; later reads leak stale heap data.
- uint64_t new_size = handle->position + size;
+ uint64_t old_size = node->size;
+ uint64_t new_size = handle->position + size;
+ if (new_size < handle->position) return -1; // overflow
  if (new_size > node->size) {
-     void* new_data = KernelReallocate(node->data, new_size);
+     size_t alloc_size = (size_t)new_size;
+     if ((uint64_t)alloc_size != new_size) return -1; // cannot represent in size_t
+     void* new_data = KernelReallocate(node->data, alloc_size);
      if (!new_data) return -1; // Out of memory
      node->data = new_data;
      node->size = new_size;
+     // Zero-fill gap if we seeked past EOF before writing
+     if (handle->position > old_size) {
+         FastMemset((char*)node->data + old_size, 0, handle->position - old_size);
+     }
  }
  FastMemcpy((char*)node->data + handle->position, buffer, size);
  handle->position += size;
  node->modified_time = GetCurrentTime();
🧹 Nitpick comments (7)
mm/KernelHeap.c (4)
149-156: Assert cached invariants on pop.
Before clearing in_cache, assert the block actually came from cache and is free. Helps catch misuse.
 HeapBlock* block = cache->free_list;
+ASSERT(block->in_cache == 1);
+ASSERT(block->is_free == 1);
 cache->free_list = block->cache_next;
 ...
 // Clear cache linkage and flag
165-169: Guard against double-insert into cache.
Add sanity checks to ensure only free, non-cached blocks are pushed.
 static void FastCachePush(HeapBlock* block, int size_class) {
     ASSERT(__sync_fetch_and_add(&kheap_lock, 0) != 0);
+    ASSERT(block->is_free == 1);
+    ASSERT(block->in_cache == 0);
     FastCache* cache = &fast_caches[size_class];
     if (cache->count >= FAST_CACHE_SIZE) return; // Cache full
260-283: CacheRemove is O(total cached). Consider O(1) removal.
The current scan across all size classes is fine at small FAST_CACHE_SIZE, but it can become hot during fragmentation-heavy frees. Consider storing the size class (e.g., a uint8_t cache_class) on push so removal can jump directly to the right list; a sketch follows.
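A minimal sketch of that idea, assuming the HeapBlock/FastCache layout discussed in this review (cache_class is the proposed new field; everything else is named after the existing code but not copied from it):

```c
/* Sketch only: remember the size class on push so removal can go straight
 * to the right list instead of scanning every class. */
static void FastCachePush(HeapBlock* block, int size_class) {
    block->cache_class = (uint8_t)size_class;           /* proposed new field */
    block->cache_next  = fast_caches[size_class].free_list;
    fast_caches[size_class].free_list = block;
    block->in_cache = 1;
    fast_caches[size_class].count++;
}

static void CacheRemove(HeapBlock* blk) {
    if (!blk->in_cache) return;
    FastCache* cache = &fast_caches[blk->cache_class];  /* O(1) class lookup */
    for (HeapBlock** link = &cache->free_list; *link; link = &(*link)->cache_next) {
        if (*link == blk) {
            *link = blk->cache_next;
            cache->count--;
            break;
        }
    }
    blk->in_cache = 0;
    blk->cache_next = NULL;
}
```

Removal still walks one class's list, but no longer all of them; a doubly linked cache list would make it fully O(1).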
504-510: Stats line wording/formatting nit.
“Blocks: X, Y free, Z cached” reads a bit odd. Consider “Blocks: X used, Y free; Z cached” for clarity.
-PrintKernel("[HEAP] Blocks: "); PrintKernelInt(used_blocks);
-PrintKernel(", "); PrintKernelInt(free_blocks); PrintKernel(" free, ");
+PrintKernel("[HEAP] Blocks: "); PrintKernelInt(used_blocks); PrintKernel(" used, ");
+PrintKernelInt(free_blocks); PrintKernel(" free; ");

fs/VFRFS.c (3)
211-213: Guard current process retrieval.
MLFQGetCurrentProcess() could be NULL during early boot or for kernel threads. Add a null-check or fall back to PID 0, as sketched below.
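A possible shape for that guard (the Process type and pid field names are assumptions about the scheduler API, not verified against the MLFQ headers):

```c
Process* proc = MLFQGetCurrentProcess();
// Fall back to the kernel (PID 0) when no scheduler context exists yet,
// e.g. during early boot or for kernel threads.
uint32_t owner_pid = proc ? proc->pid : 0;
```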
410-411: Consistency nit: strncpy bound.
Elsewhere you use MAX_FILENAME-1 with explicit terminator. For consistency, consider the same pattern here.
- strncpy(child_name_out, name_start, MAX_FILENAME);
+ strncpy(child_name_out, name_start, MAX_FILENAME - 1);
+ child_name_out[MAX_FILENAME - 1] = '\0';
438-439: User-facing behavior change on empty dirs.
Printing just a newline instead of “(empty directory)” alters shell UX. Confirm callers/tests expect a blank line.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
fs/Iso9660.c (0 hunks)
fs/VFRFS.c (3 hunks)
mm/KernelHeap.c (11 hunks)
mm/VMem.c (0 hunks)
mm/VMem.h (0 hunks)
💤 Files with no reviewable changes (3)
- fs/Iso9660.c
- mm/VMem.h
- mm/VMem.c
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: assembler-0
PR: assembler-0/VoidFrame#101
File: kernel/etc/Shell.c:829-837
Timestamp: 2025-08-31T07:06:37.003Z
Learning: assembler-0 is working on optimizing ramfs performance in VoidFrame kernel and wants to implement proper filesystem characteristics like faster lookups, better memory management, and advanced features like copy-on-write and caching.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#90
File: kernel/memory/MemOps.c:103-124
Timestamp: 2025-08-16T05:28:34.925Z
Learning: In VoidFrame kernel, the virtual address space management uses a simple linear allocator (kernel_space.next_vaddr) without any reclamation mechanism. VMemFree and VMemUnmapMMIO clear page tables but don't free virtual address space for reuse, causing eventual virtual address space exhaustion. The solution requires implementing a free list with coalescing in the VirtAddrSpace structure and updating allocation/deallocation functions to reuse freed ranges.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#93
File: kernel/memory/VMem.c:72-79
Timestamp: 2025-08-17T08:16:43.539Z
Learning: In VoidFrame kernel, VMemGetPhysAddr and related page table walking functions were causing panics because they assumed identity mapping existed for physical addresses < IDENTITY_MAP_SIZE (4GB), but the bootloader doesn't provide this 1:1 mapping. The fix is to remove all identity mapping assumptions and always use PHYS_TO_VIRT() for page table access in functions like VMemGetPageTablePhys, VMemMap, VMemMapHuge, VMemGetPhysAddr, VMemUnmap, VMemMapMMIO, and VMemUnmapMMIO.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#106
File: kernel/sched/MLFQ.c:691-694
Timestamp: 2025-09-02T00:55:58.224Z
Learning: The VoidFrame MLFQ ProcessRuntimePath bug occurs because: 1) ProcessRuntimePath is defined as char* (not char array), 2) All processes get assigned the same FormatS static buffer pointer, 3) Directory cleanup via VfsDelete happens during TerminateProcess() not final cleanup, 4) When terminating a process, VfsDelete uses whatever path is currently in the shared static buffer, causing wrong directories to be deleted. Fix requires either changing ProcessRuntimePath to char array or using dynamic allocation.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#92
File: meson.build:31-34
Timestamp: 2025-08-17T05:53:48.040Z
Learning: The VoidFrame kernel's memory detection was reporting incorrect memory amounts (like 9200MB) because it used the highest physical address (including memory holes and reserved regions) as total memory size, rather than summing only the MULTIBOOT2_MEMORY_AVAILABLE regions. The fix involves counting only available memory regions and using the highest available address only for bitmap sizing.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#57
File: kernel/memory/VMem.h:64-66
Timestamp: 2025-08-09T09:29:00.630Z
Learning: In the VoidFrame kernel, removing locks from KernelMemoryAlloc/KernelMemoryFree causes heap corruption where shell command strings (like "clear" = 0x61656C63) overwrite heap block magic values due to race conditions between allocation and free operations.
🧬 Code graph analysis (2)
fs/VFRFS.c (2)
kernel/etc/StringOps.c (1)
strncpy (76-81)
kernel/etc/Console.c (1)
PrintKernel (181-199)
mm/KernelHeap.c (1)
kernel/etc/Console.c (2)
PrintKernel (181-199)
PrintKernelInt (314-338)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (9)
mm/KernelHeap.c (7)
12-17: Struct changes for cache linkage look good.
The added in_cache/cache_next fields are sensible and isolate cache linkage from physical order. No ABI impact since this is kernel-internal.
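For readers without the diff open, the new linkage presumably looks roughly like this (a sketch; the surrounding HeapBlock fields are abbreviated and not the exact VoidFrame definition):

```c
typedef struct HeapBlock {
    uint32_t magic;                  /* existing header fields, abbreviated */
    size_t   size;
    uint8_t  is_free;
    struct HeapBlock* prev;          /* physical-order linkage */
    struct HeapBlock* next;
    /* Added in this PR (per the walkthrough): */
    uint8_t  in_cache;               /* set while the block sits in a fast cache */
    struct HeapBlock* cache_next;    /* per-size-class cache list, kept separate
                                        from the physical prev/next ordering */
} HeapBlock;
```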
129-131: Reset cache metadata in InitBlock.
Good call clearing in_cache and cache_next during (re)initialization to avoid stale cache linkage.
179-180: Exclude in-cache blocks from search.
Nice: avoids double-tracking and speeds up scans.
193-195: Best-fit path also skips cached blocks.
Consistent with the small-alloc path. Good.
255-259: Adjacency predicate correctness.
Using BlockToUser(a) + a->size to test header-to-header adjacency is correct for your layout and avoids off-by-header mistakes.
286-306: Cache-aware coalescing is correct and safe.
Removing neighbors from cache before merge prevents dangling cache nodes. Recursing to prev maintains maximal coalescing.
321-322: Validation level default.
Setting BASIC at init is reasonable for production. If you rely on checksum-based corruption detection in certain builds, gate this with a build flag, as sketched below.
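One way to gate it (the flag and the setter/constant names below are illustrative, not existing VoidFrame symbols):

```c
/* Illustrative only: choose the heap validation level at build time. */
#ifdef VF_HEAP_PARANOID
    HeapSetValidationLevel(HEAP_VALIDATE_CHECKSUM);  /* checksum-based corruption detection */
#else
    HeapSetValidationLevel(HEAP_VALIDATE_BASIC);     /* lighter default for production */
#endif
```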
fs/VFRFS.c (2)
47-47: Nulling node->data after free.
Good hygiene; prevents accidental reuse.
109-111: Safer name copy.
Using MAX_FILENAME-1 with explicit NUL termination is correct and prevents overflow.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
mm/VMem.c (3)
78-86: Regression: identity-mapping assumptions reintroduced; remove ID-mapped access and allocation constraints.
Prior fixes removed reliance on 1:1 identity mapping for page-table access/placement. This code probes the identity map in VMemInit and prefers identity-mapped low memory for tables, which will panic on platforms without 1:1 and breaks when CR3 points to non-identity RAM. Access page tables via PHYS_TO_VIRT unconditionally and allocate tables anywhere.
Apply this minimal diff:
- // Now test identity mapping
- if (VMemGetPhysAddr(0x100000) != 0x100000) {
-     PANIC("Bootstrap identity mapping failed - VALIDATION FAILED");
- }
- const uint64_t probe = IDENTITY_MAP_SIZE - PAGE_SIZE;
- if (VMemGetPhysAddr(probe) != probe) {
-     PANIC("Bootstrap identity mapping failed at IDENTITY_MAP_SIZE boundary");
- }
+ // Do not assume identity mapping exists; rely on higher-half mappings only.
@@
- // Access the table via identity map when available, otherwise via higher-half mapping
- uint64_t* table_virt = (pml4_phys < IDENTITY_MAP_SIZE)
-     ? (uint64_t*)pml4_phys
-     : (uint64_t*)PHYS_TO_VIRT(pml4_phys);
+ // Always access via higher-half mapping
+ uint64_t* table_virt = (uint64_t*)PHYS_TO_VIRT(pml4_phys);
@@
- // Allocate page-table memory from identity-mapped low memory to ensure accessibility
- void* new_table_phys = NULL;
- for (int attempt = 0; attempt < 32; attempt++) {
-     void* candidate = AllocPage();
-     if (!candidate) break;
-     if ((uint64_t)candidate < IDENTITY_MAP_SIZE) {
-         new_table_phys = candidate;
-         break;
-     }
-     // Not identity-mapped; return it to the pool and try again
-     FreePage(candidate);
- }
- if (!new_table_phys) return 0;
+ // Allocate page-table memory; accessibility via PHYS_TO_VIRT
+ void* new_table_phys = AllocPage();
+ if (!new_table_phys) return 0;
@@
- // Zero the new table using an address we can access
- if ((uint64_t)new_table_phys < IDENTITY_MAP_SIZE) {
-     FastZeroPage(new_table_phys);
- } else {
-     FastZeroPage(PHYS_TO_VIRT(new_table_phys));
- }
+ // Zero the new table via higher-half mapping
+ FastZeroPage(PHYS_TO_VIRT(new_table_phys));

Also applies to: 94-131
174-181: Consistent page-table access: remove conditional identity-map dereferences.
These sites conditionally use identity mapping for PT/PD access. For consistency and to avoid future regressions, always use PHYS_TO_VIRT.
- uint64_t* pt_virt;
- if (pt_phys < IDENTITY_MAP_SIZE) {
-     pt_virt = (uint64_t*)pt_phys;
- } else {
-     pt_virt = (uint64_t*)PHYS_TO_VIRT(pt_phys);
- }
+ uint64_t* pt_virt = (uint64_t*)PHYS_TO_VIRT(pt_phys);

- uint64_t* pd_virt = (pd_phys < IDENTITY_MAP_SIZE) ?
-     (uint64_t*)pd_phys : (uint64_t*)PHYS_TO_VIRT(pd_phys);
+ uint64_t* pd_virt = (uint64_t*)PHYS_TO_VIRT(pd_phys);

- uint64_t* pt_virt = (pt_phys < IDENTITY_MAP_SIZE) ? (uint64_t*)pt_phys : (uint64_t*)PHYS_TO_VIRT(pt_phys);
+ uint64_t* pt_virt = (uint64_t*)PHYS_TO_VIRT(pt_phys);

- if (pt_phys < IDENTITY_MAP_SIZE) {
-     pt_virt = (uint64_t*)pt_phys;
- } else {
-     pt_virt = (uint64_t*)PHYS_TO_VIRT(pt_phys);
- }
+ pt_virt = (uint64_t*)PHYS_TO_VIRT(pt_phys);

Also applies to: 226-233, 530-531, 624-629
62-89: Stop dereferencing page-table physical addresses via an identity map; always use PHYS_TO_VIRT()
- kernel_space.pml4 is stored as a physical address (mm/VMem.c:74) and is only cast to uint64_t when passed to helpers (e.g. mm/VMem.c:436, 498, 573) — OK.
- VMemGetPageTablePhys and multiple callers still dereference tables via direct casts when phys < IDENTITY_MAP_SIZE. Instances found: mm/VMem.c:95–97, 125–129, 174–181, 225–229, 442–462, 504–507, 528–532, 623–630, 708–715. Remove the identity-map branches and always map page-table phys addrs with PHYS_TO_VIRT() before dereferencing.
- VMemInit asserts and PANIC()s if the bootstrap identity mapping is missing (mm/VMem.c:80–84) — remove or relax that validation.
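A small helper would make that convention hard to get wrong; this is a sketch that assumes only the existing PHYS_TO_VIRT() macro (the helper name itself is hypothetical):

```c
/* Every page-table dereference goes through the higher-half mapping. */
static inline uint64_t* PtVirt(uint64_t table_phys) {
    return (uint64_t*)PHYS_TO_VIRT(table_phys);
}

/* Call sites then collapse to, e.g.:
 *     uint64_t* pt_virt = PtVirt(pt_phys);
 */
```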
🧹 Nitpick comments (6)
mm/KernelHeap.h (1)
20-22: API addition looks good; document bounds and call-time guarantees.
Please document the clamped ranges (MIN_BLOCK_SIZE..8192, 0..1024) and explicitly state it's safe pre/post KernelHeapInit and under IRQ-off sections.
Would you like me to add header comments mirroring the clamping behavior from KernelHeap.c?
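A sketch of what that header comment could say, with the bounds taken from the clamping described above (the wording, and whether KernelHeapTune's parameters actually match these descriptions, should be checked against mm/KernelHeap.c):

```c
/*
 * KernelHeapTune - adjust fast-cache behavior at runtime.
 *
 *  - the cached block size is clamped to [MIN_BLOCK_SIZE, 8192] bytes
 *  - the cached block count is clamped to [0, 1024]
 *  - safe to call before or after KernelHeapInit
 *  - safe from IRQ-off sections (takes kheap_lock with IrqSave internally)
 */
```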
mm/VMem.c (1)
434-467: Huge-page detection in VMemGetPhysAddr: tighten masking.
Using PT_ADDR_MASK then manual 2MB align works but is brittle. Prefer an explicit huge-page mask to avoid leaking flag bits on non-4K fields.
- uint64_t base = pde & PT_ADDR_MASK; // upper bits contain frame; PT_ADDR_MASK works for PD too
- return (base & ~((uint64_t)HUGE_PAGE_SIZE - 1)) | (vaddr & (HUGE_PAGE_SIZE - 1));
+ uint64_t base = pde & PD_ADDR_MASK; // define PD_ADDR_MASK for 2MB frames
+ return base | (vaddr & (HUGE_PAGE_SIZE - 1));

If PD_ADDR_MASK isn’t defined, I can add it to VMem.h as (~((1ULL<<21)-1) & addr-mask-bits).
mm/KernelHeap.c (4)
147-173: Cache stats update races; increment misses under the same lock as the cache access.
misses++ occurs after unlocking, leading to inconsistent stats under contention.
- HeapBlock* block = FastCachePop(size_class);
+ HeapBlock* block = FastCachePop(size_class);
  if (block) {
      fast_caches[size_class].hits++;
      InitBlock(block, actual_size, 0);
@@
- }
- fast_caches[size_class].misses++;
-
- SpinUnlockIrqRestore(&kheap_lock, flags);
+ } else {
+     fast_caches[size_class].misses++;
+ }
+ SpinUnlockIrqRestore(&kheap_lock, flags);

Also applies to: 341-361
264-286: Add lock-held assertion in CacheRemove for safety.
CacheRemove assumes kheap_lock is held (called from coalescing). Make it explicit to catch misuse.
 static void CacheRemove(HeapBlock* blk) {
+    ASSERT(__sync_fetch_and_add(&kheap_lock, 0) != 0);
     if (!blk->in_cache) return;
564-589: KernelHeapTune: solid. Consider exposing the clamp bounds in the header for callers.
Helps keep call sites consistent and avoids “magic” numbers outside.
491-541: Stats: print largest_free in KB for consistency.
Minor UX nit to align units.
- PrintKernel("[HEAP] Fragmentation: "); PrintKernelInt(frag); PrintKernel("% (largest free block ");
- PrintKernelInt(largest_free); PrintKernel(" bytes)\n");
+ PrintKernel("[HEAP] Fragmentation: "); PrintKernelInt(frag); PrintKernel("% (largest free block ");
+ PrintKernelInt(largest_free / 1024); PrintKernel(" KB)\n");
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
mm/KernelHeap.c (13 hunks)
mm/KernelHeap.h (1 hunks)
mm/VMem.c (3 hunks)
mm/VMem.h (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- mm/VMem.h
🧰 Additional context used
🧠 Learnings (9)
📓 Common learnings
Learnt from: assembler-0
PR: assembler-0/VoidFrame#101
File: kernel/etc/Shell.c:829-837
Timestamp: 2025-08-31T07:06:37.003Z
Learning: assembler-0 is working on optimizing ramfs performance in VoidFrame kernel and wants to implement proper filesystem characteristics like faster lookups, better memory management, and advanced features like copy-on-write and caching.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#90
File: kernel/memory/MemOps.c:103-124
Timestamp: 2025-08-16T05:28:34.925Z
Learning: In VoidFrame kernel, the virtual address space management uses a simple linear allocator (kernel_space.next_vaddr) without any reclamation mechanism. VMemFree and VMemUnmapMMIO clear page tables but don't free virtual address space for reuse, causing eventual virtual address space exhaustion. The solution requires implementing a free list with coalescing in the VirtAddrSpace structure and updating allocation/deallocation functions to reuse freed ranges.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#92
File: meson.build:31-34
Timestamp: 2025-08-17T05:53:48.040Z
Learning: The VoidFrame kernel's memory detection was reporting incorrect memory amounts (like 9200MB) because it used the highest physical address (including memory holes and reserved regions) as total memory size, rather than summing only the MULTIBOOT2_MEMORY_AVAILABLE regions. The fix involves counting only available memory regions and using the highest available address only for bitmap sizing.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#93
File: kernel/memory/VMem.c:72-79
Timestamp: 2025-08-17T08:16:43.539Z
Learning: In VoidFrame kernel, VMemGetPhysAddr and related page table walking functions were causing panics because they assumed identity mapping existed for physical addresses < IDENTITY_MAP_SIZE (4GB), but the bootloader doesn't provide this 1:1 mapping. The fix is to remove all identity mapping assumptions and always use PHYS_TO_VIRT() for page table access in functions like VMemGetPageTablePhys, VMemMap, VMemMapHuge, VMemGetPhysAddr, VMemUnmap, VMemMapMMIO, and VMemUnmapMMIO.
📚 Learning: 2025-08-17T08:16:43.539Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#93
File: kernel/memory/VMem.c:72-79
Timestamp: 2025-08-17T08:16:43.539Z
Learning: In VoidFrame kernel, VMemGetPhysAddr and related page table walking functions were causing panics because they assumed identity mapping existed for physical addresses < IDENTITY_MAP_SIZE (4GB), but the bootloader doesn't provide this 1:1 mapping. The fix is to remove all identity mapping assumptions and always use PHYS_TO_VIRT() for page table access in functions like VMemGetPageTablePhys, VMemMap, VMemMapHuge, VMemGetPhysAddr, VMemUnmap, VMemMapMMIO, and VMemUnmapMMIO.
Applied to files:
mm/VMem.c
📚 Learning: 2025-08-13T07:19:55.274Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#79
File: drivers/PCI/PCI.c:154-154
Timestamp: 2025-08-13T07:19:55.274Z
Learning: VMemUnmapMMIO in VoidFrame kernel should use the same page table access pattern as the working VMemUnmap function: `uint64_t* pt_table = (uint64_t*)(pt_phys + KERNEL_VIRTUAL_OFFSET);` rather than PHYS_TO_VIRT macro or identity mapping checks.
Applied to files:
mm/VMem.c
📚 Learning: 2025-08-16T05:28:34.925Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#90
File: kernel/memory/MemOps.c:103-124
Timestamp: 2025-08-16T05:28:34.925Z
Learning: In VoidFrame kernel, the virtual address space management uses a simple linear allocator (kernel_space.next_vaddr) without any reclamation mechanism. VMemFree and VMemUnmapMMIO clear page tables but don't free virtual address space for reuse, causing eventual virtual address space exhaustion. The solution requires implementing a free list with coalescing in the VirtAddrSpace structure and updating allocation/deallocation functions to reuse freed ranges.
Applied to files:
mm/VMem.c
📚 Learning: 2025-07-16T01:04:23.790Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#47
File: kernel/core/Kernel.c:434-434
Timestamp: 2025-07-16T01:04:23.790Z
Learning: VMemAlloc() in VoidFrame kernel fails with error code -2 (VMEM_ERROR_INVALID_ADDR) due to virtual address space layout conflict where the heap start address (_bss_end + KERNEL_VIRTUAL_OFFSET + 1MB margin) exceeds VIRT_ADDR_SPACE_END (0xFFFFFFFF80000000ULL), causing VMemMap validation to fail.
Applied to files:
mm/VMem.c
📚 Learning: 2025-08-17T05:53:48.040Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#92
File: meson.build:31-34
Timestamp: 2025-08-17T05:53:48.040Z
Learning: The VoidFrame kernel's memory detection was reporting incorrect memory amounts (like 9200MB) because it used the highest physical address (including memory holes and reserved regions) as total memory size, rather than summing only the MULTIBOOT2_MEMORY_AVAILABLE regions. The fix involves counting only available memory regions and using the highest available address only for bitmap sizing.
Applied to files:
mm/VMem.c
📚 Learning: 2025-08-31T02:11:29.291Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#100
File: kernel/memory/VMem.h:44-47
Timestamp: 2025-08-31T02:11:29.291Z
Learning: VoidFrame kernel's MAX_PAGES is hardcoded to exactly 4GB (1,048,576 pages), causing false "Memory detected exceeds MAX_PAGES" warnings when memory holes push the highest physical address slightly above 4GB. The solution is to sum only MULTIBOOT2_MEMORY_AVAILABLE regions instead of using max_physical_address for total_pages calculation.
Applied to files:
mm/VMem.c
📚 Learning: 2025-08-16T05:27:10.760Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#90
File: kernel/memory/MemOps.c:103-124
Timestamp: 2025-08-16T05:27:10.760Z
Learning: In VoidFrame kernel VMemAlloc implementation, different virtual addresses on consecutive allocations after freeing is expected behavior because VMemAlloc uses a simple bump allocator (kernel_space.next_vaddr) that only moves forward and doesn't track or reuse freed virtual address ranges.
Applied to files:
mm/VMem.c
📚 Learning: 2025-08-13T07:17:35.941Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#79
File: drivers/PCI/PCI.c:154-154
Timestamp: 2025-08-13T07:17:35.941Z
Learning: In VoidFrame kernel, VMemGetPageTablePhys uses a 3-level hierarchy: Level 0 = PML4->PDP, Level 1 = PDP->PD, Level 2 = PD->PT. To access Page Tables, always use level=2, not level=1.
Applied to files:
mm/VMem.c
🧬 Code graph analysis (3)
mm/KernelHeap.h (1)
mm/KernelHeap.c (1)
KernelHeapTune (564-589)
mm/KernelHeap.c (2)
mm/MemOps.c (1)
FastMemset (10-73)
kernel/atomic/Spinlock.h (2)
SpinUnlockIrqRestore (161-164)
SpinLockIrqSave (154-159)
mm/VMem.c (1)
kernel/atomic/Spinlock.h (2)
SpinUnlockIrqRestore (161-164)
SpinLockIrqSave (154-159)
// Physical adjacency helper
static inline int AreAdjacent(HeapBlock* a, HeapBlock* b) {
    return (uint8_t*)b == ((uint8_t*)BlockToUser(a) + a->size);
}
|
Coalescing can merge with a cached previous block; remove it from cache first to avoid losing tail space.
You already detach a cached “next”; do the same for “prev” before recursive merge. Otherwise a cached block can grow, later be popped and shrunk without splitting, leaking the remainder.
static void CoalesceWithAdjacent(HeapBlock* block) {
@@
- // Let previous block merge with this one if physically adjacent
- if (block->prev && block->prev->is_free && AreAdjacent(block->prev, block)) {
+ // Let previous block merge with this one if physically adjacent
+ if (block->prev && block->prev->is_free && AreAdjacent(block->prev, block)) {
+ if (block->prev->in_cache) {
+ CacheRemove(block->prev);
+ }
CoalesceWithAdjacent(block->prev);
}
}
Also applies to: 288-310
🤖 Prompt for AI Agents
In mm/KernelHeap.c around lines 259-263 (and similarly in the coalescing code at
~288-310), when coalescing with the previous block you must remove/detach that
previous block from whatever free-cache or quick-list it's stored in before you
merge; currently you only detach the cached "next" and not the cached "prev",
which can later be popped and treated as smaller than it actually became,
leaking tail space. Modify the routine so that if the previous block is found in
the cache (or free list) you call the same detach/remove routine you use for the
next cached block (or mark it non-cached) prior to performing the
merge/recursive merge, and ensure any metadata or pointers are updated
consistently after the merge.
  int region = (start_vaddr >= VIRT_ADDR_SPACE_HIGH_START) ? 1 : 0;
  VMemFreeBlock** free_list = region ? &kernel_space.free_list_high : &kernel_space.free_list_low;

- VMemFreeBlock* new_block = AllocFreeBlock();
- if (!new_block) {
+ VMemFreeBlock* node = AllocFreeBlock();
+ if (!node) {
      SpinUnlockIrqRestore(&vmem_lock, flags);
      PANIC("VMemFree: Out of free list nodes");
  }
- new_block->base = start_vaddr;
- new_block->size = size;
+ node->base = start_vaddr;
+ node->size = size;
+ node->next = NULL;

- // Insert into appropriate sorted list and merge
- VMemFreeBlock *prev = NULL, *current = *free_list;
- while (current && current->base < new_block->base) {
-     prev = current;
-     current = current->next;
+ // Insert sorted by base address
+ VMemFreeBlock* prev = NULL;
+ VMemFreeBlock* cur = *free_list;
+ while (cur && cur->base < node->base) {
+     prev = cur;
+     cur = cur->next;
  }

- // Merge with next block?
- if (current && new_block->base + new_block->size == current->base) {
-     current->base = new_block->base;
-     current->size += new_block->size;
-     ReleaseFreeBlock(new_block);
-     new_block = current;
+ // Link in
+ node->next = cur;
+ if (prev) prev->next = node; else *free_list = node;

+ // Coalesce with next
+ if (node->next && (node->base + node->size == node->next->base)) {
+     VMemFreeBlock* next = node->next;
+     node->size += next->size;
+     node->next = next->next;
+     ReleaseFreeBlock(next);
  }

- // Merge with previous block?
- if (prev && prev->base + prev->size == new_block->base) {
-     prev->size += new_block->size;
-     if (new_block == current) {
-         prev->next = current->next;
-         ReleaseFreeBlock(current);
-     }
-     ReleaseFreeBlock(new_block);
- } else if (new_block != current) {
-     if (prev) {
-         new_block->next = prev->next;
-         prev->next = new_block;
-     } else {
-         new_block->next = *free_list;
-         *free_list = new_block;
-     }
+ // Coalesce with previous
+ if (prev && (prev->base + prev->size == node->base)) {
+     prev->size += node->size;
+     prev->next = node->next;
+     ReleaseFreeBlock(node);
  }
🛠️ Refactor suggestion
Free-list pool exhaustion panics; add fallback to avoid kernel crash.
MAX_FREE_BLOCKS=1024 may be insufficient under fragmentation, leading to PANIC. Prefer a graceful path (coalesce-aggressively-then-retry, or drop the range with a warning) instead of crashing.
Example minimal fallback:
- VMemFreeBlock* node = AllocFreeBlock();
- if (!node) {
- SpinUnlockIrqRestore(&vmem_lock, flags);
- PANIC("VMemFree: Out of free list nodes");
- }
+ VMemFreeBlock* node = AllocFreeBlock();
+ if (!node) {
+ // Emergency defrag: coalesce full list, then retry once
+ VMemFreeBlock *cur = *free_list, *prev2 = NULL;
+ while (cur && cur->next) {
+ if (cur->base + cur->size == cur->next->base) {
+ cur->size += cur->next->size;
+ VMemFreeBlock* tmp = cur->next;
+ cur->next = tmp->next;
+ ReleaseFreeBlock(tmp);
+ continue;
+ }
+ prev2 = cur; cur = cur->next;
+ }
+ node = AllocFreeBlock();
+ if (!node) { SpinUnlockIrqRestore(&vmem_lock, flags); PrintKernelError("VMemFree: dropping range; no nodes\n"); return; }
+ }
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
kernel/etc/POST.c (1)
31-51: Fix leak and overwrite of live pointers in MemoryTest.
After freeing every other block, the reallocation loop writes into ptrs[0..N/2-1], overwriting still-live odd indices and leaking them. Also, the test never frees remaining allocations before returning.
Apply this diff to reallocate only freed slots and clean up before returning:
- for (int i = 0; i < N; i++) ptrs[i] = KernelMemoryAlloc(128);
+ for (int i = 0; i < N; i++) ptrs[i] = KernelMemoryAlloc(128);
  // free every other block
- for (int i = 0; i < N; i += 2) KernelFree(ptrs[i]);
+ for (int i = 0; i < N; i += 2) {
+     KernelFree(ptrs[i]);
+     ptrs[i] = NULL;
+ }
  // re-allocate in different sizes
- for (int i = 0; i < N/2; i++) {
-     ptrs[i] = KernelMemoryAlloc((i % 2) ? 64 : 256);
- }
+ for (int i = 0; i < N; i += 2) {
+     int idx = i; // reuse only freed slots
+     int alt = (i / 2) % 2;
+     ptrs[idx] = KernelMemoryAlloc(alt ? 64 : 256);
+ }
@@
- return true;
+ // cleanup all allocations before exit to avoid polluting the heap
+ for (int i = 0; i < N; i++) {
+     if (ptrs[i]) {
+         KernelFree(ptrs[i]);
+         ptrs[i] = NULL;
+     }
+ }
+ return true;

Optional: add OOM checks in the allocation loops and early-exit with cleanup to keep POST self-contained.
🧹 Nitpick comments (5)
kernel/etc/Shell.c (2)
649-656: Make execve() messages consistent with the new generic texts.
ExecuteCommand now prints “Executable loaded … / Failed to load executable”. Mirror that here for consistency.
- PrintKernelSuccess("ELF Executable loaded (PID: ");
+ PrintKernelSuccess("Executable loaded (PID: ");
@@
- PrintKernelError("Failed to load ELF executable\n");
+ PrintKernelError("Failed to load executable\n");
1213-1229: Use FormatA into a local buffer; FormatS returns a static buffer (do not free).
FormatS returns a static buffer (see kernel/etc/Format.c); it is not heap-allocated and must not be freed. Calling FormatS twice here is fragile and unnecessary. Downstream loaders end up calling MLFQCreateProcess, which copies the name into the process slot, so using a stack buffer via FormatA is safe and preferred. Apply Option A below.
- if (VfsIsFile(FormatS("%s/%s", DataApps, cmd_name))) {
-     const char* full = FormatS("%s/%s", DataApps, cmd_name);
+ char app_path[256];
+ FormatA(app_path, sizeof(app_path), "%s/%s", DataApps, cmd_name);
+ if (VfsIsFile(app_path)) {
+     const char* full = app_path;
      const ExecLoadOptions opts = {
          .privilege_level = PROC_PRIV_NORM,
          .security_flags = 0,
          .max_memory = 16 * 1024 * 1024,
          .process_name = full
      };
      const uint32_t pid = LoadExecutable(full, &opts);
      if (pid != 0) {
          PrintKernelSuccess("Executable loaded (PID: ");
          PrintKernelInt(pid);
          PrintKernel(")\n");
      } else {
          PrintKernelError("Failed to load executable\n");
      }
      KernelFree(cmd_name);
      return; // avoid also running a built-in with the same name
  }

drivers/Ide.h (1)
61-66: Clarify new API contracts (init state and 2048-byte read).
Please document return semantics and buffer requirements:
- IdeIsInitialized: 0/1 vs negative on error? Consider bool for clarity.
- IdeReadLBA2048: guarantees exactly 2048 bytes into buffer; returns 0 on success, negative IDE_ERROR_* on failure?
I can add brief doxygen-style comments to the header if you’d like.
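Roughly like this, phrased as a proposal (the prototypes below are placeholders; the real parameter lists must be taken from drivers/Ide.h):

```c
/**
 * IdeIsInitialized - report whether the IDE driver finished initialization.
 * Proposed contract: returns 1 if initialized, 0 otherwise (or make it bool).
 */
int IdeIsInitialized(void);

/**
 * IdeReadLBA2048 - read one 2048-byte sector (ATAPI/CD) at the given LBA.
 * Proposed contract: buffer must hold at least 2048 bytes; on success exactly
 * 2048 bytes are written and 0 is returned, otherwise a negative IDE_ERROR_*.
 */
int IdeReadLBA2048(uint32_t lba, void* buffer);
```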
fs/Iso9660.c (2)
15-18: Harden toupper_iso against signed-char pitfalls.
On platforms where char is signed, values >127 become negative. Cast to unsigned to avoid UB in range checks.
-static inline char toupper_iso(char c) {
-    if (c >= 'a' && c <= 'z') return (char)(c - 'a' + 'A');
-    return c;
+static inline char toupper_iso(char c) {
+    unsigned char uc = (unsigned char)c;
+    if (uc >= 'a' && uc <= 'z') return (char)(uc - 'a' + 'A');
+    return (char)uc;
 }
99-106: Avoid redundant ';' stripping now that IsoNameEquals handles it.
The entry_filename mutation is no longer needed; removing it reduces work and preserves the original name for logging if desired.
- char* semicolon = FastStrChr(entry_filename, ';');
- if (semicolon) {
-     *semicolon = 0;
- }
-
  if (IsoNameEquals(entry_filename, filename)) {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
drivers/Ide.h (3 hunks)
fs/Iso9660.c (4 hunks)
fs/VFRFS.c (1 hunks)
kernel/etc/POST.c (1 hunks)
kernel/etc/Shell.c (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- fs/VFRFS.c
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: assembler-0
PR: assembler-0/VoidFrame#101
File: kernel/etc/Shell.c:829-837
Timestamp: 2025-08-31T07:06:37.003Z
Learning: assembler-0 is working on optimizing ramfs performance in VoidFrame kernel and wants to implement proper filesystem characteristics like faster lookups, better memory management, and advanced features like copy-on-write and caching.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#92
File: meson.build:31-34
Timestamp: 2025-08-17T05:53:48.040Z
Learning: The VoidFrame kernel's memory detection was reporting incorrect memory amounts (like 9200MB) because it used the highest physical address (including memory holes and reserved regions) as total memory size, rather than summing only the MULTIBOOT2_MEMORY_AVAILABLE regions. The fix involves counting only available memory regions and using the highest available address only for bitmap sizing.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#93
File: kernel/memory/VMem.c:72-79
Timestamp: 2025-08-17T08:16:43.539Z
Learning: In VoidFrame kernel, VMemGetPhysAddr and related page table walking functions were causing panics because they assumed identity mapping existed for physical addresses < IDENTITY_MAP_SIZE (4GB), but the bootloader doesn't provide this 1:1 mapping. The fix is to remove all identity mapping assumptions and always use PHYS_TO_VIRT() for page table access in functions like VMemGetPageTablePhys, VMemMap, VMemMapHuge, VMemGetPhysAddr, VMemUnmap, VMemMapMMIO, and VMemUnmapMMIO.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#90
File: kernel/memory/MemOps.c:103-124
Timestamp: 2025-08-16T05:28:34.925Z
Learning: In VoidFrame kernel, the virtual address space management uses a simple linear allocator (kernel_space.next_vaddr) without any reclamation mechanism. VMemFree and VMemUnmapMMIO clear page tables but don't free virtual address space for reuse, causing eventual virtual address space exhaustion. The solution requires implementing a free list with coalescing in the VirtAddrSpace structure and updating allocation/deallocation functions to reuse freed ranges.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#57
File: kernel/memory/VMem.h:64-66
Timestamp: 2025-08-09T09:29:00.630Z
Learning: In the VoidFrame kernel, removing locks from KernelMemoryAlloc/KernelMemoryFree causes heap corruption where shell command strings (like "clear" = 0x61656C63) overwrite heap block magic values due to race conditions between allocation and free operations.
📚 Learning: 2025-08-17T08:16:43.539Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#93
File: kernel/memory/VMem.c:72-79
Timestamp: 2025-08-17T08:16:43.539Z
Learning: In VoidFrame kernel, VMemGetPhysAddr and related page table walking functions were causing panics because they assumed identity mapping existed for physical addresses < IDENTITY_MAP_SIZE (4GB), but the bootloader doesn't provide this 1:1 mapping. The fix is to remove all identity mapping assumptions and always use PHYS_TO_VIRT() for page table access in functions like VMemGetPageTablePhys, VMemMap, VMemMapHuge, VMemGetPhysAddr, VMemUnmap, VMemMapMMIO, and VMemUnmapMMIO.
Applied to files:
kernel/etc/POST.c
📚 Learning: 2025-09-06T07:51:33.673Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#111
File: kernel/sched/MLFQ.c:999-1002
Timestamp: 2025-09-06T07:51:33.673Z
Learning: In ISO9660 filesystem implementation, the Primary Volume Descriptor (PVD) id field is exactly 5 bytes ['C','D','0','0','1'] with NO null terminator. Using FastStrCmp() or other null-terminated string functions will fail because they expect null terminators. Use exact byte-by-byte comparison instead: candidate->id[0] == 'C' && candidate->id[1] == 'D' etc.
Applied to files:
fs/Iso9660.c
🧬 Code graph analysis (2)
kernel/etc/Shell.c (1)
kernel/etc/Console.c (4)
PrintKernelSuccess (213-218)
PrintKernelInt (314-338)
PrintKernel (181-199)
PrintKernelError (220-225)
fs/Iso9660.c (1)
mm/KernelHeap.c (2)
KernelMemoryAlloc (328-397)
KernelFree (444-489)
🔇 Additional comments (4)
drivers/Ide.h (2)
36-38: Add IDENTIFY PACKET command: LGTM
46-50: Parenthesized negative error codes: LGTM
Prevents macro-substitution surprises in expressions.
fs/Iso9660.c (2)
20-37: Case-insensitive name match and version stripping: LGTM
Ignoring leading spaces and “;n” is correct for ISO9660 lookups.
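For illustration only, a comparator with the behavior described above might look like this (this is not the PR's actual IsoNameEquals; it reuses the toupper_iso helper discussed earlier, and the names are otherwise hypothetical):

```c
/* Hypothetical sketch: case-insensitive compare that ignores leading spaces
 * in the on-disc entry name and stops at a ";version" suffix. */
static int IsoNameEqualsSketch(const char* entry, const char* wanted) {
    while (*entry == ' ') entry++;                      /* leading padding */
    while (*entry && *entry != ';' && *wanted) {
        if (toupper_iso(*entry) != toupper_iso(*wanted)) return 0;
        entry++;
        wanted++;
    }
    return (*entry == '\0' || *entry == ';') && *wanted == '\0';
}
```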
268-273: Pre-allocating a reusable sector buffer: LGTM
Reduces alloc/free churn in the read loop and centralizes error handling.
Also applies to: 291-291
Summary by CodeRabbit
Bug Fixes
Refactor
Style