Development #136
**Caution:** Review failed. The pull request is closed.

**Walkthrough**

Parses the Multiboot memory map during early x86_64 bootstrap to compute highest usable RAM, builds dynamic identity paging tables (PML4/PDP/PD with 2MB pages) sized from detected memory, exposes runtime identity-map size, and converts the physical-page bitmap to a dynamically allocated, size-aware structure.

**Changes**
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    autonumber
    participant MB as Multiboot
    participant ASM as pxs.asm (bootstrap)
    participant MM as mm/PMem
    participant VM as mm/VMem
    participant K as kernel/core/Kernel.c
    MB->>ASM: Provide multiboot info + memory map
    ASM->>ASM: parse_memory_map() -> find highest_phys_addr
    ASM->>ASM: compute memory_to_map, num_pdp_entries
    ASM->>ASM: setup_dynamic_paging() -> zero PML4/PDP/PD, populate 2MB mappings
    ASM->>ASM: set CR3, enable paging, enable long mode
    ASM-->>K: Jump to kernel entry (long mode)
    K->>MM: MemoryInit()
    MM-->>K: return total_pages
    K->>VM: g_identity_map_size = total_pages * PAGE_SIZE
    Note right of VM: Identity-map size now runtime-configurable
```
**Estimated code review effort**

🎯 4 (Complex) | ⏱️ ~60 minutes

**Possibly related PRs**
**Pre-merge checks and finishing touches**

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
meson.build (1)

`314-315`: **Build break: `.full_path()` is invalid on Meson File/FileHolder here**

Meson objects in build files don't support `.full_path()` like this, causing the pipeline error. Use the fs module or a generator to derive per-input output names. Two options:

- Minimal change: use `fs.stem()` to build the output name.
- Preferred: use a generator with `@BASENAME@`.

Apply either of the diffs below.

Option A (fs.stem):

```diff
+fs = import('fs')
 ...
-  obj_name = '@0@_asm.o'.format(asm_file.full_path().split('/')[-1].split('.')[0])
+  obj_name = '@0@_asm.o'.format(fs.stem(asm_file))
 ...
-  obj_name = '@0@_c.o'.format(c_file.full_path().split('/')[-1].split('.')[0])
+  obj_name = '@0@_c.o'.format(fs.stem(c_file))
 ...
-  obj_name = '@0@_cpp.o'.format(cpp_file.full_path().split('/')[-1].split('.')[0])
+  obj_name = '@0@_cpp.o'.format(fs.stem(cpp_file))
```

Option B (generator):

```diff
+# Prefer a generator for per-file compilation
+asm_gen = generator(nasm,
+  output: '@BASENAME@_asm.o',
+  arguments: [asm_flags, '-o', '@OUTPUT@', '@INPUT@'])
+asm_objects = asm_gen.process(asm_sources)
-
-asm_objects = []
-foreach asm_file : asm_sources
-  obj_name = '@0@_asm.o'.format(fs.stem(asm_file))
-  asm_obj = custom_target(
-    obj_name,
-    input : asm_file,
-    output : obj_name,
-    command : [nasm, asm_flags, '-o', '@OUTPUT@', '@INPUT@'],
-    build_by_default : false
-  )
-  asm_objects += asm_obj
-endforeach
```

Repeat the generator pattern for C/C++ to remove similar name hacks.

Also applies to: 328-331, 342-347
mm/VMem.c (1)

`425-449`: **Fix double-decrement of VMEM counters in VMemFree**

`VMemFree` calls `VMemUnmap` per page, and `VMemUnmap` already decrements `used_pages` and `total_mapped`. The extra subtraction here double-counts and can underflow. Apply:

```diff
     irq_flags_t flags = SpinLockIrqSave(&vmem_lock);
     BuddyFree(start_vaddr, size);
-    kernel_space.used_pages -= size / PAGE_SIZE;
-    kernel_space.total_mapped -= size;
     vmem_frees++;
     SpinUnlockIrqRestore(&vmem_lock, flags);
```

kernel/core/Kernel.c (2)
`90-111`: **Bootstrap bug: page tables must be allocated <4GB**

At this point only the assembly-created identity map (4GB) is active. Accepting any page < `IDENTITY_MAP_SIZE` (possibly >4GB) risks dereferencing unmapped addresses when writing page tables. Constrain bootstrap page-table allocations to <4GB:

```diff
-        if ((uint64_t)candidate < IDENTITY_MAP_SIZE) {
+        // Bootstrap requires page tables to be identity-addressable: keep them <4GB.
+        if ((uint64_t)candidate < 0x0000000100000000ULL) {
             table_phys = (uint64_t)candidate;
             break;
         }
```
`353-355`: **Fix bootstrap summary string**

The log prints `Identity mapping: 4GB (XGB)`. Remove the stale literal:

```diff
-    PrintKernel("  Identity mapping: 4GB (");
-    PrintKernelInt(IDENTITY_MAP_SIZE / (1024 * 1024 * 1024));
-    PrintKernel("GB)\n");
+    PrintKernel("  Identity mapping: ");
+    PrintKernelInt(IDENTITY_MAP_SIZE / (1024 * 1024 * 1024));
+    PrintKernel("GB\n");
```

mm/PMem.c (2)
`299-336`: **Guard against overflow/invalid requests in AllocHugePages; count allocations.**

Add basic validation and update allocation_count on success. Also bump allocation_failures on impossible requests.

```diff
     irq_flags_t flags = SpinLockIrqSave(&memory_lock);
     // Find contiguous 2MB-aligned region (512 pages)
     uint64_t pages_per_huge = HUGE_PAGE_SIZE / PAGE_SIZE; // 512
     uint64_t total_needed = num_pages * pages_per_huge;
+    if (num_pages == 0 || num_pages > (total_pages / pages_per_huge)) {
+        ++allocation_failures;
+        SpinUnlockIrqRestore(&memory_lock, flags);
+        return NULL;
+    }
@@
     void* huge_page = (void*)(start * PAGE_SIZE);
     SpinUnlockIrqRestore(&memory_lock, flags);
     ++huge_pages_allocated;
+    ++allocation_count;
     return huge_page;
```
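The guard suggested above can be isolated and checked on its own. This is a minimal sketch under the review's assumptions (4KB pages, 2MB huge pages); `HugeRequestIsValid` is a hypothetical helper, not a function in the tree:

```c
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define HUGE_PAGE_SIZE (2ULL * 1024 * 1024)

/* Hypothetical helper mirroring the suggested guard: reject zero-sized or
 * impossibly large huge-page requests before scanning the bitmap. */
static int HugeRequestIsValid(uint64_t num_pages, uint64_t total_pages) {
    uint64_t pages_per_huge = HUGE_PAGE_SIZE / PAGE_SIZE; /* 512 */
    if (num_pages == 0) return 0;
    /* Comparing against total_pages / 512 also rules out multiplication
     * overflow in num_pages * pages_per_huge. */
    if (num_pages > total_pages / pages_per_huge) return 0;
    return 1;
}
```

Note the division-based comparison: it bounds the request without ever computing a product that could wrap.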
`227-234`: **Reserve Multiboot Module Pages in mm/PMem.c**

After reserving the multiboot info region, iterate the MULTIBOOT2_TAG_TYPE_MODULE tags (mod_start–mod_end) and call MarkPageUsed on each page to prevent the allocator from handing out initrd pages. Location: immediately after the "Reserve the memory used by the multiboot info itself" loop in mm/PMem.c.
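The suggested loop can be sketched as follows. This is illustrative, not the kernel's code: the tag structs are abbreviated from the Multiboot2 spec (module cmdline omitted), and the allocator hook is passed as an optional callback so the sketch stands alone:

```c
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define MULTIBOOT2_TAG_TYPE_MODULE 3 /* per the Multiboot2 spec */

struct MultibootTag { uint32_t type; uint32_t size; };
struct MultibootTagModule {
    uint32_t type, size;
    uint32_t mod_start, mod_end;
    /* cmdline bytes follow in the real layout */
};

/* Walk the tag list and mark every page of each module as used.
 * mark_used stands in for MarkPageUsed; returns pages marked. */
static uint64_t ReserveModulePages(const uint8_t* first_tag,
                                   void (*mark_used)(uint64_t page_idx)) {
    uint64_t count = 0;
    const struct MultibootTag* tag = (const struct MultibootTag*)first_tag;
    while (tag->type != 0) { /* MULTIBOOT2_TAG_TYPE_END == 0 */
        if (tag->type == MULTIBOOT2_TAG_TYPE_MODULE) {
            const struct MultibootTagModule* m =
                (const struct MultibootTagModule*)tag;
            uint64_t start = m->mod_start / PAGE_SIZE;
            uint64_t end = (m->mod_end + PAGE_SIZE - 1) / PAGE_SIZE; /* ceil */
            for (uint64_t p = start; p < end; ++p) {
                if (mark_used) mark_used(p);
                ++count;
            }
        }
        /* Multiboot2 tags are 8-byte aligned. */
        tag = (const struct MultibootTag*)((const uint8_t*)tag +
                                           ((tag->size + 7) & ~7u));
    }
    return count;
}
```

The ceil on `mod_end` matches the kernel-image reservation noted later in this review: a module ending mid-page still owns that page.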
🧹 Nitpick comments (12)
meson.build (1)

`384-399`: **Avoid `.full_path()` on targets in shell strings; prefer argument arrays/placeholders**

Inside the ISO rule you format a shell string with `kernel.full_path()` and `grub_mkrescue.full_path()`. This is brittle and may fail across Meson versions. Prefer passing programs/paths as arguments or use `@INPUT@`/`@OUTPUT@` placeholders with `command: [grub_mkrescue, ...]` rather than `'sh', '-c', ...`. If keeping the shell, derive filenames via `fs` and avoid `.full_path()`.

Also applies to: 392-397
mm/VMem.h (1)
`15-18`: **Runtime ID map size is now variable—validate all compile-time uses**

Redefining `IDENTITY_MAP_SIZE` to a variable is fine for runtime checks, but any compile-time uses (e.g., array sizes, `_Static_assert`s) will break. A quick grep for macro uses that expect a constant is recommended.

mm/VMem.c (2)
`348-383`: **Keep accounting consistent for huge-page maps**

`VMemUnmap` decrements counters for 2MB pages, but `VMemMapHuge` doesn't increment them. Either update `VMemMapHuge` to add `(HUGE_PAGE_SIZE / PAGE_SIZE)` to `used_pages` and `HUGE_PAGE_SIZE` to `total_mapped`, or document that these stats only reflect small-page mappings.
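The symmetry being asked for can be shown in isolation. A minimal sketch, assuming the field names mentioned in the comment; the struct and helper names are illustrative, not VoidFrame's actual API:

```c
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define HUGE_PAGE_SIZE (2ULL * 1024 * 1024)

/* Illustrative accounting struct; field names follow the review text. */
struct VSpaceStats { uint64_t used_pages; uint64_t total_mapped; };

/* Whatever the map path adds, the unmap path must remove, so a
 * map/unmap pair leaves the counters exactly where they started. */
static void AccountMapHuge(struct VSpaceStats* s) {
    s->used_pages += HUGE_PAGE_SIZE / PAGE_SIZE; /* 512 small pages */
    s->total_mapped += HUGE_PAGE_SIZE;
}
static void AccountUnmapHuge(struct VSpaceStats* s) {
    s->used_pages -= HUGE_PAGE_SIZE / PAGE_SIZE;
    s->total_mapped -= HUGE_PAGE_SIZE;
}
```

The double-decrement fixed in VMemFree above is the same invariant violated from the other side: each mapping event must be counted exactly once in each direction.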
`177-181`: **Identity assumption in GetTableVirt can regress earlier faults**

`GetTableVirt` returns a physical pointer for `phys_addr < IDENTITY_MAP_SIZE`. Per prior fixes, panics occurred when identity mappings were assumed. If the identity map is truly guaranteed for all page-table pages, keep this. Otherwise, always use `PHYS_TO_VIRT` for safety and ensure those pages are mapped. Based on learnings.

kernel/core/Kernel.c (1)
`408-415`: **Set identity size carefully (order-of-operations hazard)**

You set `g_identity_map_size = total_pages * PAGE_SIZE` before the new PML4 is active. That's OK if page-table pages are still constrained <4GB (see previous comment). Alternatively, defer this assignment until after switching CR3, or cap it temporarily at 4GB.

arch/x86_64/asm/pxs.asm (2)
`97-168`: **Currently maps only 4GB; dynamic routines below are not invoked**

The startup path zeros six pages and maps exactly 4GB via PD tables, then enables paging and long mode. The new `parse_memory_map`/`setup_dynamic_paging` aren't called, so identity mapping remains 4GB at boot. If the intent is a dynamic identity map, consider calling them here and removing the hardcoded 4GB loops.
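The sizing math the dynamic path needs is small enough to state directly. A C sketch of the computation the walkthrough attributes to the bootstrap (`num_pdp_entries` from `highest_phys_addr`); the function name is hypothetical, chosen to match the walkthrough's vocabulary:

```c
#include <stdint.h>

/* How many 1GB PDP entries (and hence PD tables) are needed to
 * identity-map everything up to highest_phys_addr, rounded up.
 * Always map at least 1GB so the early kernel image is covered. */
static uint64_t NumPdpEntries(uint64_t highest_phys_addr) {
    const uint64_t GiB = 1024ULL * 1024 * 1024;
    uint64_t n = (highest_phys_addr + GiB - 1) / GiB; /* ceil-divide */
    return n ? n : 1;
}
```

The hardcoded path is the special case `NumPdpEntries(4GB) == 4`; the dynamic path just feeds in the value parse_memory_map found.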
`498-502`: **Multiboot info is 64-bit; 32-bit path truncates if >4GB**

`multiboot_info` is `dq`. In 32-bit `parse_memory_map` you load only the low dword into `ESI`. If GRUB ever places the MBI above 4GB, this would break. Typically it's below 4GB, but guard with an early check or document the assumption.

mm/PMem.c (5)
`8-15`: **Tidy up limits and unused macros; consider making cap configurable.**

- MAX_SUPPORTED_MEMORY=128GB is fine, but expose via build option (meson option/Kconfig) for flexibility.
- MAX_PAGE_BUFFER_OVERHEAD and MAX_BITMAP_WORDS are now unused; remove to avoid drift.

```diff
-#define MAX_PAGE_BUFFER_OVERHEAD (1024 * 1024) // 1MB
 #define MAX_SUPPORTED_MEMORY (128ULL * 1024 * 1024 * 1024) // 128GB
 #define MAX_PAGES (MAX_SUPPORTED_MEMORY / PAGE_SIZE)
 #define MAX_BITMAP_SIZE (MAX_PAGES / 8)
 #define BITMAP_WORD_SIZE 64
-#define MAX_BITMAP_WORDS (MAX_BITMAP_SIZE / 8)
```
`164-193`: **OK to mark non-AVAILABLE pages as used; consider word-level marking for large ranges.**

The per-page loop is correct but O(n) over large reserved spans. Optional: coarsen by words when aligned to speed boot on large maps.
`239-297`: **Track allocation_count and handle the "no bitmap" early return.**

- Increment allocation_count on success.
- The early NULL return is fine; consider logging once if it happens post-init.

```diff
-    if (!page_bitmap) return NULL; // Safety check
+    if (!page_bitmap) return NULL; // Safety check (pre-init)
@@
-    SpinUnlockIrqRestore(&memory_lock, flags);
+    ++allocation_count;
+    SpinUnlockIrqRestore(&memory_lock, flags);
     return page;
@@
-    SpinUnlockIrqRestore(&memory_lock, flags);
+    ++allocation_count;
+    SpinUnlockIrqRestore(&memory_lock, flags);
     return page;
```
`339-377`: **Bump free_count on successful FreePage.**

The stat becomes meaningful and helps track churn.

```diff
     MarkPageFree(page_idx);
@@
     SpinUnlockIrqRestore(&memory_lock, flags);
+    ++free_count;
```
`383-433`: **Stats scan is O(n) over pages; consider a fast path later.**

This is fine for now; if boot time grows on large RAM, consider keeping a running largest_free_block and fragmentation counters during alloc/free paths.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)

- arch/x86_64/asm/pxs.asm (4 hunks)
- kernel/core/Kernel.c (1 hunks)
- meson.build (1 hunks)
- mm/PMem.c (7 hunks)
- mm/VMem.c (1 hunks)
- mm/VMem.h (1 hunks)
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: assembler-0
PR: assembler-0/VoidFrame#93
File: kernel/memory/VMem.c:72-79
Timestamp: 2025-08-17T08:16:43.539Z
Learning: In VoidFrame kernel, VMemGetPhysAddr and related page table walking functions were causing panics because they assumed identity mapping existed for physical addresses < IDENTITY_MAP_SIZE (4GB), but the bootloader doesn't provide this 1:1 mapping. The fix is to remove all identity mapping assumptions and always use PHYS_TO_VIRT() for page table access in functions like VMemGetPageTablePhys, VMemMap, VMemMapHuge, VMemGetPhysAddr, VMemUnmap, VMemMapMMIO, and VMemUnmapMMIO.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#92
File: meson.build:31-34
Timestamp: 2025-08-17T05:53:48.040Z
Learning: The VoidFrame kernel's memory detection was reporting incorrect memory amounts (like 9200MB) because it used the highest physical address (including memory holes and reserved regions) as total memory size, rather than summing only the MULTIBOOT2_MEMORY_AVAILABLE regions. The fix involves counting only available memory regions and using the highest available address only for bitmap sizing.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#100
File: kernel/memory/VMem.h:44-47
Timestamp: 2025-08-31T02:11:29.291Z
Learning: VoidFrame kernel's MAX_PAGES is hardcoded to exactly 4GB (1,048,576 pages), causing false "Memory detected exceeds MAX_PAGES" warnings when memory holes push the highest physical address slightly above 4GB. The solution is to sum only MULTIBOOT2_MEMORY_AVAILABLE regions instead of using max_physical_address for total_pages calculation.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#130
File: kernel/core/InitRD.c:25-35
Timestamp: 2025-09-14T11:01:02.698Z
Learning: In VoidFrame kernel's InitRD.c, multiboot modules should be mapped using VMemMap() with PAGE_PRESENT | PAGE_WRITABLE flags instead of VMemMapMMIO(), as the latter applies cache-disabling flags (PAGE_NOCACHE | PAGE_WRITETHROUGH) that can corrupt data when reading regular RAM content.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#90
File: kernel/memory/MemOps.c:103-124
Timestamp: 2025-08-16T05:28:34.925Z
Learning: In VoidFrame kernel, the virtual address space management uses a simple linear allocator (kernel_space.next_vaddr) without any reclamation mechanism. VMemFree and VMemUnmapMMIO clear page tables but don't free virtual address space for reuse, causing eventual virtual address space exhaustion. The solution requires implementing a free list with coalescing in the VirtAddrSpace structure and updating allocation/deallocation functions to reuse freed ranges.
📚 Learning: 2025-08-13T07:17:35.941Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#79
File: drivers/PCI/PCI.c:154-154
Timestamp: 2025-08-13T07:17:35.941Z
Learning: In VoidFrame kernel, VMemGetPageTablePhys uses a 3-level hierarchy: Level 0 = PML4->PDP, Level 1 = PDP->PD, Level 2 = PD->PT. To access Page Tables, always use level=2, not level=1.
Applied to files:
arch/x86_64/asm/pxs.asm
🧬 Code graph analysis (2)
kernel/core/Kernel.c (1)

- kernel/etc/Console.c (2): PrintKernel (174-192), PrintKernelInt (298-322)

mm/PMem.c (2)

- kernel/etc/Console.c (5): PrintKernelWarning (220-225), PrintKernelInt (298-322), PrintKernel (174-192), PrintKernelError (213-218), PrintKernelHex (228-251)
- mm/MemOps.c (1): FastMemset (10-73)
🪛 GitHub Actions: VoidFrame kernel build
meson.build
[error] 314-314: Unknown method "full_path" in object <[FileHolder] holds [File]: <File: arch/x86_64/asm/pxs.asm (not built)> of type FileHolder.
🔇 Additional comments (7)
meson.build (1)
`418-418`: **QEMU RAM bump to 6G: check host headroom and identity-map limits**

Bumping to `-m 6G` is fine, but ensure bootstrap identity mapping and page table allocation still function when RAM >4GB (see Kernel.c comment below).

mm/PMem.c (6)
`23-26`: **LGTM: switch to dynamic bitmap pointer with size tracker.**

Dynamic page_bitmap and bitmap_words enable scaling with detected memory. Good direction.
`36-47`: **Bounds checks on bitmap access look correct.**

Early exits on out-of-range indices and uninitialized bitmap prevent corruption. Ensure callers hold memory_lock outside init paths.
Also applies to: 51-63, 66-74
`154-163`: **LGTM: printing bitmap physical range helps early bring-up.**

Keep this after successful placement; if switching to PHYS_TO_VIRT, still print physical addresses for clarity.
`195-199`: **LGTM: first 1MB reservation remains necessary.**
`200-209`: **LGTM: explicitly reserving bitmap pages.**

This complements the placement fix above.
`210-226`: **LGTM: kernel image reservation uses ceil on end address.**
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- arch/x86_64/asm/pxs.asm (6 hunks)
- meson.build (3 hunks)
- mm/PMem.c (7 hunks)
🧰 Additional context used
🧠 Learnings (9)
📚 Learning: 2025-08-13T07:19:55.274Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#79
File: drivers/PCI/PCI.c:154-154
Timestamp: 2025-08-13T07:19:55.274Z
Learning: VMemUnmapMMIO in VoidFrame kernel should use the same page table access pattern as the working VMemUnmap function: `uint64_t* pt_table = (uint64_t*)(pt_phys + KERNEL_VIRTUAL_OFFSET);` rather than PHYS_TO_VIRT macro or identity mapping checks.
Applied to files:
mm/PMem.c
arch/x86_64/asm/pxs.asm
📚 Learning: 2025-09-01T01:10:22.123Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#103
File: README.md:9-9
Timestamp: 2025-09-01T01:10:22.123Z
Learning: The current VoidFrame Registers struct in arch/x86_64/cpu/Cpu.h only saves basic GPRs, segment registers, and exception context (interrupt_number, error_code, rip, cs, rflags, rsp, ss). It's missing critical registers for deep fault tracing: SIMD registers (XMM/YMM/ZMM), control registers (CR0/CR2/CR3/CR4), debug registers (DR0-DR7), and extended FPU state. The struct has a "DO NOT TOUCH" comment indicating it must match interrupt ASM stack layout.
Applied to files:
arch/x86_64/asm/pxs.asm
📚 Learning: 2025-09-01T01:10:22.123Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#103
File: README.md:9-9
Timestamp: 2025-09-01T01:10:22.123Z
Learning: VoidFrame's interrupt handlers in arch/x86_64/interrupts/Interrupts.asm only save basic GPRs and segment registers, missing critical registers for deep fault tracing: control registers (CR0/CR2/CR3/CR4), SIMD state (XMM/YMM/ZMM), debug registers (DR0-DR7), and extended FPU state. The CPU initialization code enables SSE/AVX/XSAVE features but interrupt handlers don't preserve this extended state. Enhanced fault tracing requires selective extended state saving for critical exceptions (ISR 6, 8, 13, 14) while keeping lightweight handling for regular interrupts.
Applied to files:
arch/x86_64/asm/pxs.asm
🧬 Code graph analysis (1)
mm/PMem.c (2)

- kernel/etc/Console.c (6): PrintKernelWarning (220-225), PrintKernelInt (298-322), PrintKernel (174-192), PrintKernelF (253-260), PrintKernelError (213-218), PrintKernelHex (228-251)
- mm/MemOps.c (1): FastMemset (10-73)
🪛 GitHub Actions: VoidFrame kernel build
meson.build
[error] 3-3: Meson version is 1.3.2 but project requires >= 1.4.0.
🔇 Additional comments (2)
mm/PMem.c (1)

`131-201`: **Stop assuming the whole physical range is identity mapped; choose an AVAILABLE region and map it.**

You're still pegging the bitmap to "16 MB within `total_pages * PAGE_SIZE`" and then casting that physical address straight to a pointer. On machines where the new identity map (see `g_identity_map_size`) stops short of `total_pages * PAGE_SIZE`, `FastMemset(page_bitmap, …)` will page-fault the moment it crosses the mapped limit. Worse, if there's a reserved/ACPI hole at 16 MB, we'll scribble over firmware tables before the reservation loop even runs. This is the exact pitfall we documented earlier. Based on learnings

Please bound against the actual identity map, scan for a `MULTIBOOT2_MEMORY_AVAILABLE` range ≥ bitmap size, and convert via `PHYS_TO_VIRT` before touching it. For example:

```diff
-    uint64_t bitmap_start = 0x1000000; // Start searching at 16MB
-    uint64_t identity_limit = total_pages * PAGE_SIZE; // Don't go beyond detected memory
-    if (identity_limit > MAX_SUPPORTED_MEMORY) {
-        identity_limit = MAX_SUPPORTED_MEMORY;
-    }
-    bitmap_start = (bitmap_start + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
-    if (bitmap_start + bitmap_size_bytes > identity_limit) {
-        PrintKernelError("ERROR: Cannot fit bitmap in available memory space\n");
-        return -1;
-    }
-    page_bitmap = (uint64_t*)bitmap_start;
+    uint64_t identity_limit = IDENTITY_MAP_SIZE;
+    if (identity_limit > MAX_SUPPORTED_MEMORY) identity_limit = MAX_SUPPORTED_MEMORY;
+
+    uint64_t bitmap_start = 0;
+    struct MultibootTag* scan = (struct MultibootTag*)(uintptr_t)(multiboot_info_addr + 8);
+    while (scan->type != MULTIBOOT2_TAG_TYPE_END) {
+        if (scan->type == MULTIBOOT2_TAG_TYPE_MMAP) {
+            struct MultibootTagMmap* mmap_tag = (struct MultibootTagMmap*)scan;
+            uint32_t entry_count = (mmap_tag->size - sizeof(*mmap_tag)) / mmap_tag->entry_size;
+            for (uint32_t i = 0; i < entry_count; ++i) {
+                const struct MultibootMmapEntry* e =
+                    (const struct MultibootMmapEntry*)((const uint8_t*)mmap_tag +
+                        sizeof(*mmap_tag) + i * mmap_tag->entry_size);
+                if (e->type != MULTIBOOT2_MEMORY_AVAILABLE) continue;
+                uint64_t region_start = e->addr;
+                uint64_t region_end = e->addr + e->len;
+                if (region_start >= identity_limit) continue;
+                if (region_end > identity_limit) region_end = identity_limit;
+                uint64_t cand = region_start < 0x1000000 ? 0x1000000 : region_start;
+                cand = (cand + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
+                if (cand + bitmap_size_bytes <= region_end) { bitmap_start = cand; break; }
+            }
+        }
+        if (bitmap_start) break;
+        scan = (struct MultibootTag*)((uint8_t*)scan + ((scan->size + 7) & ~7));
+    }
+    if (!bitmap_start) {
+        PrintKernelError("ERROR: No AVAILABLE region found for bitmap within identity map\n");
+        return -1;
+    }
+    page_bitmap = (uint64_t*)PHYS_TO_VIRT(bitmap_start);
```

Claiming identity coverage you don't have is a recipe for triple faults; the bitmap must live inside a known AVAILABLE chunk that's actually identity mapped.
arch/x86_64/asm/pxs.asm (1)

`339-353`: **Write the full 64-bit PDE (low and high dword) and keep the high bits in sync.**

`mul edx` only leaves the low 32 bits in `eax`; you drop the high 32 bits (now in `edx`) when you `pop edx`, and you never store them to `[edi+4]`. Once `memory_to_map` exceeds 4 GB this truncates every PDE to the lower 32 bits, so the >4 GB portion of the identity map points at junk. Please mirror the upper dword into the entry and advance it with `adc` so the high bits stay consistent:

```diff
-    push edx
-    mov edx, 0x40000000        ; 1GB per PDP entry
-    mul edx                    ; EDX:EAX = phys base
-    pop edx
-    or eax, 0x83               ; flags
+    mov ebx, 0x40000000        ; 1GB per PDP entry
+    mul ebx                    ; EDX:EAX = phys base
+    mov ebx, edx               ; preserve high dword
+    or eax, 0x83               ; flags (low dword only)
@@
-.fill_pd_loop:
-    mov [edi], eax             ; Store PDE
-    add edi, 8                 ; Next PDE
-    add eax, 0x200000          ; Next 2MB physical address
-    dec ebx
-    jnz .fill_pd_loop
+.fill_pd_loop:
+    mov [edi], eax
+    mov [edi + 4], ebx
+    add edi, 8
+    add eax, 0x200000
+    adc ebx, 0
+    dec ecx
+    jnz .fill_pd_loop
```

(Adjust the loop counter so you still iterate 512 times.) Without this, long-memory configs will fault immediately when touching the truncated ranges.
Actionable comments posted: 3
🧹 Nitpick comments (4)
CMakeLists.txt (4)
`41-42`: **NASM object format: prefer CMake's OBJECT_FORMAT, and keep defines in CMAKE_ASM_NASM_FLAGS**

Ensure consistent `elf64` emission via the supported variable; keep the `-D` define. Apply:

```diff
-set(CMAKE_ASM_NASM_FLAGS_INIT " -f elf64")
+set(CMAKE_ASM_NASM_OBJECT_FORMAT elf64)
```

Keep the define but don't lose existing flags:

```diff
-set(CMAKE_ASM_NASM_FLAGS "${CMAKE_ASM_NASM_FLAGS} -DVF_CONFIG_VESA_FB")
+set(CMAKE_ASM_NASM_FLAGS "${CMAKE_ASM_NASM_FLAGS} -DVF_CONFIG_VESA_FB")
```

If you want to be explicit and future-proof, also set:

```cmake
set_source_files_properties(${ASM_SOURCES} PROPERTIES NASM_OBJ_FORMAT elf64)
```

Also applies to: 79-80
`12-18`: **Graceful tool detection and target gating**

Some targets hard-require these tools. Add checks to fail fast (or gate targets) with clear messages. Apply:

```diff
 find_program(LLVM_OBJDUMP llvm-objdump)
 find_program(GRUB_MKRESCUE grub-mkrescue)
 find_program(QEMU_IMG qemu-img)
 find_program(MKFS_FAT mkfs.fat)
 find_program(MKFS_EXT2 mkfs.ext2)
 find_program(QEMU_SYSTEM_X86_64 qemu-system-x86_64)
+
+if(NOT GRUB_MKRESCUE)
+  message(FATAL_ERROR "grub-mkrescue not found; cannot build ISO. Install grub2 tools.")
+endif()
+if(NOT QEMU_SYSTEM_X86_64)
+  message(WARNING "qemu-system-x86_64 not found; run/runmin targets will be unavailable.")
+endif()
+if(NOT LLVM_OBJDUMP)
+  message(WARNING "llvm-objdump not found; dump target will be unavailable.")
+endif()
```

Optionally wrap `run`, `runmin`, `dump`, and image targets in `if()` guards so configuration does not fail on minimal environments.
286-312: Optional: make KVM usage conditional

`-enable-kvm` fails on non-Linux hosts or without KVM. Consider gating with a cache option or host check.

Example:

```cmake
option(ENABLE_KVM "Enable KVM for QEMU run target" ON)
set(KVM_FLAG $<$<AND:$<BOOL:${ENABLE_KVM}>,$<STREQUAL:${CMAKE_HOST_SYSTEM_NAME},Linux>>:-enable-kvm>)
# ... then use ${KVM_FLAG} in the run command line
```
274-279: Optional: avoid building ISO by default

Making `iso` an `ALL` target forces grub tooling on every build. Consider removing `ALL` so developers can build the kernel without ISO artifacts.

```diff
-add_custom_target(iso ALL DEPENDS VoidFrame.iso)
+add_custom_target(iso DEPENDS VoidFrame.iso)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `.gitignore` (0 hunks)
- `CMakeLists.txt` (1 hunks)
💤 Files with no reviewable changes (1)
- .gitignore
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: assembler-0
PR: assembler-0/VoidFrame#93
File: kernel/memory/VMem.c:72-79
Timestamp: 2025-08-17T08:16:43.539Z
Learning: In VoidFrame kernel, VMemGetPhysAddr and related page table walking functions were causing panics because they assumed identity mapping existed for physical addresses < IDENTITY_MAP_SIZE (4GB), but the bootloader doesn't provide this 1:1 mapping. The fix is to remove all identity mapping assumptions and always use PHYS_TO_VIRT() for page table access in functions like VMemGetPageTablePhys, VMemMap, VMemMapHuge, VMemGetPhysAddr, VMemUnmap, VMemMapMMIO, and VMemUnmapMMIO.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#130
File: kernel/core/InitRD.c:25-35
Timestamp: 2025-09-14T11:01:02.698Z
Learning: In VoidFrame kernel's InitRD.c, multiboot modules should be mapped using VMemMap() with PAGE_PRESENT | PAGE_WRITABLE flags instead of VMemMapMMIO(), as the latter applies cache-disabling flags (PAGE_NOCACHE | PAGE_WRITETHROUGH) that can corrupt data when reading regular RAM content.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#92
File: meson.build:31-34
Timestamp: 2025-08-17T05:53:48.040Z
Learning: The VoidFrame kernel's memory detection was reporting incorrect memory amounts (like 9200MB) because it used the highest physical address (including memory holes and reserved regions) as total memory size, rather than summing only the MULTIBOOT2_MEMORY_AVAILABLE regions. The fix involves counting only available memory regions and using the highest available address only for bitmap sizing.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#100
File: kernel/memory/VMem.h:44-47
Timestamp: 2025-08-31T02:11:29.291Z
Learning: VoidFrame kernel's MAX_PAGES is hardcoded to exactly 4GB (1,048,576 pages), causing false "Memory detected exceeds MAX_PAGES" warnings when memory holes push the highest physical address slightly above 4GB. The solution is to sum only MULTIBOOT2_MEMORY_AVAILABLE regions instead of using max_physical_address for total_pages calculation.
Learnt from: assembler-0
PR: assembler-0/VoidFrame#79
File: drivers/PCI/PCI.c:154-154
Timestamp: 2025-08-13T07:17:35.941Z
Learning: In VoidFrame kernel, VMemGetPageTablePhys uses a 3-level hierarchy: Level 0 = PML4->PDP, Level 1 = PDP->PD, Level 2 = PD->PT. To access Page Tables, always use level=2, not level=1.
📚 Learning: 2025-08-12T03:19:52.873Z
Learnt from: assembler-0
PR: assembler-0/VoidFrame#73
File: kernel/elf/ELFloader.c:0-0
Timestamp: 2025-08-12T03:19:52.873Z
Learning: VoidFrame kernel now has complete UEFI boot support with hybrid BIOS/UEFI ISO creation capability. The existing VBE framebuffer implementation requires no changes as it uses Multiboot2 framebuffer tags and direct memory-mapped access, making it inherently UEFI-compatible. Build process: ninja + ./scripts/build_hybrid_iso.sh creates dual-platform bootable ISO.
Applied to files:
CMakeLists.txt
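The memory-detection learnings above can be made concrete with a small sketch (region values are hypothetical): sum only the available regions for total RAM, and use the highest available end address only for bitmap sizing, so memory holes neither inflate the RAM total nor trigger false MAX_PAGES warnings.

```python
# Hypothetical multiboot2-style memory map: (base, length, type) tuples.
MULTIBOOT2_MEMORY_AVAILABLE = 1

regions = [
    (0x0000_0000, 0x0009_FC00, 1),    # low memory
    (0x0010_0000, 0xBFF0_0000, 1),    # main RAM below the PCI hole
    (0xC000_0000, 0x4000_0000, 2),    # reserved hole - must not count as RAM
    (0x1_0000_0000, 0x4000_0000, 1),  # RAM remapped above 4GB
]

# Total RAM: sum only the available regions (excludes holes/reserved areas).
total_ram = sum(length for base, length, t in regions
                if t == MULTIBOOT2_MEMORY_AVAILABLE)

# Bitmap sizing: highest end address among *available* regions only.
highest_avail = max(base + length for base, length, t in regions
                    if t == MULTIBOOT2_MEMORY_AVAILABLE)

print(total_ram // (1 << 20))      # MB of real RAM; the reserved hole is excluded
print(highest_avail // (1 << 30))  # GB span the physical-page bitmap must cover
```

Using `highest_avail` for the RAM total instead would report roughly 5GB here, which is the class of inflated readings (e.g., the 9200MB case) the learnings describe.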
🔇 Additional comments (1)
CMakeLists.txt (1)
46-69: Confirm whether VF_CONFIG_VESA_FB is also needed in C/C++

You pass `VF_CONFIG_VESA_FB` only to NASM. If any C/C++ files gate code on this macro (e.g., VBE framebuffer or identity map logic), add it to `add_compile_definitions` as well.

If needed:

```diff
 add_compile_definitions(
         VF_CONFIG_ENABLE_XHCI
@@
         VF_CONFIG_SCHED_MLFQ
+        VF_CONFIG_VESA_FB
 )
```

Also applies to: 79-80
```cmake
set(CMAKE_CXX_FLAGS_INIT "${CMAKE_C_FLAGS_INIT} -fno-exceptions -fno-rtti -fno-threadsafe-statics")
set(CMAKE_ASM_NASM_FLAGS_INIT " -f elf64")
```
🛠️ Refactor suggestion | 🟠 Major
CMAKE_*_FLAGS_INIT won't take effect here; use per-language compile options

`CMAKE_CXX_FLAGS_INIT`/`CMAKE_ASM_NASM_FLAGS_INIT` are only consulted when `project()` initializes the languages, so setting them after `project()` is a no-op; `-fno-exceptions`, `-fno-rtti`, etc., may not be applied.

Apply:

```diff
-set(CMAKE_CXX_FLAGS_INIT "${CMAKE_C_FLAGS_INIT} -fno-exceptions -fno-rtti -fno-threadsafe-statics")
-set(CMAKE_ASM_NASM_FLAGS_INIT " -f elf64")
+# Apply to C++ only
+add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-fno-exceptions -fno-rtti -fno-threadsafe-statics>)
```

NASM object format fix is in the next comment.
🤖 Prompt for AI Agents
CMakeLists.txt lines 40-42: the use of CMAKE_CXX_FLAGS_INIT and
CMAKE_ASM_NASM_FLAGS_INIT here is ineffective because those *_INIT variables are
read before project(); instead move flag configuration after the project() call
and apply them per-language or per-target — e.g. use
target_compile_options(<target> PRIVATE -fno-exceptions -fno-rtti
-fno-threadsafe-statics) or
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-fno-exceptions;-fno-rtti;-fno-threadsafe-statics>)
and for NASM use a per-language setting like
add_compile_options($<$<COMPILE_LANGUAGE:ASM_NASM>:-f elf64>) or
target_compile_options with the ASM_NASM generator expression so the flags are
actually applied.
```cmake
set_target_properties(voidframe.krnl PROPERTIES
    OUTPUT_NAME "voidframe.krnl"
    LINK_FLAGS "-T ${CMAKE_CURRENT_SOURCE_DIR}/voidframe.ld -melf_x86_64"
)
```
Kernel link pipeline: use target_link_options; drop PIE/PIC and fix -nostdlib placement

Current setup risks linking via clang with default libs and treating `-melf_x86_64` as an unsupported driver flag. Also `-nostdlib` is in compile flags (ineffective) and `-fPIE`/`-fPIC` conflicts with kernel link. Move `-nostdlib`/`-no-pie` to link, remove PIE/PIC at compile, and pass the linker script via `-Wl,-T,...`.

Apply:

```diff
-    set_target_properties(voidframe.krnl PROPERTIES
-        OUTPUT_NAME "voidframe.krnl"
-        LINK_FLAGS "-T ${CMAKE_CURRENT_SOURCE_DIR}/voidframe.ld -melf_x86_64"
-    )
+    set_target_properties(voidframe.krnl PROPERTIES
+        OUTPUT_NAME "voidframe.krnl"
+    )
+    # Link kernel without host runtimes, no PIE; pass the linker script explicitly.
+    target_link_options(voidframe.krnl PRIVATE
+        -nostdlib
+        -no-pie
+        "-Wl,-T,${CMAKE_CURRENT_SOURCE_DIR}/voidframe.ld"
+    )
```

And adjust compile flags (see the line 30 diff below) to remove `-fPIE -fPIC` and relocate `-nostdlib`.

Additionally, consider `-Wl,--build-id=none` if you want to avoid build-id notes in a freestanding kernel.

```diff
-    add_compile_options(-m64 -O2 -fno-omit-frame-pointer -finline-functions -foptimize-sibling-calls -nostdinc -nostdlib -fno-builtin -ffreestanding -mno-red-zone -mserialize -fPIE -fPIC -mcmodel=kernel -fcf-protection=full -fvisibility=hidden)
+    add_compile_options(-m64 -O2 -fno-omit-frame-pointer -finline-functions -foptimize-sibling-calls -nostdinc -fno-builtin -ffreestanding -mno-red-zone -mserialize -fno-pie -mcmodel=kernel -fcf-protection=full -fvisibility=hidden)
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-    set_target_properties(voidframe.krnl PROPERTIES
-        OUTPUT_NAME "voidframe.krnl"
-        LINK_FLAGS "-T ${CMAKE_CURRENT_SOURCE_DIR}/voidframe.ld -melf_x86_64"
-    )
+# --- Adjusted kernel compile flags (remove PIE/PIC, relocate -nostdlib) ---
+# Original at line ~30:
+#   add_compile_options(-m64 -O2 -fno-omit-frame-pointer -finline-functions -foptimize-sibling-calls
+#                       -nostdinc -nostdlib -fno-builtin -ffreestanding -mno-red-zone
+#                       -mserialize -fPIE -fPIC -mcmodel=kernel -fcf-protection=full
+#                       -fvisibility=hidden)
+add_compile_options(
+    -m64
+    -O2
+    -fno-omit-frame-pointer
+    -finline-functions
+    -foptimize-sibling-calls
+    -nostdinc
+    -fno-builtin
+    -ffreestanding
+    -mno-red-zone
+    -mserialize
+    -fno-pie
+    -mcmodel=kernel
+    -fcf-protection=full
+    -fvisibility=hidden
+)
+# --- Updated target link setup for voidframe.krnl (was lines 266-269) ---
+# Original:
+#   set_target_properties(voidframe.krnl PROPERTIES
+#       OUTPUT_NAME "voidframe.krnl"
+#       LINK_FLAGS "-T ${CMAKE_CURRENT_SOURCE_DIR}/voidframe.ld -melf_x86_64"
+#   )
+set_target_properties(voidframe.krnl PROPERTIES
+    OUTPUT_NAME "voidframe.krnl"
+)
+# Link kernel without host runtimes, disable PIE, and apply our script explicitly
+target_link_options(voidframe.krnl PRIVATE
+    -nostdlib
+    -no-pie
+    "-Wl,-T,${CMAKE_CURRENT_SOURCE_DIR}/voidframe.ld"
+)
```
🤖 Prompt for AI Agents
In CMakeLists.txt around lines 266-269, replace the set_target_properties
LINK_FLAGS usage with target_link_options(voidframe.krnl PRIVATE ...) and move
linker-only flags there; remove -fPIE and -fPIC from compile flags (adjust
earlier compile flags around line 30), remove -nostdlib from compile flags and
instead add -Wl,-nostdlib and -Wl,-no-pie along with the linker script using
-Wl,-T,${CMAKE_CURRENT_SOURCE_DIR}/voidframe.ld (do not pass -melf_x86_64 as a
driver flag — if needed, pass it via the linker driver with -Wl, or omit if the
linker target is correct), and optionally add -Wl,--build-id=none for a
freestanding kernel.
```cmake
    COMMAND sh -c "${QEMU_IMG} create -f qcow2 VoidFrameDisk.img 16G && ${MKFS_EXT2} VoidFrameDisk.img"
    COMMENT "Creating disk images"
)
```
Formatting qcow2 directly with mkfs.ext2 will not work

`mkfs.ext2` expects a block device or a raw file (`-F`). Running it on a qcow2 container file produces a broken image.

Use raw for simplicity:

```diff
-add_custom_target(img
-    COMMAND sh -c "${QEMU_IMG} create -f qcow2 VoidFrameDisk.img 16G && ${MKFS_EXT2} VoidFrameDisk.img"
-    COMMENT "Creating disk images"
-)
+add_custom_target(img
+    COMMAND sh -c "${QEMU_IMG} create -f raw VoidFrameDisk.img 16G && ${MKFS_EXT2} -F VoidFrameDisk.img"
+    COMMENT "Creating disk images (raw + ext2)"
+)
```

Alternatively, keep qcow2 but format via nbd: `qemu-nbd --connect=/dev/nbd0 VoidFrameDisk.img && mkfs.ext2 /dev/nbd0 && qemu-nbd --disconnect /dev/nbd0`. I can provide a portable target if desired.
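The difference is easy to detect before formatting: every qcow2 file begins with the 4-byte magic `QFI\xfb`, while a freshly created raw image does not. A minimal sketch of such a guard (the demo file names are hypothetical):

```python
import os
import struct

QCOW_MAGIC = b"QFI\xfb"  # every qcow2 file starts with these 4 bytes

def is_qcow2(path):
    """Return True if the file at `path` carries the qcow2 container magic."""
    with open(path, "rb") as f:
        return f.read(4) == QCOW_MAGIC

# Demo: fabricate a qcow2-like header and a raw-image stand-in.
with open("demo-qcow2.img", "wb") as f:
    f.write(QCOW_MAGIC + struct.pack(">I", 3))  # magic + version field
with open("demo-raw.img", "wb") as f:
    f.write(b"\x00" * 8)

qcow_detected = is_qcow2("demo-qcow2.img")
raw_detected = is_qcow2("demo-raw.img")
print(qcow_detected, raw_detected)  # True False

os.remove("demo-qcow2.img")
os.remove("demo-raw.img")
```

Running mkfs on a file where this check returns True would overwrite the container header, which is exactly the breakage described above.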
🤖 Prompt for AI Agents
In CMakeLists.txt around lines 327 to 329, running mkfs.ext2 directly on the
qcow2 container file is incorrect because mkfs.ext2 expects a block device or a
raw file; update the COMMAND to either create a raw image (e.g., use qemu-img
create -f raw ...) and then run mkfs.ext2 on that file, or keep qcow2 and
replace the single-step mkfs call with a connect/format/disconnect flow using
qemu-nbd (qemu-nbd --connect=/dev/nbdX VoidFrameDisk.img && mkfs.ext2 /dev/nbdX
&& qemu-nbd --disconnect /dev/nbdX), ensuring the CMake COMMAND uses the chosen
approach and handles errors/cleanup.
Summary by CodeRabbit
New Features
Bug Fixes
Chores