A standalone filesystem-based application loader running on the ArceOS unikernel, with all dependencies sourced from crates.io. It demonstrates FAT filesystem initialization, file I/O, and the VirtIO block device driver across multiple architectures.
This application demonstrates the full I/O stack from filesystem down to block device:
- VirtIO-blk driver: Automatically discovered and initialized via PCI bus probing.
- FAT filesystem: Mounted on the VirtIO block device during ArceOS runtime startup.
- File read: Opens `/sbin/origin.bin` from the FAT filesystem and reads its first 64 bytes.
- Child task: Spawns a worker thread that prints the first 8 bytes of the file as hex values.
- CFS scheduling: Uses preemptive CFS scheduler for task management.
```text
Application (std::fs::File)
└── axfs (FAT filesystem)
    └── axdriver (VirtIO-blk)
        └── virtio-drivers (PCI transport)
            └── QEMU VirtIO block device
```
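Conceptually, `src/main.rs` drives this stack top-down. The following is a minimal sketch of the application logic, not the exact source; it assumes only the std-compatible `File`, `Read`, and `thread` APIs that axstd re-exports (see the component table below):

```rust
use std::fs::File;
use std::io::Read;
use std::thread;

fn main() {
    println!("Load app from fat-fs ...");

    // Open the application image from the FAT filesystem mounted on the
    // VirtIO block device.
    let fname = "/sbin/origin.bin";
    println!("fname: {}", fname);
    let mut file = File::open(fname).expect("open /sbin/origin.bin failed");

    // Read the first 64 bytes of the image.
    let mut buf = [0u8; 64];
    file.read_exact(&mut buf).expect("read failed");

    // Spawn a worker thread (child task) that dumps the first 8 bytes as hex.
    println!("Wait for workers to exit ...");
    let worker = thread::spawn(move || {
        println!("worker1 checks code:");
        let hex: Vec<String> = buf[..8].iter().map(|b| format!("{:#04x}", b)).collect();
        println!("{}", hex.join(" "));
        println!("worker1 ok!");
    });
    worker.join().unwrap();

    println!("Load app from disk ok!");
}
```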
| Architecture | Rust Target | QEMU Machine | Platform |
|---|---|---|---|
| riscv64 | `riscv64gc-unknown-none-elf` | `qemu-system-riscv64 -machine virt` | `riscv64-qemu-virt` |
| aarch64 | `aarch64-unknown-none-softfloat` | `qemu-system-aarch64 -machine virt` | `aarch64-qemu-virt` |
| x86_64 | `x86_64-unknown-none` | `qemu-system-x86_64 -machine q35` | `x86-pc` |
| loongarch64 | `loongarch64-unknown-none` | `qemu-system-loongarch64 -machine virt` | `loongarch64-qemu-virt` |
- Rust nightly toolchain (edition 2024)
- QEMU for target architectures
- rust-objcopy (`cargo install cargo-binutils`)

```bash
cargo install cargo-clone
cargo clone arceos-loadapp
cd arceos-loadapp
```
```bash
# Build and run on RISC-V 64 QEMU (default)
cargo xtask run

# Other architectures
cargo xtask run --arch aarch64
cargo xtask run --arch x86_64
cargo xtask run --arch loongarch64
```

Expected output:
```text
Load app from fat-fs ...
fname: /sbin/origin.bin
Wait for workers to exit ...
worker1 checks code:
0x10 0x21 0x32 0x43 0x54 0x65 0x76 0x87
worker1 ok!
Load app from disk ok!
```
```text
app-loadapp/
├── .cargo/
│   └── config.toml        # cargo xtask alias & AX_CONFIG_PATH
├── xtask/
│   └── src/
│       └── main.rs        # build/run tool (FAT32 disk image + QEMU)
├── configs/
│   ├── riscv64.toml
│   ├── aarch64.toml
│   ├── x86_64.toml
│   └── loongarch64.toml
├── src/
│   └── main.rs            # File open/read + worker thread
├── build.rs
├── Cargo.toml
└── README.md
```
| Component | Role |
|---|---|
| `axstd` | ArceOS standard library (`std::fs::File`, `std::io`, `std::thread`) |
| `axfs` | Filesystem module: mounts FAT32 on the VirtIO block device |
| `axdriver` | Device driver framework: VirtIO-blk via PCI bus |
| `axtask` | Task scheduler with CFS algorithm |
| `fatfs` (xtask) | Creates the FAT32 disk image with `/sbin/origin.bin` at build time |
The xtask tool uses the `fatfs` Rust crate to create a 64MB FAT32 disk image (`target/disk.img`):
- Allocates a 64MB raw file
- Formats it as FAT32 using `fatfs::format_volume()`
- Creates the `/sbin/` directory
- Writes `/sbin/origin.bin` with 64 bytes of sample binary data
- Attaches the image to QEMU as `-device virtio-blk-pci`

No external tools (mkfs.fat, mtools) are required.
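For reference, the image-creation step can be sketched with the `fatfs` crate's std-backed API (as in `fatfs` 0.3 with its default `std` feature). The size and paths follow the description above; the byte pattern only mirrors the first eight bytes of the expected output and is otherwise arbitrary. This is an illustrative sketch, not the actual xtask source:

```rust
use std::fs::OpenOptions;
use std::io::{Seek, SeekFrom, Write};

fn make_disk_image() -> std::io::Result<()> {
    // Allocate a 64MB raw file as the backing store for the disk image.
    let mut img = OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .open("target/disk.img")?;
    img.set_len(64 * 1024 * 1024)?;

    // Format the whole file as a FAT32 volume.
    fatfs::format_volume(
        &mut img,
        fatfs::FormatVolumeOptions::new().fat_type(fatfs::FatType::Fat32),
    )?;
    img.seek(SeekFrom::Start(0))?;

    // Mount the fresh volume and populate /sbin/origin.bin.
    let fs = fatfs::FileSystem::new(img, fatfs::FsOptions::new())?;
    let sbin = fs.root_dir().create_dir("sbin")?;
    let mut origin = sbin.create_file("origin.bin")?;

    // 64 bytes of sample data (illustrative pattern only).
    let data: Vec<u8> = (0u32..64).map(|i| (0x10 + 0x11 * i) as u8).collect();
    origin.write_all(&data)?;
    Ok(())
}
```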
Based on the arceos-loadapp kernel component and the reference code under the exercise directory, implement a kernel component named arceos-loadapp-ramfs--rename that supports two operations: rename and mv.
Inside the kernel, the component should make it possible to carry out the equivalent of the following operations:

```text
mkdir dira
rename dira dirb
echo "hello" > a.txt
rename a.txt b.txt
mv b.txt ./dirb
ls ./dirb
[Ramfs-Rename]: ok!
```
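Purely as an illustration, the in-kernel test sequence could be written roughly as follows, assuming your component exposes its rename/mv support through axstd's `std::fs`-style interface (the actual API surface is up to your implementation):

```rust
use std::fs::{self, File};
use std::io::Write;

fn main() {
    // mkdir dira; rename dira dirb
    fs::create_dir("dira").unwrap();
    fs::rename("dira", "dirb").unwrap();

    // echo "hello" > a.txt; rename a.txt b.txt
    File::create("a.txt").unwrap().write_all(b"hello\n").unwrap();
    fs::rename("a.txt", "b.txt").unwrap();

    // mv b.txt ./dirb (a rename across directories)
    fs::rename("b.txt", "./dirb/b.txt").unwrap();

    // ls ./dirb
    for entry in fs::read_dir("./dirb").unwrap() {
        println!("{:?}", entry.unwrap().file_name());
    }

    println!("[Ramfs-Rename]: ok!");
}
```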
This crate is part of a series of tutorial crates for learning OS development with ArceOS. The crates are organized by functionality and complexity progression:
| # | Crate Name | Description |
|---|---|---|
| 1 | arceos-helloworld | Minimal ArceOS unikernel application that prints Hello World, demonstrating the basic boot flow |
| 2 | arceos-collections | Dynamic memory allocation on a unikernel, demonstrating the use of String, Vec, and other collection types |
| 3 | arceos-readpflash | MMIO device access via page table remapping, reading data from QEMU's PFlash device |
| 4 | arceos-childtask | Multi-tasking basics: spawning a child task (thread) that accesses a PFlash MMIO device |
| 5 | arceos-msgqueue | Cooperative multi-task scheduling with a producer-consumer message queue, demonstrating inter-task communication |
| 6 | arceos-fairsched | Preemptive CFS scheduling with timer-interrupt-driven task switching, demonstrating automatic task preemption |
| 7 | arceos-readblk | VirtIO block device driver discovery and disk I/O, demonstrating device probing and block read operations |
| 8 | arceos-loadapp (this crate) | FAT filesystem initialization and file I/O, demonstrating the full I/O stack from VirtIO block device to filesystem |
| 9 | arceos-userprivilege | User-privilege mode switching: loading a user-space program, switching to unprivileged mode, and handling syscalls |
| 10 | arceos-lazymapping | Lazy page mapping (demand paging): user-space program triggers page faults, and the kernel maps physical pages on demand |
| 11 | arceos-runlinuxapp | Loading and running real Linux ELF applications (musl libc) on ArceOS, with ELF parsing and Linux syscall handling |
| 12 | arceos-guestmode | Minimal hypervisor: creating a guest address space, entering guest mode, and handling a single VM exit (shutdown) |
| 13 | arceos-guestaspace | Hypervisor address space management: loop-based VM exit handling with nested page fault (NPF) on-demand mapping |
| 14 | arceos-guestvdev | Hypervisor virtual device support: timer virtualization, console I/O forwarding, and NPF passthrough; guest runs preemptive multi-tasking |
| 15 | arceos-guestmonolithickernel | Full hypervisor + guest monolithic kernel: the guest kernel supports user-space process management, syscall handling, and preemptive scheduling |
Progression Logic:
- #1–#8 (Unikernel Stage): Starting from the simplest output, these crates progressively introduce memory allocation, device access (MMIO / VirtIO), multi-task scheduling (both cooperative and preemptive), and filesystem support, building up the core capabilities of a unikernel.
- #9–#11 (Monolithic Kernel Stage): Building on the unikernel foundation, these crates add user/kernel privilege separation, page fault handling, and ELF loading, progressively evolving toward a monolithic kernel.
- #12–#15 (Hypervisor Stage): Starting from minimal VM lifecycle management, these crates progressively add address space management, virtual devices, timer injection, and ultimately run a full monolithic kernel inside a virtual machine.
GPL-3.0-or-later OR Apache-2.0 OR MulanPSL-2.0