platform(x86_64): initial x86_64 bringup #216
Conversation
@saleemrashid, if you want to work on this, feel free to pick up from where i've left off! i left some notes to help you get started.

we'll want to add some bin crates in platforms/x86-64 for actually building mnemOS binaries with various bootloaders. i think the rust-osdev/bootloader crate is nice for testing in e.g. QEMU, and i already have code from Mycelium we can copy for implementing hal-x86-64's bootinfo trait for rust-osdev/bootloader. we can steal the build process from Mycelium's stupid, overengineered build tool, or follow the bootloader crate's documentation. any other bootloaders we want to support will need to either chainload into bootloader to populate its bootinfo, or have their own way of providing the hal-core::BootInfo trait, implemented in that bootloader's bin target.
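for a rough idea of the shape, here's a minimal sketch of what one of those bin crates might look like. `entry_point!` is rust-osdev/bootloader's real entry-point macro, but `RustOsdevBootInfo` and the `mnemos_x86_64_core` paths are hypothetical names for illustration:

```rust
// sketch of a platforms/x86-64 bootloader bin crate (names are assumptions)
#![no_std]
#![no_main]

use bootloader::entry_point;

entry_point!(kernel_main);

fn kernel_main(info: &'static mut bootloader::BootInfo) -> ! {
    // hypothetical wrapper implementing mycelium's `hal-core` BootInfo trait
    // on top of rust-osdev/bootloader's boot info struct
    let bootinfo = RustOsdevBootInfo::new(info);
    // TODO: pass the RSDP address here, if this bootloader provides one
    let kernel = mnemos_x86_64_core::init(&bootinfo, None);
    mnemos_x86_64_core::run(&bootinfo, kernel)
}
```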
platforms/x86-64/core/src/lib.rs (outdated)

```rust
    interrupt::enable_exceptions();
    bootinfo.init_paging();

    // TODO: init allocator!
```
the bootinfo will give us a memory map we can use to populate a kernel allocator. this should provide something for mnemos_alloc, but will also need to implement the page::Alloc trait from the mycelium HAL.

see https://github.com/hawkw/mycelium/blob/1f125194902cd4970b72eab0aa1d85d1b6ec1489/src/lib.rs#L118-L152
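the rough shape would be something like this (a sketch only: `KERNEL_ALLOC` and its `add_region` method are made-up names, while `BootInfo`, `memory_map()`, and `RegionKind` are from mycelium's hal-core):

```rust
// sketch: feed the bootinfo's free memory regions to the kernel allocator.
// `KERNEL_ALLOC` is a hypothetical static; the real thing must both back
// `mnemos_alloc` and implement mycelium's `hal_core::mem::page::Alloc`.
use hal_core::{boot::BootInfo, mem::RegionKind};

fn init_allocator(bootinfo: &impl BootInfo) {
    for region in bootinfo.memory_map() {
        if region.kind() == RegionKind::FREE {
            // safety: the bootloader promises these regions are unused RAM
            unsafe { KERNEL_ALLOC.add_region(region) };
        }
    }
}
```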
if we want to init tracing before we init the allocator, we will probably need a small bump region for the tracing subscriber's single arc allocation.
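something as dumb as this would probably do --- a minimal sketch, not mycelium's actual code, with all names made up:

```rust
// a tiny fixed-size bump region for early-boot allocations, before the real
// heap exists; just enough for the tracing subscriber's one arc allocation.
// (sketch only; all names here are made up.)
use core::{
    cell::UnsafeCell,
    sync::atomic::{AtomicUsize, Ordering},
};

const EARLY_HEAP_SIZE: usize = 1024;

struct EarlyBump {
    mem: UnsafeCell<[u8; EARLY_HEAP_SIZE]>,
    used: AtomicUsize,
}

// safety: all mutation goes through the atomic `used` offset, and handed-out
// byte ranges never overlap.
unsafe impl Sync for EarlyBump {}

static EARLY_BUMP: EarlyBump = EarlyBump {
    mem: UnsafeCell::new([0; EARLY_HEAP_SIZE]),
    used: AtomicUsize::new(0),
};

impl EarlyBump {
    /// Bump-allocate `size` bytes aligned to `align` (a power of two),
    /// returning `None` once the region is exhausted. Never frees.
    fn alloc(&self, size: usize, align: usize) -> Option<*mut u8> {
        let base = self.mem.get() as usize;
        let mut offset = self.used.load(Ordering::Relaxed);
        loop {
            // round the next free address up to the requested alignment
            let start = (base + offset + align - 1) & !(align - 1);
            let new_used = (start + size) - base;
            if new_used > EARLY_HEAP_SIZE {
                return None;
            }
            // CAS the new offset in; retry if another core raced us.
            match self.used.compare_exchange_weak(
                offset,
                new_used,
                Ordering::Relaxed,
                Ordering::Relaxed,
            ) {
                Ok(_) => return Some(start as *mut u8),
                Err(actual) => offset = actual,
            }
        }
    }
}
```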
platforms/x86-64/core/src/lib.rs (outdated)

```rust
pub mod interrupt;

pub fn init(bootinfo: &impl BootInfo, rsdp_addr: Option<PAddr>) -> &'static Kernel {
    // TODO: init early tracing?
```
would be nice to have framebuffer and/or UART tracing before the runtime comes up...
platforms/x86-64/core/src/lib.rs (outdated)

```rust
    // TODO: init allocator!

    // TODO: PCI?
```
probably punt on PCI for the initial bringup branch...
platforms/x86-64/core/src/lib.rs (outdated)

```rust
        }
    };

    // TODO: spawn drivers (UART, keyboard, ...)
```
most of this isn't needed for the initial bringup branch, but we probably need either emb_display (writing directly to the framebuf) or UART in order to prove we are alive. mycelium's hal-x86_64 crate has modules for the framebuffer as well as the 16550 UART, which will need to be wrapped with mnemOS-style driver services eventually (they would implement the EmbDisplayService and SimpleSerialService traits, respectively). see the sketch below for the rough shape.
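purely illustrative, with every name below hypothetical, the eventual driver shape would be something like:

```rust
// sketch of an eventual mnemOS-style serial driver service: one task owns
// the hal-x86_64 16550 UART and services SimpleSerialService requests.
// (all names here are hypothetical, sketched as comments rather than real
// registry calls.)
async fn serial_service(k: &'static Kernel, uart: Uart16550) {
    // 1. register `SimpleSerialService` with the kernel's driver registry
    // 2. loop: await the next request from a client
    // 3. translate each request into reads/writes on the 16550 `uart`
    //
    // the EmbDisplayService wrapper for the framebuffer would follow the
    // same pattern, blitting client draw commands into the framebuf.
}
```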
platforms/x86-64/core/src/lib.rs (outdated)

```rust
    loop {
        // Tick the scheduler
        // TODO(eliza): do we use the PIT or the local APIC timer?
        let start = todo!("current value of freewheeling timer");
        let tick = k.tick();

        // Timer is downcounting
        let elapsed = start.wrapping_sub(todo!("timer current value"));
        let turn = k.timer().force_advance_ticks(elapsed.into());

        // If there is nothing else scheduled, and we didn't just wake something up,
        // sleep for some amount of time
        if turn.expired == 0 && !tick.has_remaining {
            let wfi_start = todo!("timer current value");

            // TODO(AJM): Sometimes there is no "next" in the timer wheel, even though there should
            // be. Don't take lack of timer wheel presence as the ONLY heuristic of whether we
            // should just wait for SOME interrupt to occur. For now, force a max sleep of 100ms
            // which is still probably wrong.
            let amount = turn
                .ticks_to_next_deadline()
                .unwrap_or(todo!("figure this out"));

            todo!("reset timer");

            unsafe {
                interrupt::wait_for_interrupt();
            }
            // Disable the timer interrupt in case that wasn't what woke us up
            todo!("clear timer irq");

            // Account for time slept
            let elapsed = wfi_start.wrapping_sub(todo!("current timer value"));
            let _turn = k.timer().force_advance_ticks(elapsed.into());
        }
    }
```
this is missing timer stuff.

we will need to change this to actually use the timer. the other mnemOS platform impls use the timer in freewheeling mode, where the timer is configured to interrupt us at the time when the next timeout scheduled on the timer wheel expires. this is a bit different from how Mycelium currently uses the timer; it's configured in periodic mode and just pends a single tick every 10ms. the mnemOS way is a bit nicer imo, but somewhat more complex.

the mycelium HAL has modules for both the Programmable Interval Timer (PIT) and the local APIC timer. we should prefer the local APIC timer, as it's newer and allows each CPU core to have its own timer. however, my hal code currently only has a method to configure the local APIC timer in periodic mode rather than in freewheeling/oneshot mode, so we might want to add that in hal-x86_64...

we'll also need to be able to read the current count from that timer if we want our runloop to use a downcounting rather than periodic timer, so that we can know how much to advance the wheel by.

it might, alternatively, be simpler to start out with periodic mode, but our runloop would look somewhat different from other mnemOS impls... (see the sketch below)
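if we did go the periodic route, the shape would roughly be this (a sketch with made-up names; only `force_advance_ticks` is the kernel timer's real API, and ISR registration would go through hal-x86_64's interrupt code):

```rust
// sketch of the periodic-mode alternative: the timer ISR pends ticks, and
// the run loop drains them into the timer wheel.
use core::sync::atomic::{AtomicU64, Ordering};

/// Ticks pended by the timer ISR since the run loop last drained them.
static PENDING_TICKS: AtomicU64 = AtomicU64::new(0);

/// Called from the local APIC timer interrupt handler, once per period
/// (e.g. every 10ms, as in Mycelium).
fn timer_isr() {
    PENDING_TICKS.fetch_add(1, Ordering::Relaxed);
}

/// Called from the run loop: advance the wheel by however many periods
/// have elapsed since we last looked.
fn advance_wheel(k: &'static Kernel) {
    let ticks = PENDING_TICKS.swap(0, Ordering::Relaxed);
    let _turn = k.timer().force_advance_ticks(ticks);
}
```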
platforms/x86-64/core/src/lib.rs (outdated)

```rust
    k
}

pub fn run(bootinfo: &impl BootInfo, k: &'static Kernel) -> ! {
```
this will probably also need to take some kind of "CPU core context" eventually when we actually get to SMP support...
This is needed when building new versions of `llvm-tools`.
force-pushed from a295f34 to 1ba8998
force-pushed from 1ba8998 to 4e55131
This commit changes the Cargo workspace setup to put all crates in One Big Workspace, rather than having separate workspaces for some targets. We now use the `per-package-target` unstable cargo feature to build different crates for different targets. This means that `cargo` commands in the root workspace now work without requiring the user to `cd` into a particular directory to build a platform target --- for example, I can now run:

```shell
# in the repo root directory
$ cargo build -p mnemos-d1 --bin mq-pro
```

and build a MnemOS binary for the MQ Pro, without having to `cd` into the MQ Pro directory. This is also necessary in order to make the `x86_64` build process added in PR #216 work, since it relies on cargo artifact dependencies, which appear not to work across workspaces.

One issue is that `cargo build --workspace` (and `check --workspace`, etc) still does not work correctly, due to some [weird][1] [issues][2] with feature unification which I don't entirely understand. However, as a workaround, I've changed the workspace Cargo.toml to add a [`default-members` field][3], so that running `cargo build` or `cargo check` _without_ `--workspace` will build the subset of crates in the `default-members` key. This way, `cargo {build, check, etc}` in the repo root will do something reasonable by default, and the actual platform targets can be built/checked with `cargo $WHATEVER --package $CRATE`.

IMO, this is still substantially nicer than having a bunch of separate workspaces.

[1]: ia0/data-encoding#47
[2]: bincode-org/bincode#556
[3]: https://doc.rust-lang.org/cargo/reference/workspaces.html#the-default-members-field
; Conflicts:
;	.cargo/config.toml
;	Cargo.lock
;	Cargo.toml
;	justfile
Depends on #219.
This branch adds an initial implementation of an x86_64 platform target for mnemOS, using the mycelium x86 HAL. Currently, I haven't implemented drivers for the UART, keyboard, or emb-display service (using the framebuffer as the display), so we don't have SerMux, Forth, trace-proto, or anything else interesting. However, what we do have is a basic bootable image with a timer, allocator, and rudimentary kernel run loop, and --- as you can see here --- it works:
I've also added new `just` commands for building x86_64 images and running them in QEMU for development.
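For example (the recipe names here are guesses for illustration; check the justfile for the actual ones):

```console
# in the repo root directory
$ just build-x86   # build a bootable x86_64 image
$ just run-x86     # boot that image in QEMU
```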