platform(x86_64): initial x86_64 bringup #216

Merged: hawkw merged 35 commits into main from eliza/x86-bringup on Aug 9, 2023

Conversation

@hawkw (Contributor) commented Aug 2, 2023

Depends on #219.

This branch adds an initial implementation of an x86_64 platform target for
mnemOS, using the mycelium x86 HAL. Currently, I haven't implemented
drivers for the UART, keyboard, or emb-display service (using the framebuffer as
the display), so we don't have SerMux, Forth, trace-proto, or anything else
interesting. However, what we do have is a basic bootable image with a timer,
allocator, and rudimentary kernel run loop, and --- as you can see here --- it
works:

[screenshot: the x86_64 kernel booted in QEMU]

I've also added new just commands for building x86_64 images and running them
in QEMU for development.

@hawkw left a comment:

@saleemrashid, if you want to work on this, feel free to pick up from where I've left off! I left some notes to help you get started.

We'll want to add some bin crates in platform/x86-64 for actually building mnemOS binaries with various bootloaders. I think the rust-osdev/bootloader crate is nice for testing in e.g. QEMU, and I already have code from Mycelium we can copy for implementing hal-x86-64's bootinfo trait for rust-osdev/bootloader. We can steal the build process from Mycelium's stupid, overengineered build tool, or follow the bootloader crate's documentation. Any other bootloaders we want to support will need to either chainload into bootloader to populate its bootinfo, or have their own way of providing the hal-core::BootInfo trait, implemented in that bootloader's bin target.
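For a concrete picture, a QEMU-oriented bin crate's entry point might look roughly like the sketch below. Everything in it is an assumption for illustration: `RustOsDevBootInfo` is the hypothetical wrapper implementing the bootinfo trait, `mnemos_x86_64_core` is a placeholder crate name, and the entry-point shape assumes `bootloader_api` 0.11.

```rust
#![no_std]
#![no_main]

use hal_x86_64::PAddr; // import path assumed

bootloader_api::entry_point!(kernel_main);

fn kernel_main(raw: &'static mut bootloader_api::BootInfo) -> ! {
    // Grab the ACPI RSDP address, if the bootloader found one.
    let rsdp = raw.rsdp_addr.into_option().map(PAddr::from_u64);

    // Wrap rust-osdev's boot info in our bootinfo-trait impl
    // (`RustOsDevBootInfo` is hypothetical).
    let bootinfo = RustOsDevBootInfo::new(raw);

    let k = mnemos_x86_64_core::init(&bootinfo, rsdp);
    mnemos_x86_64_core::run(&bootinfo, k)
}

#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}
```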

```rust
interrupt::enable_exceptions();
bootinfo.init_paging();

// TODO: init allocator!
```
@hawkw:

The bootinfo will give us a memory map we can use to populate a kernel allocator. This should provide something for mnemos_alloc, but it will also need to implement the page::Alloc trait from the mycelium HAL.

see https://github.com/hawkw/mycelium/blob/1f125194902cd4970b72eab0aa1d85d1b6ec1489/src/lib.rs#L118-L152
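The linked code boils down to roughly this shape. A hedged sketch: the `memory_map()`/`RegionKind` names are from mycelium's hal-core, and `add_free_region` stands in for the eventual allocator that backs mnemos_alloc and implements page::Alloc.

```rust
use hal_core::{boot::BootInfo, mem::{Region, RegionKind}};

/// Walk the bootloader-provided memory map and hand every free region
/// to the kernel allocator.
pub fn init_allocator(bootinfo: &impl BootInfo) {
    for region in bootinfo.memory_map() {
        if region.kind() == RegionKind::FREE {
            add_free_region(region);
        }
    }
}

/// Placeholder: the real version feeds a static allocator that both
/// backs `mnemos_alloc` and implements mycelium's `page::Alloc` trait.
fn add_free_region(_region: Region) {
    todo!("kernel allocator not written yet")
}
```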

@hawkw:

If we want to init tracing before we init the allocator, we will probably need a small bump region for the tracing subscriber's single Arc allocation.
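Something like this minimal fixed bump region would cover that one early allocation. A sketch only; the size and names are invented, and the real thing would likely live behind mnemos_alloc:

```rust
use core::{
    alloc::Layout,
    cell::UnsafeCell,
    ptr::NonNull,
    sync::atomic::{AtomicUsize, Ordering},
};

/// A tiny fixed-size bump region for allocations made before the real
/// allocator is up (e.g. the tracing subscriber's single Arc).
pub struct BumpRegion<const SIZE: usize> {
    mem: UnsafeCell<[u8; SIZE]>,
    used: AtomicUsize,
}

// Safety: `used` is only advanced by compare-and-swap, so each
// successful `alloc` hands out a disjoint slice of `mem`.
unsafe impl<const SIZE: usize> Sync for BumpRegion<SIZE> {}

impl<const SIZE: usize> BumpRegion<SIZE> {
    pub const fn new() -> Self {
        Self {
            mem: UnsafeCell::new([0; SIZE]),
            used: AtomicUsize::new(0),
        }
    }

    /// Bump-allocate `layout`, returning `None` once the region is full.
    /// Nothing is ever freed; that's fine for one or two boot-time Arcs.
    pub fn alloc(&self, layout: Layout) -> Option<NonNull<u8>> {
        let base = self.mem.get() as usize;
        let mut cur = self.used.load(Ordering::Relaxed);
        loop {
            // Round the cursor up to `layout`'s alignment...
            let start = (base + cur).checked_add(layout.align() - 1)? & !(layout.align() - 1);
            // ...then reserve `layout.size()` bytes past it.
            let end = (start - base).checked_add(layout.size())?;
            if end > SIZE {
                return None; // out of bump space
            }
            match self
                .used
                .compare_exchange(cur, end, Ordering::AcqRel, Ordering::Relaxed)
            {
                Ok(_) => return NonNull::new(start as *mut u8),
                Err(actual) => cur = actual, // raced with another alloc; retry
            }
        }
    }
}
```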

```rust
pub mod interrupt;

pub fn init(bootinfo: &impl BootInfo, rsdp_addr: Option<PAddr>) -> &'static Kernel {
    // TODO: init early tracing?
```
@hawkw:

It would be nice to have framebuffer and/or UART tracing before the runtime comes up...


```rust
// TODO: init allocator!

// TODO: PCI?
```
@hawkw:

We should probably punt on PCI for the initial bringup branch...

```rust
}
};

// TODO: spawn drivers (UART, keyboard, ...)
```
@hawkw:

Most of this isn't needed for the initial bringup branch, but we probably need either emb_display (writing directly to the framebuffer) or the UART in order to prove we are alive. Mycelium's hal-x86_64 crate has modules for the framebuffer as well as the 16550 UART; these will eventually need to be wrapped with mnemOS-style driver services (implementing EmbDisplayService and SimpleSerialService, respectively).
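For a taste of the serial side, here is a minimal sketch of the blocking 16550 write path that a SimpleSerialService wrapper would sit on top of. The constants are the standard COM1 register layout; a real driver would go through hal-x86_64's UART support rather than raw port I/O like this.

```rust
use core::arch::asm;

const COM1: u16 = 0x3F8; // standard COM1 base I/O port
const LSR: u16 = COM1 + 5; // line status register
const THR_EMPTY: u8 = 1 << 5; // transmit holding register empty

/// Write a byte to an x86 I/O port.
unsafe fn outb(port: u16, val: u8) {
    asm!("out dx, al", in("dx") port, in("al") val);
}

/// Read a byte from an x86 I/O port.
unsafe fn inb(port: u16) -> u8 {
    let val: u8;
    asm!("in al, dx", in("dx") port, out("al") val);
    val
}

/// Busy-wait until the UART can accept a byte, then send it.
unsafe fn uart_write_byte(b: u8) {
    while inb(LSR) & THR_EMPTY == 0 {}
    outb(COM1, b);
}
```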

Comment on lines 51 to 86
```rust
loop {
    // Tick the scheduler
    // TODO(eliza): do we use the PIT or the local APIC timer?
    let start = todo!("current value of freewheeling timer");
    let tick = k.tick();

    // Timer is downcounting
    let elapsed = start.wrapping_sub(todo!("timer current value"));
    let turn = k.timer().force_advance_ticks(elapsed.into());

    // If there is nothing else scheduled, and we didn't just wake something up,
    // sleep for some amount of time
    if turn.expired == 0 && !tick.has_remaining {
        let wfi_start = todo!("timer current value");

        // TODO(AJM): Sometimes there is no "next" in the timer wheel, even though there should
        // be. Don't take lack of timer wheel presence as the ONLY heuristic of whether we
        // should just wait for SOME interrupt to occur. For now, force a max sleep of 100ms
        // which is still probably wrong.
        let amount = turn
            .ticks_to_next_deadline()
            .unwrap_or(todo!("figure this out"));

        todo!("reset timer");

        unsafe {
            interrupt::wait_for_interrupt();
        }
        // Disable the timer interrupt in case that wasn't what woke us up
        todo!("clear timer irq");

        // Account for time slept
        let elapsed = wfi_start.wrapping_sub(todo!("current timer value"));
        let _turn = k.timer().force_advance_ticks(elapsed.into());
    }
}
```
@hawkw:

This is missing the timer integration.

We will need to change this to actually use the timer. The other mnemOS platform impls use the timer in freewheeling mode, where the timer is configured to interrupt us at the time when the next timeout scheduled on the timer wheel expires. This is a bit different from how Mycelium currently uses the timer: there, it's configured in periodic mode and just pends a single tick every 10ms. The mnemOS way is a bit nicer IMO, but somewhat more complex.

The mycelium HAL has modules for both the Programmable Interval Timer (PIT) and the local APIC timer. We should prefer the local APIC timer, as it's newer and allows each CPU core to have its own timer. However, my HAL code currently only has a method to configure the local APIC timer in periodic mode, rather than in freewheeling/oneshot mode, so we might want to add that in hal-x86_64...

We'll also need to be able to read the timestamp from that timer if we want our runloop to use a downcounting rather than a periodic timer, so that we know how much to advance the wheel by.

It might, alternatively, be simpler to start out with periodic mode, but then our runloop would look somewhat different from the other mnemOS impls...
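To make that concrete, the run loop above really only needs something like the following from the HAL. The `OneshotTimer` trait is hypothetical (hal-x86_64 has no such API yet); it just names the three operations the `todo!()`s above are standing in for.

```rust
/// Hypothetical interface the run loop needs from a freewheeling timer;
/// none of these methods exist in hal-x86_64 yet.
trait OneshotTimer {
    /// Read the current value of the downcounting counter.
    fn current_count(&self) -> u32;
    /// Arm a one-shot interrupt to fire `ticks` from now.
    fn arm(&mut self, ticks: u32);
    /// Acknowledge and clear a pending timer interrupt.
    fn clear_irq(&mut self);
}

/// Elapsed ticks since `start` on a downcounting timer: the counter
/// moves toward zero, so elapsed = start - now, wrapping in case the
/// counter reloaded underneath us.
fn elapsed_ticks(timer: &impl OneshotTimer, start: u32) -> u32 {
    start.wrapping_sub(timer.current_count())
}
```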

```rust
    k
}

pub fn run(bootinfo: &impl BootInfo, k: &'static Kernel) -> ! {
```
@hawkw:

This will probably also need to take some kind of "CPU core context" eventually, when we actually get to SMP support...

@hawkw hawkw changed the title [WIP] start on x86 bringup platform(x86_64): initial x86_64 bringup Aug 8, 2023
@hawkw hawkw requested a review from jamesmunns August 8, 2023 17:43
@hawkw hawkw marked this pull request as ready for review August 8, 2023 17:45
hawkw added 9 commits August 9, 2023 09:23
This is needed when building new versions of `llvm-tools`.
This commit changes the Cargo workspace setup to put all crates in One
Big Workspace, rather than having separate workspaces for some targets.
We now use the `per-package-target` unstable cargo feature to build
different crates for different targets. This means that `cargo` commands
in the root workspace now work without requiring the user to `cd` into a
particular directory to build a platform target --- for example, I can
now run:

```console
# in the repo root directory
$ cargo build -p mnemos-d1 --bin mq-pro
```

and build a MnemOS binary for the MQ Pro, without having to `cd` into
the MQ Pro directory.

One issue is that `cargo build --workspace` (and `check --workspace`,
etc) still does not work correctly, due to some [weird][1] [issues][2]
with feature unification which I don't entirely understand. However, as
a workaround, I've changed the workspace Cargo.toml to add a
[`default-members` field][3], so that running `cargo build` or `cargo
check` _without_ `--workspace` will build the subset of crates in the
`default-members` key. This way, `cargo {build, check, etc}` in the repo
root will do something reasonable by default, and the actual platform
targets can be built/checked with `cargo $WHATEVER --package $CRATE`.
IMO, this is still substantially nicer than having a bunch of separate
workspaces.

[1]: ia0/data-encoding#47
[2]: bincode-org/bincode#556
[3]: https://doc.rust-lang.org/cargo/reference/workspaces.html#the-default-members-field
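For illustration, the layout described above looks roughly like this; the member names and target triple are examples, not the repo's actual lists.

```toml
# workspace-root Cargo.toml (sketch)
[workspace]
members = ["kernel", "platforms/*", "tools/*"]
# what a bare `cargo build` / `cargo check` operates on:
default-members = ["kernel"]
```

```toml
# a platform crate's Cargo.toml, opting into the unstable feature
cargo-features = ["per-package-target"]

[package]
name = "mnemos-d1"
forced-target = "riscv64imac-unknown-none-elf"
```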
@hawkw hawkw force-pushed the eliza/x86-bringup branch from a295f34 to 1ba8998 Compare August 9, 2023 19:33
hawkw added a commit that referenced this pull request Aug 9, 2023
The commit message is the same workspace-restructuring message as above, with one addition:
This is also necessary in order to make the `x86_64` build process added
in PR #216 work, since it relies on cargo artifact dependencies, which
appear not to work across workspaces.

hawkw added 2 commits August 9, 2023 13:59
```
; Conflicts:
;	.cargo/config.toml
;	Cargo.lock
;	Cargo.toml
;	justfile
```
@hawkw hawkw merged commit 0e003c3 into main Aug 9, 2023
@hawkw hawkw deleted the eliza/x86-bringup branch August 9, 2023 21:21
@jamesmunns jamesmunns added the platform: x86_64 Specific to the x86_64 hardware platform label Aug 10, 2023
@hawkw hawkw added this to the x86_64 basic bringup milestone Oct 6, 2023