
Attempting to de-constrain memory module #639

Closed
ethindp opened this issue Jul 8, 2019 · 29 comments

ethindp commented Jul 8, 2019

Right now I have the memory module as given in blog_os, with some modifications (mainly heap allocation -- thanks, phil-opp). I am attempting to remove the constraints on the module: right now I can't dynamically allocate pages outside of that module without an external memory mapper and frame allocator object, which I can't (and don't want to) pass around my entire kernel, and it's not going very well.
I am trying to write to memory-mapped registers at address 0xFEBF0000 (QEMU's Intel HDA device). Unfortunately, this address is not mapped, so I understandably get a page fault. I am therefore attempting to alter the module so I can map that region in the HDA driver and not at kernel load time (i.e., when heap allocation is taking place). Trying this:

use lazy_static::lazy_static;
use spin::Mutex;

lazy_static! {
    static ref MAPPER: Mutex<Option<&'static dyn Mapper<Size4KiB>>> = Mutex::new(None);
    static ref FRAME_ALLOCATOR: Mutex<Option<GlobalFrameAllocator>> = Mutex::new(None);
}

And then setting the mapper and frame allocator statics to Some() when they're set up by the kernel at runtime is failing with:

error[E0038]: the trait `x86_64::structures::paging::mapper::Mapper` cannot be made into an object
  --> src\memory.rs:14:1
   |
14 | / lazy_static! {
15 | | static ref MAPPER: Mutex<Option<&'static dyn Mapper<Size4KiB>>> = Mutex::new(None);
16 | | static ref FRAME_ALLOCATOR: Mutex<Option<GlobalFrameAllocator>> = Mutex::new(None);
17 | | }
   | |_^ the trait `x86_64::structures::paging::mapper::Mapper` cannot be made into an object
   |
   = note: method `map_to` has generic type parameters
   = note: method `identity_map` has generic type parameters
   = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

I've gotten this very error before when attempting to add a VFS driver with a trait that drivers could use to implement hooks for their own custom code, so that the VFS layer is truly what it needs to be. I solved that by temporarily removing that part (it was a struct field), but I can't do that here. If the memory module remains as it is in the tutorial, it will prohibit me from porting libraries like newlib or other C libraries that require functions like malloc(), free(), realloc(), and calloc(). This is problematic, especially if I want to make any programs run on my OS, as all of them require the C stdlib.

How would I then be able to do this without having to completely rewrite my memory module? I would like to keep the structure but make it less difficult to use.

As a side note: I am not using the heap allocator that the tutorial on Heap allocation suggests, but am rather using the slab allocator that Redox uses.

My other option would be to rewrite the OS in C. But that would be extremely painstaking to do, not to mention I would lose all of the benefits Rust has. Does anyone have any ideas on how I can do this?

phil-opp (Owner) commented Jul 8, 2019

Thanks for reporting! The problem is that the Mapper trait is not object safe because the map_to and identity_map functions are generic. I opened rust-osdev/x86_64#80 to track this issue.
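
To illustrate why this happens (a minimal, hypothetical sketch, not the actual Mapper trait): a dyn trait object dispatches calls through a vtable, and a method with its own type parameter would need a separate vtable entry for every possible instantiation, so the compiler refuses to build the object:

trait NotObjectSafe {
    // Needs one monomorphized copy per `P`, so it can't have a single vtable entry.
    fn map_page<P>(&mut self, page: P);
}

trait ObjectSafe {
    // A fixed signature is fine for `dyn ObjectSafe`.
    fn map_4kib_page(&mut self, addr: u64);
}

// fn broken(m: &mut dyn NotObjectSafe) {}   // error[E0038]: cannot be made into an object
fn works(m: &mut dyn ObjectSafe) { m.map_4kib_page(0x1000); }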

In the meantime, you should be able to directly store the MappedPageTable struct in the static (instead of a &dyn Mapper). You still need a dyn type for the phys_to_virt closure:

static MAPPER: Mutex<Option<MappedPageTable<'static, Box(dyn Fn(PhysFrame) -> *mut PageTable + Sync)>> = Mutex::new(None);

ethindp (Author) commented Jul 8, 2019

Doing that caused a whole host of issues. First, the syntax in your statement wasn't valid. Then I needed to pull in alloc::boxed::Box. Then I had to change it to:

static ref MAPPER: Mutex<Option<MappedPageTable<'static, Box<dyn Fn(PhysFrame) -> *mut PageTable + Sync>>>> = Mutex::new(None);

That, in turn, caused a huge number of issues, mainly:

error[E0277]: `(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Sync + 'static)` cannot be sent between threads safely
  --> src\memory.rs:16:1
   |
16 | / lazy_static! {
17 | | static ref MAPPER: Mutex<Option<MappedPageTable<'static, Box<dyn Fn(PhysFrame) -> *mut PageTable + Sync>>>> = Mutex::new(None);
18 | | static ref FRAME_ALLOCATOR: Mutex<Option<GlobalFrameAllocator>> = Mutex::new(None);
19 | | }
   | |_^ `(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Sync + 'static)` cannot be sent between threads safely
   |
   = help: the trait `core::marker::Send` is not implemented for `(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Sync + 'static)`
   = note: required because of the requirements on the impl of `core::marker::Send` for `core::ptr::Unique<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Sync + 'static)>`
   = note: required because it appears within the type `memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Sync + 'static)>`
   = note: required because it appears within the type `x86_64::structures::paging::mapper::mapped_page_table::PageTableWalker<memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Sync + 'static)>>`
   = note: required because it appears within the type `x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'static, memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Sync + 'static)>>`
   = note: required because it appears within the type `core::option::Option<x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'static, memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Sync + 'static)>>>`
   = note: required because of the requirements on the impl of `core::marker::Sync` for `spin::mutex::Mutex<core::option::Option<x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'static, memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Sync + 'static)>>>>`
   = note: required by `vga::lazy_static::lazy::Lazy`
   = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

phil-opp (Owner) commented Jul 8, 2019

Have you tried adding + Send after Sync?
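
That is, roughly (the same type as before, with Send added to the trait object bounds):

static ref MAPPER: Mutex<Option<MappedPageTable<'static, Box<dyn Fn(PhysFrame) -> *mut PageTable + Sync + Send>>>> = Mutex::new(None);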

ethindp (Author) commented Jul 8, 2019

That worked. But holy hell is this code disgusting.

phil-opp (Owner) commented Jul 8, 2019

That worked.

Perfect! Often you can directly follow the compiler errors: Send trait not implemented -> add a Send bound on the type parameter.

But holy hell is this code disgusting.

Just introduce a type alias for the dyn expression:

type PhysToVirt = Box<dyn Fn(PhysFrame) -> *mut PageTable + Send + Sync>;

static ref MAPPER: Mutex<Option<MappedPageTable<'static, PhysToVirt>>> = Mutex::new(None);

ethindp (Author) commented Jul 8, 2019

Yeah, I'll probably do that. I'll make a type alias for the entire type in that statement.

phil-opp (Owner) commented Jul 8, 2019

Ok, I think we can close this issue given that it's working now?

ethindp (Author) commented Jul 8, 2019

Yeah, I just need to fix the code I wrote so it works.

ethindp (Author) commented Jul 8, 2019

OK, so this isn't working very well. I've defined a type for the disgusting code:

type PageTableMapper = MappedPageTable<'static, Box<dyn Fn(PhysFrame) -> *mut PageTable + Sync + Send>>;

And that works:

static ref MAPPER: Mutex<Option<PageTableMapper>> = Mutex::new(None);

What doesn't work is when I go to actually initialize the mapper to something that I can use. The following doesn't work:

pub fn init(physical_memory_offset: u64, memory_map: &'static MemoryMap) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    *mapper = Some(unsafe { PageTableMapper::from(init_mapper(physical_memory_offset)) });
    *allocator = Some(unsafe { GlobalFrameAllocator::init(memory_map) });
}

And fails with

error[E0308]: mismatched types
   --> src\memory.rs:138:47
    |
138 | *mapper = Some(unsafe { PageTableMapper::from(init_mapper(physical_memory_offset)) });
    |                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable`, found opaque type
    |
    = note: expected type `x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'static, memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync + 'static)>>`
               found type `impl x86_64::structures::paging::mapper::MapperAllSizes`

I can't use an as type cast, because that doesn't work with complex types. I can't just do:

*mapper = Some(unsafe { init_mapper(physical_memory_offset) });

Because it gives me that same blasted error. The function it's calling is the same one (albeit renamed) as in blog_os:

unsafe fn init_mapper(physical_memory_offset: u64) -> impl MapperAllSizes {
    // Get active L4 table
    let (level_4_table, _) = get_active_l4_table(physical_memory_offset);
    // Convert the physical frame address into a virtual one
    let phys_to_virt = move |frame: PhysFrame| -> *mut PageTable {
        let phys = frame.start_address().as_u64();
        let virt = VirtAddr::new(phys + physical_memory_offset);
        virt.as_mut_ptr()
    };
    // And return it
    MappedPageTable::new(level_4_table, phys_to_virt)
}

Any ideas?

phil-opp (Owner) commented Jul 8, 2019

Your init_mapper has a return type of impl MapperAllSizes, which basically means "some type that implements the trait". For this reason, the Rust compiler doesn't know which concrete type is returned from that function.

To fix it, change the return type to your PageTableMapper alias:

unsafe fn init_mapper(physical_memory_offset: u64) -> PageTableMapper {}

You will need to box your phys_to_virt function for that.

ethindp (Author) commented Jul 8, 2019

OK, I'll try some combinations and close this issue if I get it resolved.

ethindp (Author) commented Jul 8, 2019

No, this isn't working. This function:

pub fn allocate_heap(start: u64, size: u64) {
    // Acquire mapper and allocator locks
    let (mapper, allocator) = (MAPPER.lock(), FRAME_ALLOCATOR.lock());
    // Determine if we've initialized the mapper and allocator yet
    if mapper.is_some() && allocator.is_some() {
        // Create a heap page range
        let mut mapper = mapper.unwrap();
        let mut allocator = allocator.unwrap();
        allocate_paged_heap(start, size, &mut mapper, &mut allocator).unwrap();
    } else {
        panic!("Attempted to allocate paged heap when mapper and allocator weren't initialized");
    }
}

Fails with:

error[E0507]: cannot move out of dereference of `spin::mutex::MutexGuard<'_, core::option::Option<x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'_, memory::alloc::boxed::Box<dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync>>>>`
   --> src\memory.rs:148:18
    |
148 | let mut mapper = mapper.unwrap();
    |                  ^^^^^^ move occurs because value has type `core::option::Option<x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'_, memory::alloc::boxed::Box<dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync>>>`, which does not implement the `Copy` trait

This is odd; I thought Option<...> implemented clone and copy traits? If I add .as_mut() to it, a different error occurs:

error[E0277]: the trait bound `&mut x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'static, memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync + 'static)>>: x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>` is not satisfied
   --> src\memory.rs:150:1
    |
150 | allocate_paged_heap(start, size, &mut mapper, &mut allocator).unwrap();
    | ^^^^^^^^^^^^^^^^^^^ the trait `x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>` is not implemented for `&mut x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'static, memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync + 'static)>>`
    |
    = help: the following implementations were found:
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size1GiB>>
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size2MiB>>
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>>
note: required by `memory::allocate_paged_heap`
   --> src\memory.rs:42:1
    |
42  | fn allocate_paged_heap(start: u64, size: u64, mapper: &mut impl Mapper<Size4KiB>, frame_allocator: &mut impl FrameAllocator<Size4KiB>)->Result<(), MapToError> {
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I've boxed the phys_to_virt function as you said:

unsafe fn init_mapper(physical_memory_offset: u64) -> PageTableMapper {
    // Get active L4 table
    let (level_4_table, _) = get_active_l4_table(physical_memory_offset);
    // Convert the physical frame address into a virtual one
    let phys_to_virt = Box::new(move |frame: PhysFrame| -> *mut PageTable {
        let phys = frame.start_address().as_u64();
        let virt = VirtAddr::new(phys + physical_memory_offset);
        virt.as_mut_ptr()
    });
    // And return it
    MappedPageTable::new(level_4_table, phys_to_virt)
}

ethindp (Author) commented Jul 8, 2019

Update: solved the problem, thanks to a not-so-dissimilar problem over at https://www.reddit.com/r/rust/comments/bbjoqq/how_to_fix_value_used_here_after_move_error_in
Edit: spoke too early. I've changed the function to:

pub fn allocate_heap(start: u64, size: u64) {
    let (mapper, allocator) = (MAPPER.lock(), FRAME_ALLOCATOR.lock());
    match (*mapper, *allocator) {
        (Some(mut m), Some(mut a)) => {
            allocate_paged_heap(start, size, &mut m, &mut a).unwrap();
        },
        _ => panic!("Cannot acquire mapper or frame allocator lock!")
    }
}

This (IMO) makes the function simpler and easier to read. But it still throws the same error as in the previous comment. I'm just confused now -- this doesn't make much sense to me.

phil-opp (Owner) commented Jul 8, 2019

If you can't take ownership of a value ("cannot move out of …"), try using a reference instead. For example, by using Some(ref mut m), or match (mapper, allocator) / match (&mut mapper, &mut allocator) (not sure which works, I don't have the time to try it right now).

ethindp (Author) commented Jul 8, 2019

Neither worked. I may need to rid the code of the mutex in this instance, even though I really don't want to do that.

phil-opp (Owner) commented Jul 8, 2019

The lock method returns a guard that derefs to a normal mutable reference, so you gain nothing by removing the mutex. Try the Option::as_mut() method.
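
A minimal sketch of that approach, using the allocate_heap function from earlier in this thread:

pub fn allocate_heap(start: u64, size: u64) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    // as_mut() borrows the contents of each Option instead of moving them out,
    // yielding &mut references that can be passed straight to allocate_paged_heap.
    match (mapper.as_mut(), allocator.as_mut()) {
        (Some(m), Some(a)) => allocate_paged_heap(start, size, m, a).unwrap(),
        _ => panic!("mapper or frame allocator not initialized"),
    }
}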

ethindp (Author) commented Jul 8, 2019

Changed it to:

pub fn allocate_heap(start: u64, size: u64) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    match (mapper.as_ref(), allocator.as_ref()) {
        (Some(mut m), Some(mut a)) => {
            allocate_paged_heap(start, size, &mut m, &mut a).unwrap();
        },
        _ => panic!("Cannot acquire mapper or frame allocator lock!")
    }
}

And get:

error[E0277]: the trait bound `&x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'_, memory::alloc::boxed::Box<dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync>>: x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>` is not satisfied
   --> src\memory.rs:147:1
    |
147 | allocate_paged_heap(start, size, &mut m, &mut a).unwrap();
    | ^^^^^^^^^^^^^^^^^^^ the trait `x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>` is not implemented for `&x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'_, memory::alloc::boxed::Box<dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync>>`
    |
    = help: the following implementations were found:
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size1GiB>>
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size2MiB>>
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>>
note: required by `memory::allocate_paged_heap`
   --> src\memory.rs:42:1
    |
42  | fn allocate_paged_heap(start: u64, size: u64, mapper: &mut impl Mapper<Size4KiB>, frame_allocator: &mut impl FrameAllocator<Size4KiB>)->Result<(), MapToError> {
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error[E0277]: the trait bound `&memory::GlobalFrameAllocator: x86_64::structures::paging::frame_alloc::FrameAllocator<x86_64::structures::paging::page::Size4KiB>` is not satisfied
   --> src\memory.rs:147:1
    |
147 | allocate_paged_heap(start, size, &mut m, &mut a).unwrap();
    | ^^^^^^^^^^^^^^^^^^^ the trait `x86_64::structures::paging::frame_alloc::FrameAllocator<x86_64::structures::paging::page::Size4KiB>` is not implemented for `&memory::GlobalFrameAllocator`
    |
    = help: the following implementations were found:
              <memory::GlobalFrameAllocator as x86_64::structures::paging::frame_alloc::FrameAllocator<x86_64::structures::paging::page::Size4KiB>>
note: required by `memory::allocate_paged_heap`
   --> src\memory.rs:42:1
    |
42  | fn allocate_paged_heap(start: u64, size: u64, mapper: &mut impl Mapper<Size4KiB>, frame_allocator: &mut impl FrameAllocator<Size4KiB>)->Result<(), MapToError> {
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If I change it to as_mut(), I get something similar:

error[E0277]: the trait bound `&mut x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'static, memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync + 'static)>>: x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>` is not satisfied
   --> src\memory.rs:147:1
    |
147 | allocate_paged_heap(start, size, &mut m, &mut a).unwrap();
    | ^^^^^^^^^^^^^^^^^^^ the trait `x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>` is not implemented for `&mut x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'static, memory::alloc::boxed::Box<(dyn core::ops::Fn(x86_64::structures::paging::frame::PhysFrame) -> *mut x86_64::structures::paging::page_table::PageTable + core::marker::Send + core::marker::Sync + 'static)>>`
    |
    = help: the following implementations were found:
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size1GiB>>
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size2MiB>>
              <x86_64::structures::paging::mapper::mapped_page_table::MappedPageTable<'a, PhysToVirt> as x86_64::structures::paging::mapper::Mapper<x86_64::structures::paging::page::Size4KiB>>
note: required by `memory::allocate_paged_heap`
   --> src\memory.rs:42:1
    |
42  | fn allocate_paged_heap(start: u64, size: u64, mapper: &mut impl Mapper<Size4KiB>, frame_allocator: &mut impl FrameAllocator<Size4KiB>)->Result<(), MapToError> {
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error[E0277]: the trait bound `&mut memory::GlobalFrameAllocator: x86_64::structures::paging::frame_alloc::FrameAllocator<x86_64::structures::paging::page::Size4KiB>` is not satisfied
   --> src\memory.rs:147:1
    |
147 | allocate_paged_heap(start, size, &mut m, &mut a).unwrap();
    | ^^^^^^^^^^^^^^^^^^^ the trait `x86_64::structures::paging::frame_alloc::FrameAllocator<x86_64::structures::paging::page::Size4KiB>` is not implemented for `&mut memory::GlobalFrameAllocator`
    |
    = help: the following implementations were found:
              <memory::GlobalFrameAllocator as x86_64::structures::paging::frame_alloc::FrameAllocator<x86_64::structures::paging::page::Size4KiB>>
note: required by `memory::allocate_paged_heap`
   --> src\memory.rs:42:1
    |
42  | fn allocate_paged_heap(start: u64, size: u64, mapper: &mut impl Mapper<Size4KiB>, frame_allocator: &mut impl FrameAllocator<Size4KiB>)->Result<(), MapToError> {
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I'm sorry if this is newbie stuff, it's just incredibly confusing.
Edit: got it to work. :)

ethindp (Author) commented Jul 8, 2019

OK. One last issue. I'm trying to initialize the memory allocator. The problem is that my page mapper uses Box<>, which requires a memory allocator (I think). If this is going to work, I'll need to map physical memory into the allocator instead of using memory pages. Thoughts/ideas/suggestions?

phil-opp (Owner) commented Jul 9, 2019

See https://os.phil-opp.com/heap-allocation/#creating-a-kernel-heap on how to map pages for the kernel heap. You can do that with the mapper instance before you initialize your static MAPPER, which requires the Box.
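
A rough sketch of that ordering (init_mapper_unboxed here is a hypothetical variant of init_mapper that returns a MappedPageTable with a plain, unboxed closure, and HEAP_START/HEAP_SIZE are the constants from the heap-allocation post):

pub fn init(physical_memory_offset: u64, memory_map: &'static MemoryMap) {
    let mut allocator = unsafe { GlobalFrameAllocator::init(memory_map) };
    {
        // 1. A temporary, unboxed mapper: no heap allocation needed yet.
        let mut mapper = unsafe { init_mapper_unboxed(physical_memory_offset) };
        // 2. Map the kernel heap pages so that Box allocations work afterwards.
        allocate_paged_heap(HEAP_START as u64, HEAP_SIZE as u64, &mut mapper, &mut allocator).unwrap();
    } // the temporary mapper is dropped here
    // 3. Only now create the boxed mapper and store everything in the statics.
    *MAPPER.lock() = Some(unsafe { init_mapper(physical_memory_offset) });
    *FRAME_ALLOCATOR.lock() = Some(allocator);
}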

ethindp (Author) commented Jul 9, 2019

Yes, but according to that code, I need an already existing mapper. I can't map the page range with the mapper before I initialize the mapper, because the mapper requires a heap, and I don't have a heap until I initialize the mapper. (That's very confusing.)
As an illustration of the conundrum I face, we have this type:

type PageTableMapper = MappedPageTable<'static, Box<dyn Fn(PhysFrame) -> *mut PageTable + Sync + Send>>;

And this static

static ref MAPPER: Mutex<Option<PageTableMapper>> = Mutex::new(None);

which expands to

static ref MAPPER: Mutex<Option<MappedPageTable<'static, Box<dyn Fn(PhysFrame) -> *mut PageTable + Sync + Send>>>> = Mutex::new(None);

That's fine. It... half works. I have my init function:

pub fn init(physical_memory_offset: u64, memory_map: &'static MemoryMap) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    *mapper = Some(unsafe { init_mapper(physical_memory_offset) });
    *allocator = Some(unsafe { GlobalFrameAllocator::init(memory_map) });
    let mut start_addr: u64 = 0;
    let mut end_addr = 0;
    for region in memory_map.iter() {
        if region.region_type == MemoryRegionType::Usable {
            start_addr = physical_memory_offset + region.range.start_addr();
            end_addr = physical_memory_offset + region.range.end_addr();
            while ((end_addr - start_addr) % 32768) != 0 {
                end_addr = end_addr - 1;
            }
            break;
        }
    }
    allocate_heap(start_addr, end_addr - start_addr);
}

and that function calls init_mapper():

unsafe fn init_mapper(physical_memory_offset: u64) -> PageTableMapper {
    // Get active L4 table
    let (level_4_table, _) = get_active_l4_table(physical_memory_offset);
    // Convert the physical frame address into a virtual one
    let phys_to_virt = Box::new(move |frame: PhysFrame| -> *mut PageTable {
        let phys = frame.start_address().as_u64();
        let virt = VirtAddr::new(phys + physical_memory_offset);
        virt.as_mut_ptr()
    });
    // And return it
    MappedPageTable::new(level_4_table, phys_to_virt)
}

And that's the issue, I think: the Box<> requires a heap, but this function also requires a heap to allocate the mapper, and to get a heap we need to map pages with the mapper. It's confusing, but I hope this code illustrates the problem.

phil-opp (Owner) commented Jul 9, 2019

Oh, I understand the issue now. This is indeed a bit tricky.

I'm about to add a new OffsetPageTable type to the x86_64 crate that should simplify things considerably (no closure or Box needed anymore). Perhaps you could try whether the new type works for you.

To try it, change the version of x86_64 to 0.7.2-beta.0 in your Cargo.toml. Then you can change your init_mapper function to the following:

use x86_64::structures::paging::OffsetPageTable;

pub unsafe fn init(physical_memory_offset: u64) -> OffsetPageTable<'static> {
    let level_4_table = active_level_4_table(physical_memory_offset);
    OffsetPageTable::new(level_4_table, physical_memory_offset)
}

Your MAPPER static can then become:

static ref MAPPER: Mutex<Option<OffsetPageTable>> = Mutex::new(None);

ethindp (Author) commented Jul 9, 2019

Gloriously, that worked! My statics now look like:

lazy_static! {
    /// The page table mapper (PTM) used by the kernel global memory allocator.
    static ref MAPPER: Mutex<Option<OffsetPageTable<'static>>> = Mutex::new(None);
    /// The global frame allocator (GFA); works in conjunction with the PTM.
    static ref FRAME_ALLOCATOR: Mutex<Option<GlobalFrameAllocator>> = Mutex::new(None);
}

Unfortunately, actually mapping the heap is not working. My kernel is being mapped into the really high regions of memory; the boot physical memory offset (or PMO, as I call it) is 0xFFFFFC0000000000. The parent of that page is apparently a huge page, which is prohibiting mapping to that page range:

Fatal error: panicked at 'Cannot allocate primary heap: ParentEntryHugePage', src\memory.rs:159:11

I have absolutely no idea why the bootloader is using such a ridiculous offset. (Isn't that offset outside the range of physical memory on any computer in existence?)
I can change the PMO via an environment variable, but I'd need to write some kind of batch/shell script to do that every time I build. Is there an easier way to do this (say, within Cargo.toml)?

phil-opp (Owner) commented Jul 9, 2019

Could you show me what you are trying to do? It sounds like you're confusing physical and virtual memory. The virtual address space is always 2^48 bytes large and is completely independent of the amount of physical memory.

ethindp (Author) commented Jul 9, 2019

Yep. I have a function, allocate_heap, that allocates me a heap:

pub fn allocate_heap(start: u64, size: u64) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    match (mapper.as_mut(), allocator.as_mut()) {
        (Some(m), Some(a)) => {
            allocate_paged_heap(start, size, m, a).unwrap();
        }
        _ => panic!("Cannot acquire mapper or frame allocator lock!"),
    }
}

But I can't call that within my init function since that function holds the mutex lock. So here's my init function:

pub fn init(physical_memory_offset: u64, memory_map: &'static MemoryMap) {
    let mut mapper = MAPPER.lock();
    let mut allocator = FRAME_ALLOCATOR.lock();
    *mapper = Some(unsafe { init_mapper(physical_memory_offset) });
    *allocator = Some(unsafe { GlobalFrameAllocator::init(memory_map) });
    // We risk over-allocating, but that's much better than under-allocating here.
    let mut start_addr: u64 = 0;
    let mut end_addr = 0;
    for region in memory_map.iter() {
        if region.region_type == MemoryRegionType::Usable {
            start_addr = physical_memory_offset + region.range.start_addr();
            end_addr = physical_memory_offset + region.range.end_addr();
            break;
        }
    }
    // We cannot call allocate_paged_heap here since we hold the spinlock, which would result in an endless lock acquisition attempt loop. Instead we call the function directly here.
    match (mapper.as_mut(), allocator.as_mut()) {
        (Some(m), Some(a)) => match allocate_paged_heap_with_perms(
            start_addr,
            end_addr - start_addr,
            m,
            a,
            PageTableFlags::PRESENT | PageTableFlags::WRITABLE | PageTableFlags::HUGE_PAGE,
        ) {
            Ok(()) => (),
            Err(e) => panic!("Cannot allocate primary heap: {:?}", e),
        },
        _ => panic!("Cannot acquire mapper or frame allocator lock!"),
    }
}

For reference, I have two functions:

  • allocate_paged_heap: allocates a heap with flags Writable and Present only.
  • allocate_paged_heap_with_perms: allocates a heap with the specified permissions.

Similarly, I have two other functions for mapping normal memory pages, allocate_page_range and allocate_page_range_with_perms. The code for allocate_paged_heap_with_perms looks like:

/// Allocates a paged heap with the specified permissions.
/// Possible permissions are:
///
/// * Writable: controls whether writes to the mapped frames are allowed. If this bit is unset in a level 1 page table entry, the mapped frame is read-only. If this bit is unset in a higher level page table entry the complete range of mapped pages is read-only.
/// * User accessible: controls whether accesses from userspace (i.e. ring 3) are permitted.
/// * Write-through: if this bit is set, a "write-through" policy is used for the cache, else a "write-back" policy is used. Types of caching is explained below.
/// * No cache: Disables caching for this memory page.
/// * Accessed: set by the CPU when the mapped frame or page table is accessed.
/// * Dirty: set by the CPU on a write to the mapped frame.
/// * Huge page: specifies that the entry maps a huge frame instead of a page table. Only allowed in P2 or P3 tables.
/// * Global: indicates that the mapping is present in all address spaces, so it isn't flushed from the TLB on an address space switch.
/// * bits 9, 10, 11, and 52-62: available to the OS, can be used to store additional data, e.g. custom flags.
/// * No execute: forbid code execution from the mapped frames. Can be only used when the no-execute page protection feature is enabled in the EFER register.
fn allocate_paged_heap_with_perms(
    start: u64,
    size: u64,
    mapper: &mut impl Mapper<Size4KiB>,
    frame_allocator: &mut impl FrameAllocator<Size4KiB>,
    permissions: PageTableFlags,
) -> Result<(), MapToError> {
    let page_range = {
        let heap_start = VirtAddr::new(start as u64);
        let heap_end = heap_start + size - 1u64;
        let heap_start_page = Page::containing_address(heap_start);
        let heap_end_page = Page::containing_address(heap_end);
        Page::range_inclusive(heap_start_page, heap_end_page)
    };
    for page in page_range {
        let frame = match frame_allocator.allocate_frame() {
            Some(f) => f,
            None => panic!("Can't allocate frame!"),
        };
        unsafe {
            mapper
                .map_to(page, frame, permissions, frame_allocator)?
                .flush()
        };
    }
    Ok(())
}

(Yes, I copied the descriptions for the flags from the x86_64 crate's docs, but that was because I couldn't find any other source that described them. I need to source them in that comment.)
So that's what I'm trying to do, and it's dying when trying to map the heap page because the parent is a huge page (or so I think; that's what that error means, if I understand it right).

phil-opp (Owner) commented Jul 9, 2019

The virtual address range that we choose for our heap is arbitrary and completely independent of the physical memory. As noted in https://os.phil-opp.com/heap-allocation/#creating-a-kernel-heap:

The first step is to define a virtual memory region for the heap. We can choose any virtual address range that we like, as long as it is not already used for a different memory region. Let's define it as the memory starting at address 0x_4444_4444_0000 so that we can easily recognize a heap pointer later:

// in src/allocator.rs

pub const HEAP_START: usize = 0x_4444_4444_0000;
pub const HEAP_SIZE: usize = 100 * 1024; // 100 KiB

You do this instead:

    // We risk over-allocating, but that's much better than under-allocating here.
    let mut start_addr: u64 = 0;
    let mut end_addr = 0;
    for region in memory_map.iter() {
        if region.region_type == MemoryRegionType::Usable {
            start_addr = physical_memory_offset + region.range.start_addr();
            end_addr = physical_memory_offset + region.range.end_addr();
            break;
        }
    }

This way, you choose the virtual address range of the physical memory mapping (set up by the bootloader). This violates the "as long as it is not already used for a different memory region" condition and is the cause of your error message.

Just replace this code with:

    let start_addr = 0x_4444_4444_0000;
    let end_addr = start_addr + 100 * 1024;

The 0x_4444_4444_0000 address is completely arbitrary. You can also choose 0x_5555_5550_0000, 0x_1000_1000_0000, or any other address, as long as it's not already used for something else.

ethindp (Author) commented Jul 9, 2019

True. My reasoning for using that was that I wanted to pick the first available "free" memory region, but I could also pick one arbitrarily. The issue then becomes "how do I detect whether it's used or not?" Can I just check the memory map for that?

phil-opp (Owner) commented Jul 9, 2019

The memory map tells you which physical memory regions are free. The page table tells you which virtual memory regions are free. Since page tables allow arbitrary mappings, these two regions can be completely different.

You got an error while mapping because you picked a virtual memory region that is not free (it was already mapped by the bootloader).
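
If you want to check programmatically whether a particular virtual address is already mapped, one option is the translate_addr method of the MapperAllSizes trait (implemented by OffsetPageTable), which returns None for unmapped addresses. A sketch:

use x86_64::{structures::paging::MapperAllSizes, VirtAddr};

/// Returns true if `addr` is currently mapped in the given page table.
fn is_mapped(mapper: &impl MapperAllSizes, addr: u64) -> bool {
    mapper.translate_addr(VirtAddr::new(addr)).is_some()
}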

ethindp (Author) commented Jul 9, 2019

It works! Thank you!

ethindp closed this as completed Jul 9, 2019

phil-opp (Owner) commented Jul 9, 2019

Great, you're welcome!
