
Comments for "https://os.phil-opp.com/double-fault-exceptions/" #449

Closed
utterances-bot opened this issue Jul 3, 2018 · 77 comments

utterances-bot commented Jul 3, 2018

This is a general purpose comment thread for the “Double Faults” post.


pbn4 commented Jul 3, 2018

Awesome stuff, waiting for more :)

@phil-opp
Owner

phil-opp commented Jul 5, 2018

@pbn4 Thank you :)

Contributor

mtn commented Jul 8, 2018

This is awesome -- thank you for the amazing work @phil-opp!

I had a newbie question: I understand why these handlers are useful in preventing everything from ending up in a restart loop, but what differences would an actual implementation have? For example, if a user is running a program that results in a page fault in their shell, the handler must be reporting that back to the application so it can surface it to the user, right?

@phil-opp phil-opp changed the title https://os.phil-opp.com/double-fault-exceptions/ Comments for "https://os.phil-opp.com/double-fault-exceptions/" Jul 9, 2018
@phil-opp
Owner

phil-opp commented Jul 9, 2018

@mtn Thank you!

I understand why these handlers are useful in preventing everything from ending up in a restart loop, but what differences would an actual implementation have?

Depends on the OS implementation and the fault type. For example, if the exception is caused because a userspace process tried to execute a privileged instruction, the kernel would simply kill the process (and the shell would report to the user that the process was killed).

For a page fault, the kernel can react in multiple ways. If it's just an out-of-bounds access to unmapped memory (like we do in the blog post), the kernel would kill the user program with a segmentation fault. However, most operating systems have a mechanism called swapping, where parts of the memory are moved to disk when the main memory becomes too full. Then a legitimate memory access could cause a page fault because the accessed data is no longer in memory. The OS can handle this page fault by loading the contents of the memory page from disk and continuing the interrupted process. This technique is called demand paging and makes it possible to run programs that wouldn't fit completely into memory.
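
As a rough illustration of that decision, here is a purely conceptual sketch (not code from the post; every type and helper in it is a hypothetical placeholder):

// Conceptual sketch only: all names below are hypothetical.
enum Backing {
    Unmapped,                        // access to memory that was never mapped
    SwappedOut { disk_slot: usize }, // page exists but was moved to disk
}

fn handle_page_fault(backing: Backing) {
    match backing {
        // Demand paging: load the page back from disk and let the CPU
        // retry the faulting instruction afterwards.
        Backing::SwappedOut { disk_slot } => {
            println!("loading page back from disk slot {}", disk_slot);
        }
        // Out-of-bounds access to unmapped memory: kill the program
        // with a segmentation fault.
        Backing::Unmapped => {
            println!("segmentation fault: terminating the process");
        }
    }
}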


Yeah! What you showed us is what we learned in our OS course.


montao commented Jul 21, 2018

Thanks for the post, Phil. I am going to catch up on your posts now that I have completed a B.Sc. in technology. I am going to continue with a master's in computer science, and your material is very helpful for understanding operating systems.

Small typo maybe: You spell it 0xdeadbeaf, but the common spelling is usually 0xdeadbeef, isn't it?

@phil-opp
Owner

@montao Congratulations on your degree!

Small typo maybe: You spell it 0xdeadbeaf, but the common spelling is usually 0xdeadbeef, isn't it?

Thanks! Fixed in f551116.


ghost commented Aug 13, 2018

Where should I go next after completing your tutorial? I am neither a beginner nor an expert in Rust, but I am interested in OS development and have followed your tutorial thoroughly. I would like to proceed further.

Thanks for making this series :)

@robert-w-gries
Contributor

@siddharthsymphony The OSDev Wiki is one of the best online resources for OS development. If you want more theoretical knowledge, take a look at Modern Operating Systems by Andrew Tanenbaum.


@siddharthsymphony I'm very interested in systems development. Are there any resources you recommend besides the OSDev Wiki and Modern Operating Systems by Andrew Tanenbaum?


tnargy commented Aug 21, 2018

@phil-opp I've really enjoyed your blog! The way you express the concepts makes it easy for me to follow. Are you still planning on handling interrupts from external devices in the next post?

@phil-opp
Owner

@tnargy Thank you! Yes, the next post will be about hardware interrupts. It will explore the programmable interrupt controller, timer interrupts, and keyboard interrupts. I already created a first draft that you can preview here (it's still a work in progress).

Contributor

Ben-PH commented Sep 20, 2018

Awesome stuff.

What was your source for learning this? If I wanted to contribute to the next posts while I'm having a go at them, do you have any recommended reading for that?

@phil-opp
Owner

phil-opp commented Oct 7, 2018

@Ben-PH Thanks!

I don't have a single source. It's a mix of what I learned at university, the OSDev wiki, Wikipedia, the Intel/AMD manuals, and various other resources. If you're looking for a book about the fundamentals of operating systems, I can recommend the free Three Easy Pieces.


Typo: becaues

@phil-opp
Owner

@kballard Thanks, fixed in f5b6b7a.


@phil-opp
Owner

@MinusGix Thanks for reporting! I fixed the link in 7e5757e.


gerowam commented Mar 19, 2019

General question here about integration testing. There's a lot of code to set up the test fixture that is replicated from non-test code (gdt.rs and interrupts.rs). This feels like it reduces the usefulness of the test because it would only catch problems introduced by changes to both the non-test code and the fixture.

Is there any way to reduce replicated code in integration tests like this?

@phil-opp
Owner

@gerowam The problem is that we want to do something completely different in our double fault handler (serial_println!("ok")), so we can't use the default double fault handler that loops endlessly. Since we want a custom double fault handler, we also need a custom IDT to define it. This replicates functionality from interrupts.rs and I agree that this is unfortunate, but I don't know how to avoid this.

However, we don't replicate any code from gdt.rs, but directly use blog_os::gdt::init(). Thus, the test is still useful, as it only works if gdt.rs sets up the GDT and the TSS correctly.
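
For reference, the duplicated part looks roughly like this (a sketch based on the stack_overflow integration test; the exact handler signature and helper paths depend on the x86_64 and blog_os versions): a separate IDT whose double fault handler reports success over the serial port instead of looping endlessly.

use lazy_static::lazy_static;
use x86_64::structures::idt::{InterruptDescriptorTable, InterruptStackFrame};

lazy_static! {
    static ref TEST_IDT: InterruptDescriptorTable = {
        let mut idt = InterruptDescriptorTable::new();
        unsafe {
            idt.double_fault
                .set_handler_fn(test_double_fault_handler)
                // reuse the IST entry set up by blog_os::gdt::init()
                .set_stack_index(blog_os::gdt::DOUBLE_FAULT_IST_INDEX);
        }
        idt
    };
}

pub fn init_test_idt() {
    TEST_IDT.load();
}

extern "x86-interrupt" fn test_double_fault_handler(
    _stack_frame: &mut InterruptStackFrame,
    _error_code: u64,
) -> ! {
    blog_os::serial_println!("ok");
    blog_os::exit_qemu(blog_os::QemuExitCode::Success);
    loop {}
}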


krsoninikhil commented Mar 23, 2019

Why is loop {} required at the end of the double fault handler? In its absence, the double fault handler seems to cause another double fault, so the handler runs in a loop. Could you please help me understand this?

@phil-opp
Owner

phil-opp commented Mar 25, 2019

@krsoninikhil For "fault" exceptions (such as a double fault Edit: This is wrong, see #449 (comment)), the CPU points the instruction pointer to the instruction that caused the exception. By returning from our double fault handler, the CPU restarts the same instruction again, which causes the same double fault. It is designed this way because the faulting instruction was not executed. By continuing with the next instruction we would effectively skip the faulting instruction, which is normally not what we want.

As an example to see where this behavior is useful, consider a page fault exception that occurs because a memory page is swapped out to disk. When the exception occurs, the page fault handler swaps in the page again. It then returns, which automatically restarts the faulting instruction that accessed the page. Since the page is present now, the instruction succeeds now and the program can continue as if no error occurred in between.

I hope this helps!

@krsoninikhil

Ah, that makes sense. Thanks, Philipp.


Are there any details from the original fault pushed on the stack or otherwise available to the double fault handler? I'm looking for a way to provide more details in the double fault error message (e.g., IDT[123] was not present)

@phil-opp
Owner

phil-opp commented Jul 2, 2019

Not really. The error code is always zero and even the saved instruction pointer is undefined.

If you want more information about the original fault, just add a handler function for it. For example, you can get the "IDT[123] was not present" message by adding a handler for the segment not present exception. This exception pushes a selector error code that tells you which table entry caused the issue. Note that issue rust-lang/rust#57270, which leads to wrong error codes in debug mode, is still open.
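
A hedged sketch of that (not code from the post; the exact handler signature depends on the x86_64 crate version, and println! is the macro from the VGA buffer post):

use x86_64::structures::idt::InterruptStackFrame;

extern "x86-interrupt" fn segment_not_present_handler(
    stack_frame: &mut InterruptStackFrame,
    error_code: u64,
) {
    // The pushed selector error code identifies the descriptor table
    // entry that caused the exception.
    println!(
        "EXCEPTION: SEGMENT NOT PRESENT (selector error code: {:#x})\n{:#?}",
        error_code, stack_frame
    );
}

// during IDT setup:
// idt.segment_not_present.set_handler_fn(segment_not_present_handler);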


faraazahmad commented Oct 11, 2019

This is my interrupts module

// ...
lazy_static! {
	static ref IDT: InterruptDescriptorTable = {
		let mut idt = InterruptDescriptorTable::new();
		idt.breakpoint.set_handler_fn(breakpoint_handler);
		// idt.double_fault.set_handler_fn(double_fault_handler);
		idt
	};
}

pub fn init_idt() {
	IDT.load();
}

extern "x86-interrupt" fn breakpoint_handler(stack_frame: &mut InterruptStackFrame) {
	println!("EXCEPTION: BREAKPOINT\n{:#?}", stack_frame);
}
// ...

and I call init_idt() from my main function, but I think I'm getting a triple fault (it keeps resetting). The double fault handler didn't help either. What do I do?

EDIT: So I had to increase the size of the stack and it worked! I feel very stupid right now haha


L3tum commented Oct 21, 2019

@phil-opp Regarding the duplicated code for the IDT integration test: Couldn't you "just" use the panic handler defined in the integration test here

#[panic_handler]
fn panic(info: &PanicInfo) -> ! {
    blog_os::test_panic_handler(info)
}

to exit with a success code instead of invoking the standard test panic handler? Since the regular double-fault handler panics, that would be caught by this handler, right?


@phil-opp: With SSE enabled there are nasty stack alignment issues in interrupt handlers that push an error code. I am trying to understand what is going on, but it looks like a compiler bug, and I have seen that you authored https://reviews.llvm.org/D30049, which tried to solve it but apparently doesn't fix the issue entirely. I get a crash when calling panic in my double fault handler, which shows that the stack is no longer properly aligned when the call to panic occurs.

What is the most appropriate place to report the bug and try to understand what is going on?

@andyhhp

andyhhp commented Feb 20, 2020

@phil-opp

I thought you were referring to context switches and saving all SSE state. For normal spilling of individual registers, movaps is of course the better choice.

TBH, for a kernel, you're much better off following the Linux/Unix route and disallowing all FPU/SSE generally. It's not useful for the majority of logic, and maintaining the function call ABI is unnecessary overhead.

There are specific areas where SSE/AVX optimised algorithms are a benefit (hashing/crypto code in particular), and they are best placed to know whether they can operate by spilling just one register, or whether the operation is so long that using xsave/xrstor is the sensible approach.

@phil-opp
Owner

TBH, for a kernel, you're much better off following the Linux/Unix route and disallowing all FPU/SSE generally.

I agree! I prefer a microkernel-like design where almost everything happens in userspace anyway, so the kernel should not do any complex calculations that would profit from SSE/AVX.


The segmentation fault test is failing if tested as a release build:
cargo xtest --release --test stack_overflow
First I thought that might be because the recursion gets optimised out, but that's not it.
Any idea?

@bjorn3
Contributor

bjorn3 commented Jun 7, 2020

  1. The stack_overflow function is tail recursive, which means that tail call optimization can just turn it into a jump to itself. Aka an infinite loop.
  2. LLVM considers infinite loops UB: https://blog.rust-lang.org/inside-rust/2020/03/19/terminating-rust.html

Combined, this makes LLVM emit a retq instruction instead of infinite recursion:

#[allow(unconditional_recursion)]
pub fn stack_overflow() {
    stack_overflow(); // for each recursion, the return address is pushed
}
playground::stack_overflow:
	retq

Inserting a volatile_load after the call defeats this optimization:

#![feature(core_intrinsics)]

#[allow(unconditional_recursion)]
pub fn stack_overflow() {
    stack_overflow();
    unsafe { std::intrinsics::volatile_load(&0); }
}
playground::stack_overflow: # @playground::stack_overflow
# %bb.0:
	pushq	%rax
	callq	*playground::stack_overflow@GOTPCREL(%rip)
	movl	.L__unnamed_1(%rip), %eax
	popq	%rax
	retq
                                        # -- End function

.L__unnamed_1:
	.zero	4


Thanks a lot, I was only thinking of completely removing the calls and not of tail-call optimization (I really should have known better as a Lisp fan), so my version, which printed the iteration number, was still tail-call optimised and didn't grow the stack.
Also thanks for the tip about using volatile_load.

This is an awesome series, so polished, I am having a lot of fun with it, thank you so much for all the work you put into it.

@bjorn3
Contributor

bjorn3 commented Jun 7, 2020

@boris-tschirschwitz It is @phil-opp who created this series, not me. :)

@phil-opp
Owner

phil-opp commented Jun 8, 2020

This is an awesome series, so polished, I am having a lot of fun with it, thank you so much for all the work you put into it.

Great to hear that, thanks a lot :).

@phil-opp
Owner

phil-opp commented Jun 8, 2020

I opened #818 to add the volatile read to the stack_overflow test. Thanks for the suggestion, @bjorn3!


them0ntem commented Jul 26, 2020

With x86_64 version 0.11.1, I'm getting a triple fault (it keeps resetting).

I downgraded the x86_64 version to =0.11.0. No triple fault, but the whole stack frame doesn't get printed.
Blog OS: Double Fault Exception

I also tried the post-06 branch from https://github.com/phil-opp/blog_os; the full interrupt stack frame doesn't get printed for either x86_64 version.

$ rustc --version
rustc 1.47.0-nightly (d6953df14 2020-07-25)

EDIT1: print! works fine. The issue is with the panic! macro printing the stack frame.

@phil-opp
Owner

@themontem Only a single change happened in version 0.11.1: https://github.com/rust-osdev/x86_64/blob/master/Changelog.md#0111 . It only exports two error types in the API, so there were no behavior-related changes.

The printing problem is probably #831. I'm not sure about the triple fault, but normally this indicates that your exception handler itself causes another exception. Can you reproduce the triple fault with the post-06 branch?


@phil-opp I followed your tutorial, which is amazing, but at this step I found a weird problem. If I write the double fault handler like this, the machine doesn't get the triple fault error:


extern "x86-interrupt" fn double_fault_handler(
    stack_frame: &mut InterruptStackFrame,
    error_code: u64,
) -> ! {
    // TODO ask about this
    println!("EXCEPTION: BREAKPOINT\n{:#?}", stack_frame);
    panic!();
}

But if I write it like this:


extern "x86-interrupt" fn double_fault_handler(
    stack_frame: &mut InterruptStackFrame,
    error_code: u64,
) -> ! {
    // TODO ask about this
    panic!("EXCEPTION: BREAKPOINT\n{:#?}", stack_frame);
}

I get a triple fault. Any idea why that might happen? From a basic look I see that the panic macro calls $crate::rt::begin_panic_fmt(&$crate::format_args!($fmt, $($arg)+)) before calling our panic handler; could that be the problem?

@phil-opp
Owner

phil-opp commented Aug 2, 2020

@nicolae536 Phew, that was not easy to debug! I was able to reproduce the error and tracked it using GDB through the complete formatting code of the core library. For me, a function pointer was unexpectedly zero at some point, which led to a page fault when the function was called. After double-checking the ELF loading code of the bootloader crate, the Debug implementation in the x86_64 crate, and our VGA code, I finally understood what was going on:

The problem was that the double fault stack overflowed. Since this stack is defined as a normal static mut, it has no guard page, so the overflow wasn't detected. Instead, it silently corrupted some adjacent data in the .data section of the executable, resulting in the unexpected null pointer.

The fix is simple: Increase the stack size of the double fault stack by adjusting the STACK_SIZE constant in gdt.rs. For example, set it to const STACK_SIZE: usize = 4096 * 5. It's also worth noting that running in --release mode also works with the smaller stack size because the code is more optimized then. (You need to ensure that the stack_overflow method is not optimized into a loop in --release mode, e.g. by doing a volatile read in it.)

I will update the post to use a larger stack size. Thanks a lot for reporting this!
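
For anyone reading along, the relevant part of gdt.rs then looks roughly like this with the increased stack size (a sketch; the exact imports depend on the x86_64 crate version):

use lazy_static::lazy_static;
use x86_64::structures::tss::TaskStateSegment;
use x86_64::VirtAddr;

pub const DOUBLE_FAULT_IST_INDEX: u16 = 0;

lazy_static! {
    static ref TSS: TaskStateSegment = {
        let mut tss = TaskStateSegment::new();
        tss.interrupt_stack_table[DOUBLE_FAULT_IST_INDEX as usize] = {
            const STACK_SIZE: usize = 4096 * 5; // increased from a single page
            static mut STACK: [u8; STACK_SIZE] = [0; STACK_SIZE];

            // no guard page, so an overflow of this stack silently
            // corrupts adjacent memory
            let stack_start = VirtAddr::from_ptr(unsafe { &STACK });
            let stack_end = stack_start + STACK_SIZE;
            stack_end // stacks grow downwards, so return the top address
        };
        tss
    };
}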


@phil-opp Thanks a lot. I also tried to implement what you mentioned here, but somehow I'm kind of stuck because I cannot have a global mut ref to WrappedWriter and I cannot call write_fmt on it. I reached the conclusion that the only way to implement this properly is to make our Writer implement Sync + Send and do the work regarding the thread safety there. Maybe you can have a look over my questions there, they might be a bit silly.


@themontem @phil-opp I just got the same error he had (on my screen there was just a "panicked at" without a new line, very weird). Since it wasn't a triple fault, I thought it could be a problem with the stack, so I increased the size of the stack and it worked just fine. 4096 * 5 seems to work, but how can the panic / print functions take more than 10kB?

PS. Very good posts though, I'm enjoying your work! Thank you!

@bjorn3
Contributor

bjorn3 commented Aug 12, 2020

Are you building in release mode? Also, the formatting machinery is optimized for code size as opposed to runtime performance and stack size. For example, it tries hard to prevent inlining.


TeoBernier commented Aug 12, 2020

@bjorn3 I wasn't building in release mode, nor in optimized mode, which may explain why I only got the "panicked at" ahah, thanks for the explanation!

@phil-opp
Owner

@Bari0th Stack overflows are undefined behavior, so anything can happen when they occur. Depending on the memory that is overwritten and the values on the stack, it can result in a triple fault, some other exception, wrong behavior, silent memory corruption, etc.

In your case, the stack overflow happened when it tried to print the file and line information. Apparently it broke the formatting code without causing a triple fault, but even a slight change in your code might change this behavior.

Like bjorn3 said, the code is much more optimized when compiling in release mode, so that a smaller stack might suffice.

PS. Very good posts though, I'm enjoying your work! Thank you!

Thanks!

@phil-opp
Owner

I will update the post to use a larger stack size. Thanks a lot for reporting this!

Done in 0425bd3 and 817e36c.

@ferbass

ferbass commented Sep 28, 2020

Hey @phil-opp thanks again for the great tutorial.

About the tests: I understand stack overflows are undefined behavior and we should run them using release mode, but I think we can add the optimization level to the test profile and run the tests without using release mode.

I did some tests on my end and I realized that if we add opt-level = 1 in the profile.test section of Cargo.toml for basic optimizations, as described here https://doc.rust-lang.org/cargo/reference/profiles.html#opt-level, we are able to run the tests without forcing release mode.
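
For clarity, the suggested Cargo.toml change would look like this (just what the comment above describes, not something the post itself recommends):

# enable basic optimizations for test builds only
[profile.test]
opt-level = 1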

I added this change to my repo you can check it out if you want https://github.com/ferbass/gat_os/pull/1/files#diff-80398c5faae3c069e4e6aa2ed11b28c0R27

Do you think this is a valid setup to use or should we avoid opt-level for profile.test?

Thank you in advance.

--
ferbass

@Qubasa

Qubasa commented Oct 12, 2020

Hi @phil-opp,

I have trouble understanding why a kernel code segment is needed.
I read that some instructions check for the permission level in the cs register.
That's why you need a kernel cs and a user cs.
What I do not understand is whether it is used for something else besides that.
The cs register can only map to 4 GB of RAM, so what if I address the 5th GB?
Is this not important because the cs register is only used for CPL validation?
Also I guess it's needed to transition from real mode to long mode in the bootloader?

Adding a little footnote in the blog would be nice I think.

Thanks in advance, and keep up the great work! :-)

@GuillaumeDIDIER

In 64-bit mode, segmentation is mostly deactivated, apart from the privilege level bits. In 16-bit and 32-bit mode, however, segmentation is mandatory and correct cs, ss, and ds segments are usually needed, with correct bits (but often a 0 base address anyway).

@phil-opp
Owner

@ferbass

About the tests: I understand stack overflows are undefined behavior and we should run them using release mode, but I think we can add the optimization level to the test profile and run the tests without using release mode.

Let me clarify my above comments a bit: Stack overflows on the main kernel stack are not undefined behavior because the bootloader creates a special unmapped page called a guard page at the bottom of this stack. Thus, a stack overflow results in a page fault and no memory is corrupted.

The problem is/was that the double fault stack that we create in this post doesn't have such a guard page yet (we will improve this in a future post). Thus, a stack overflow is undefined behavior as it overwrites other data that might still be needed. While compiling with optimizations reduces stack size and can thus avoid these stack overflows in some cases, this is merely a workaround and not a valid solution to the problem. Instead, the double fault stack should still be large enough to work in debug mode too. For this reason I increased the stack size for the double fault stack, so that stack overflows should no longer occur even in debug mode, provided that you keep the double fault handler minimal.

It's important to note that this problem is not exclusive to tests. It can also occur during normal execution, e.g. if we accidentally write a function with endless recursion. Since we don't want any undefined behavior in this case, even when running in debug mode, the double fault stack should be large enough for this. So changing the optimization level for tests is not a good solution to this problem, because if a test fails in debug mode, a normal cargo run in debug mode might fail in the same way.

Do you think this is a valid setup to use or should we avoid opt-level for profile.test?

In general, I don't think that changing the test optimization level is problematic. For example, it might be a valid way to speed up a test suite in some cases. However, the program/kernel/etc should still work in debug mode, so optimizing the tests only to avoid some runtime problems is not a good idea.

@phil-opp
Owner

@luis-hebendanz As @GuillaumeDIDIER said, segmentation is mostly deactivated in 64-bit mode. The x86_64 architecture still requires a code segment for historical purposes, even though most of its content is ignored. It is still used for specifying the privilege level and for putting the CPU in 64-bit mode.
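
For context, this is roughly what the post's gdt.rs does about it (a sketch; newer x86_64 crate versions rename some of the involved functions): a kernel code segment descriptor is added to the GDT and loaded into cs, alongside the TSS descriptor.

use lazy_static::lazy_static;
use x86_64::structures::gdt::{Descriptor, GlobalDescriptorTable, SegmentSelector};

struct Selectors {
    code_selector: SegmentSelector,
    tss_selector: SegmentSelector,
}

lazy_static! {
    static ref GDT: (GlobalDescriptorTable, Selectors) = {
        let mut gdt = GlobalDescriptorTable::new();
        // Still required in 64-bit mode: the entry sets the privilege
        // level and the long-mode flag, even though base and limit are
        // ignored. TSS refers to the static defined elsewhere in gdt.rs.
        let code_selector = gdt.add_entry(Descriptor::kernel_code_segment());
        let tss_selector = gdt.add_entry(Descriptor::tss_segment(&TSS));
        (gdt, Selectors { code_selector, tss_selector })
    };
}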

@slinkydeveloper

Hi @phil-opp, thanks fo this amazing guide!

I was wondering why the stack size is fixed to 4096 * 5 and then I just stumbled upon this:

The fix is simple: Increase the stack size of the double fault stack by adjusting the STACK_SIZE constant in gdt.rs. For example, set it to const STACK_SIZE: usize = 4096 * 5. It's also worth noting that running in --release mode also works with the smaller stack size because the code is more optimized then. (You need to ensure that the stack_overflow method is not optimized into a loop in --release mode, e.g. by doing a volatile read in it.)

I think it would be nice if you could add a comment about that stack size in the post 😄


ivfranco commented May 3, 2021

I think crate::interrupts::init_idt should be unsafe; it now depends on an entry in the IST that's not there if crate::gdt::init is not called, allowing access to uninitialized memory in safe code.
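
A sketch of what that suggestion might look like (the wording of the safety comment is just an illustration):

/// Loads the IDT.
///
/// # Safety
/// The caller must ensure that crate::gdt::init has already been called,
/// so that the IST entry referenced by the double fault handler points
/// to a valid stack.
pub unsafe fn init_idt() {
    IDT.load();
}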


I followed the post up to the point where the basic double fault handler is implemented. Adding the double_fault_handler function does not change the behavior when triggering the page fault. The kernel still ends up in a boot loop.

I saw that you pushed some changes a few days ago. Might this be related? I am using version 0.14.2 of x86_64.

PS. Thanks for this great blog. I learned a lot so far!

Repository owner locked and limited conversation to collaborators Jun 12, 2021

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
