Support nested virtualization (for rusty-hermit) #6
I'd like to add the output of lscpu for both native and virtual Ubuntu. Since all the necessary information is available in the VirtualBox guest, uhyve should be able to provide the clock speed to rusty-hermit. I did not really understand how this happens, but I also double-checked that uhyve drops error messages when running with nested virtualization: I compiled and inspected the same program with gdb and verified that they both panic at the same place.

Screenshots:
- lscpu, native Ubuntu
- lscpu, virtualized Ubuntu (KVM hypervisor)
- panic with uhyve running on virtual Ubuntu
- panic with uhyve running on native Ubuntu
Hm, I have a similar setup and it works for me. Can you check whether https://github.com/hermitcore/uhyve/blob/master/src/vm.rs#L683 determines the correct frequency on your system? Which Linux kernel do you use?
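For reference, host-side detection on Linux typically boils down to parsing /proc/cpuinfo; a minimal sketch of that approach (my own illustration, not the exact uhyve code at that line) would be:

```rust
use std::fs;

/// Sketch of host-side frequency detection: parse the first
/// "cpu MHz" entry from /proc/cpuinfo and round it to whole MHz.
/// (Hypothetical helper for illustration only.)
fn detect_cpu_freq_from_proc() -> Option<u32> {
    let cpuinfo = fs::read_to_string("/proc/cpuinfo").ok()?;
    cpuinfo
        .lines()
        .find(|line| line.starts_with("cpu MHz"))
        .and_then(|line| line.split(':').nth(1))
        .and_then(|value| value.trim().parse::<f64>().ok())
        .map(|mhz| mhz.round() as u32)
}

fn main() {
    match detect_cpu_freq_from_proc() {
        Some(mhz) => println!("detected {} MHz", mhz),
        None => eprintln!("could not determine the processor frequency"),
    }
}
```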
On native Ubuntu the frequency is 3500, which is correct. This behavior is expected, I guess, since rusty-hermit on uhyve on native Ubuntu also can't read out the processor frequency with the CpuId crate. This implies that CPUID doesn't work in (some) virtual environments. Kernel version of the virtual Ubuntu (Vagrant box):
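For completeness, the guest-side check with the raw-cpuid crate boils down to roughly the following sketch; under (nested) virtualization, CPUID leaf 0x16 is often missing or zeroed:

```rust
use raw_cpuid::CpuId;

fn main() {
    let cpuid = CpuId::new();
    // CPUID leaf 0x16 reports the base frequency in MHz on newer Intel CPUs.
    // Under (nested) virtualization the leaf may be absent or report 0.
    match cpuid.get_processor_frequency_info() {
        Some(info) if info.processor_base_frequency() > 0 => {
            println!("base frequency: {} MHz", info.processor_base_frequency())
        }
        _ => eprintln!("CPUID leaf 0x16 unavailable or zero (common in VMs)"),
    }
}
```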
#9 works on my local machine: uhyve on virtual Ubuntu can detect the CPU frequency on my computer. However, this doesn't work everywhere. For example, it doesn't work on Travis: https://travis-ci.com/github/jschwe/rusty-hermit/jobs/326283414 When looking at the job log you can also see that the original error reason, which should have been "Could not determine the processor frequency" due to the failed expect, is not printed. This only happens in nested environments. It is also not completely consistent, since there were cases where I have seen the reason for a panic printed out on Travis or in my VirtualBox VM. I'll try to investigate this further.
Before adding the prints, the error contained much less info. You can view the changes I made here: hermit-os/hermit-rs#5
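For context, the panic comes from an expect on the detection result; a minimal stand-in (with hypothetical names, not the actual rusty-hermit identifiers) that produces the expected message:

```rust
// Minimal stand-in for the failing code path (names are hypothetical).
fn detect_cpu_frequency() -> Option<u32> {
    // All detection methods fail under nested virtualization.
    None
}

fn main() {
    // If this expect fires, the message below should appear in the panic
    // output -- in the nested setup it sometimes doesn't get printed.
    let freq = detect_cpu_frequency().expect("Could not determine the processor frequency");
    println!("{} MHz", freq);
}
```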
I've now also tested this with my second Travis pipeline.
With the added debug prints I actually get the expected error message this time:

However, either the kernel or uhyve doesn't terminate correctly: it hangs and is terminated by Travis after 10 minutes. I recall having seen this behaviour two or three times locally, too. Something strange is definitely going on here. Could there be some kind of race condition in the panic handler?
@jschwe Can you check whether hermitcore/libhermit-rs#48 determines the CPU frequency correctly on your test setup?
@stlankes I checked, and this does not determine the CPU frequency on Travis. It might work if we use the cpuid function from uhyve, so I'll test that when I have time and write an update here.
Update: Using this method in uhyve also doesn't work on Travis. raw_cpuid is able to detect that the hypervisor is KVM, but returns a frequency of 0.
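Concretely, the check I'm describing looks roughly like this sketch (raw-cpuid crate; the hypervisor TSC leaf is 0x4000_0010 in the KVM CPUID convention):

```rust
use raw_cpuid::CpuId;

fn main() {
    let cpuid = CpuId::new();
    match cpuid.get_hypervisor_info() {
        Some(hv) => {
            // Identifying the hypervisor works (reports KVM here) ...
            println!("hypervisor: {:?}", hv.identify());
            // ... but the TSC frequency leaf comes back empty or zero
            // on Travis.
            match hv.tsc_frequency() {
                Some(khz) if khz > 0 => println!("TSC: {} kHz", khz),
                _ => eprintln!("hypervisor TSC frequency unavailable or 0"),
            }
        }
        None => eprintln!("no hypervisor CPUID leaf present"),
    }
}
```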
Do you still sometimes receive a page fault?
Currently I'm not experiencing any panics, so I'm not seeing any page faults when running rusty-demo either. However, I believe there is still an issue with the panic_handler, since I can still reproduce the page fault by deliberately panicking (hermit-os/kernel#43).
When using nested virtualization, rusty-hermit currently panics while detecting the CPU frequency, since all detection methods fail. uhyve should provide the CPU frequency even in a nested virtualization environment. This can be done either by ensuring detect_from_hypervisor() works or by modifying the CPUID brand string to contain the clock speed (see the sketch below). Also (this still needs some more testing on my side), uhyve should print error messages and quit in the same way for nested virtualization as it does for normal virtualization; currently uhyve seems to print less when using nested virtualization.
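For illustration, a sketch of the brand-string fallback (the parsing logic here is my own, not the actual libhermit-rs code): many CPUs embed the nominal frequency at the end of the CPUID brand string, and if uhyve rewrote the brand string it passes to the guest, rusty-hermit could parse it.

```rust
// Hypothetical helper: extract the clock speed from a CPUID brand
// string such as "Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz".
fn freq_from_brand_string(brand: &str) -> Option<u32> {
    let ghz = brand.split('@').nth(1)?.trim().strip_suffix("GHz")?;
    let ghz: f64 = ghz.trim().parse().ok()?;
    Some((ghz * 1000.0) as u32) // frequency in MHz
}

fn main() {
    let brand = "Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz";
    assert_eq!(freq_from_brand_string(brand), Some(4200));
    println!("{:?} MHz", freq_from_brand_string(brand));
}
```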