Safe use of Hyperthreading when Xen stable includes new sched-gran parameter #5547
Comments
Typo: the command line params should be smt=on sched-gran=core |
Generally I agree, but only once it gets stable enough, and probably with some option to enable/disable it. While Xen 4.13 does include this feature, it is marked as "experimental", which among other things means it does not receive security support. |
Thanks for the reply. Prompt as ever.
But one thing puzzles me. You say
...
Some workaround for this issue is to set vcpus=1 for VMs where HT would
have unintended consequences. This still allows using HT for other VMs,
where cross-process leak isn't really an issue (like, build environment).
My understanding, and tell me if I'm mistaken, is that if Xen was started
with smt=yes, then setting vcpus=1 could mean that you get only "half a
core", with the other half being available to some malicious exploit
launched by another VM. So it avoids cross-process attacks but offers a
surface for inter-VM attacks. That sounds worse to me, in that if someone
gets in there is a path to expand into other VMs and potentially into
Dom0 :(
I really hope I'm wrong on that because your workaround sounds good...
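For reference, the vcpus=1 workaround is just a per-VM setting. In a plain xl guest config it is one line, and Qubes exposes the same knob through qvm-prefs (the VM name below is hypothetical):

```
# xl guest config fragment: give this VM a single vCPU
vcpus = 1
```

In Qubes the equivalent would be `qvm-prefs <vmname> vcpus 1`, applied per VM where hyperthreading side channels are a concern.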
The workaround I was looking at involved pinning all the pcpus on one core
to Dom0, and then excluding that core from all other VMs. So there is
permanent physical separation between Dom0 and the DomUs collectively. This
also reduces the attack surface for hypothetical exploits based on
attacking the in-core caches.
I haven't figured out yet how to pin or block a pcpu permanently, so that
the setting is replicated whenever that VM is started. That still leaves
DomU VMs able potentially to attack each other but not able to get the "big
prize" of breaking into Dom0.
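One way to make that pinning permanent is at the Xen boot line plus each guest's config. The boot parameters and config key below are real Xen/xl options, but the CPU numbers assume a hypothetical 4-core/8-thread machine where core 0 (threads 0 and 1) is reserved for Dom0:

```
# Xen boot parameters (GRUB): give Dom0 two vCPUs, pinned to pCPUs 0-1
dom0_max_vcpus=2 dom0_vcpus_pin

# In each DomU's xl config: keep the guest off Dom0's core
cpus = "2-7"
```

Because these live in the boot and guest config files, they persist across VM restarts. Note that stock Qubes generates VM definitions via libvirt rather than plain xl configs, so this sketch applies directly to plain Xen; Qubes would need its own hook to apply the per-VM pinning.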
Let me know if either of these comments is worth posting on issues.
Otherwise I'm content just to have put a marker down for some hypothetical
time in the future: once it's properly supported by Xen.
Warmly
R~~
|
With thread scheduling, that would be true (and this is why we set smt=off), but with core scheduling, the other thread would be idle - effectively not taking advantage of HT for some VMs. |
Marek wrote:
With thread scheduling, that would be true (and this is why we set
smt=off), but with core scheduling, the other thread would be idle -
effectively not taking advantage of HT for some VMs.
I'm not sure you can do both core and CPU scheduling at the same time. If
you can, then that would be ideal.
If you can't, then vcpus=1 might effectively give you vcores=1 -- it needs
some testing once the feature gets into supported stable Xen trees to see
exactly how they implemented the new code.
R~~
|
... and another possibility in some use cases would be to use the isolcpus
kernel commandline parameter inside the VM
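For example, a guest kernel command line fragment like the following (the CPU number is illustrative) keeps the in-VM scheduler off the listed vCPUs unless a task is explicitly pinned there:

```
# Inside the VM's kernel command line: isolate vCPU 1 from the scheduler
isolcpus=1
```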
|
Has SMT been compiled out of Xen 4.14 (Qubes 4.1)? When I switch it on, the kernel fails to load and I just get a black screen |
As long as Xen can propagate the CPU topology, the VM could handle SMT securely even with multiple cores. Essentially, I see three ways that a guest could use: a. No SMT. (For the paranoid?) Actually, slide 14 of https://www.slideshare.net/xen_com_mgr/xpdds19-core-scheduling-in-xen-jrgen-gro-suse suggests that this has been considered:
|
Seems like core-scheduling work is being done on the kernel side as well, which, as you mentioned, would provide safer in-VM isolation if Xen can propagate the CPU topology. |
It seems to work after removing smt=off from the command line parameters. In my case it's Hyperthreading (Intel i7-4720HQ) |
Nope. That only re-enabled hyper-threading, but did not enable core scheduling. This is the configuration where many speculative bugs will be able to steal data from other VMs! What you need is to additionally add |
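Concretely, on a BIOS-boot Qubes 4.1 system that means editing the Xen options in /etc/default/grub and regenerating the config. The existing options shown below are illustrative; only the appended smt=on sched-gran=core part is the point:

```
# /etc/default/grub -- append to the existing Xen options line:
GRUB_CMDLINE_XEN_DEFAULT="console=none dom0_mem=min:1024M smt=on sched-gran=core"

# then regenerate (BIOS boot; EFI systems use a different grub.cfg path):
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```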
Thanks for this important hint. Hope this setting will soon be part of Qubes. |
Does it work with 4.14, which now ships with Qubes 4.1? What about 4.0? IMO, disabling HT should be an optional, advanced security feature for those who need it. On one of my laptops, performance was reduced significantly with HT disabled. I didn't understand at first what was going on until I saw the core count. On another laptop, I don't see a big difference. Maybe this depends on the CPU and/or Xen version? The problematic laptop is running a Tiger Lake CPU with 4/8 cores on Qubes 4.1 with the default Xen 4.14. Once I enabled HT, this machine got a new life. With HT disabled, there were some strange lockups. |
I own a laptop from 2013 with an Intel(R) Core(TM) i7-3540M CPU. Adding the parameter |
I added smt=on sched-gran=core and it's working just fine on 4.1 |
Cannot switch on SMT on Qubes 4.1 (5.10.112-1.fc32.qubes.x86_64):
Can please someone suggest what I need to do to finally get it applied? |
There may be multiple |
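The truncated reply above points at a real gotcha: if GRUB_CMDLINE_XEN_DEFAULT is defined more than once in /etc/default/grub, the last definition wins and can silently drop the new options. A self-contained illustration (the demo file path and its contents are made up):

```shell
# Simulate a grub defaults file where a later definition shadows the
# earlier one that carried smt=on sched-gran=core (contents hypothetical).
cat > /tmp/grub.demo <<'EOF'
GRUB_CMDLINE_XEN_DEFAULT="console=none smt=on sched-gran=core"
GRUB_CMDLINE_XEN_DEFAULT="console=none"
EOF

# More than one definition means the first one is silently overridden.
grep -c '^GRUB_CMDLINE_XEN_DEFAULT' /tmp/grub.demo   # prints 2
```

Checking the real file the same way (grep -c on /etc/default/grub) quickly shows whether a duplicate line is eating the setting.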
That was the reason! Thank you very much for your advice! |
trueriver commented Dec 29, 2019
Qubes OS version:
R 4.0.1
All current versions as of 1 Jan 2020
Affected component(s) or functionality:
Xen, hyperthreading, Intel
Steps to reproduce the behavior:
Run Qubes on machine with hyperthreading (HT) hardware
Expected or desired behavior:
HT should ideally be available
Actual behavior:
HT disabled by default
General notes:
HT has been deliberately disabled for security reasons, and with the current release versions of Xen this is sensible. The reason is that there exist several exploits on Intel hardware that involve abuse of shared caches, etc., when one thread of a CPU core is running in one VM and another thread in another VM.
However, the Xen wizards are on the case, and they have a fix that ensures that all the threads running on a core are allocated to the same VM together. This means that the only software exposed by such exploits will be software that already has access to that VM. This may reduce the attack surface to an acceptable level for at least some users.
The patch is now included in some unstable branches of Xen, and is invoked by the command line parameters
smt on sched-gran=core
(or socket or cpu). My request is that this is implemented by Qubes, but ONLY once we start using a Xen version that includes this feature.
However, it is currently (Jan 2020) too soon to change the current behaviour, as the versions of Xen currently in use ignore this combination without warning.
My request therefore is that this is assigned a sensibly long timescale.
I have consulted the following relevant documentation:
https://www.slideshare.net/xen_com_mgr/xpdds19-core-scheduling-in-xen-jrgen-gro-suse
https://patchwork.kernel.org/cover/11086677/
https://xenbits.xen.org/docs/unstable/misc/xen-command-line.html#sched-gran-x86
(NOTE the above path is in the "unstable" branch)
I am aware of the following related, non-duplicate issues: