
Compile Dom0 with CONFIG_ETHERNET=N or better yet CONFIG_NET=N #3656

Open
DemiMarie opened this Issue Mar 5, 2018 · 6 comments


Qubes OS version:

R4.0

Affected component(s):

Dom0 kernel


Steps to reproduce the behavior:

Check dom0 kernel configuration
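One way to perform this check, sketched below. The helper name `check_opt` is made up for illustration; on a Fedora-based dom0 the running kernel's config typically lives at `/boot/config-$(uname -r)`.

```shell
#!/bin/sh
# Print the state of a kernel config option in a given config file.
# check_opt is an illustrative helper, not part of any Qubes tooling.
check_opt() {
    # $1 = path to kernel config file, $2 = option name (e.g. CONFIG_NET)
    grep -E "^$2=" "$1" || echo "$2 is not set"
}

# In dom0, for the running kernel, this would typically be invoked as:
#   check_opt "/boot/config-$(uname -r)" CONFIG_ETHERNET
```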

Expected behavior:

dom0's kernel is compiled with CONFIG_ETHERNET=N or, better yet, CONFIG_NET=N, to ensure that it cannot possibly have a network connection.
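As a config fragment, the requested change would look something like this (illustrative only; note that in a generated .config, disabled options appear as "# CONFIG_FOO is not set" rather than "CONFIG_FOO=n"):

```
# Disable Ethernet driver support in the dom0 kernel:
# CONFIG_ETHERNET is not set

# Or, more aggressively, disable networking support entirely:
# CONFIG_NET is not set
```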

Actual behavior:

dom0’s kernel is compiled with CONFIG_ETHERNET=Y.

General notes:

There have been bugs (related to HDMI, for instance) which have caused dom0 to get an Ethernet connection. We can most effectively prevent this by not compiling networking support into the kernel.


Related issues:

#2743


rtiangha Mar 5, 2018

This would be easier to do if you compile separate kernels for dom0 and the domUs. Since the Qubes project uses a shared kernel by default, separating them is harder because the system was never designed to do that. But you can get around it by compiling your own kernels and varying the 'rel' attribute, so that you can have multiple kernels installed (package "kernel" for dom0 and "kernel-qubes-vm" for domUs), each compiled with a different kconfig (e.g. 4.14.24-50 for dom0, 4.14.25-100 for VMs). It's what I do, but that's an exercise for the reader.

I believe the Xen network backend and frontend drivers need to be enabled in dom0 in order to make the NICs visible in VMs (if that isn't the case, then please correct me!), and that, in turn, requires CONFIG_NET=y: even if you strip away all network drivers, the kernel still needs TCP/IP socket support (i.e. CONFIG_INET), otherwise it won't boot (trust me, I've tried). However, it might be possible to get away with CONFIG_ETHERNET=N in dom0. I haven't tried that myself yet, but I will the next time I update my kernels.



DemiMarie Mar 5, 2018


DemiMarie Mar 19, 2018

Ideally, all of the sys-* VMs would use unikernels written in memory safe languages. But that is a long ways off.



marmarek Mar 19, 2018

Member

> Ideally, all of the sys-* VMs would use unikernels written in memory safe languages. But that is a long ways off.

Key word here: drivers...

As for the original issue here: currently the realistic solution would be to blacklist the relevant drivers in dom0. Not ideal, because it can easily get out of sync with the actual kernel, but easy to implement with what we currently have.

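The blacklist approach might look roughly like the fragment below. This is a sketch: the driver names are examples, and a real list would have to track the network modules actually shipped with the dom0 kernel.

```
# /etc/modprobe.d/dom0-no-net.conf (illustrative)
# Prevent these network drivers from being auto-loaded:
blacklist e1000e
blacklist r8169
# "blacklist" only stops auto-loading by alias; "install ... /bin/false"
# also makes explicit modprobe requests fail:
install e1000e /bin/false
install r8169 /bin/false
```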


DemiMarie Mar 19, 2018

@marmarek Yeah, drivers are the key word here. Though we can disable drivers that a domain doesn’t actually need, and minimize attack surface in userspace by using the minimum userspace we can.



DemiMarie Apr 8, 2018

@marmarek What about doing `find /lib/modules/4.14.18-1.pvops.qubes.x86_64/kernel/drivers/net -delete`?

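A more cautious version of that command might look like this. The function name `remove_net_modules` is made up for illustration, the kernel version in the path would differ per install, and the deletion only lasts until the kernel package is reinstalled or updated.

```shell
#!/bin/sh
# Remove network driver modules under a given kernel modules directory.
# remove_net_modules is an illustrative helper; review the output of the
# -print pass before letting the -delete pass run.
remove_net_modules() {
    # $1 = kernel modules directory, e.g. /lib/modules/<version>
    find "$1/kernel/drivers/net" -type f -print    # dry run: list what would go
    find "$1/kernel/drivers/net" -type f -delete   # then actually delete
}
```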
