Cannot remove VMs created by qvm-backup-restore #3273

Open
tasket opened this Issue Nov 2, 2017 · 8 comments

tasket commented Nov 2, 2017

Qubes OS version:

R4.0rc2

Affected TemplateVMs:


Steps to reproduce the behavior:

Restore a backup made with R3.2. Some extraneous VMs with names like 'disp-no-netvm' are automatically created during the restore.

Expected behavior:

qvm-remove disp-no-netvm should remove the extra VMs

Actual behavior:

Error:
qubesadmin.exc.QubesException: Domain is in use

General notes:

Running qvm-ls shows that no other VMs are using these restore-generated VMs, so they appear NOT to be in use.
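
For concreteness, a minimal shell reproduction of the failure (the backup path is a placeholder):

```sh
# Restore an R3.2 backup; the restore auto-creates VMs such as disp-no-netvm.
qvm-backup-restore /path/to/r3.2-backup

# Attempting to remove one of the auto-created VMs fails:
qvm-remove disp-no-netvm
# qubesadmin.exc.QubesException: Domain is in use
```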

Link to discussion:
https://groups.google.com/d/msgid/qubes-users/e141715b30c5a764387f4c636b0f9d3c.squirrel%40bitmessage.ch


Related issues:

qubesuser commented Nov 2, 2017

It's because the VM is set as the default_dispvm pref of the restored VM.

However, I think this is quite user-unfriendly: in particular, the error message should at least tell you why the domain is in use, and ideally there should be a --force option that removes it anyway, substituting "no VM" everywhere it's used.

The practice of creating those DispVMs is also questionable IMHO.
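
A rough client-side equivalent of such a --force, sketched in shell (that an empty value passed to qvm-prefs means "no VM" is an assumption about its syntax):

```sh
# Clear every default_dispvm reference to the target VM, then remove it.
TARGET=disp-no-netvm
for vm in $(qvm-ls --raw-list); do
    if [ "$(qvm-prefs "$vm" default_dispvm 2>/dev/null)" = "$TARGET" ]; then
        qvm-prefs "$vm" default_dispvm ''   # assumed to mean "no VM"
    fi
done
qvm-remove "$TARGET"
```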

marmarek (Member) commented Nov 2, 2017

> in particular, the error message should at least tell you why the domain is in use

This is for privacy reasons: to avoid revealing the names of other VMs on the system when returning an error to a management VM (it is the same API). The detailed reason is in the log (journalctl in dom0). I have already made a change to include that pointer in the error message.
@rootkovska what do you think about it? Maybe we should reconsider this?

> ideally there should be a --force option that removes it anyway, substituting "no VM" everywhere it's used.

That would require changing the above behavior (reporting where it is used). We definitely do not want to introduce an Admin API method that would remove a VM and change a bunch of properties at the same time.

> The practice of creating those DispVMs is also questionable IMHO.

This is to preserve Qubes 3.2 behavior as closely as possible. There you chose just the netvm for a DispVM, but in Qubes 4.0 you choose the whole DispVM. Not preserving this property could lead to fatal mistakes, like starting a DispVM with clearnet access from a VM behind Tor...
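
Regarding the log pointer above: in dom0 the underlying reason can be dug out with something like the following (the grep pattern is just an example):

```sh
# In dom0: search the current boot's journal for qubesd's explanation.
sudo journalctl -b | grep -i disp-no-netvm
```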

qubesuser commented Nov 2, 2017

Couldn't it return the error as long as the user has permission to read the pref that makes it in use? Or the client could read the preferences of all VMs it can access and construct the message itself.

--force could be done client-side too (where the client has permission).

> There you chose just the netvm for a DispVM, but in Qubes 4.0 you choose the whole DispVM

I think I'd prefer to always start DispVMs launched from other VMs with no netvm assigned, and let the user assign one later if desired.

Generally "open in DispVM" is used for potentially malicious content, and any netvm being set would allow the malicious content to try to infect any reachable host, launch denial-of-service attacks, and tell remote servers that it was opened, all of which seems undesirable and perhaps unexpected.

And one could also have "open in DispVM with networking" that would assign the current VM's netvm to the new DispVM, like Qubes 3.2 does, for cases where network access is essential.

@andrewdavidwong andrewdavidwong added this to the Release 4.0 milestone Nov 2, 2017

tasket commented Nov 3, 2017

> I think I'd prefer to always start DispVMs launched from other VMs with no netvm assigned, and let the user assign one later if desired.
>
> Generally "open in DispVM" is used for potentially malicious content, and any netvm being set would allow the malicious content to try to infect any reachable host, launch denial-of-service attacks, and tell remote servers that it was opened, all of which seems undesirable and perhaps unexpected.

Good point. I recall taking a similar stance on DispVMs.

But the least we could do is have qvm-remove print "See system log for details".

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Nov 7, 2017

app: clarify error message on failed domain remove (used somewhere)
Point to the system logs for more details. Do not include them directly in the message for privacy reasons (an Admin API client may not be given permission to them).

QubesOS/qubes-issues#3273
QubesOS/qubes-issues#3193
marmarek (Member) commented Nov 9, 2017

> Generally "open in DispVM" is used for potentially malicious content, and any netvm being set would allow the malicious content to try to infect any reachable host, launch denial-of-service attacks, and tell remote servers that it was opened, all of which seems undesirable and perhaps unexpected.

I know at least a few people who use a DispVM to print files, including from network-disconnected VMs. Having network access there is the whole point of the DispVM in that use case. If you want a network-disconnected DispVM by default, one of the VMs created during a 3.2 backup restore is disp-no-netvm, so you can use that one.

> And one could also have "open in DispVM with networking" that would assign the current VM's netvm to the new DispVM, like Qubes 3.2 does, for cases where network access is essential.

In Qubes 4.0 this can be implemented at the qrexec policy level: if you want to start a network-connected DispVM for a given service, direct it to $dispvm:some-dvm-with-network, and similarly for a network-disconnected one. You can even choose it from the source VM (if policy allows); see the policy sketch below.

So, in short: what you want to achieve is already possible.
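
A dom0 policy sketch of that idea (the qubes.OpenInVM service is just one example, and the DisposableVM template names here are assumptions, not existing configuration):

```
# /etc/qubes-rpc/policy/qubes.OpenInVM  (Qubes 4.0 policy syntax)
work   $dispvm:some-dvm-with-network  allow
vault  $dispvm:disp-no-netvm          allow
$anyvm $dispvm                        ask
```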

qubesos-bot referenced this issue in QubesOS/updates-status Nov 21, 2017

Closed: core-admin v4.0.12 (r4.0) #313

marmarek (Member) commented Dec 22, 2017

The error message now contains a hint about where to look for details.

andrewdavidwong (Member) commented Apr 24, 2018

Reopening due to apparent regression reported in #3846.

qubesissues commented May 10, 2018

@marmarek What is not clear is what we should change the default disp VM to in the Settings for the VM. What is the disp VM even?
