Cannot remove VMs created by qvm-backup-restore #3273
Comments
qubesuser commented Nov 2, 2017
It's because it's in the default_dispvm pref for the restored VM.
However, I think this is quite user-unfriendly: in particular, the error message should at least tell you why the domain is in use, and ideally there should be a --force option that removes it anyway, replacing it with "no VM" everywhere it's used.
The practice of creating those dispvms is also questionable IMHO.
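For reference, a manual workaround sketch in dom0 (assuming a restored VM named work still points at the stuck DispVM; "work" is hypothetical, "disp-no-netvm" is the name from this report):

    # Show which DispVM the restored VM points at, clear the
    # reference, then remove the now-unreferenced VM.
    qvm-prefs work default_dispvm       # prints: disp-no-netvm
    qvm-prefs work default_dispvm ''    # an empty value should clear it
    qvm-remove disp-no-netvm            # should now succeed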
marmarek (Member) commented Nov 2, 2017
> in particular, the error message should at least tell you why the domain is in use
This is for privacy reasons: to avoid revealing the names of other VMs on the system when returning an error to a management VM (it is the same API). The more detailed reason is in the log (journalctl in dom0). I have already made a change to include that pointer in the error message.
@rootkovska what do you think about it? Maybe we should reconsider this?
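For example, in dom0 (a sketch; the exact log wording may differ):

    # Search the current boot's journal for mentions of the VM in question
    journalctl -b | grep -i disp-no-netvm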
> ideally there should be a --force option that removes it anyway, replacing it with "no VM" everywhere it's used.
That would require changing the above behavior (reporting where it is used). We definitely do not want to introduce an Admin API method that removes a VM and changes a bunch of properties at the same time.
> The practice of creating those dispvms is also questionable IMHO.
This is to preserve Qubes 3.2 behavior as closely as possible. There you chose just the netvm for a DispVM, but on Qubes 4.0 you choose the whole DispVM. Not preserving this property could lead to some fatal mistakes, like starting a DispVM with clearnet access from a VM behind Tor...
qubesuser commented Nov 2, 2017
Couldn't it return the error as long as the user has permission to read the pref that makes it in use? Or, potentially, the client could read the preferences of all VMs it can access and construct the message itself.
--force could be done client-side too (where the client has permission).
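A client-side sketch of that idea (hypothetical; it can only work where the Admin API policy lets the caller read these properties):

    # Print every VM whose default_dispvm points at the VM we want to delete
    for vm in $(qvm-ls --raw-list); do
        if [ "$(qvm-prefs "$vm" default_dispvm 2>/dev/null)" = "disp-no-netvm" ]; then
            echo "in use by: $vm"
        fi
    done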
> There you chose just the netvm for a DispVM, but on Qubes 4.0 you choose the whole DispVM
I think I'd prefer DispVMs that are started from other VMs to always start with no netvm assigned, letting the user assign one later if desired.
Generally "open in DispVM" is used for potentially malicious content, and any netvm being set would allow the malicious content to try to infect any reachable host, launch denial-of-service attacks, and tell remote servers that it was opened, all of which seems undesirable and perhaps unexpected.
And one could also have "open in DispVM with networking" that would assign the current VM's netvm to the new DispVM, like Qubes 3.2 does, for things where network access is essential.
andrewdavidwong added the C: core and UX labels on Nov 2, 2017
andrewdavidwong added this to the Release 4.0 milestone on Nov 2, 2017
tasket commented Nov 3, 2017
> I think I'd prefer DispVMs that are started from other VMs to always start with no netvm assigned, letting the user assign one later if desired.
> Generally "open in DispVM" is used for potentially malicious content, and any netvm being set would allow the malicious content to try to infect any reachable host, launch denial-of-service attacks, and tell remote servers that it was opened, all of which seems undesirable and perhaps unexpected.
Good point. I recall taking a similar stance on dispVMs.
But the least we could do is have qvm-remove print "See system log for details".
A commit was added to marmarek/qubes-core-admin that referenced this issue on Nov 7, 2017
marmarek (Member) commented Nov 9, 2017
> Generally "open in DispVM" is used for potentially malicious content, and any netvm being set would allow the malicious content to try to infect any reachable host, launch denial-of-service attacks, and tell remote servers that it was opened, all of which seems undesirable and perhaps unexpected.
I know at least a few people using DispVMs to print files, also from network-disconnected VMs. So having network access there is the whole point of the DispVM in that use case. If you want a network-disconnected DispVM by default, one of the VMs created during the 3.2 backup restore is disp-no-netvm, so you can use that one.
> And one could also have "open in DispVM with networking" that would assign the current VM's netvm to the new DispVM, like Qubes 3.2 does, for things where network access is essential.
In Qubes 4.0 this can be implemented at the qrexec policy level: if you want to start a network-connected DispVM for a given service, direct it to $dispvm:some-dvm-with-network, and similarly for a network-disconnected one. You can even choose it from the source VM (if the policy allows).
So, in short: what you want to achieve is already possible.
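For illustration, a policy sketch along those lines (the dom0 path is the standard R4.0 policy location; the VM names work, vault and dvm-net are hypothetical):

    # /etc/qubes-rpc/policy/qubes.OpenInVM
    work    $dispvm:dvm-net   allow   # network-connected DispVM for this VM
    vault   $dispvm           allow   # the system default DispVM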
qubesos-bot referenced this issue in QubesOS/updates-status on Nov 21, 2017: core-admin v4.0.12 (r4.0) #313 (Closed)
Error message contains a hint where to look for details.
marmarek closed this on Dec 22, 2017
andrewdavidwong referenced this issue on Apr 24, 2018: Imported VMs to 4.0 have corresponding "disp" VMs #3846 (Closed)
Reopening due to apparent regression reported in #3846.
andrewdavidwong reopened this on Apr 24, 2018
andrewdavidwong added the bug label on Apr 24, 2018
andrewdavidwong modified the milestones: Release 4.0 → Release 4.0 updates, on Apr 24, 2018
andrewdavidwong referenced this issue on Apr 25, 2018: disp VMs created when importing from 3.2 #3850 (Closed)
qubesissues commented May 10, 2018
@marmarek What is not clear is what we should change the default DispVM to in the VM's Settings. And what even is the disp VM?
tasket commented Nov 2, 2017
Qubes OS version:
R4.0rc2
Affected TemplateVMs:
Steps to reproduce the behavior:
Restore a backup made with R3.2. Some extraneous VMs with names like 'disp-no-netvm' should be automatically created during the restore.
Expected behavior:
qvm-remove disp-no-netvm should remove the extra VMs
Actual behavior:
Error:
qubesadmin.exc.QubesException: Domain is in use
General notes:
Running qvm-ls shows that no other VMs are using these restore-generated VMs, so they appear NOT to be in use.
Link to discussion:
https://groups.google.com/d/msgid/qubes-users/e141715b30c5a764387f4c636b0f9d3c.squirrel%40bitmessage.ch
Related issues:
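A minimal reproduction sketch in dom0 (the backup path is hypothetical):

    qvm-backup-restore /mnt/removable/qubes-r3.2.backup   # backup made on R3.2
    qvm-ls | grep disp           # disp-no-netvm etc. now exist, apparently unused
    qvm-remove disp-no-netvm     # fails: qubesadmin.exc.QubesException: Domain is in use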