Configuration of default VMs fails during OS installation #2213
Comments
marmarek (Member) commented Jul 30, 2016
Any other errors during installation? Looks like some missing package (namely qubes-mgmt-salt-dom0-qvm). Check logs in /var/log/anaconda for more details.
Have you checked your installation image? Maybe it is corrupted somehow (incomplete download or such)?
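A minimal sketch of the checks suggested above, assuming a shell in dom0 after installation; the ISO filename is a placeholder:
# Look for package errors in the installer logs mentioned above
sudo grep -i error /var/log/anaconda/*.log
sudo grep qubes-mgmt-salt /var/log/anaconda/dnf.rpm.log
# On the machine that downloaded the ISO, compare against the published SHA256 digest
sha256sum Qubes-R3.2-rc2-x86_64.iso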
andrewdavidwong added the bug and C: installer labels Jul 30, 2016
andrewdavidwong added this to the Release 3.2 milestone Jul 30, 2016
pjmelon commented Jul 30, 2016
I have exactly the same bug occurring on my computer.
pjmelon commented Jul 31, 2016
OK, I can confirm that on my end it was an error in the installation file. I have used Rufus in the past to create bootable Qubes OS USB sticks. I checked the image and found errors. I recreated the USB stick on a Debian machine with the dd command, and the install worked perfectly.
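A hedged example of the dd approach described above; the ISO filename and /dev/sdX are placeholders, and dd overwrites the target device, so double-check the device name first:
# Write the image directly to the stick and flush it to disk before dd exits
sudo dd if=Qubes-R3.2-rc2-x86_64.iso of=/dev/sdX bs=4M conv=fsync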
WetwareLabs commented Jul 31, 2016
The SHA256 hash of the downloaded ISO was correct, and the ISO was then written to the USB stick with dd. I checked the hash again from the USB stick and it failed because of the extra "System Volume Information/WPSettings.dat" created by Windows 10. However, binary-diffing all the files on the mounted ISO and the USB stick shows no mismatches, so in my case this error is not caused by a corrupted ISO.
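A hedged sketch of one way to verify the written stick without mounting it: hash only the first ISO-sized chunk of the device and compare it with the ISO's own digest (device name and filename are placeholders; this only works before anything, such as Windows, writes to the stick):
ISO=Qubes-R3.2-rc2-x86_64.iso
# Read back exactly as many bytes as the ISO contains and hash them
sudo head -c "$(stat -c %s "$ISO")" /dev/sdX | sha256sum
sha256sum "$ISO"   # the two digests should match if the write was clean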
The qubes-mgmt-salt-dom0-qvm package is installed:
[root@dom0 mnt]# rpm -ql qubes-mgmt-salt-dom0-qvm
/srv/formulas/test/qvm-formula
/srv/formulas/test/qvm-formula/LICENSE
/srv/formulas/test/qvm-formula/README.rst
/srv/formulas/test/qvm-formula/qvm/init.sls
/srv/formulas/test/qvm-formula/qvm/init.top
/srv/formulas/test/qvm-formula/qvm/tests-salt-call
/srv/salt/_modules
/srv/salt/_modules/ext_module_qvm.py
/srv/salt/_modules/ext_module_qvm.pyc
/srv/salt/_modules/ext_module_qvm.pyo
/srv/salt/_states
/srv/salt/_states/ext_state_qvm.py
/srv/salt/_states/ext_state_qvm.pyc
/srv/salt/_states/ext_state_qvm.pyo
/usr/share/doc/qubes-mgmt-salt-dom0-qvm
/usr/share/doc/qubes-mgmt-salt-dom0-qvm/LICENSE
/usr/share/doc/qubes-mgmt-salt-dom0-qvm/README.rst
In the anaconda logs I cannot see anything weird. Is there anything in particular I should check there?
marmarek (Member) commented Jul 31, 2016
On Sun, Jul 31, 2016 at 01:35:07AM -0700, Marcus wrote:
> In anaconda logs I cannot see anything weird. Anything particular I should check there?
Errors related to any qubes-mgmt-salt-* package installation, especially in dnf.rpm.log.
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
WetwareLabs commented Jul 31, 2016
Nothing wrong there; all salt packages are installed. Just in case, I tried a different, brand-new USB stick and another hard disk, but with the same result.
However, I found out how to avoid the problem and also how to reproduce it:
- Keep default keymap and timezone -> works
- Finnish keymap and timezone (Helsinki) -> error
- Default keymap and Helsinki timezone -> error
- Finnish keymap and default timezone -> works
So it seems setting the timezone has something to do with it! Can someone reproduce this?
andrewdavidwong added the P: critical label Jul 31, 2016
kototama commented Aug 1, 2016
I can confirm I had the same error. I copied the ISO to a USB stick with this command:
dd if=<file.iso> of=</dev/sdb> bs=16M
kototama commented Aug 1, 2016
Executing sudo qubesctl state.highstate, as advised on the mailing list, did NOT create the missing VMs.
marmarek (Member) commented Aug 1, 2016
On Mon, Aug 01, 2016 at 02:18:15AM -0700, kototama wrote:
> Executing sudo qubesctl state.highstate, as advised on the mailing list, did NOT create the missing VMs.
Did you get any errors there? Or did it not even try to create the missing VMs?
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
|
On Mon, Aug 01, 2016 at 02:18:15AM -0700, kototama wrote:
Did you get any errors there? Or it haven't even tried to create missing Best Regards, |
kototama commented Aug 1, 2016
It didn't try. The output contains:
ID: topd-always-passes
Function: test.succeed_without_changes
Name: foo
Comment: Success!
@WetwareLabs interesting; I set the keymap to 'french' and the timezone to 'Berlin' during installation.
marmarek (Member) commented Aug 1, 2016
Check qubesctl top.disabled: does it work at all? Do you see available configurations for VMs? You can try enabling them (qubesctl top.enable qvm.sys-net qvm.sys-firewall) and then call qubesctl state.highstate again.
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
WetwareLabs commented Aug 1, 2016
As a side note, setting the timezone to Europe/Sweden also fails.
Here's the output of what marmarek suggested:
[user@dom0 Desktop]$ sudo qubesctl state.highstate
local:
----------
ID: topd-always-passes
Function: test.succeed_without_changes
Name: foo
Result: True
Comment: Success!
Started: 21:15:28.291557
Duration: 0.577 ms
Changes:
Summary
------------
Succeeded: 1
Failed: 0
------------
Total states run: 1
[user@dom0 Desktop]$ sudo qubesctl top.disabled
local:
----------
base:
- /srv/formulas/base/virtual-machines-formula/qvm/sys-net-with-usb.top
- /srv/formulas/base/virtual-machines-formula/qvm/vault.top
- /srv/formulas/base/virtual-machines-formula/qvm/sys-net.top
- /srv/formulas/base/virtual-machines-formula/qvm/work.top
- /srv/formulas/base/virtual-machines-formula/qvm/sys-whonix.top
- /srv/formulas/base/virtual-machines-formula/qvm/anon-whonix.top
- /srv/formulas/base/virtual-machines-formula/qvm/personal.top
- /srv/formulas/base/virtual-machines-formula/qvm/sys-usb.top
- /srv/salt/qubes/directories.top
- /srv/formulas/base/virtual-machines-formula/qvm/sys-firewall.top
- /srv/formulas/base/virtual-machines-formula/qvm/untrusted.top
- /srv/salt/qubes/user-dirs.top
[user@dom0 Desktop]$ sudo qubesctl top.enable qvm.sys-net
local:
----------
qvm.sys-net.top:
----------
status:
enabled
[user@dom0 Desktop]$ sudo qubesctl state.highstate
[CRITICAL] Rendering SLS 'base:qvm.sys-net' failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'qvm.check'
/var/cache/salt/minion/files/base/qvm/template.jinja(105):
---
[...]
{%- set force = defaults.get('force', vm.get('force', False)) %}
{#- Only attempt to install the VM if it is missing (not installed) to prevent
# changing an existing configuration, unless ``force`` is True.
#}
{%- if force or salt['qvm.check'](vm.name, 'missing').passed() -%} <======================
{{- state_debug(defaults, vm) }}
{{- state_vm(vm) }}
{%- else -%}
{{- skip(vm) }}
{%- endif -%}
[...]
---
local:
Data failed to compile:
----------
Rendering SLS 'base:qvm.sys-net' failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'qvm.check'
/var/cache/salt/minion/files/base/qvm/template.jinja(105):
---
[...]
{%- set force = defaults.get('force', vm.get('force', False)) %}
{#- Only attempt to install the VM if it is missing (not installed) to prevent
# changing an existing configuration, unless ``force`` is True.
#}
{%- if force or salt['qvm.check'](vm.name, 'missing').passed() -%} <======================
{{- state_debug(defaults, vm) }}
{{- state_vm(vm) }}
{%- else -%}
{{- skip(vm) }}
{%- endif -%}
[...]
---
DOM0 configuration failed, not continuing
[user@dom0 Desktop]$
So it's the same error message as in my first post.
marmarek (Member) commented Aug 2, 2016
Hmm, it's hard to reproduce. Another idea: what was the order of package installation (among qubes-mgmt-* and salt-*)? You can get this from /var/log/anaconda/dnf.rpm.log.
As for a workaround, you can try qubesctl saltutil.sync_all, then retry qubesctl state.highstate.
WetwareLabs commented Aug 2, 2016
[marcus@dom0 Desktop]$ sudo cat /var/log/anaconda/dnf.rpm.log | grep "salt"
Aug 03 01:52:32 INFO Installed: salt-2015.5.10-2.fc23.noarch
Aug 03 01:52:32 INFO Installed: salt-2015.5.10-2.fc23.noarch
Aug 03 01:52:32 INFO Installed: salt-minion-2015.5.10-2.fc23.noarch
Aug 03 01:52:32 INFO Installed: salt-minion-2015.5.10-2.fc23.noarch
Aug 03 01:52:32 INFO Installed: qubes-mgmt-salt-config-3.2.3-1.fc23.noarch
Aug 03 01:52:32 INFO Installed: qubes-mgmt-salt-config-3.2.3-1.fc23.noarch
Aug 03 01:52:32 INFO Installed: qubes-mgmt-salt-base-overrides-libs-3.2.1-1.fc23.noarch
Aug 03 01:52:32 INFO Installed: qubes-mgmt-salt-base-overrides-libs-3.2.1-1.fc23.noarch
Aug 03 01:52:32 INFO Installed: qubes-mgmt-salt-base-overrides-3.2.1-1.fc23.noarch
Aug 03 01:52:32 INFO Installed: qubes-mgmt-salt-base-overrides-3.2.1-1.fc23.noarch
Aug 03 01:52:42 INFO Installed: qubes-mgmt-salt-base-topd-3.2.1-1.fc23.noarch
Aug 03 01:52:42 INFO Installed: qubes-mgmt-salt-base-topd-3.2.1-1.fc23.noarch
Aug 03 01:52:42 INFO Installed: qubes-mgmt-salt-3.2.3-1.fc23.noarch
Aug 03 01:52:42 INFO Installed: qubes-mgmt-salt-3.2.3-1.fc23.noarch
Aug 03 01:52:42 INFO Installed: qubes-mgmt-salt-admin-tools-3.2.3-1.fc23.noarch
Aug 03 01:52:42 INFO Installed: qubes-mgmt-salt-admin-tools-3.2.3-1.fc23.noarch
Aug 03 01:52:42 INFO Installed: qubes-mgmt-salt-base-config-3.2.1-1.fc23.noarch
Aug 03 01:52:42 INFO Installed: qubes-mgmt-salt-base-config-3.2.1-1.fc23.noarch
Aug 03 01:52:43 INFO Installed: qubes-mgmt-salt-base-3.2.2-1.fc23.noarch
Aug 03 01:52:43 INFO Installed: qubes-mgmt-salt-base-3.2.2-1.fc23.noarch
Aug 03 01:52:43 INFO Installed: qubes-mgmt-salt-dom0-qvm-3.2.0-1.fc23.noarch
Aug 03 01:52:43 INFO Installed: qubes-mgmt-salt-dom0-qvm-3.2.0-1.fc23.noarch
Aug 03 01:52:43 INFO Installed: qubes-mgmt-salt-dom0-update-3.2.0-1.fc23.noarch
Aug 03 01:52:43 INFO Installed: qubes-mgmt-salt-dom0-update-3.2.0-1.fc23.noarch
Aug 03 01:52:43 INFO Installed: qubes-mgmt-salt-dom0-3.2.3-1.fc23.noarch
Aug 03 01:52:43 INFO Installed: qubes-mgmt-salt-dom0-3.2.3-1.fc23.noarch
Aug 03 01:52:43 INFO Installed: qubes-mgmt-salt-dom0-virtual-machines-3.2.2-1.fc23.noarch
Aug 03 01:52:43 INFO Installed: qubes-mgmt-salt-dom0-virtual-machines-3.2.2-1.fc23.noarch
Unfortunately the workaround didn't work:
[marcus@dom0 Desktop]$ sudo qubesctl saltutil.sync_all
local:
----------
beacons:
grains:
modules:
output:
renderers:
returners:
sdb:
states:
utils:
[marcus@dom0 Desktop]$ sudo qubesctl state.highstate
local:
----------
ID: topd-always-passes
Function: test.succeed_without_changes
Name: foo
Result: True
Comment: Success!
Started: 23:35:58.999531
Duration: 0.249 ms
Changes:
Summary
------------
Succeeded: 1
Failed: 0
------------
Total states run: 1
[marcus@dom0 Desktop]$ sudo qubesctl top.disabled
local:
----------
base:
- /srv/formulas/base/virtual-machines-formula/qvm/sys-net-with-usb.top
- /srv/formulas/base/virtual-machines-formula/qvm/vault.top
- /srv/formulas/base/virtual-machines-formula/qvm/sys-net.top
- /srv/formulas/base/virtual-machines-formula/qvm/work.top
- /srv/formulas/base/virtual-machines-formula/qvm/sys-whonix.top
- /srv/formulas/base/virtual-machines-formula/qvm/anon-whonix.top
- /srv/formulas/base/virtual-machines-formula/qvm/personal.top
- /srv/formulas/base/virtual-machines-formula/qvm/sys-usb.top
- /srv/salt/qubes/directories.top
- /srv/formulas/base/virtual-machines-formula/qvm/sys-firewall.top
- /srv/formulas/base/virtual-machines-formula/qvm/untrusted.top
- /srv/salt/qubes/user-dirs.top
[marcus@dom0 Desktop]$ sudo qubesctl top.enable qvm.sys-net
local:
----------
qvm.sys-net.top:
----------
status:
enabled
[marcus@dom0 Desktop]$ sudo qubesctl state.highstate
[CRITICAL] Rendering SLS 'base:qvm.sys-net' failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'qvm.check'
/var/cache/salt/minion/files/base/qvm/template.jinja(105):
---
[...]
{%- set force = defaults.get('force', vm.get('force', False)) %}
{#- Only attempt to install the VM if it is missing (not installed) to prevent
# changing an existing configuration, unless ``force`` is True.
#}
{%- if force or salt['qvm.check'](vm.name, 'missing').passed() -%} <======================
{{- state_debug(defaults, vm) }}
{{- state_vm(vm) }}
{%- else -%}
{{- skip(vm) }}
{%- endif -%}
[...]
---
local:
Data failed to compile:
----------
Rendering SLS 'base:qvm.sys-net' failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'qvm.check'
/var/cache/salt/minion/files/base/qvm/template.jinja(105):
---
[...]
{%- set force = defaults.get('force', vm.get('force', False)) %}
{#- Only attempt to install the VM if it is missing (not installed) to prevent
# changing an existing configuration, unless ``force`` is True.
#}
{%- if force or salt['qvm.check'](vm.name, 'missing').passed() -%} <======================
{{- state_debug(defaults, vm) }}
{{- state_vm(vm) }}
{%- else -%}
{{- skip(vm) }}
{%- endif -%}
[...]
---
DOM0 configuration failed, not continuing
[marcus@dom0 Desktop]$
marmarek (Member) commented Aug 2, 2016
Installation order: I have exactly the same order in an installation done 5 minutes ago, and in my case it worked... It must be something else. This time I tried the default locale (en_US), the default keymap (us), and the Finnish timezone, and also the default partitioning (LVM).
What exactly were your settings during installation? All defaults? Or something changed?
marmarek (Member) commented Aug 2, 2016
Please collect debug output so we can better understand what the problem is:
qubesctl state.highstate -l all
(it will be long)
As for a workaround, maybe something more drastic:
qubesctl saltutil.clear_cache
qubesctl saltutil.sync_all refresh=true
WetwareLabs commented Aug 3, 2016
OK, clearing the cache and syncing did the trick!
Here are the detailed logs:
Then after enabling sys-net and syncing
Then finally after clearing cache and syncing
After the last one sys-net was created!
BTW, the steps I did during installation:
- set keymap (add the Finnish keymap and set its order higher than the default en_US keymap)
- set Europe/Helsinki timezone
- set target hard disk -> Automated configuration of partitions -> Reclaim space -> delete all partitions -> Reclaim space
Hope this helps!
marmarek (Member) commented Aug 4, 2016
It looks like salt for some reason doesn't have ext_module_qvm.py in its cache, but considers the cache up to date. Even with this knowledge I can't reproduce the failure. But if clearing the cache fixes the problem, I'll just add it to the post-installation script.
Do you need help with creating other VMs (sys-firewall etc)?
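A minimal sketch of the kind of post-installation step described above, using only the commands already suggested in this thread; this is an illustration under that assumption, not the actual change that closed the issue:
# Force salt to rebuild its module cache before the first highstate run
qubesctl saltutil.clear_cache
qubesctl saltutil.sync_all refresh=true
qubesctl state.highstate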
kototama commented Aug 4, 2016
> Do you need help with creating other VMs (sys-firewall etc)?
I will also try to fix the problem with the instructions above this weekend, and I would appreciate it if you gave us the instructions to build all the VMs. If I encounter too much trouble, I will just reinstall.
Nonetheless, tell me whether I should post the output of the commands for debugging.
marmarek (Member) commented Aug 4, 2016
I think I have enough info to fix this.
As for creating VMs, high-level documentation: https://www.qubes-os.org/doc/salt/
Description of actual states: https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/blob/master/README.rst
TL;DR:
qubesctl saltutil.clear_cache
qubesctl top.enable qvm.sys-net qvm.sys-firewall qvm.work qvm.personal qvm.untrusted qvm.vault
# for Whonix:
qubesctl top.enable qvm.sys-whonix qvm.anon-whonix
# for USB VM (sys-usb):
qubesctl top.enable qvm.sys-usb
# apply all the above
qubesctl state.highstate
WetwareLabs commented Aug 4, 2016
No problem with subsequent VM configuration after the cache was cleared; just enable them with qubesctl and invoke state.highstate. The only nuisance was that DNS didn't work in AppVMs right after these steps, but a reboot fixed that.
Marek, you were right about the missing ext_module_qvm.py in the cache. I re-installed Qubes and got the following logs:
[root@dom0 /]# find . -name "ext_module*"
./srv/salt/_modules/ext_module_qvm.pyc
./srv/salt/_modules/ext_module_qvm.py
./srv/salt/_modules/ext_module_qvm.pyo
[root@dom0 /]# qubesctl saltutil.clear_cache
local:
True
[root@dom0 /]# qubesctl saltutil.sync_all refresh=true
local:
----------
beacons:
grains:
- grains.boot_mode
- grains.pci_devs
- grains.redefined_dom0_grains
- grains.whonix
modules:
- modules.debug
- modules.ext_module_qvm
- modules.module_utils
- modules.qubes
- modules.qubes_dom0_update
output:
renderers:
returners:
sdb:
states:
- states.debug
- states.ext_state_qvm
- states.status
utils:
- utils.__init__
- utils.nulltype
- utils.qubes_utils
[root@dom0 /]# find . -name "ext_module*"
./srv/salt/_extensions/modules/ext_module_qvm.py
./srv/salt/_modules/ext_module_qvm.pyc
./srv/salt/_modules/ext_module_qvm.py
./srv/salt/_modules/ext_module_qvm.pyo
./var/cache/salt/minion/roots/hash/base/_modules/ext_module_qvm.py.hash.md5
./var/cache/salt/minion/files/base/_modules/ext_module_qvm.py
[root@dom0 cache]# diff -ur salt_before_clearcache/ salt/
Only in salt_before_clearcache/minion/extmods/modules: localemod.py
Only in salt_before_clearcache/minion/extmods/modules: topd.py
Only in salt_before_clearcache/minion/extmods/modules: topd.pyc
Only in salt_before_clearcache/minion/extmods/utils: fileinfo.py
Only in salt_before_clearcache/minion/extmods/utils: fileinfo.pyc
Only in salt_before_clearcache/minion/extmods/utils: matcher.py
Only in salt_before_clearcache/minion/extmods/utils: matcher.pyc
Only in salt_before_clearcache/minion/extmods/utils: pathinfo.py
Only in salt_before_clearcache/minion/extmods/utils: pathinfo.pyc
Only in salt_before_clearcache/minion/extmods/utils: pathutils.py
Only in salt_before_clearcache/minion/extmods/utils: pathutils.pyc
Only in salt_before_clearcache/minion/extmods/utils: toputils.py
Only in salt_before_clearcache/minion/extmods/utils: toputils.pyc
Binary files salt_before_clearcache/minion/file_lists/roots/base.p and salt/minion/file_lists/roots/base.p differ
Only in salt/minion/files/base: _grains
Only in salt/minion/files/base/_modules: debug.py
Only in salt/minion/files/base/_modules: ext_module_qvm.py
Only in salt/minion/files/base/_modules: module_utils.py
Only in salt/minion/files/base/_modules: qubes_dom0_update.py
Only in salt/minion/files/base/_modules: qubes.py
Only in salt_before_clearcache/minion/files/base/qvm: anon-whonix.sls
Only in salt_before_clearcache/minion/files/base/qvm: personal.sls
Only in salt_before_clearcache/minion/files/base/qvm: sys-firewall.sls
Only in salt_before_clearcache/minion/files/base/qvm: sys-net.sls
Only in salt_before_clearcache/minion/files/base/qvm: sys-whonix.sls
Only in salt_before_clearcache/minion/files/base/qvm: template.jinja
Only in salt_before_clearcache/minion/files/base/qvm: untrusted.sls
Only in salt_before_clearcache/minion/files/base/qvm: vault.sls
Only in salt_before_clearcache/minion/files/base/qvm: work.sls
Only in salt/minion/files/base: _states
Only in salt_before_clearcache/minion/files/base/topd: init.sls
Only in salt_before_clearcache/minion/files/base/_tops/base: qvm.anon-whonix.top
Only in salt_before_clearcache/minion/files/base/_tops/base: qvm.personal.top
Only in salt_before_clearcache/minion/files/base/_tops/base: qvm.sys-firewall.top
Only in salt_before_clearcache/minion/files/base/_tops/base: qvm.sys-net.top
Only in salt_before_clearcache/minion/files/base/_tops/base: qvm.sys-whonix.top
Only in salt_before_clearcache/minion/files/base/_tops/base: qvm.untrusted.top
Only in salt_before_clearcache/minion/files/base/_tops/base: qvm.vault.top
Only in salt_before_clearcache/minion/files/base/_tops/base: qvm.work.top
Only in salt/minion/files/base/_utils: __init__.py
Only in salt/minion/files/base/_utils: nulltype.py
Only in salt/minion/files/base/_utils: qubes_utils.py
Only in salt/minion/roots/hash/base: _grains
Only in salt/minion/roots/hash/base/_modules: debug.py.hash.md5
Only in salt/minion/roots/hash/base/_modules: ext_module_qvm.py.hash.md5
Only in salt/minion/roots/hash/base/_modules: module_utils.py.hash.md5
Only in salt/minion/roots/hash/base/_modules: qubes_dom0_update.py.hash.md5
Only in salt/minion/roots/hash/base/_modules: qubes.py.hash.md5
Only in salt_before_clearcache/minion/roots/hash/base/qvm: anon-whonix.sls.hash.md5
Only in salt_before_clearcache/minion/roots/hash/base/qvm: personal.sls.hash.md5
Only in salt_before_clearcache/minion/roots/hash/base/qvm: sys-firewall.sls.hash.md5
Only in salt_before_clearcache/minion/roots/hash/base/qvm: sys-net.sls.hash.md5
Only in salt_before_clearcache/minion/roots/hash/base/qvm: sys-whonix.sls.hash.md5
Only in salt_before_clearcache/minion/roots/hash/base/qvm: template.jinja.hash.md5
Only in salt_before_clearcache/minion/roots/hash/base/qvm: untrusted.sls.hash.md5
Only in salt_before_clearcache/minion/roots/hash/base/qvm: vault.sls.hash.md5
Only in salt_before_clearcache/minion/roots/hash/base/qvm: work.sls.hash.md5
Only in salt/minion/roots/hash/base: _states
Only in salt_before_clearcache/minion/roots/hash/base/topd: init.sls.hash.md5
Only in salt_before_clearcache/minion/roots/hash/base/_tops/base: qvm.anon-whonix.top.hash.md5
Only in salt/minion/roots/hash/base/_utils: __init__.py.hash.md5
Only in salt/minion/roots/hash/base/_utils: nulltype.py.hash.md5
Only in salt/minion/roots/hash/base/_utils: qubes_utils.py.hash.md5
diff -ur salt_before_clearcache/minion/roots/mtime_map salt/minion/roots/mtime_map
--- salt_before_clearcache/minion/roots/mtime_map 2016-08-05 00:49:24.083373052 +0300
+++ salt/minion/roots/mtime_map 2016-08-05 00:50:20.540369921 +0300
@@ -31,7 +31,6 @@
/srv/salt/_utils/__init__.pyo:1468891902.0
/srv/salt/_modules/module_utils.pyo:1468891902.0
/srv/salt/_utils/pathutils.py:1465294192.0
-/srv/salt/_tops/base/qvm.anon-whonix.top:1468891954.0
/srv/formulas/base/virtual-machines-formula/qvm/template-debian-7.sls:1468891954.0
/srv/salt/_states/debug.pyc:1468891902.0
/srv/salt/_grains/pci_devs.py:1468891902.0
[root@dom0 cache]#
So there were many changes caused by clearing the cache. Let me know if you need tar archives from these cache directories.
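A hedged example of how such archives could be produced, assuming both snapshot directories sit under /var/cache as the prompt above suggests; the destination path is arbitrary:
# Package both cache snapshots for sharing (run as root in dom0)
cd /var/cache
tar czf /root/salt-cache-snapshots.tar.gz salt salt_before_clearcache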
marmarek closed this in marmarek/qubes-installer-qubes-os@2a0a180 Aug 7, 2016
marmarek (Member) commented Aug 8, 2016
Automated announcement from builder-github
The package pykickstart-2.13-3.fc23 has been pushed to the r3.2 testing repository for dom0.
To test this update, please install it with the following command:
sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing
marmarek added the r3.2-dom0-cur-test label Aug 8, 2016
marmarek (Member) commented Aug 31, 2016
Automated announcement from builder-github
The package pykickstart-2.13-3.fc23 has been pushed to the r3.2 stable repository for dom0.
To install this update, please use the standard update command:
sudo qubes-dom0-update
Or update dom0 via Qubes Manager.
WetwareLabs commented Jul 30, 2016
Qubes OS version R3.2 rc2 (also rc1)
Expected behavior:
During installation, sys-net and sys-firewall should be created (along with other VMs)
Actual behavior:
An error message is shown:
After installation, none of the VMs are actually installed (only template VMs exist)
General notes:
These error messages can be found in /var/log/salt/minion
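A hedged example of pulling the relevant messages out of that log; the grep patterns are assumptions based on the error quoted earlier in the thread:
sudo grep -n "qvm.check" /var/log/salt/minion
# or, more broadly, any SLS rendering failures
sudo grep -ni "Rendering SLS" /var/log/salt/minion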
Related issues: