Salt can't manage debian based templates (qubesd errors) #3768
Comments
GAhlekzis referenced this issue on Mar 30, 2018: Provide qubesctl switch to limit concurrent jobs #3655 (Closed)
marmarek (Member) commented on Mar 30, 2018:
Looks like #3655 would help. There is probably not enough memory to start 4 TemplateVMs + 4 DispVMs at once.
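In practice that means limiting how many qubes Salt configures in parallel. A rough sketch of what that could look like once the switch from #3655 is in place (the --max-concurrency flag name is taken from the later comments in this thread; the value 2 is only illustrative):

```sh
# Run the highstate on all templates, but let Salt manage at most two qubes at
# a time so fewer TemplateVM + DispVM pairs are running concurrently.
# "2" is an example value -- tune it to the amount of RAM available.
sudo qubesctl --skip-dom0 --templates --max-concurrency 2 state.highstate
```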
GAhlekzis commented on Mar 30, 2018:
As I said, I tried that and it didn't work; I detailed that in the issue.
Also, I have 16 GB of RAM and have never had a problem with running lots of VMs.
marmarek closed this in marmarek/qubes-mgmt-salt@19975da on Mar 30, 2018
marmarek reopened this on Mar 30, 2018
marmarek (Member) commented on Mar 30, 2018:
--max-concurrency should be fixed in qubes-mgmt-salt-admin-tools 4.0.8.
Another thing you may want to try is to start and stop the qube pointed to by qubes-prefs default_dispvm. If it was never started, the DispVM initializes its config files at startup, which may add to the startup time.
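A minimal sketch of that suggestion, assuming the standard Qubes 4.0 command-line tools (qubes-prefs, qvm-start, qvm-shutdown):

```sh
# Look up the qube named by the default_dispvm preference, start it once so it
# initializes its configuration files, then shut it down again before running
# the Salt highstate.
DEFAULT_DISPVM=$(qubes-prefs default_dispvm)
qvm-start "$DEFAULT_DISPVM"
qvm-shutdown --wait "$DEFAULT_DISPVM"
```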
andrewdavidwong added the bug, C: mgmt, and C: Debian labels on Mar 31, 2018
andrewdavidwong modified the milestones: Release 4.0, Release 4.0 updates on Mar 31, 2018
GAhlekzis commented on Apr 1, 2018:
Thank you! It works with --max-concurrency 1 right now. Later I'll try 2.
It's still weird that my machine can't handle 4, but well... I don't really care that much.
Again, thanks!
GAhlekzis commented on Mar 30, 2018 (original report):
Qubes OS version:
Fresh Installation of Qubes R4.0
Affected component(s):
Debian-based templates
Steps to reproduce the behavior:
sudo qubesctl --skip-dom0 --templates state.highstate
Expected behavior:
My configuration gets applied to all my templates.
Actual behavior:
It takes forever and most of the time Debian-based VMs/qubes fail:
The time I copied this, 2 went through, but before that I only got whonix-gw to work. It was never all of them.
But fedora-26 went through every time.
Look at my journalctl.txt
It contains everything from first boot on, I think.
There are some interesting "unhandled exception" errors and KeyErrors in there, besides some "failed to start: qrexec couldn't connect for 60 seconds" messages and so on.
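For anyone trying to reproduce this, a rough way to pull those lines out of the current boot's journal (the grep pattern is only a guess based on the messages quoted above):

```sh
# Filter the journal for the kinds of failures mentioned above:
# qrexec timeouts, unhandled exceptions, and KeyErrors.
sudo journalctl -b | grep -iE 'qrexec|unhandled exception|KeyError'
```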
General notes:
I tried targeting debian-9 on its own (sudo qubesctl --target debian-9 state.highstate) and that worked.
I thought maybe something in my system (read: hardware) is too slow to handle many salt-mgmt VMs in parallel, so I searched and found issue #3655, which provided a new version of qubesctl that can set a maximum concurrency (parallel salt VMs). But it's not working (which I outlined in that issue).
So it seems the only thing I can do is write a script that gets a list of VMs and calls qubesctl one VM at a time, which I am not happy about.
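For illustration, such a workaround script could look roughly like this (the template names are placeholders based on the qubes mentioned in this report; substitute your own list, and note that combining --skip-dom0 with --target is assumed to behave like the single-target call above):

```sh
#!/bin/sh
# Apply the highstate to one template at a time instead of letting qubesctl
# start them all in parallel. Replace the list with your own templates.
for vm in debian-9 fedora-26 whonix-gw; do
    sudo qubesctl --skip-dom0 --target "$vm" state.highstate
done
```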
SaltStack should be able to handle this like it's nothing, shouldn't it?
I read about Qubes' SaltStack integration a long time ago and thought that it was perfect!
I wish to define all my template VMs and more in Salt states so I'll always have up-to-date templates and a list of the software that is installed. Salt would/should make this possible, but it doesn't seem to work with my Debian VMs, and I love me some Debian.
Am I doing something wrong? Is it inherently broken? Please help.
Related issues:
Apart from #3655 I couldn't find any.