
broken /etc/resolv.conf, broken DNS resolution #1067

Closed
adrelanos opened this Issue Jul 14, 2015 · 68 comments

Comments

@adrelanos
Member

adrelanos commented Jul 14, 2015

sys-net and sys-firewall had working settings. /etc/resolv.conf:

nameserver 10.137.1.1
nameserver 10.137.1.254

All other AppVMs I tested (based on Debian and Fedora templates) had broken DNS settings. /etc/resolv.conf:

nameserver 10.137.2.1
nameserver 10.137.2.254

After manually changing them to the same settings as sys-firewall, DNS resolution was functional again.

There must be some bug at work here, and it results in a huge usability issue (no more internet access).

Qubes Q3 RC1

@marmarek
Member

marmarek commented Jul 14, 2015

On Tue, Jul 14, 2015 at 02:26:47PM -0700, Patrick Schleizer wrote:

> sys-net and sys-firewall had working settings. /etc/resolv.conf:
>
> nameserver 10.137.1.1
> nameserver 10.137.1.254
>
> All other AppVMs I tested (based on Debian and Fedora templates) had broken DNS settings. /etc/resolv.conf:
>
> nameserver 10.137.2.1
> nameserver 10.137.2.254
>
> After manually changing them to the same settings as sys-firewall, DNS resolution was functional again.
>
> There must be some bug at work here, and it results in a huge usability issue (no more internet access).
>
> Qubes Q3 RC1

/etc/resolv.conf in AppVMs should point at sys-firewall, which should
then redirect to sys-net, based on its own /etc/resolv.conf. So the
/etc/resolv.conf you've shown seems to be valid. Maybe some problem
with DNS redirection in sys-firewall? Check qubes-firewall.service
state.

Rationale: VM (DNS) configuration should not depend on which netvm is
used by firewallvm.

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
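The redirection Marek describes is implemented as DNAT rules in the nat table of sys-firewall. A minimal sketch of what such rules look like, in iptables-save format — the PR-QBS chain name appears in the dumps later in this thread, but the addresses here are only illustrative, taken from the resolv.conf files quoted above rather than from an actual rule dump:

```text
*nat
# Queries an AppVM sends to sys-firewall's virtual DNS addresses are
# rewritten to the upstream resolvers from sys-net's /etc/resolv.conf.
-A PR-QBS -d 10.137.2.1/32 -p udp --dport 53 -j DNAT --to-destination 10.137.1.1
-A PR-QBS -d 10.137.2.254/32 -p udp --dport 53 -j DNAT --to-destination 10.137.1.254
COMMIT
```

If this chain is missing or empty, AppVM DNS queries are never rewritten, and resolution breaks even though /etc/resolv.conf in the AppVM looks correct.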

@adrelanos
Member

adrelanos commented Jul 14, 2015

> Maybe some problem with DNS redirection in sys-firewall?

I don't know. I did nothing in there. Overall, nothing fancy.

> Maybe some problem with DNS redirection in sys-firewall? Check qubes-firewall.service state.

Looks like it. See this:

user@personal:~$ sudo service qubes-firewall status
● qubes-firewall.service - Qubes firewall updater
   Loaded: loaded (/lib/systemd/system/qubes-firewall.service; enabled)
   Active: inactive (dead)
           start condition failed at Tue 2015-07-14 23:05:03 CEST; 1h 2min ago
           ConditionPathExists=/var/run/qubes-service/qubes-firewall was not met

Jul 14 23:05:03 personal systemd[1]: Started Qubes firewall updater.
@marmarek
Member

marmarek commented Jul 14, 2015

Check that in sys-firewall...

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

@adrelanos
Member

adrelanos commented Jul 14, 2015

[user@sys-firewall ~]$ sudo service qubes-firewall status
Redirecting to /bin/systemctl status  qubes-firewall.service
● qubes-firewall.service - Qubes firewall updater
   Loaded: loaded (/usr/lib/systemd/system/qubes-firewall.service; enabled)
   Active: active (running) since Tue 2015-07-14 22:47:18 CEST; 1h 30min ago
 Main PID: 518 (qubes-firewall)
   CGroup: /system.slice/qubes-firewall.service
           ├─ 518 /bin/sh /usr/sbin/qubes-firewall
           └─1772 /usr/bin/qubesdb-watch /qubes-iptables

Jul 14 22:47:18 sys-firewall systemd[1]: Started Qubes firewall updater.
Jul 14 22:48:42 sys-firewall qubes-firewall[518]: /qubes-iptables
Jul 14 22:51:14 sys-firewall qubes-firewall[518]: /qubes-iptables
Jul 14 22:51:37 sys-firewall qubes-firewall[518]: /qubes-iptables
Jul 14 23:04:58 sys-firewall qubes-firewall[518]: /qubes-iptables
Jul 14 23:07:35 sys-firewall qubes-firewall[518]: /qubes-iptables
Jul 14 23:09:42 sys-firewall qubes-firewall[518]: /qubes-iptables
[user@sys-firewall ~]$ 
@marmarek
Member

marmarek commented Jul 14, 2015

Looks fine...

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

@adrelanos
Member

adrelanos commented Jul 15, 2015

What else could have caused this? Didn't happen again after reboot.

@adrelanos
Member

adrelanos commented Jul 15, 2015

Hit this issue again after another reboot.

@marmarek
Member

marmarek commented Jul 15, 2015

Hmm, one more idea: check nat table, PR-QBS chain. Also worth checking
iptables.service status. And system logs for anything mentioning
iptables fail...

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

@marmarek
Member

marmarek commented Jul 15, 2015

On Wed, Jul 15, 2015 at 06:55:22PM +0200, Marek Marczykowski-Górecki wrote:

> Hmm, one more idea: check nat table, PR-QBS chain. Also worth checking
> iptables.service status. And system logs for anything mentioning
> iptables fail...

All of this in sys-firewall.

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

@adrelanos
Member

adrelanos commented Jul 16, 2015

No PR-QBS chain, it looks like.

Yes, there are some failing messages in the logs.

Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Setting chains to policy ACCEPT: nat Another app is currently holding the xtables lock. Perhaps you want to use the -w option?

Could it be a race condition? This is (apart from driver issues) a very fast system; perhaps the iptables script runs before whatever it depends on is ready. A missing systemd Requires= dependency, perhaps? @nrgaway has some experience with the need for the iptables --wait option.

[user@sys-net ~]$ sudo service iptables status
Redirecting to /bin/systemctl status  iptables.service
● iptables.service - IPv4 firewall with iptables
   Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)
   Active: active (exited) since Thu 2015-07-16 13:52:56 CEST; 3h 13min ago
  Process: 395 ExecStart=/usr/libexec/iptables/iptables.init start (code=exited, status=0/SUCCESS)
 Main PID: 395 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/iptables.service


Jul 16 13:52:55 sys-net systemd[1]: Starting IPv4 firewall with iptables...
Jul 16 13:52:56 sys-net iptables.init[395]: iptables: Applying firewall rule...]
Jul 16 13:52:56 sys-net systemd[1]: Started IPv4 firewall with iptables.
Hint: Some lines were ellipsized, use -l to show in full.
[user@sys-net ~]$ sudo iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:us-cli
DROP       udp  --  anywhere             anywhere             udp dpt:bootpc
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DROP       all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
DROP       all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[user@sys-net ~]$ 
[user@sys-net ~]$ sudo journalctl | grep iptables
Jul 12 16:33:50 fedora-21 systemd[1]: Starting IPv4 firewall with iptables...
Jul 12 16:33:50 fedora-21 iptables.init[370]: iptables: Applying firewall rules: [  OK  ]
Jul 12 16:33:50 fedora-21 systemd[1]: Started IPv4 firewall with iptables.
Jul 12 17:29:32 fedora-21 systemd[1]: Starting IPv4 firewall with iptables...
Jul 12 17:29:32 fedora-21 iptables.init[360]: iptables: Applying firewall rules: [  OK  ]
Jul 12 17:29:32 fedora-21 systemd[1]: Started IPv4 firewall with iptables.
Jul 12 17:30:18 fedora-21 systemd[1]: Stopping IPv4 firewall with iptables...
Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Setting chains to policy ACCEPT: nat Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Jul 12 17:30:18 fedora-21 iptables.init[6184]: filter [FAILED]
Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Flushing firewall rules: [  OK  ]
Jul 12 17:30:19 fedora-21 iptables.init[6184]: iptables: Unloading modules: [  OK  ]
Jul 12 17:30:19 fedora-21 systemd[1]: Stopped IPv4 firewall with iptables.
Jul 12 18:08:57 fedora-21 systemd[1]: Starting IPv4 firewall with iptables...
Jul 12 18:08:57 fedora-21 iptables.init[389]: iptables: Applying firewall rules: [  OK  ]
Jul 12 18:08:57 fedora-21 systemd[1]: Started IPv4 firewall with iptables.
Jul 12 18:47:23 fedora-21 iptables.init[5596]: iptables: Setting chains to policy ACCEPT: nat filter [  OK  ]
Jul 12 18:47:23 fedora-21 iptables.init[5596]: iptables: Flushing firewall rules: [  OK  ]
Jul 12 18:47:23 fedora-21 iptables.init[5596]: iptables: Unloading modules: [  OK  ]
Jul 12 18:47:23 fedora-21 systemd[1]: Stopped IPv4 firewall with iptables.
Jul 12 18:54:21 fedora-21 systemd[1]: Starting IPv4 firewall with iptables...
Jul 12 18:54:21 fedora-21 iptables.init[384]: iptables: Applying firewall rules: [  OK  ]
Jul 12 18:54:21 fedora-21 systemd[1]: Started IPv4 firewall with iptables.
Jul 12 18:58:16 fedora-21 systemd[1]: Stopping IPv4 firewall with iptables...
Jul 12 18:58:16 fedora-21 iptables.init[2752]: iptables: Setting chains to policy ACCEPT: nat filter [  OK  ]
Jul 12 18:58:16 fedora-21 iptables.init[2752]: iptables: Flushing firewall rules: [  OK  ]
Jul 12 18:58:16 fedora-21 iptables.init[2752]: iptables: Unloading modules: [  OK  ]
Jul 12 18:58:16 fedora-21 systemd[1]: Stopped IPv4 firewall with iptables.
Jul 12 19:19:57 fedora-21 systemd[1]: Starting IPv4 firewall with iptables...
Jul 12 19:19:57 fedora-21 iptables.init[387]: iptables: Applying firewall rules: [  OK  ]
Jul 12 19:19:57 fedora-21 systemd[1]: Started IPv4 firewall with iptables.
Jul 12 19:24:14 fedora-21 systemd[1]: Stopping IPv4 firewall with iptables...
Jul 12 19:24:14 fedora-21 iptables.init[2804]: iptables: Setting chains to policy ACCEPT: nat filter [  OK  ]
Jul 12 19:24:14 fedora-21 iptables.init[2804]: iptables: Flushing firewall rules: [  OK  ]
Jul 12 19:24:14 fedora-21 iptables.init[2804]: iptables: Unloading modules: [  OK  ]
Jul 12 19:24:14 fedora-21 systemd[1]: Stopped IPv4 firewall with iptables.
Jul 16 13:52:55 sys-net systemd[1]: Starting IPv4 firewall with iptables...
Jul 16 13:52:56 sys-net iptables.init[395]: iptables: Applying firewall rules: [  OK  ]
Jul 16 13:52:56 sys-net systemd[1]: Started IPv4 firewall with iptables.
Jul 16 17:06:44 sys-net sudo[2979]: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/sbin/service iptables status
Jul 16 17:06:56 sys-net sudo[2994]: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/sbin/iptables --list
[user@sys-net ~]$

Can you make head or tail of this bug, or do you need more?
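If a missing ordering dependency really is the culprit, the generic systemd-side mitigation is a drop-in that orders Fedora's iptables.service relative to whatever else manipulates the tables at boot. A purely hypothetical sketch — the unit name, the path, and whether this ordering is appropriate for Qubes are all assumptions, not a tested fix:

```text
# Hypothetical drop-in: /etc/systemd/system/iptables.service.d/30-order.conf
[Unit]
# Serialize with the (assumed) Qubes network setup unit so the two do not
# race for the xtables lock during boot or shutdown.
After=qubes-network.service
```

After adding a drop-in like this, `systemctl daemon-reload` is needed for systemd to pick it up.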

@marmarek
Member

marmarek commented Jul 16, 2015

On Thu, Jul 16, 2015 at 08:34:31AM -0700, Patrick Schleizer wrote:

> No PR-QBS chain, it looks like.
>
> Yes, there are some failing messages in the logs.
>
> Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Setting chains to policy ACCEPT: nat Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
>
> Could it be a race condition? This is (apart from driver issues) a very fast system; perhaps the iptables script runs before whatever it depends on is ready. A missing systemd Requires= dependency, perhaps? @nrgaway has some experience with the need for the iptables --wait option.

Yes, this may be a problem. Unfortunately iptables --wait doesn't solve all the cases.
Also apparently this time it was in a Fedora-provided service...

> [user@sys-net ~]$ sudo iptables --list

Check also -t nat.

> Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Setting chains to policy ACCEPT: nat Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
> Jul 12 17:30:18 fedora-21 iptables.init[6184]: filter [FAILED]
> Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Flushing firewall rules: [ OK ]
> Jul 12 17:30:19 fedora-21 iptables.init[6184]: iptables: Unloading modules: [ OK ]

This looks like the reason. iptables.init is provided by the Fedora
iptables-services package...

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
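The contention behind the "Another app is currently holding the xtables lock" message can be reproduced generically, without root or iptables, using flock(1) on a throwaway lock file. This is only an illustration of the failure mode and of what the `-w` (wait) option changes — it stands in for the real /run/xtables.lock semantics and is not Qubes code:

```shell
# Emulate two clients contending for a single lock file.
lock=$(mktemp)

exec 8>"$lock"
flock 8                                   # this shell now holds the lock

# Plain iptables behaviour: try once, fail immediately if the lock is busy.
( flock -n 9 || echo "lock busy, giving up" ) 9>"$lock"

exec 8>&-                                 # release the lock

# "iptables -w" behaviour: wait (here up to 5s) for the lock instead of failing.
( flock -w 5 9 && echo "lock acquired" ) 9>"$lock"

rm -f "$lock"
```

Run sequentially, this prints "lock busy, giving up" followed by "lock acquired": the non-blocking attempt fails while the lock is held, and the waiting attempt succeeds once it is released — which is why `-w` helps, but only when the lock holder eventually finishes.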

@adrelanos
Member

adrelanos commented Jul 17, 2015

[user@sys-net ~]$ sudo iptables --list -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
PR-QBS     all  --  anywhere             anywhere            
PR-QBS-SERVICES  all  --  anywhere             anywhere            

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
MASQUERADE  all  --  anywhere             anywhere            

Chain PR-QBS (1 references)
target     prot opt source               destination         
DNAT       udp  --  anywhere             sys-net              udp dpt:domain to:192.168.0.1

Chain PR-QBS-SERVICES (1 references)
target     prot opt source               destination         
REDIRECT   tcp  --  anywhere             10.137.255.254       tcp dpt:us-cli
[user@sys-net ~]$

Does this look good? I'm still having this issue, though.

@marmarek
Member

marmarek commented Jul 18, 2015

Looks good, including proper DNS redirection. What about the same
iptables dump in sys-firewall?

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

@adrelanos
Member

adrelanos commented Jul 18, 2015

I'm not having this issue at this very moment. (Reinstalled; new SSD.) If I experience it again, I will provide that info. However, I did experience this issue earlier and found something interesting. Check out the following; maybe that's it?

[user@sys-firewall ~]$ sudo journalctl | grep -i fail
Jul 18 14:42:05 localhost kernel: rtc_cmos: probe of rtc_cmos failed with error -38
Jul 18 14:47:59 fedora-21 systemd[1]: qubes-update-check.service: main process exited, code=exited, status=1/FAILURE
Jul 18 14:47:59 fedora-21 systemd[1]: Failed to start Qubes check for VM updates and notify dom0.
Jul 18 14:47:59 fedora-21 systemd[1]: Unit qubes-update-check.service entered failed state.
Jul 18 14:47:59 fedora-21 systemd[1]: qubes-update-check.service failed.
Jul 18 14:54:05 fedora-21 systemd[973]: Failed at step CGROUP spawning /usr/lib/systemd/systemd: No such file or directory
Jul 18 15:00:33 fedora-21 systemd[1]: qubes-gui-agent.service: main process exited, code=exited, status=1/FAILURE
Jul 18 15:00:33 fedora-21 systemd[1]: Unit qubes-gui-agent.service entered failed state.
Jul 18 15:00:33 fedora-21 systemd[1]: qubes-gui-agent.service failed.
Jul 18 15:00:33 fedora-21 su[1082]: pam_systemd(su:session): Failed to create session: Connection reset by peer
Jul 18 15:00:33 fedora-21 ip6tables.init[1055]: [FAILED]
Jul 18 15:00:33 fedora-21 iptables.init[1049]: filter [FAILED]
Jul 18 15:00:33 fedora-21 su[1126]: pam_systemd(su:session): Failed to create session: Connection reset by peer
Jul 18 15:00:34 fedora-21 systemd[1]: Unit qubes-mount-home.service entered failed state.
Jul 18 15:00:34 fedora-21 systemd[1]: qubes-mount-home.service failed.
Jul 18 15:00:34 fedora-21 systemd[1]: Failed unmounting /usr/lib/modules.
Jul 18 15:00:34 fedora-21 systemd[1]: Failed unmounting /proc/xen.
Jul 18 20:28:26 localhost kernel: rtc_cmos: probe of rtc_cmos failed with error -38
Jul 18 20:28:30 sys-firewall network-proxy-setup.sh[487]: iptables-restore: line 4 failed
Jul 18 22:54:05 sys-firewall systemd[1938]: Failed at step CGROUP spawning /usr/lib/systemd/systemd: No such file or directory
Jul 18 23:04:58 sys-firewall logger[2037]: /etc/xen/scripts/vif-route-qubes: ifdown vif4.0 failed
Jul 18 23:04:58 sys-firewall logger[2043]: /etc/xen/scripts/vif-route-qubes: ip route del 10.137.2.8 dev vif4.0 metric 32748 failed
Jul 18 23:04:58 sys-firewall logger[2054]: /etc/xen/scripts/vif-route-qubes: ip addr del 10.137.2.1/32 dev vif4.0 failed
Jul 18 23:06:14 sys-firewall logger[2252]: /etc/xen/scripts/vif-route-qubes: ifdown vif5.0 failed
Jul 18 23:06:14 sys-firewall logger[2258]: /etc/xen/scripts/vif-route-qubes: ip route del 10.137.2.11 dev vif5.0 metric 32747 failed
Jul 18 23:06:14 sys-firewall logger[2269]: /etc/xen/scripts/vif-route-qubes: ip addr del 10.137.2.1/32 dev vif5.0 failed
Jul 19 00:59:43 sys-firewall logger[3193]: /etc/xen/scripts/vif-route-qubes: ifdown vif7.0 failed
Jul 19 00:59:43 sys-firewall logger[3199]: /etc/xen/scripts/vif-route-qubes: ip route del 10.137.2.11 dev vif7.0 metric 32745 failed
Jul 19 00:59:43 sys-firewall logger[3210]: /etc/xen/scripts/vif-route-qubes: ip addr del 10.137.2.1/32 dev vif7.0 failed
[user@sys-firewall ~]$ 
marmarek Jul 19, 2015

Member

On Sat, Jul 18, 2015 at 04:05:00PM -0700, Patrick Schleizer wrote:

Not having this issue at this very moment. (Reinstalled. [new ssd]) But if I experience this again, I will provide this info. However, I experienced this issue earlier. And found something interesting. Check out the following. Maybe that's it?

[user@sys-firewall ~]$ sudo journalctl | grep -i fail
Jul 18 14:42:05 localhost kernel: rtc_cmos: probe of rtc_cmos failed with error -38
Jul 18 14:47:59 fedora-21 systemd[1]: qubes-update-check.service: main process exited, code=exited, status=1/FAILURE
Jul 18 14:47:59 fedora-21 systemd[1]: Failed to start Qubes check for VM updates and notify dom0.
Jul 18 14:47:59 fedora-21 systemd[1]: Unit qubes-update-check.service entered failed state.
Jul 18 14:47:59 fedora-21 systemd[1]: qubes-update-check.service failed.

Probably you didn't have network access at that time.

Jul 18 14:54:05 fedora-21 systemd[973]: Failed at step CGROUP spawning /usr/lib/systemd/systemd: No such file or directory
Jul 18 15:00:33 fedora-21 systemd[1]: qubes-gui-agent.service: main process exited, code=exited, status=1/FAILURE
Jul 18 15:00:33 fedora-21 systemd[1]: Unit qubes-gui-agent.service entered failed state.
Jul 18 15:00:33 fedora-21 systemd[1]: qubes-gui-agent.service failed.
Jul 18 15:00:33 fedora-21 su[1082]: pam_systemd(su:session): Failed to create session: Connection reset by peer
Jul 18 15:00:33 fedora-21 ip6tables.init[1055]: [FAILED]
Jul 18 15:00:33 fedora-21 iptables.init[1049]: filter [FAILED]

I guess there are also lines like this:
lip 09 01:14:12 fedora-21 iptables.init[5891]: iptables: Setting chains to policy ACCEPT: nat Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
lip 09 01:14:12 fedora-21 ip6tables.init[5892]: ip6tables: Setting chains to policy ACCEPT: filter Another app is currently holding the xtables lock. Perhaps you want to use the -w option?

Apparently both the iptables and ip6tables services are started at the
same time and both fail because of that. Looks like a candidate for
bugzilla.redhat.com. Or we could do it the Fedora way, using
firewalld...
There are similar bugs already reported and most of them suggest using
firewalld to solve the problem. For example this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1164243

(...)

Jul 18 20:28:30 sys-firewall network-proxy-setup.sh[487]: iptables-restore: line 4 failed

Could be because iptables.init failed (so there is no PR-QBS chain), or
the same problem as above - the xtables lock. Hard to say without the
exact message (apparently not logged...)

Jul 18 23:04:58 sys-firewall logger[2037]: /etc/xen/scripts/vif-route-qubes: ifdown vif4.0 failed
Jul 18 23:04:58 sys-firewall logger[2043]: /etc/xen/scripts/vif-route-qubes: ip route del 10.137.2.8 dev vif4.0 metric 32748 failed
Jul 18 23:04:58 sys-firewall logger[2054]: /etc/xen/scripts/vif-route-qubes: ip addr del 10.137.2.1/32 dev vif4.0 failed

Interesting - why do we have this code at all? When the interface is
gone, all of those things vanish automatically.

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
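
The lock contention described above is generic: any two iptables invocations racing for the xtables lock will produce exactly these failures. One generic way to serialize the callers is flock(1); the sketch below illustrates the idea only - the lock-file path and the echoed payload are made up, not what Qubes actually ships:

```shell
#!/bin/sh
# Sketch: serialize concurrent iptables-style invocations with flock(1),
# so two init scripts started at the same time queue up instead of one
# failing with "Another app is currently holding the xtables lock".
# The lock-file path is illustrative, not Qubes' actual implementation.
LOCKFILE=${LOCKFILE:-${TMPDIR:-/tmp}/xtables-demo.lock}

run_locked() {
    # -w 10: wait up to 10 seconds for the lock instead of failing fast,
    # the same idea as the `iptables --wait` option discussed here.
    flock -w 10 "$LOCKFILE" "$@"
}

result=$(run_locked echo "rules loaded")
```

The same wrapper would work around any pair of services (iptables.init, ip6tables.init, a proxy-setup script) that otherwise race at boot.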


adrelanos Jul 21, 2015

Member

Does this bug report contain enough information to be fixable or do you need further information?


marmarek Jul 22, 2015

Member

I think so. But I haven't figured out how to fix it effectively - we
have had at least three attempts...
The problem apparently is that iptables has a lock mechanism which
prevents it from being called multiple times simultaneously. There is an
option, iptables --wait, which in theory exists exactly to fix this
problem, but according to some reports it doesn't work either. Also,
the problematic calls are not only in our scripts, but also in the
iptables-services package (/usr/libexec/iptables/iptables.init called
from iptables.service).
The Fedora-way fix would be migrating to firewalld for handling iptables.
But I don't like this approach, because it would be too
Fedora-specific. I guess we'll need to fix this very problem in other
distros anyway.

Any ideas? Maybe we should ditch the iptables-services package and write
such (simple) scripts ourselves? It would be just an iptables-restore call...

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
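
The "write our own (simple) script" idea boils down to something like the sketch below. Names and paths are hypothetical - this is not the actual qubes-iptables service that was committed later in this thread, just the shape of it:

```shell
#!/bin/sh
# Hypothetical minimal replacement for the distribution iptables service:
# one atomic iptables-restore call, no per-rule iptables invocations,
# hence no repeated fights over the xtables lock. Paths are illustrative.

load_rules() {
    rules=$1
    [ -f "$rules" ] || return 0     # no saved rules yet: nothing to do
    # IPTABLES_RESTORE is overridable so the sketch can be dry-run/tested
    ${IPTABLES_RESTORE:-iptables-restore} < "$rules"
}
```

Because iptables-restore swaps in whole tables in a single call, the many small `iptables -A ...` invocations (and their lock contention) disappear.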


adrelanos Jul 22, 2015

Member

I am just sharing what I know. Brainstorming. Don't let my verbosity confuse you or slow you down.

In Whonix since Debian jessie we have been using /etc/network/if-pre-up.d/30_whonix_firewall. That works well. It runs the custom firewall script just before networking comes up.

[Security bonus: If the script succeeds (exit 0), network comes up. Otherwise, if the script fails (exit non-zero), then network does not come up. In context of Whonix, this is very useful for leak prevention. Not sure this would be useful in context of Qubes. Maybe not. If that was the case, just overrule using || true or || some_logging to deactivate this.]

But perhaps that's ifupdown. Or specific to Debian?

I'll add it to the reasons on why Fedora should be removed (#1054 (comment)). :)


Custom scripts and systemd unit files could also work. With the right systemd Requires= it should be possible to hook this into the right places (before network comes up?). Or even more elegantly, using a systemd drop-in file ExecStartPre= extending network manager's systemd unit file? Or no! systemd provides something more generic:

network-pre.target

    This passive target unit may be pulled in by services that want to run before any network is set up, for example for the purpose of setting up a firewall. All network management software orders itself after this target, but does not pull it in.

Dunno if firewalld would be a good solution and/or distribution specific. I haven't used firewalld yet. Also available in Debian jessie:
https://packages.debian.org/jessie/firewalld

We are not using firewalld in Whonix, because there is currently no problem that I know that this would solve. The dynamic zones stuff sounds interesting, but I don't know if the extra complexity is worth it as long as this feature isn't needed.


I prefer writing iptables rules rather than having an extra layer on top that generates iptables rules. At some point I guess, we need to port from iptables to its successor nftables.
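
The if-pre-up.d mechanism described above is roughly the following. The hook and firewall-script names are illustrative, modeled on Whonix's 30_whonix_firewall, not copied from it:

```shell
#!/bin/sh
# Sketch of an /etc/network/if-pre-up.d/ hook: ifupdown runs every
# executable in that directory before bringing an interface up, and a
# non-zero exit aborts ifup, so the interface never comes up without
# the firewall loaded (the "fail closed" bonus described above).
# The firewall-script path passed in below is illustrative.

firewall_pre_up() {
    fw=$1
    [ -x "$fw" ] || return 0   # no firewall script installed: allow ifup
    "$fw"                      # non-zero exit propagates and blocks ifup
}
```

To get the opposite (never block the network even if the firewall script fails), the call would be wrapped as `"$fw" || true`, as suggested above.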


marmarek Jul 23, 2015

Member

I also don't like the firewalld idea - it looks like overkill. Because
we've failed with the default iptables services in Fedora, I think it's
time to give up on them and write our own. A simple call to
iptables-restore, that's all.

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


@marmarek marmarek added this to the Release 3.0 milestone Jul 23, 2015

@marmarek marmarek referenced this issue in QubesOS/qubes-core-agent-linux Aug 2, 2015

Merged

removed iptables-persistent from Depends to improve usablity #2

marmarek added a commit to marmarek/old-qubes-core-agent-linux that referenced this issue Aug 4, 2015

network: use own iptables service instead of repurposing existing one
There were multiple problems with reusing existing one:
 - need to sync with upstream changes (configuration path etc)
 - conflicts resolution on updates
 - lack of iptables --wait, which causes firewall fail to load sometimes

QubesOS/qubes-issues#1067
marmarek Aug 4, 2015

Member

@adrelanos Take a look at commit pointed above, what do you think? Especially in terms of handling debian packaging, upgrade path etc.


adrelanos Aug 6, 2015

Member

I can't provide a comprehensive review at this point. I can surely test the package as soon as available from some repository.

  • You didn't remove iptables-persistent from Depends:. (But a simple commit on top would do just fine.)
  • Why a new sysinit script?
  • How will iptables-persistent be disabled for existing systems (upgrade path)?
adrelanos Aug 6, 2015

Member

Do you have any feedback on marmarek/qubes-core-agent-linux@9f1de2b also? @nrgaway


marmarek Aug 6, 2015

Member

On Wed, Aug 05, 2015 at 05:08:36PM -0700, Patrick Schleizer wrote:

I can't provide a comprehensive review at this point. I can surely test the package as soon as available from some repository.

  • You didn't remove iptables-persistent from Depends:. (But a simple commit on top would do just fine.)

Ah, sure. But after merging debian/control reflow. Anyways there is
your PR for that, as a reminder :)

  • Why a new sysinit script?

The whole point of this commit is to use our own script, not repurpose
an existing one (from iptables-persistent in the Debian case). It's also
available as a sysvinit script for non-systemd cases (I don't know of
any currently used, but we're trying not to abandon sysvinit entirely
yet). It shouldn't be installed in /etc/init.d when systemd is in use
(Debian, the Fedora -systemd subpackage, Arch Linux).

  • How will iptables-persistent be disabled for existing systems (upgrade path)?

That's a good question. Systemd drop-in?

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
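
A systemd drop-in for that upgrade path could look like the sketch below: override ExecStart= to a no-op so the packaged netfilter-persistent unit stops interfering, without touching the unit file the package owns. The directory and file name are illustrative (the commit referenced later in this thread simply disables netfilter-persistent.service):

```shell
#!/bin/sh
# Sketch: neutralize netfilter-persistent.service via a drop-in, which
# survives package upgrades (unlike editing the unit file in place).
# Directory and drop-in name are illustrative. A real package would
# also run `systemctl daemon-reload` afterwards.
DROPIN_DIR=${DROPIN_DIR:-/etc/systemd/system/netfilter-persistent.service.d}

install_dropin() {
    mkdir -p "$DROPIN_DIR"
    cat > "$DROPIN_DIR/30_qubes.conf" <<'EOF'
[Service]
# An empty ExecStart= clears the command list inherited from the
# packaged unit; /bin/true then makes the service a no-op.
ExecStart=
ExecStart=/bin/true
EOF
}
```

This fits the "reboot after upgrade" assumption discussed below: the drop-in takes effect on the next boot of the template-based VMs.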


adrelanos Aug 6, 2015

Member

Marek Marczykowski-Górecki:

  • How will iptables-persistent be disabled for existing systems (upgrade path)?
    That's a good question. Systemd drop-in?

Yes.

(Under the assumption that users are expected to reboot after the
upgrade so the changes take effect.)


marmarek Aug 6, 2015

Member

(Under the assumption that users are expected to reboot after the upgrade so the changes take effect.)

Yes. This assumption always holds for Qubes templates: shutting down the template and then restarting the VMs is the only way the VMs can pick up the changes.
Probably the only place where we should care about services/configuration without a reboot is the template itself - the services used there. But that is a really minimal set of things.


marmarek Aug 8, 2015

Member

Am I right that this issue happens only sometimes, so it isn't really critical?
I want to postpone merging the update until after my leave, because this change is pretty intrusive, especially in terms of the upgrade path, and we're not able to test it properly in such a short time. And I will not be able to upload a fixed package for a few weeks...


adrelanos Aug 8, 2015

Member

marmarek Aug 8, 2015

Member

The question is how often this happens. If it's 1/100 or something like
that, I'd say "the workaround is to restart the system; the proper fix
will come in a few weeks".

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


marmarek Aug 8, 2015

Member

The alternative is to upload a fix which may work, but may also totally
screw up some systems (where the restart workaround would not help). So
the question is: is that better than the current state?

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


adrelanos Aug 9, 2015

Member

I don't know that. Maybe that's why I never started a host operating
system (Xen/Linux) distribution. At some point it has to go through the
testers repository; then see the reports. I have no idea how you could
get any more safety than that.


marmarek Aug 9, 2015

Member

Yes, of course through the testing repository first. The problem is that I will not be able to upload a fixed package promptly, which may be an issue, even for some testers.


adrelanos Aug 9, 2015

Member

marmarek added a commit to marmarek/old-qubes-core-agent-linux that referenced this issue Aug 9, 2015

network: use own iptables service instead of repurposing existing one
There were multiple problems with reusing existing one:
 - need to sync with upstream changes (configuration path etc)
 - conflicts resolution on updates
 - lack of iptables --wait, which causes firewall fail to load sometimes

QubesOS/qubes-issues#1067

marmarek added a commit to marmarek/old-qubes-core-agent-linux that referenced this issue Aug 9, 2015

debian: disable netfilter-persistent.service
This is now handled by qubes-iptables.service

QubesOS/qubes-issues#1067

pjacferreira Mar 16, 2016

Not to side-track the current thinking, but:

  1. I moved from a slower SSD to a faster SSD (and the problem seems to be more pronounced - can't verify any longer :)
  2. I noticed today that, if I'm quick about opening an AppVM, for about a minute or two after a reboot I have network connectivity - then it dies.

By the way, like @adrelanos said, this is a really annoying bug...



pjacferreira Mar 16, 2016

At the risk of being annoying, but continuing on this problem: I had to use qubes-set-updates to disable update checking, because after a reboot both sys-net and sys-firewall would spike to 100% CPU.
I tracked it down to the dnf update check.

I'm not sure if this is a dnf problem, or just more of this DNS problem.



pjacferreira Apr 7, 2016

I just wanted to document my experience in the hope that it will help somebody else.

Currently using Qubes 3.1:
I initially stated that the problem occurred only when using WiFi.
Since then, I have had the problem occur on a wired connection (DHCP) as well, even though it seems to occur only once in a blue moon. So I think the problem is generic.

Currently my workaround has been to restart the sys-firewall VM.

This is my opinion on the problem, and it's based on this:

  • I'm having this complete lack of network connectivity when doing a cold boot of Qubes.
  • Normally I cold boot, and
  • immediately log in, as soon as the password prompt comes up

I sometimes, especially with WiFi, only receive the network-attach notification a couple of seconds (10 to 20) after the desktop is displayed. It seems that, under these conditions, in which the time to acquire an IP is long, sys-firewall fails to establish its network connection.
NOTE:

  • If I do a ping from a Terminal in sys-net, it works
  • If I do the same ping, from sys-firewall, IT DOESN'T WORK

This is the reason I just simply restart sys-firewall, to re-establish network connectivity.

My reasoning on the subject:

  1. I assume that sys-firewall, when it tries to establish the network connection to sys-net, finds that sys-net still hasn't set up the network (WiFi+DHCP, or just slow DHCP) and therefore fails to correctly set up the DNS part of the connection (and maybe even the IP forwarding part, haven't tested).
  2. With a fixed IP set up in sys-net, I don't think this problem occurs (IDEA: also not tested).

I think that this seems to be related to this problem (maybe even the same problem).
When I launch 2 VMs in quick succession, I have had, on MANY occasions, ONE of them fail to establish the network connection correctly (i.e. no PING to sys-firewall, and therefore no DNS).

Work-around, restart the offending VM.

NOTE: even though the problem with the sys-firewall -> sys-net connection is hard to diagnose (it doesn't happen all the time), the other part, launching 2 VMs simultaneously and having the network fail, seems to occur consistently. Currently, I pause between launches just to make sure I don't hit this problem.

I also think this is more of an SSD issue, since VMs launch quicker than on slow HDs.
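
(Editor's note.) The race described above, sys-firewall configuring its uplink before sys-net has finished DHCP, suggests polling the upstream gateway instead of doing one-shot setup. A minimal sketch only, in POSIX shell; the gateway address 10.137.1.1 is just the example address from this thread, and the 30-attempt/1-second budget is an arbitrary assumption, not anything Qubes ships:

```shell
#!/bin/sh
# Sketch: poll the upstream netvm's gateway until it answers, rather than
# configuring DNS/forwarding once and failing silently when DHCP is slow.
wait_for_gateway() {
    gateway="$1"
    attempts="${2:-30}"
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if ping -c 1 -W 1 "$gateway" >/dev/null 2>&1; then
            return 0    # upstream reachable; safe to finish network setup
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1            # give up; caller can log and retry later
}

# Example: wait_for_gateway 10.137.1.1 && echo "sys-net is up"
```

A restart of sys-firewall effectively does the same thing by hand: it re-runs the setup after sys-net is already up.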


@adrelanos

adrelanos Jul 19, 2016

Member

@adrelanos:

Instead of inventing a Qubes-native solution, another option to implement this might be removing iptables-persistent and keeping netfilter-persistent. Looks quite good at first. It provides /usr/share/netfilter-persistent/plugins.d, which Qubes could reuse.

netfilter-persistent may not be ready for prime time.
https://phabricator.whonix.org/T487#9444
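
(Editor's note.) A netfilter-persistent plugin in plugins.d is just a script invoked with a single verb. A hypothetical sketch of what a Qubes-side plugin could look like; the file name, the rules path, and the choice to keep rules on stop are all assumptions for illustration, not an existing Qubes component:

```shell
#!/bin/sh
# Hypothetical /usr/share/netfilter-persistent/plugins.d/25-qubes sketch.
# netfilter-persistent calls each plugin with one verb: start|stop|flush|save.
# The RULES path is invented for illustration.

plugin_dispatch() {
    rules="${RULES:-/var/lib/qubes/iptables.rules}"
    case "$1" in
        start)
            # restore previously saved rules at boot, if any were saved
            [ -f "$rules" ] && iptables-restore < "$rules"
            ;;
        save)
            iptables-save > "$rules"
            ;;
        flush)
            iptables -F
            ;;
        stop)
            : # keep rules in place; flushing here would drop the VM firewall
            ;;
        *)
            echo "Usage: $0 {start|stop|flush|save}" >&2
            return 1
            ;;
    esac
}

# A real plugin would end with: plugin_dispatch "$@"
```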


@Rudd-O

Rudd-O Oct 17, 2016

I think my pull request above this comment may fix the original bug.


@marmarek

marmarek Oct 18, 2016

Member

Automated announcement from builder-github

The package python2-dnf-plugins-qubes-hooks-3.2.12-1.fc23 has been pushed to the r3.2 testing repository for the Fedora fc23 template.
To test this update, please install it with the following command:

sudo yum update --enablerepo=qubes-vm-r3.2-current-testing

Changes included in this update


@marmarek

marmarek Oct 18, 2016

Member

Automated announcement from builder-github

The package python2-dnf-plugins-qubes-hooks-3.2.12-1.fc24 has been pushed to the r3.2 testing repository for the Fedora fc24 template.
To test this update, please install it with the following command:

sudo yum update --enablerepo=qubes-vm-r3.2-current-testing

Changes included in this update


@marmarek

marmarek Nov 17, 2016

Member

Automated announcement from builder-github

The package python2-dnf-plugins-qubes-hooks-3.2.12-1.fc23 has been pushed to the r3.2 stable repository for the Fedora fc23 template.
To install this update, please use the standard update command:

sudo yum update

Changes included in this update


@marmarek

marmarek Nov 17, 2016

Member

Automated announcement from builder-github

The package python2-dnf-plugins-qubes-hooks-3.2.12-1.fc24 has been pushed to the r3.2 stable repository for the Fedora fc24 template.
To install this update, please use the standard update command:

sudo yum update

Changes included in this update


@marmarek

marmarek Nov 18, 2016

Member

Automated announcement from builder-github

The package qubes-core-agent_3.2.13-1+deb8u1 has been pushed to the r3.2 testing repository for the Debian jessie template.
To test this update, first enable the testing repository in /etc/apt/sources.list.d/qubes-*.list by uncommenting the line containing jessie-testing, then use the standard update command:

sudo apt-get update && sudo apt-get dist-upgrade

Changes included in this update


@marmarek

marmarek Nov 18, 2016

Member

Automated announcement from builder-github

The package qubes-core-agent_3.2.13-1+deb9u1 has been pushed to the r3.2 testing repository for the Debian stretch template.
To test this update, first enable the testing repository in /etc/apt/sources.list.d/qubes-*.list by uncommenting the line containing stretch-testing, then use the standard update command:

sudo apt-get update && sudo apt-get dist-upgrade

Changes included in this update


@marmarek

marmarek Dec 4, 2016

Member

Automated announcement from builder-github

The package qubes-core-agent_3.2.13-1+deb8u1 has been pushed to the r3.2 stable repository for the Debian jessie template.
To install this update, please use the standard update command:

sudo apt-get update && sudo apt-get dist-upgrade

Changes included in this update


@pjacferreira

pjacferreira Dec 5, 2016

Hi,
I applied this update to a Debian 8 VM (note: I didn't do dist-upgrade, just upgrade) and now I can't run any commands from XFCE on the VM...

ex:
qvm-run (domain) /usr/bin/xterm

does nothing....
I get a message on DOM0
Running command on VM: '(domain)'....

But nothing happens.

I actually had to do what is stated in this wiki:
https://wiki.xenproject.org/wiki/Connecting_a_Console_to_DomU%27s
to connect to the domain's console.

I tried to access the template from which I built the domain (to do a downgrade), and the same thing happened, i.e. the terminal command won't run.

In the Qubes VM Manager, I have a yellow triangle, with a message,
Domain '(domain)': qrexec not connected.

As a continuation of this post:
After getting a console, I found that...

  1. Ping works (both by address and DNS).
  2. /etc/resolv.conf looks fine
  3. /dev/xvdb (i.e. /rw) wasn't mounted!? But doing mount -a from the console mounts /rw correctly

Tried doing apt-get dist-upgrade, but nothing new happened.

I noticed in the apt-get upgrade (and dist-upgrade) that qubesdb-vm is being held back.
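
(Editor's note.) The /rw check the report describes can be scripted. A sketch only, reusing the device and mountpoint names from this report; this is not an official Qubes recovery procedure:

```shell
#!/bin/sh
# Sketch: from the VM console, remount the private volume if it was
# skipped at boot. /dev/xvdb and /rw are taken from the report above.
ensure_rw_mounted() {
    if mountpoint -q /rw 2>/dev/null; then
        echo "/rw already mounted"
    else
        mount /dev/xvdb /rw   # or simply: mount -a, as in the report
    fi
}
```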


@pjacferreira

pjacferreira Dec 5, 2016

I got the (initial) startup messages from the Debian template:

http://pastebin.com/dP5kY5Ly

Notice the SKIP on both /rw and /home


@pjacferreira

pjacferreira Dec 5, 2016

** FALSE ALARM **

Going back over the procedure, I noticed that, even though I had upgraded the system to 3.2, I had forgotten to upgrade the template to 3.2.

When I corrected the problem and did dist-upgrade, everything went back to working again.

I apologize for the false alarm.


@qubesos-bot

qubesos-bot Dec 19, 2016

Automated announcement from builder-github

The package qubes-core-agent_3.2.13-1+deb9u1 has been pushed to the r3.2 stable repository for the Debian stretch template.
To install this update, please use the standard update command:

sudo apt-get update && sudo apt-get dist-upgrade

Changes included in this update

