broken /etc/resolv.conf, broken DNS resolution #1067
Comments
marmarek
Jul 14, 2015
Member
On Tue, Jul 14, 2015 at 02:26:47PM -0700, Patrick Schleizer wrote:
sys-net and sys-firewall had working settings.
/etc/resolv.conf:
nameserver 10.137.1.1
nameserver 10.137.1.254
All other AppVMs I tested (Debian and Fedora template based) had broken DNS settings.
/etc/resolv.conf:
nameserver 10.137.2.1
nameserver 10.137.2.254
After manually setting the same settings as in sys-firewall, DNS resolution was functional again.
Some bug must be at work here, and it results in a huge usability issue (no more internet access).
Qubes Q3 RC1
/etc/resolv.conf in AppVMs should point at sys-firewall, which should
then redirect to sys-net, based on its own /etc/resolv.conf. So the
/etc/resolv.conf you've shown seems to be valid. Maybe some problem
with DNS redirection in sys-firewall? Check qubes-firewall.service
state.
Rationale: VM (DNS) configuration should not depend on which netvm is
used by firewallvm.
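(For reference: the redirection described above is implemented as a DNAT rule in sys-firewall's nat table, in a PR-QBS chain that comes up later in this thread. A minimal sketch of such a rule, using the addresses from the report - illustrative only, not the exact rule Qubes generates:)
# in sys-firewall: redirect AppVM DNS queries (sent to 10.137.2.1) to sys-net's DNS address
sudo iptables -t nat -A PR-QBS -d 10.137.2.1 -p udp --dport 53 -j DNAT --to-destination 10.137.1.1
# verify what is currently in the chain
sudo iptables -t nat -L PR-QBS -n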
adrelanos
Jul 14, 2015
Member
Maybe some problem with DNS redirection in sys-firewall?
I don't know. I did nothing in there. Overall, nothing fancy.
Maybe some problem with DNS redirection in sys-firewall? Check
qubes-firewall.service state.
Looks like it. See this:
user@personal:~$ sudo service qubes-firewall status
● qubes-firewall.service - Qubes firewall updater
Loaded: loaded (/lib/systemd/system/qubes-firewall.service; enabled)
Active: inactive (dead)
start condition failed at Tue 2015-07-14 23:05:03 CEST; 1h 2min ago
ConditionPathExists=/var/run/qubes-service/qubes-firewall was not met
Jul 14 23:05:03 personal systemd[1]: Started Qubes firewall updater.
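(Side note: the unmet ConditionPathExists above refers to the per-VM Qubes service flag. These flags are plain files under /var/run/qubes-service/ inside the VM and are managed from dom0 with qvm-service; in a regular AppVM the qubes-firewall flag is normally absent, since the firewall updater is meant to run in the proxy VM (sys-firewall). A quick check, as a sketch:)
# inside the VM: which Qubes services are enabled here?
ls /var/run/qubes-service/
# each flag is just an (empty) file named after the service, e.g. qubes-firewall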
marmarek
Jul 14, 2015
Member
Check that in sys-firewall...
adrelanos
Jul 14, 2015
Member
[user@sys-firewall ~]$ sudo service qubes-firewall status
Redirecting to /bin/systemctl status qubes-firewall.service
● qubes-firewall.service - Qubes firewall updater
Loaded: loaded (/usr/lib/systemd/system/qubes-firewall.service; enabled)
Active: active (running) since Tue 2015-07-14 22:47:18 CEST; 1h 30min ago
Main PID: 518 (qubes-firewall)
CGroup: /system.slice/qubes-firewall.service
├─ 518 /bin/sh /usr/sbin/qubes-firewall
└─1772 /usr/bin/qubesdb-watch /qubes-iptables
Jul 14 22:47:18 sys-firewall systemd[1]: Started Qubes firewall updater.
Jul 14 22:48:42 sys-firewall qubes-firewall[518]: /qubes-iptables
Jul 14 22:51:14 sys-firewall qubes-firewall[518]: /qubes-iptables
Jul 14 22:51:37 sys-firewall qubes-firewall[518]: /qubes-iptables
Jul 14 23:04:58 sys-firewall qubes-firewall[518]: /qubes-iptables
Jul 14 23:07:35 sys-firewall qubes-firewall[518]: /qubes-iptables
Jul 14 23:09:42 sys-firewall qubes-firewall[518]: /qubes-iptables
[user@sys-firewall ~]$
marmarek
Jul 14, 2015
Member
Looks fine...
adrelanos
What else could have caused this? Didn't happen again after reboot.
adrelanos
Hit this issue again after another reboot.
marmarek
Jul 15, 2015
Member
Hmm, one more idea: check nat table, PR-QBS chain. Also worth checking
iptables.service status. And system logs for anything mentioning
iptables fail...
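(Spelled out as commands - all of these would be run inside sys-firewall; a sketch of the checks being suggested:)
sudo iptables -t nat -L PR-QBS -n      # the DNS DNAT chain - should not be empty
sudo systemctl status iptables         # did the Fedora iptables.service start cleanly?
sudo journalctl -b | grep -i iptables  # any iptables failures during this boot?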
marmarek
Jul 15, 2015
Member
On Wed, Jul 15, 2015 at 06:55:22PM +0200, Marek Marczykowski-Górecki wrote:
Hmm, one more idea: check nat table, PR-QBS chain. Also worth checking
iptables.service status. And system logs for anything mentioning
iptables fail...
All of this in sys-firewall.
adrelanos
Jul 16, 2015
Member
No PR-QBS chain, it looks like.
Yes, there are some failing messages in the logs.
Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Setting chains to policy ACCEPT: nat Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Could it be a race condition? Because this is [apart from driver issues] a very fast system, perhaps the iptables script runs before whatever it depends on is ready. A missing systemd Requires= perhaps? @nrgaway has some experience with the need for the iptables --wait option.
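(For reference, the --wait suggestion amounts to the following; the listing below is only a placeholder to show the flag:)
# without -w, a second concurrent iptables invocation fails immediately with
# "Another app is currently holding the xtables lock"
sudo iptables -w -t nat -L PR-QBS -n
# with -w, iptables blocks until the xtables lock is free instead of erroring out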
[user@sys-net ~]$ sudo service iptables status
Redirecting to /bin/systemctl status iptables.service
● iptables.service - IPv4 firewall with iptables
Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)
Active: active (exited) since Thu 2015-07-16 13:52:56 CEST; 3h 13min ago
Process: 395 ExecStart=/usr/libexec/iptables/iptables.init start (code=exited, status=0/SUCCESS)
Main PID: 395 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/iptables.service
Jul 16 13:52:55 sys-net systemd[1]: Starting IPv4 firewall with iptables...
Jul 16 13:52:56 sys-net iptables.init[395]: iptables: Applying firewall rule...]
Jul 16 13:52:56 sys-net systemd[1]: Started IPv4 firewall with iptables.
Hint: Some lines were ellipsized, use -l to show in full.
[user@sys-net ~]$ sudo iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:us-cli
DROP udp -- anywhere anywhere udp dpt:bootpc
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DROP all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[user@sys-net ~]$
[user@sys-net ~]$ sudo journalctl | grep iptables
Jul 12 16:33:50 fedora-21 systemd[1]: Starting IPv4 firewall with iptables...
Jul 12 16:33:50 fedora-21 iptables.init[370]: iptables: Applying firewall rules: [ OK ]
Jul 12 16:33:50 fedora-21 systemd[1]: Started IPv4 firewall with iptables.
Jul 12 17:29:32 fedora-21 systemd[1]: Starting IPv4 firewall with iptables...
Jul 12 17:29:32 fedora-21 iptables.init[360]: iptables: Applying firewall rules: [ OK ]
Jul 12 17:29:32 fedora-21 systemd[1]: Started IPv4 firewall with iptables.
Jul 12 17:30:18 fedora-21 systemd[1]: Stopping IPv4 firewall with iptables...
Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Setting chains to policy ACCEPT: nat Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Jul 12 17:30:18 fedora-21 iptables.init[6184]: filter [FAILED]
Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Flushing firewall rules: [ OK ]
Jul 12 17:30:19 fedora-21 iptables.init[6184]: iptables: Unloading modules: [ OK ]
Jul 12 17:30:19 fedora-21 systemd[1]: Stopped IPv4 firewall with iptables.
Jul 12 18:08:57 fedora-21 systemd[1]: Starting IPv4 firewall with iptables...
Jul 12 18:08:57 fedora-21 iptables.init[389]: iptables: Applying firewall rules: [ OK ]
Jul 12 18:08:57 fedora-21 systemd[1]: Started IPv4 firewall with iptables.
Jul 12 18:47:23 fedora-21 iptables.init[5596]: iptables: Setting chains to policy ACCEPT: nat filter [ OK ]
Jul 12 18:47:23 fedora-21 iptables.init[5596]: iptables: Flushing firewall rules: [ OK ]
Jul 12 18:47:23 fedora-21 iptables.init[5596]: iptables: Unloading modules: [ OK ]
Jul 12 18:47:23 fedora-21 systemd[1]: Stopped IPv4 firewall with iptables.
Jul 12 18:54:21 fedora-21 systemd[1]: Starting IPv4 firewall with iptables...
Jul 12 18:54:21 fedora-21 iptables.init[384]: iptables: Applying firewall rules: [ OK ]
Jul 12 18:54:21 fedora-21 systemd[1]: Started IPv4 firewall with iptables.
Jul 12 18:58:16 fedora-21 systemd[1]: Stopping IPv4 firewall with iptables...
Jul 12 18:58:16 fedora-21 iptables.init[2752]: iptables: Setting chains to policy ACCEPT: nat filter [ OK ]
Jul 12 18:58:16 fedora-21 iptables.init[2752]: iptables: Flushing firewall rules: [ OK ]
Jul 12 18:58:16 fedora-21 iptables.init[2752]: iptables: Unloading modules: [ OK ]
Jul 12 18:58:16 fedora-21 systemd[1]: Stopped IPv4 firewall with iptables.
Jul 12 19:19:57 fedora-21 systemd[1]: Starting IPv4 firewall with iptables...
Jul 12 19:19:57 fedora-21 iptables.init[387]: iptables: Applying firewall rules: [ OK ]
Jul 12 19:19:57 fedora-21 systemd[1]: Started IPv4 firewall with iptables.
Jul 12 19:24:14 fedora-21 systemd[1]: Stopping IPv4 firewall with iptables...
Jul 12 19:24:14 fedora-21 iptables.init[2804]: iptables: Setting chains to policy ACCEPT: nat filter [ OK ]
Jul 12 19:24:14 fedora-21 iptables.init[2804]: iptables: Flushing firewall rules: [ OK ]
Jul 12 19:24:14 fedora-21 iptables.init[2804]: iptables: Unloading modules: [ OK ]
Jul 12 19:24:14 fedora-21 systemd[1]: Stopped IPv4 firewall with iptables.
Jul 16 13:52:55 sys-net systemd[1]: Starting IPv4 firewall with iptables...
Jul 16 13:52:56 sys-net iptables.init[395]: iptables: Applying firewall rules: [ OK ]
Jul 16 13:52:56 sys-net systemd[1]: Started IPv4 firewall with iptables.
Jul 16 17:06:44 sys-net sudo[2979]: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/sbin/service iptables status
Jul 16 17:06:56 sys-net sudo[2994]: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/sbin/iptables --list
[user@sys-net ~]$
Can you make head or tail of this bug, or do you need more?
marmarek
Jul 16, 2015
Member
On Thu, Jul 16, 2015 at 08:34:31AM -0700, Patrick Schleizer wrote:
No PR-QBS chain, look like.
Yes, there are some failing messages in the logs.
Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Setting chains to policy ACCEPT: nat Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
It could be a race condition? Because this is [apart from driver issues] a very fast system. Perhaps the iptables script runs before that's ready. A missing systemd requires perhaps? @nrgaway has some experiences with the need for the iptables --wait option.
Yes, this may be a problem. Unfortunately iptables --wait doesn't solve all the cases.
Also apparently this time it was in Fedora-provided service...
[user@sys-net ~]$ sudo iptables --list
Check also -t nat.
Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Setting chains to policy ACCEPT: nat Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Jul 12 17:30:18 fedora-21 iptables.init[6184]: filter [FAILED]
Jul 12 17:30:18 fedora-21 iptables.init[6184]: iptables: Flushing firewall rules: [ OK ]
Jul 12 17:30:19 fedora-21 iptables.init[6184]: iptables: Unloading modules: [ OK ]
This looks like the reason. iptables.init is provided by Fedora's
iptables-services package...
adrelanos
Jul 17, 2015
Member
[user@sys-net ~]$ sudo iptables --list -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
PR-QBS all -- anywhere anywhere
PR-QBS-SERVICES all -- anywhere anywhere
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
MASQUERADE all -- anywhere anywhere
Chain PR-QBS (1 references)
target prot opt source destination
DNAT udp -- anywhere sys-net udp dpt:domain to:192.168.0.1
Chain PR-QBS-SERVICES (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 10.137.255.254 tcp dpt:us-cli
[user@sys-net ~]$
Looks good? But I'm still having this issue.
marmarek
Jul 18, 2015
Member
Looks good, including proper DNS redirection. What about the same
iptables dump in sys-firewall?
adrelanos
Jul 18, 2015
Member
I'm not having this issue at this very moment. (Reinstalled. [new SSD]) But if I experience it again, I will provide this info. However, I experienced this issue earlier and found something interesting. Check out the following. Maybe that's it?
[user@sys-firewall ~]$ sudo journalctl | grep -i fail
Jul 18 14:42:05 localhost kernel: rtc_cmos: probe of rtc_cmos failed with error -38
Jul 18 14:47:59 fedora-21 systemd[1]: qubes-update-check.service: main process exited, code=exited, status=1/FAILURE
Jul 18 14:47:59 fedora-21 systemd[1]: Failed to start Qubes check for VM updates and notify dom0.
Jul 18 14:47:59 fedora-21 systemd[1]: Unit qubes-update-check.service entered failed state.
Jul 18 14:47:59 fedora-21 systemd[1]: qubes-update-check.service failed.
Jul 18 14:54:05 fedora-21 systemd[973]: Failed at step CGROUP spawning /usr/lib/systemd/systemd: No such file or directory
Jul 18 15:00:33 fedora-21 systemd[1]: qubes-gui-agent.service: main process exited, code=exited, status=1/FAILURE
Jul 18 15:00:33 fedora-21 systemd[1]: Unit qubes-gui-agent.service entered failed state.
Jul 18 15:00:33 fedora-21 systemd[1]: qubes-gui-agent.service failed.
Jul 18 15:00:33 fedora-21 su[1082]: pam_systemd(su:session): Failed to create session: Connection reset by peer
Jul 18 15:00:33 fedora-21 ip6tables.init[1055]: [FAILED]
Jul 18 15:00:33 fedora-21 iptables.init[1049]: filter [FAILED]
Jul 18 15:00:33 fedora-21 su[1126]: pam_systemd(su:session): Failed to create session: Connection reset by peer
Jul 18 15:00:34 fedora-21 systemd[1]: Unit qubes-mount-home.service entered failed state.
Jul 18 15:00:34 fedora-21 systemd[1]: qubes-mount-home.service failed.
Jul 18 15:00:34 fedora-21 systemd[1]: Failed unmounting /usr/lib/modules.
Jul 18 15:00:34 fedora-21 systemd[1]: Failed unmounting /proc/xen.
Jul 18 20:28:26 localhost kernel: rtc_cmos: probe of rtc_cmos failed with error -38
Jul 18 20:28:30 sys-firewall network-proxy-setup.sh[487]: iptables-restore: line 4 failed
Jul 18 22:54:05 sys-firewall systemd[1938]: Failed at step CGROUP spawning /usr/lib/systemd/systemd: No such file or directory
Jul 18 23:04:58 sys-firewall logger[2037]: /etc/xen/scripts/vif-route-qubes: ifdown vif4.0 failed
Jul 18 23:04:58 sys-firewall logger[2043]: /etc/xen/scripts/vif-route-qubes: ip route del 10.137.2.8 dev vif4.0 metric 32748 failed
Jul 18 23:04:58 sys-firewall logger[2054]: /etc/xen/scripts/vif-route-qubes: ip addr del 10.137.2.1/32 dev vif4.0 failed
Jul 18 23:06:14 sys-firewall logger[2252]: /etc/xen/scripts/vif-route-qubes: ifdown vif5.0 failed
Jul 18 23:06:14 sys-firewall logger[2258]: /etc/xen/scripts/vif-route-qubes: ip route del 10.137.2.11 dev vif5.0 metric 32747 failed
Jul 18 23:06:14 sys-firewall logger[2269]: /etc/xen/scripts/vif-route-qubes: ip addr del 10.137.2.1/32 dev vif5.0 failed
Jul 19 00:59:43 sys-firewall logger[3193]: /etc/xen/scripts/vif-route-qubes: ifdown vif7.0 failed
Jul 19 00:59:43 sys-firewall logger[3199]: /etc/xen/scripts/vif-route-qubes: ip route del 10.137.2.11 dev vif7.0 metric 32745 failed
Jul 19 00:59:43 sys-firewall logger[3210]: /etc/xen/scripts/vif-route-qubes: ip addr del 10.137.2.1/32 dev vif7.0 failed
[user@sys-firewall ~]$
marmarek
Jul 19, 2015
Member
On Sat, Jul 18, 2015 at 04:05:00PM -0700, Patrick Schleizer wrote:
Not having this issue at this very moment. (Reinstalled. [new ssd]) But if I experience this again, I will provide this info. However, I experienced this issue earlier. And found something interesting. Check out the following. Maybe that's it?
[user@sys-firewall ~]$ sudo journalctl | grep -i fail
Jul 18 14:42:05 localhost kernel: rtc_cmos: probe of rtc_cmos failed with error -38
Jul 18 14:47:59 fedora-21 systemd[1]: qubes-update-check.service: main process exited, code=exited, status=1/FAILURE
Jul 18 14:47:59 fedora-21 systemd[1]: Failed to start Qubes check for VM updates and notify dom0.
Jul 18 14:47:59 fedora-21 systemd[1]: Unit qubes-update-check.service entered failed state.
Jul 18 14:47:59 fedora-21 systemd[1]: qubes-update-check.service failed.
Probably you didn't have network access at that time.
Jul 18 14:54:05 fedora-21 systemd[973]: Failed at step CGROUP spawning /usr/lib/systemd/systemd: No such file or directory
Jul 18 15:00:33 fedora-21 systemd[1]: qubes-gui-agent.service: main process exited, code=exited, status=1/FAILURE
Jul 18 15:00:33 fedora-21 systemd[1]: Unit qubes-gui-agent.service entered failed state.
Jul 18 15:00:33 fedora-21 systemd[1]: qubes-gui-agent.service failed.
Jul 18 15:00:33 fedora-21 su[1082]: pam_systemd(su:session): Failed to create session: Connection reset by peer
Jul 18 15:00:33 fedora-21 ip6tables.init[1055]: [FAILED]
Jul 18 15:00:33 fedora-21 iptables.init[1049]: filter [FAILED]
I guess there are also lines like this:
lip 09 01:14:12 fedora-21 iptables.init[5891]: iptables: Setting chains to policy ACCEPT: nat Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
lip 09 01:14:12 fedora-21 ip6tables.init[5892]: ip6tables: Setting chains to policy ACCEPT: filter Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Apparently both the iptables and ip6tables services are started at the same
time and both are failing because of that. Looks like a candidate for
bugzilla.redhat.com. Or we should do it the Fedora way, using
firewalld...
There are similar bugs already reported, and most of them suggest using
firewalld to solve the problem. For example this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1164243
(...)
Jul 18 20:28:30 sys-firewall network-proxy-setup.sh[487]: iptables-restore: line 4 failed
It can be because iptables.init failed (so no PR-QBS chain), or the same
problem as above - the xtables lock. Hard to say without the exact message
(apparently not logged...)
Jul 18 23:04:58 sys-firewall logger[2037]: /etc/xen/scripts/vif-route-qubes: ifdown vif4.0 failed
Jul 18 23:04:58 sys-firewall logger[2043]: /etc/xen/scripts/vif-route-qubes: ip route del 10.137.2.8 dev vif4.0 metric 32748 failed
Jul 18 23:04:58 sys-firewall logger[2054]: /etc/xen/scripts/vif-route-qubes: ip addr del 10.137.2.1/32 dev vif4.0 failed
Interesting - why do we have this code? When the interface is gone, all
of those things will vanish automatically.
adrelanos
Jul 21, 2015
Member
Does this bug report contain enough information to be fixable or do you need further information?
marmarek
Jul 22, 2015
Member
I think so. But I haven't figured out how to fix it effectively - we
have had at least three attempts...
The problem apparently is that iptables has a lock mechanism which
prevents it being called multiple times simultaneously. There is an
option, iptables --wait, which theoretically exists exactly to fix this
problem, but according to some reports it doesn't work either. Also,
the problematic calls are not only in our scripts, but also in the
iptables-services package (/usr/libexec/iptables/iptables.init called
from iptables.service).
The Fedora-way fix would be migrating to firewalld for handling iptables.
But I don't like this approach, because it would be too
Fedora-specific. I guess we'll need to fix this very problem in other
distros anyway.
Any ideas? Maybe we should ditch the iptables-services package and write
such (simple) scripts ourselves? It would be just an iptables-restore call...
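(A minimal sketch of what such a replacement could look like - the paths and script name here are made up for illustration, not the actual Qubes implementation:)
# /usr/lib/qubes/init/qubes-iptables.sh (hypothetical)
#!/bin/sh
# Apply the whole static ruleset in one atomic iptables-restore call,
# so there is no window with a partially loaded firewall.
exec iptables-restore < /etc/qubes/iptables.rules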
adrelanos
Jul 22, 2015
Member
I am just sharing what I know. Brainstorming. Don't let my verbosity confuse you or slow you down.
In Whonix, since Debian jessie, we have been using /etc/network/if-pre-up.d/30_whonix_firewall. That works well. It loads the custom firewall script just before networking comes up.
[Security bonus: if the script succeeds (exit 0), the network comes up. Otherwise, if the script fails (exit non-zero), the network does not come up. In the context of Whonix, this is very useful for leak prevention. Not sure this would be useful in the context of Qubes. Maybe not. If that were the case, just override it using || true or || some_logging to deactivate this.]
But perhaps that's an ifupdown thing, or specific to Debian?
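(Roughly, such a hook looks like this - the file and script names are examples, not the actual Whonix hook:)
# /etc/network/if-pre-up.d/30-firewall (example)
#!/bin/sh
# ifupdown runs every executable in this directory before bringing an interface up;
# exiting non-zero aborts bringing that interface up (the leak-prevention bonus mentioned above).
exec /usr/local/bin/load-firewall-rules.sh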
I'll add it to the list of reasons why Fedora should be removed (#1054 (comment)). :)
Custom scripts and systemd unit files could also work. With the right systemd Requires= it should be possible to hook this into the right places (before the network comes up?). Or, even more elegantly, using a systemd drop-in file with ExecStartPre= extending NetworkManager's systemd unit file? Or no - systemd provides something more generic:
network-pre.target
This passive target unit may be pulled in by services that want to run before any network is set up, for example for the purpose of setting up a firewall. All network management software orders itself after this target, but does not pull it in.
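(As a sketch of that approach: a firewall-loading unit orders itself before network-pre.target, per the description quoted above. Unit name and script path are made up for illustration:)
sudo tee /etc/systemd/system/firewall-load.service <<'EOF'
[Unit]
Description=Load firewall rules before any network is configured
Wants=network-pre.target
Before=network-pre.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/load-firewall-rules.sh

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload && sudo systemctl enable firewall-load.service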
Dunno if firewalld would be a good solution and/or too distribution-specific. I haven't used firewalld yet. It is also available in Debian jessie:
https://packages.debian.org/jessie/firewalld
We are not using firewalld in Whonix, because there is currently no problem I know of that it would solve. The dynamic zones stuff sounds interesting, but I don't know if the extra complexity is worth it as long as that feature isn't needed.
I prefer writing iptables rules rather than having an extra layer on top that generates iptables rules. At some point, I guess, we'll need to port from iptables to its successor, nftables.
marmarek
Jul 23, 2015
Member
I also don't like the firewalld idea - it looks like overkill. Because
we've failed with the default iptables services in Fedora, I think it's time
to give up on them and write our own. A simple call to iptables-restore,
that's all.
marmarek added the bug, C: core, P: major labels Jul 23, 2015
marmarek added this to the Release 3.0 milestone Jul 23, 2015
marmarek referenced this issue in QubesOS/qubes-core-agent-linux, Aug 2, 2015: Merged - removed iptables-persistent from Depends to improve usablity #2
A commit was added to marmarek/old-qubes-core-agent-linux that referenced this issue, Aug 4, 2015
marmarek
Aug 4, 2015
Member
@adrelanos Take a look at the commit pointed to above - what do you think? Especially in terms of handling Debian packaging, upgrade path, etc.
adrelanos
Aug 6, 2015
Member
I can't provide a comprehensive review at this point. I can surely test the package as soon as available from some repository.
- You didn't remove iptables-persistent from Depends:. (But a simple commit on top would do just fine.)
- Why a new sysinit script?
- How will iptables-persistent be disabled for existing systems (upgrade path)?
adrelanos
Aug 6, 2015
Member
Do you have any feedback on marmarek/qubes-core-agent-linux@9f1de2b also? @nrgaway
marmarek
Aug 6, 2015
Member
On Wed, Aug 05, 2015 at 05:08:36PM -0700, Patrick Schleizer wrote:
I can't provide a comprehensive review at this point. I can surely test the package as soon as available from some repository.
- You didn't remove iptables-persistent from Depends:. (But a simple commit on top would do just fine.)
Ah, sure. But after merging the debian/control reflow. Anyway, there is
your PR for that, as a reminder :)
- Why a new sysinit script?
The whole point of this commit is to use our own script, not to repurpose
an existing one (from iptables-persistent in the Debian case). It's also available
as a sysvinit script for non-systemd cases (I don't know of any currently
used, but we're trying not to abandon them totally yet). It shouldn't be
installed in /etc/init.d when systemd is in use (Debian, Fedora
-systemd subpackage, Arch Linux).
- How will iptables-persistent be disabled for existing systems (upgrade path)?
That's a good question. Systemd drop-in?
adrelanos
Aug 6, 2015
Member
Marek Marczykowski-Górecki:
- How will iptables-persistent be disabled for existing systems (upgrade path)?
That's a good question. Systemd drop-in?
Yes.
(Under the assumption that users are expected to reboot after the upgrade so
the changes take effect.)
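(One hedged sketch of what that could mean during the package upgrade - the unit name is an assumption about what iptables-persistent ships on jessie:)
# mask the old persistence unit so it cannot start again; effective after the next reboot
systemctl mask netfilter-persistent.service 2>/dev/null || true
# or, as a drop-in instead of masking (neutralizes the unit without removing the package):
mkdir -p /etc/systemd/system/netfilter-persistent.service.d
printf '[Unit]\nConditionPathExists=/nonexistent\n' > /etc/systemd/system/netfilter-persistent.service.d/30-qubes-disable.conf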
marmarek
Aug 6, 2015
Member
(Under the assumption, that are expected to reboot after the upgrade so changes take effect.)
Yes. This assumption is always true for Qubes templates; shutting down the template and then restarting the VMs is the only way the VMs can pick up the changes.
Probably the only place where we should care about services/configuration without a reboot is the template itself - the services used there. But that is a really minimal set of things.
marmarek
Aug 8, 2015
Member
Am I right that this issue happens only sometimes, so it isn't really critical?
I want to postpone merging the update until after my leave, because this change is pretty intrusive, especially in terms of the upgrade path, and we're not able to test it properly in such a short time. And I will not be able to upload a fixed package for a few weeks...
adrelanos
Aug 8, 2015
Member
Define critical. ;) If you are not a hardcore Linux user like I am, it can
totally destroy your Qubes experience. Should be in R3 RC2, if possible.
marmarek
Aug 8, 2015
Member
The question is how often this happens. If 1/100 or sth like this, I'd
say "the workaround is to restart the system, the proper fix will be in
a few weeks".
marmarek
Aug 8, 2015
Member
The alternative is to upload a fix which may work, but which may also totally
screw up some systems (where the workaround of restarting would not help). So
the question is: is that better than the current state?
adrelanos
Aug 9, 2015
Member
I don't know that. Maybe that's why I never started a host operating
system / Xen / Linux distribution. At some point it has to go through the testers
repository. Then see the reports. No idea how you could get any more
safety than this.
marmarek
Aug 9, 2015
Member
Yes, of course through testing repository first. The problem is that I will not be able to upload fixed package promptly, which may be an issue, even for some testers.
adrelanos
Aug 9, 2015
Member
Hm. Another issue. ;) Testers aren't real testers. Is there a
real-testers repository? :) If you upload it to some development repo,
and it doesn't totally break my machine, I can also try it beforehand.
Two commits were added to marmarek/old-qubes-core-agent-linux that referenced this issue, Aug 9, 2015
pjacferreira
Mar 16, 2016
Not to side-track the current thinking, but:
- I moved from a slower SSD to a faster SSD (and the problem seems to be more pronounced; can't verify any longer :)
- I noticed today that, if I'm speedy with opening up an AppVM, then for about a minute or two after a reboot I have network connectivity, and then it dies.
By the way, like @adrelanos said, this is a really annoying bug...
pjacferreira
Mar 16, 2016
To be annoying, but continuing on this problem: I had to use qubes-set-updates to disable update checking, because after a reboot both sys-net and sys-firewall would spike to 100% CPU.
I tracked it down to the dnf update check.
I'm not sure if this is a dnf problem, or just more of this DNS problem.
adrelanos referenced this issue Mar 17, 2016: Open - Documentation: network issues debug information #1849
marmarek referenced this issue Mar 17, 2016: Closed - last Qubes R3 stable upgrade broke all networking #1848
pjacferreira
Apr 7, 2016
I just wanted to document my experience in the hope that it will help somebody else.
Currently using Qubes 3.1:
I initially stated that the problem occurred only when using WiFi.
Since then, I have had the problem occur on a wired connection (DHCP), even though it seems to occur only once in a blue moon. So I think the problem is generic.
Currently my workaround has been to restart the sys-firewall VM.
This is my opinion on the problem, and it's based on this:
- I'm having this complete lack of network connectivity when I do a cold boot of Qubes.
- Normally I cold boot, and
- immediately log in, as soon as the password prompt comes up.
I sometimes, especially with WiFi, only receive the network attach notification a couple of seconds (10 to 20) after the desktop is displayed. It seems that, under these conditions, in which the time to acquire an IP is long, sys-firewall fails to establish its network connection.
NOTE:
- If I do a ping from a terminal in sys-net, it works.
- If I do the same ping from sys-firewall, IT DOESN'T WORK.
This is the reason I simply restart sys-firewall, to re-establish network connectivity.
My reasoning on the subject:
- I assume that sys-firewall, when it tries to establish its network connection to sys-net, finds that sys-net still hasn't set up the network (WiFi+DHCP, or just slow DHCP) and therefore fails to correctly set up the DNS part of the connection (and maybe even the IP forwarding part, haven't tested).
- With a fixed IP setup in sys-net, I don't think this problem occurs (IDEA: also not tested).
I think that this seems to be related to this problem (maybe even the same problem).
When I launch 2 VMs in quick succession, I have had, on MANY occasions, ONE of them fail to establish the network connection correctly (i.e. no ping to sys-firewall, therefore no DNS).
Workaround: restart the offending VM.
NOTE: even though the problem with the sys-firewall->sys-net connection is hard to diagnose (it doesn't happen all the time), the other part - launching 2 VMs simultaneously and having the network fail - seems to occur consistently. Currently, I pause between launches, just to make sure I don't have this problem.
I also think this is more of an SSD issue, since the launch of VMs happens quicker than with slow HDs.
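(For convenience, the workaround as dom0 commands - assuming the standard qvm-* tools; restarting sys-firewall should re-run its DNS/forwarding setup against an already-up sys-net:)
qvm-shutdown --wait sys-firewall
qvm-start sys-firewall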
adrelanos
Jul 19, 2016
Member
Instead of inventing a Qubes-native solution, another option to implement this might be removing iptables-persistent and keeping netfilter-persistent. It looks quite good at first. It provides
/usr/share/netfilter-persistent/plugins.d, which Qubes could reuse.
However, netfilter-persistent may not be ready for prime time:
https://phabricator.whonix.org/T487#9444
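(For illustration only: a netfilter-persistent plugin is just an executable dropped into that directory, and as far as I understand it gets invoked with an action argument such as start or save - treat the exact interface and the file name below as assumptions:)
# /usr/share/netfilter-persistent/plugins.d/20-qubes (hypothetical)
#!/bin/sh
case "$1" in
    start) iptables-restore < /etc/qubes/iptables.rules ;;   # load saved rules (path is made up)
    save)  iptables-save > /etc/qubes/iptables.rules ;;      # persist the current rules
    *)     exit 0 ;;
esac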
marmarek referenced this issue Aug 4, 2016: Closed - Weird race condition that makes DNS ProxyVM rules disappear #2227
marmarek referenced this issue in QubesOS/qubes-core-agent-linux, Oct 16, 2016: Merged - Eliminate race condition with qubes-setup-dnat-to-ns #20
Rudd-O
commented
Oct 17, 2016
I think my pull request above this comment may fix the original bug.
marmarek closed this in marmarek/old-qubes-core-agent-linux@b7c7b4a, Oct 18, 2016
marmarek
Oct 18, 2016
Member
Automated announcement from builder-github
The package python2-dnf-plugins-qubes-hooks-3.2.12-1.fc23 has been pushed to the r3.2 testing repository for the Fedora fc23 template.
To test this update, please install it with the following command:
sudo yum update --enablerepo=qubes-vm-r3.2-current-testing
marmarek added the r3.2-fc23-cur-test label Oct 18, 2016
marmarek
Oct 18, 2016
Member
Automated announcement from builder-github
The package python2-dnf-plugins-qubes-hooks-3.2.12-1.fc24 has been pushed to the r3.2 testing repository for the Fedora fc24 template.
To test this update, please install it with the following command:
sudo yum update --enablerepo=qubes-vm-r3.2-current-testing
marmarek added the r3.2-fc24-cur-test label Oct 18, 2016
marmarek
Nov 17, 2016
Member
Automated announcement from builder-github
The package python2-dnf-plugins-qubes-hooks-3.2.12-1.fc23 has been pushed to the r3.2 stable repository for the Fedora fc23 template.
To install this update, please use the standard update command:
sudo yum update
marmarek added the r3.2-fc23-stable label and removed the r3.2-fc23-cur-test label Nov 17, 2016
marmarek
Nov 17, 2016
Member
Automated announcement from builder-github
The package python2-dnf-plugins-qubes-hooks-3.2.12-1.fc24 has been pushed to the r3.2 stable repository for the Fedora fc24 template.
To install this update, please use the standard update command:
sudo yum update
marmarek added the r3.2-fc24-stable label and removed the r3.2-fc24-cur-test label Nov 17, 2016
marmarek
Nov 18, 2016
Member
Automated announcement from builder-github
The package qubes-core-agent_3.2.13-1+deb8u1 has been pushed to the r3.2 testing repository for the Debian jessie template.
To test this update, first enable the testing repository in /etc/apt/sources.list.d/qubes-*.list by uncommenting the line containing jessie-testing, then use the standard update command:
sudo apt-get update && sudo apt-get dist-upgrade
marmarek added the r3.2-jessie-cur-test label Nov 18, 2016
marmarek
Nov 18, 2016
Member
Automated announcement from builder-github
The package qubes-core-agent_3.2.13-1+deb9u1 has been pushed to the r3.2 testing repository for the Debian stretch template.
To test this update, first enable the testing repository in /etc/apt/sources.list.d/qubes-*.list by uncommenting the line containing stretch-testing, then use the standard update command:
sudo apt-get update && sudo apt-get dist-upgrade
marmarek
Dec 4, 2016
Member
Automated announcement from builder-github
The package qubes-core-agent_3.2.13-1+deb8u1 has been pushed to the r3.2 stable repository for the Debian jessie template.
To install this update, please use the standard update command:
sudo apt-get update && sudo apt-get dist-upgrade
marmarek added the r3.2-jessie-stable label and removed the r3.2-jessie-cur-test label Dec 4, 2016
pjacferreira
Dec 5, 2016
Hi,
I applied this update to Debian 8 (note: I did not do dist-upgrade, just upgrade) and now I can't run any commands from XFCE on the VM...
ex:
qvm-run (domain) /usr/bin/xterm
does nothing....
I get a message on DOM0
Running command on VM: '(domain)'....
But nothing happens.
I actually had to do what is stated in this wiki:
https://wiki.xenproject.org/wiki/Connecting_a_Console_to_DomU%27s
to connect to the domain's console.
I tried to access the template from which I built the domain (to do a downgrade), and the same thing happened, i.e. the terminal command won't run.
In the Qubes VM Manager, I have a yellow triangle, with a message,
Domain '(domain)': qrexec not connected.
As a continuation of this post:
After getting a console, I found that...
- Ping works (both by address and DNS).
- /etc/resolv.conf looks fine.
- /dev/xvdb (i.e. /rw) wasn't mounted!? But doing mount -a from the console mounts /rw correctly.
Tried doing apt-get dist-upgrade, but nothing new happened.
I noticed in the apt-get upgrade (and dist-upgrade) that qubesdb-vm is being held back.
pjacferreira
Dec 5, 2016
I got the (initial) startup messages from the Debian template.
Notice the SKIP on both /rw and /home.
pjacferreira
Dec 5, 2016
** FALSE ALARM **
Going back over the procedure, I noticed that, even though I had upgraded the system to 3.2, I had forgotten to upgrade the template to 3.2.
When I corrected the problem and did dist-upgrade, everything went back to working again.
I apologize for the false alarm.
qubesos-bot
Dec 19, 2016
Automated announcement from builder-github
The package qubes-core-agent_3.2.13-1+deb9u1 has been pushed to the r3.2 stable repository for the Debian stretch template.
To install this update, please use the standard update command:
sudo apt-get update && sudo apt-get dist-upgrade