Stop leaking 10.137.x.x addresses to VMs #1143
Comments
unman
Aug 23, 2015
Member
It's trivial in many ways to identify a Qubes user.
The STUN problem in particular has been discussed on the lists. If this is an issue for you then you shouldn't be using WebRTC or torrents at all. If you're using Tor then you wouldn't be using them anyway.
It isn't quite clear to me exactly what you are proposing, or what risk it addresses, and I don't see how random addresses within a non-default subnet would prevent correlation. If anything, it seems to make it more likely. That is, I think, why it's better for all users to use the same subnet.
On one detail, I believe that the most common router address is 192.168.1.x, although there are significant deviations (e.g. Apple uses 10.0.0.1, MS 192.168.2.1, and D-Link uses a range of other 192.168 and 10. addresses).
adrelanos
Aug 23, 2015
Member
Several applications such as Firefox with WebRTC support and BitTorrent clients leak the local IP address, so this can often be detected across the network in addition to within an exploited VM.
The problem is that two different attack vectors were mixed into this ticket.
- Exploited VM: an exploit can trivially detect that it's running inside Qubes, because packages like qubes-(core|gui)-agent are installed, among other things. Probably impossible to avoid.
- Protocol-level leaks (e.g. WebRTC, which you mentioned, and others): legitimate. It would indeed be nice if the same local IP addresses as in non-Qubes-Whonix were used.
qubesuser
Aug 24, 2015
Yes, the important issue is the protocol level leak (and also the related "leak into logs and problem reports" issue).
If this is an issue for you then you shouldn't be using WebRTC or torrents at all
That's like saying "if you want to be secure, don't use remotely exploitable software".
The issue is that it's impossible to know how software behaves (the Firefox WebRTC leak was probably a surprise to everyone not directly involved), and you may want to run misbehaving software anyway.
It isn't quite clear to me exactly what you are proposing, or what risk it addresses, and I don't see how random addresses within a non-default subnet would prevent correlation. If anything, it seems to make it more likely. That is, I think, why it's better for all users to use the same subnet.
The two important goals are these (assuming that the internal IP address is being sent over the network along with any traffic):
- "Avoid inter-VM correlation": prevent detecting that the network traffic generated by two AnonVMs in the same Qubes instance has the same author: this requires making sure that the common part of their IP addresses is used by a significant number of other Tor users
- "Avoid temporal correlation": prevent detecting that the network traffic generated today by an AnonVM has the same author as the traffic generated tomorrow: this requires making sure that its IP address is used by a significant number of other Tor users, or that the IP address is changed frequently among a subnet that is used by a significant number of other Tor users
There seem to be two ways to achieve these goals:
- Having all VMs for all users use the same very common private IP address (or one of a list of very few such addresses for all VMs). Best candidate is probably 192.168.1.128
- Randomly assign IP addresses from a large commonly used private subnet with the ability to exclude some ranges if needed, with different addresses for each VM and changing the address at least for every reboot of the VMs. Best candidate is probably 10.x.x.x
If the default is a single IP address, then it should be possible to choose an alternative address, or a random choice within a subnet, instead, since some users will want to reach hosts on their LAN whose IP addresses would otherwise conflict. It should be possible to do so on a per-VM basis.
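A minimal sketch of the second approach, assuming a hypothetical per-boot script and a user-chosen 10.x.x.x prefix (none of these names come from Qubes itself):

```bash
# Hypothetical per-boot address picker; variable and function names are
# illustrative, not Qubes code.
random_octet() { echo $(( (RANDOM % 254) + 1 )); }  # 1..254, avoids .0 and .255

SUBNET_PREFIX="10.23"   # assumed user-configured /16; excluded ranges would be checked here
VM_IP="${SUBNET_PREFIX}.$(random_octet).$(random_octet)"
echo "address for this VM boot: ${VM_IP}"
```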
Exploited VM: an exploit can trivially detect that it's running inside Qubes, because packages like qubes-(core|gui)-agent are installed, among other things. Probably impossible to avoid.
That's true, making Qubes undetectable from an exploited VM is not feasible.
However, having a generic IP address might slightly mitigate untargeted fingerprinting/surveillance malware that is not designed to explicitly detect Qubes: such malware is likely to record the internal IP address, but might be less likely to record anything else relevant.
unman
Aug 24, 2015
Member
That's like saying "if you want to be secure, don't use remotely exploitable software".
No, it's saying, if you want to be secure learn about the risks and avoid unnecessary ones.
The issue is that it's impossible to know how software behaves (the Firefox WebRTC leak was probably a surprise to everyone not directly involved), and you may want to run misbehaving software anyway.
This is a good example, because the risks of WebRTC leakage were well known - for example, TBB hasn't included it for the last three years. And if you choose to run misbehaving software then you're just playing.
Qubes isn't a magic bullet. Tor isn't a magic bullet. Both require users to change their habits and learn new, more secure, ways of doing things. I don't see any way around this.
That said, I don't like the 10.137 addresses, and don't generally use them. I think Whonix uses 10.152.152 - I can't say I like that. I do like the idea of reassigning IP addresses on each VM boot.
qubesuser
Aug 25, 2015
And if you choose to run misbehaving software then you're just playing.
You might need to use the software. For example, you might be maintaining a website anonymously and need to test it for compatibility in all browsers (perhaps including older versions with known security holes) with JavaScript and Internet access enabled, and obviously not have the resources to audit and patch all browsers.
That's the kind of thing that Qubes and Whonix should allow you to do safely and conveniently.
That said, I don't like the 10.137 addresses, and don't generally use them.
Is there an easy way to change that in current Qubes other than locally patching the Python code?
I think Whonix uses 10.152.152 - I can't say I like that.
Yes, I think non-Qubes Whonix should be changed to use the same scheme that Qubes uses, if possible.
adrelanos
Aug 25, 2015
Member
If this is an issue for you then you shouldn't be using WebRTC or torrents at all
That's like saying "if you want to be secure, don't use remotely exploitable software".
No, it's saying, if you want to be secure learn about the risks and avoid unnecessary ones.
There is already too much required knowledge to stay safe. And things like WebRTC, torrents, and local IP leaks mean nothing to novice users. It cannot realistically be expected that a considerable fraction of users will be aware of them and act accordingly. Ideally we could solve such issues so there is nothing to be pointed out to users, nothing they can do wrong. Secure by default.
This was referenced Aug 30, 2015
qubesuser
Aug 30, 2015
I implemented this in the 3 pull requests that should show up on this page.
This code adds a new "ip_address_mode" variable to qvm-prefs, with these possible values:
- "internal": current behavior, set VM IP address to internal 10.137.x.x address
- "anonymous": set VM IP address to 192.168.1.128/24
- "custom": set VM IP address based on custom_ip_address/gateway/netmask qvm-pref variables
- "auto": anonymous for AppVMs and HVM templates, internal for NetVMs, ProxyVMs, and PV templateVMs
- Future work could also add "netvm" and "external" modes that would use the NetVM address or the externally visible IP address
The IP address set this way is separate from the 10.137.x.x IP address that is then seen in the ProxyVM and everywhere else in the system, which is unchanged, and there is a mechanism that translates between them.
The mechanism works by requesting the new "vif-route-qubes-nat" script instead of the existing "vif-route-qubes", which then sets up a network namespace in the ProxyVM for each VIF, with iptables SNAT/DNAT rules to translate between the addresses.
Since this is all done in a separate network namespace, there are no changes visible from the normal ProxyVMs/NetVMs environment, except for the fact that vif#.# is now a veth instead of a physical interface (the actual physical interface is hidden inside the network namespace).
Custom firewall rules, port mappings and custom ProxyVM iptables rules should thus continue to work unchanged.
The only action required is to upgrade core-agent-linux in all VMs that are netvms for AppVMs and reconfigure networking on all HVMs or set them to the internal IP address mode.
The code seems to work, but it could definitely use some review.
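A minimal sketch of the namespace mechanism described above (interface names, addresses, and the exact plumbing are illustrative assumptions, not the actual vif-route-qubes-nat code):

```bash
# One NAT namespace per VIF; names and addresses are illustrative assumptions.
VIF=vif5.0                 # backend interface for one AppVM
NS="nat-$VIF"
APPVM_IP=192.168.1.128     # "anonymous" address the AppVM sees
INTERNAL_IP=10.137.2.15    # internal address the rest of the system sees

ip netns add "$NS"

# Hide the real backend interface inside the namespace...
ip link set "$VIF" netns "$NS"
ip netns exec "$NS" ip link set "$VIF" up

# ...and give the ProxyVM a veth with the same name in its place.
ip link add "$VIF" type veth peer name veth-nat
ip link set veth-nat netns "$NS"
ip netns exec "$NS" ip link set veth-nat up
ip link set "$VIF" up

# Translate between the two addressing schemes inside the namespace
# (addressing and routing setup inside the namespace omitted for brevity).
ip netns exec "$NS" sysctl -w net.ipv4.ip_forward=1
ip netns exec "$NS" iptables -t nat -A POSTROUTING \
    -s "$APPVM_IP" -j SNAT --to-source "$INTERNAL_IP"
ip netns exec "$NS" iptables -t nat -A PREROUTING \
    -d "$INTERNAL_IP" -j DNAT --to-destination "$APPVM_IP"
```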
adrelanos
Aug 30, 2015
Member
For Qubes-Whonix we really could use static IP addresses. The current dpkg-trigger/search/replace approach for internal IP addresses in config files [that don't support variables] is really something I don't like, because of the extra complexity layer it adds on top.
Although ideally we could keep the 10.152.152.10 IP range. Was painful when we changed from 192 to 10. (https://forums.whonix.org/t/implemented-feature-request-suggestion-gw-network-settings/118) More scalable. Provides more IP addresses. Leaks even less likely, due to different subnets. Less confusion by "does it conflict with my router".
marmarek
Aug 30, 2015
Member
@qubesuser maybe instead of all that network namespace code, we could simply SNAT "invalid" addresses to the one expected by the ProxyVM/NetVM? This would go in place of the current DROP rule in the raw table.
qubesuser
Aug 30, 2015
The problem I see with 10.152.152.10 is that if it gets leaked over the network, it reveals that Whonix is being used, which most likely significantly reduces the anonymity set compared to just knowing that Tor is being used.
Provides more IP addresses.
What for? As far as I can tell you only really need two (IP and gateway IP), since with this scheme different AppVMs connected to the same ProxyVM can share the same address.
Intra-AppVM traffic happens via a different addressing scheme, which is currently the 10.137.x.x already in use by Qubes (but by default none of these addresses should be visible to preserve anonymity). It would be nice to make this scheme configurable, but that's a separate issue.
Leaks even less likely, due to different subnets
What does this mean exactly? Subnets different from which other subnet? How does that impact leaks?
Less confusion by "does it conflict with my router".
Anonymous VMs cannot access the home router anyway since everything is routed to Tor.
There might indeed be an issue with non-anonymous VM and non-technical users not realizing why the router is not accessible.
We could maybe run an HTTP server with a help message in the AppVM and redirect connections to ports 80/443 to it.
qubesuser
Aug 30, 2015
@marmarek That was the first thing I tried, but the problem is that SNAT can only be used in POSTROUTING in the nat table, and there doesn't seem to be any other "patch source address" or "patch arbitrary u32" functionality in iptables.
Without a way of patching the address in the raw table a network namespace seems the best solution since both conntrack and routing needs to be separate from the main system and it also nicely encapsulates the whole thing keeping compatibility.
But maybe there's some other way I missed.
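For reference, a sketch of why the direct approach is rejected (addresses are illustrative): iptables refuses the SNAT target outside the nat table's POSTROUTING chain, so a rule like the following fails at insertion time.

```bash
# Fails: SNAT is only valid in the nat table's POSTROUTING chain,
# not in raw/PREROUTING (addresses are illustrative).
iptables -t raw -A PREROUTING -s 192.168.1.128 -j SNAT --to-source 10.137.2.15
```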
marmarek added this to the Release 3.1 milestone (Aug 30, 2015)
marmarek
Aug 30, 2015
Member
On Sun, Aug 30, 2015 at 05:48:35AM -0700, qubesuser wrote:
The problem I see with 10.152.152.10 is that if it gets leaked over the network, it reveals that Whonix is being used, which most likely significantly reduces the anonymity set compared to just knowing that Tor is being used.
If that were a generic "anonymous" IP, it would only leak that some AnonVM on Qubes is used. And I think there are many simpler ways to learn that.
Less confusion by "does it conflict with my router".
Anonymous VMs cannot access the home router anyway since everything is routed to Tor.
There might indeed be an issue with non-anonymous VM and non-technical users not realizing why the router is not accessible.
Yes, exactly. And this could be hard to debug for non-technical users.
I think the single-IP scheme could be extended to general use, not only AnonVMs. This would greatly ease the network configuration code inside VMs (static configuration using generic tools, instead of custom tools using QubesDB) - at least for client-only VMs (no inter-VM networking enabled).
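For example, with a single fixed scheme, a client VM's network setup could collapse to two generic commands (a sketch using the addresses discussed in this thread):

```bash
# Static configuration with generic tools; no QubesDB lookup needed.
ip addr add 192.168.1.128/24 dev eth0
ip route add default via 192.168.1.1
```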
Anyway, this is surely too late for such a change to go into R3.0; queued for R3.1.
marmarek added the enhancement, C: core, and P: major labels (Aug 30, 2015)
adrelanos
Aug 30, 2015
Member
Provides more IP addresses.
What for?
Whonix works also outside of Qubes. [Such as data centers using a physically isolated Whonix-Gateway. Maybe one day we will also have a physically isolated Qubes-Whonix-Gateway. And no, deprecating all non-Qubes versions would be a bad strategic move. It would kill a major source of scrutiny.]
Less confusion by "does it conflict with my router".
Anonymous VMs cannot access the home router anyway since everything is routed to Tor.
There might indeed be an issue with non-anonymous VM and non-technical users not realizing why the router is not accessible.
Yes, exactly. And this could be hard to debug for non-technical users.
Constantly calming people, explaining this, to stop the FUD is also a waste of project time.
Leaks even less likely, due to different subnets
What does this mean exactly? Subnets different from which other subnet? How does that impact leaks?
Internal network interface, external network interface. Connecting different subnets doesn't happen by accident. Search term: "connect different subnet"
Whonix works also outside of Qubes. [Such as data centers using physically isolated Whonix-Gateway. Maybe one day we also have a physically isolated Qubes-Whonix-Gateway. And no, deprecating all non-Qubes would be a bad strategic move. Would kill a major source of scrutiny.]
Constantly calming people, explaining this, to stop the FUD is also a waste of project time.
Internal network interface, external network interface. Connecting different subnets doesn't happen by accident. Search term: "connect different subnet" |
This comment has been minimized.
Show comment
Hide comment
This comment has been minimized.
marmarek
Aug 30, 2015
Member
I still think that using network namespaces is an unnecessarily complex approach here. Doing double NAT at each ProxyVM/NetVM also doesn't sound easy to debug, and probably has some performance impact. Maybe we can simply abandon the use of IP addresses in firewall rules and rely on the source interface name? Then the single MASQUERADE at the end of the chain would cover all we need.
Such a change could be combined with creating separate chains for each source VM, so there would be a single rule matching a given VM (actually two of them in the case of an HVM, as there will be two interfaces: one for the emulated device and one for PV). This would make the firewall much more readable, somewhat easier to customize (/rw/config/qubes-firewall-user-script), and somewhat better performing (though probably negligibly). I was planning such a change for some time...
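A rough sketch of what such per-VM chains could look like (chain and interface names are illustrative, not an actual Qubes ruleset):

```bash
# One chain per source VM, matched by interface name instead of IP.
iptables -N QBS-vif5
iptables -A FORWARD -i vif5.0 -j QBS-vif5         # single match rule per VM
iptables -A QBS-vif5 -p tcp --dport 443 -j ACCEPT # per-VM firewall policy
iptables -A QBS-vif5 -j DROP

# One MASQUERADE at the end covers every VM, whatever address it uses.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```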
But before that, we need to think about whether we really can abandon guarding the IP address. Some possible issues:
- inter-VM traffic - does the destination VM need a reliable source IP address? Probably yes.
- whonix-gw/TorVM - AFAIR stream isolation between VMs relies on the source IP; @adrelanos, am I correct?
- some network analysis tools, traffic inspection, etc. (tcpdump, netflow, etc.)
unman
Aug 30, 2015
Member
Maybe we can simply abandon usage of IP addresses in firewall rules and rely on source interface name? Then the single MASQUERADE at the end of the chain would cover all we need.
This assumes that the firewall is only operating at one level of depth, but there are use cases where this is not so - e.g. a firewall VM in front of a TorVM. In that case, to achieve stream isolation you can't use MASQUERADE or the interface name at the TorVM.
qubesuser
Aug 31, 2015
@marmarek I did it like this to preserve compatibility with existing firewall setups, to avoid requiring custom kernel modules or routing packets to userspace, and to have a less weird configuration (the network namespace itself is a bit weird, but you can think of it as "embedding" a VM between the ProxyVM/NetVM and the AppVM, after which it's just normal NAT routing).
If the firewall design is completely changed, then the single MASQUERADE will indeed work. There is still the issue that DNAT can only be used in PREROUTING, which means that you need to perform the routing decision with iptables rules in the PREROUTING chain (so that you know what to DNAT to), then MARK the packet to pass the routing decision to Linux routing, and remove routes based on IP addresses.
The advantage of this scheme is that it's faster, does not require having the 10.137.x.x addresses at all, and allows having either no internal addresses or any scheme at all, including IPv6 addresses. The disadvantage is breaking compatibility and having a very unconventional routing setup.
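A sketch of that unconventional setup (addresses, marks, and table numbers are illustrative assumptions): mark on the pre-DNAT internal destination in mangle/PREROUTING, rewrite it in nat/PREROUTING, then route by mark.

```bash
# Routing decision made in PREROUTING, then handed to policy routing.
iptables -t mangle -A PREROUTING -d 10.137.2.20 -j MARK --set-mark 7
iptables -t nat -A PREROUTING -d 10.137.2.20 -j DNAT --to-destination 192.168.1.128
ip rule add fwmark 7 table 107                    # per-destination-VM table
ip route add 192.168.1.128 dev vif7.0 table 107   # deliver to that VM's vif
```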
It might indeed be necessary to patch Tor to isolate based on incoming interface as well as IP though (actually it would be even nicer to be able to embed an "isolation ID" in an IP option so that stream isolation works with chained ProxyVMs and patch everything to use it, not sure if realistic).
Such as data centers using physically isolated Whonix-Gateway
Having 256 addresses should suffice, though; if someone needs more, they can probably afford to spend some time renumbering.
Constantly calming people, explaining this, to stop the FUD is also a waste of project time.
I think it should be possible to route packets to an external 192.168.1.1 even if it's the internal gateway, except for the DNS and DHCP ports, and that should be enough for accessing the home router.
It should also be possible on Linux AppVMs to route packets to 192.168.1.128 externally even if it's the address of the local interface (by removing it from the "local" routing table), but that might break some software that doesn't expect that.
Internal network interface, external network interface. Connecting different subnets doesn't happen by accident. Search term: "connect different subnet"
Ah you mean accidentally routing packets to the external network bypassing Tor because it has the same subnet used for private addressing.
That's indeed worrying, but proper firewall/routing should prevent that and it can be an issue regardless of IP choice if the external network happens to have the same IP.
adrelanos
Aug 31, 2015
Member
Ah you mean accidentally routing packets to the external network bypassing Tor because it has the same subnet used for private addressing.
That's indeed worrying, but proper firewall/routing should prevent that and it can be an issue regardless of IP choice if the external network happens to have the same IP.
It should. Sure. But sometimes there are obscure bugs and leaks, unseen by anyone for years. This one is the best example that comes to mind:
- https://lists.torproject.org/pipermail/tor-talk/2014-March/032503.html
- https://www.whonix.org/wiki/Dev/Leak_Tests#FIN_ACK_.2F_RST_ACK_-_Leak_Test
That's why I appreciate the extra protection by separate subnets.
inter-VM traffic - does the destination VM need reliable source IP address? probably yes
Yes. And if this somehow included a free ARP-spoofing defense, preventing impersonation of other internal LAN IPs, that would be a bonus.
- whonix-gw/TorVM - AFAIR stream isolation between VMs relies on source IP; @adrelanos, am I correct?
Yes. (IsolateClientAddr)
Static IP addresses are very useful for Whonix. Such as for setting up Tor hidden services. And other stuff. But if you can abolish that need with iptables skills, I am all ears.
adrelanos referenced this issue (Nov 10, 2015): Whonix 12 rinetd starting before qubes-whonix-postinit replace-ips #1398 (closed)
marmarek modified the milestones: Release 4.0, Release 3.1 (Feb 8, 2016)
andrewdavidwong added the privacy label (Apr 7, 2016)
andrewdavidwong
Jun 12, 2016
Member
For tracking purposes, what is the current status of this issue?
@qubesuser's three pull requests are still open. Does more work need to be done before the PRs can be merged?
marmarek
Jun 12, 2016
Member
I think at least the VM part is (almost?) ready to merge. I still need to read it one more time to better understand it and check its correctness, but probably no changes are needed.
The dom0 part is probably completely incompatible with core3 (Qubes 4.0). But on the other hand, it should be trivial to reimplement the same thing in core3 (this is exactly why we've decided to rewrite the Qubes core scripts - to make them easier to work with).
In short: the next step is me merging the changes. If unexpected work is needed, I will leave a comment here.
marmarek referenced this issue in marmarek/old-qubes-core-admin (Oct 19, 2016): Randomize MAC address on each VM boot #3 (open)
marmarek
Oct 31, 2016
Member
Done. See the marmarek/qubes-core-admin@2c6c476 commit message for details. This will need to be documented (based on this ticket and that commit message).
marmarek closed this (Oct 31, 2016)
andrewdavidwong referenced this issue (Oct 31, 2016): Document features that allow hiding the real IP address from a VM #2410 (closed)
marmarek
Oct 31, 2016
Member
@adrelanos this requires cooperation from the ProxyVM. If the ProxyVM does not cooperate (for example, an outdated package, or no support for this feature - as in MirageOS currently), the AppVM may learn its "real" IP (10.137.x.y), or, more likely, have non-working networking. I see two options:
- do not allow setting such a non-cooperating netvm when this feature is enabled,
- allow it, but disable this feature and issue a warning - and enable it again when the connected netvm is changed or starts supporting it.
A separate issue is how to detect whether such a ProxyVM supports this feature, but it can be done in a similar way to how Windows tools are detected - the VM (the template in this case) will expose supported features in QubesDB at startup, and it will be recorded that VMs based on this template support the feature. BTW, the same mechanism can be used to configure Whonix-specific defaults for a VM (like enabling this feature automatically).
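A sketch of the VM-side half of that detection, assuming a hypothetical QubesDB key name (the real protocol may differ):

```bash
# Inside the template at startup: advertise support for the feature.
# The key path below is a hypothetical example, not the actual protocol.
qubesdb-write /features-request/hidden-ip 1
# dom0-side code would read this back from the VM's QubesDB and record that
# VMs based on this template support the feature before enabling it.
```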
marmarek referenced this issue in QubesOS/qubes-core-admin (Nov 4, 2016): Core3 hidden/custom IP implementation #67 (merged)
marmarek referenced this issue in QubesOS/qubes-core-agent-linux (Dec 13, 2016): Introduce $QREXEC_REMOTE_DOMAIN_*ID* in VMs #31 (closed)
qubesos-bot
Jun 9, 2017
Automated announcement from builder-github
The package python2-dnf-plugins-qubes-hooks-4.0.0-1.fc24 has been pushed to the r4.0 testing repository for the Fedora fc24 template.
To test this update, please install it with the following command:
sudo yum update --enablerepo=qubes-vm-r4.0-current-testing
qubesos-bot added the r4.0-fc24-cur-test label (Jun 9, 2017)
qubesos-bot referenced this issue in QubesOS/updates-status (Jun 9, 2017): core-agent-linux v4.0.0 (r4.0) #68 (closed)
qubesos-bot
Jun 9, 2017
Automated announcement from builder-github
The package python2-dnf-plugins-qubes-hooks-4.0.0-1.fc25 has been pushed to the r4.0 testing repository for the Fedora fc25 template.
To test this update, please install it with the following command:
sudo yum update --enablerepo=qubes-vm-r4.0-current-testing
qubesos-bot added the r4.0-fc25-cur-test label (Jun 9, 2017)
qubesos-bot
Jun 9, 2017
Automated announcement from builder-github
The package qubes-core-agent_4.0.0-1+deb8u1 has been pushed to the r4.0 testing repository for the Debian jessie template.
To test this update, first enable the testing repository in /etc/apt/sources.list.d/qubes-*.list by uncommenting the line containing jessie-testing, then use the standard update command:
sudo apt-get update && sudo apt-get dist-upgrade
qubesos-bot added the r4.0-jessie-cur-test label (Jun 9, 2017)
qubesos-bot
Jun 9, 2017
Automated announcement from builder-github
The package qubes-core-agent_4.0.0-1+deb9u1 has been pushed to the r4.0 testing repository for the Debian stretch template.
To test this update, first enable the testing repository in /etc/apt/sources.list.d/qubes-*.list by uncommenting the line containing stretch-testing, then use the standard update command:
sudo apt-get update && sudo apt-get dist-upgrade
qubesuser commented Aug 21, 2015
Currently Qubes exposes a 10.137.x.x address to VMs, which means it's trivial to detect that someone is using Qubes; it's also possible to tell which gateway a VM is using, as well as the order in which it was assigned its address.
Several applications such as Firefox with WebRTC support and BitTorrent clients leak the local IP address, so this can often be detected across the network in addition to within an exploited VM.
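For context, this is all an application has to do to observe the address (output shapes are illustrative):

```bash
# Any process in the VM can read the telltale address and gateway.
ip -4 addr show dev eth0   # e.g. "inet 10.137.2.15/32 scope global eth0"
ip route show default      # e.g. "default via 10.137.2.1 dev eth0"
```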
Instead, all VMs should use the most common local IP address in use (which I think should be either 192.168.1.2 or 192.168.1.128 with gateway 192.168.1.1, but some actual research should be done on this).
If the IP address conflicts with the numbering on a LAN the user wants to access, Qubes should allow changing the subnet; in this case it would be prudent to choose an address at random for each VM within the netmask specified by the user, to prevent correlation between VMs on the same host with non-default addressing (since detecting Qubes is going to be possible anyway on an exploited VM).
Firewall VMs should then NAT those addresses to an internal scheme such as 10.137.x.x (but I think 10.R.x.x where R is by default random and configurable is a better choice) so that internal networking and port forwarding can be supported.