
pasta: figure out how to deal with /etc/{hosts,resolv.conf} entries #19213

Open
Luap99 opened this issue Jul 12, 2023 · 30 comments
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. pasta pasta(1) bugs or features stale-issue


@Luap99
Member

Luap99 commented Jul 12, 2023

There are some basic problems with our hosts and resolv.conf handling when the pasta network mode is used.

For resolv.conf: unless custom DNS servers are specified via config or CLI, podman reads the host's resolv.conf and copies its entries into the container's resolv.conf, except that we filter out localhost addresses because they will not be reachable from within the container netns.
Because pasta by default uses no NAT and reuses the host IP, this has the side effect that the host IP in the host netns is not reachable either. This means that if your host IP is in resolv.conf by default, podman adds an entry that is actually not reachable. This is a bad user experience, so we should figure out how best to handle it.
This also gets more complicated depending on which pasta options the user specifies (--map-gw, --address, --gateway). They all result in some NAT for the container and may make certain addresses unavailable.
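To make the filtering concrete, here is a minimal Python sketch of that logic. The helper name and shape are hypothetical, not podman's actual code (which is written in Go); it only illustrates dropping loopback nameservers plus any address known to be unreachable from the netns.

```python
import ipaddress

def usable_nameservers(resolv_conf_text, unreachable=()):
    """Return nameserver entries worth copying into the container's
    resolv.conf: loopback addresses are dropped, as is any address
    known to be unreachable from the netns (e.g. the host IP that
    pasta reuses). Hypothetical helper, not podman's implementation."""
    keep = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) != 2 or parts[0] != "nameserver":
            continue  # skip comments, options, search domains
        try:
            addr = ipaddress.ip_address(parts[1])
        except ValueError:
            continue  # malformed address
        if addr.is_loopback or str(addr) in unreachable:
            continue
        keep.append(str(addr))
    return keep
```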

For /etc/hosts it is not as problematic, but there is still the host.containers.internal entry, which should point to the host IP. Right now this is just the first non-localhost IP we find. By default this will very often be the same IP used by pasta, so again you would not actually reach the host but stay in the container netns. Many applications use host.containers.internal to connect to services running on the host, so with pasta this will not work. This may also be impacted by the pasta option --address.

We need to figure out which pasta options affect this behavior and how to deal with them accordingly. This is something we should address before making pasta the default, as it has a high chance of causing regressions if we do not deal with it correctly.

@Luap99 Luap99 added kind/bug Categorizes issue or PR as related to a bug. pasta pasta(1) bugs or features labels Jul 12, 2023
@Luap99
Member Author

Luap99 commented Jul 12, 2023

cc @sbrivio-rh @dgibson

@dgibson
Collaborator

dgibson commented Jul 13, 2023

@Luap99 a couple of background queries to help me understand the problem better.

  1. Was there a specific rationale for invoking pasta by default with --no-map-gw? With that option, there's pretty fundamentally no way to access the host from the container. We hope to make that a bit more flexible with the future forwarding model we have in mind, but that might be a while before it's implemented.

  2. Does podman have the infrastructure to allocate IP addresses (from some private network), or did it always rely on other components for that? If so we should be able to re-use that along with the DNS specific NAT options to handle resolv.conf and name resolution. But we need to get an IP from somewhere, and pasta doesn't have enough view of the surrounding network to really do so. This is kind of the inevitable tradeoff for avoiding NAT in most cases.

@Luap99
Member Author

Luap99 commented Jul 13, 2023

Was there a specific rationale for invoking pasta by default with --no-map-gw? With that option, there's pretty fundamentally no way to access the host from the container. We hope to make that a bit more flexible with the future forwarding model we have in mind, but that might be a while before it's implemented.

I think my concern was (still is) that a container must never have access to processes listening on 127.0.0.1 in the host ns, at least by default. That decision requires user opt-in (i.e. allow_host_loopback=true for slirp4netns). As I understand it, in pasta the gateway IP is by default mapped to localhost on the host, so it bypasses that guarantee. If you could map it to the actual host IP instead, I would not have any problem with it, because that one can be accessed by all the other network modes as well. But keep in mind that this means we could no longer connect to the actual network gateway, and at least in common home network setups the home router sets itself as DNS server, which means a lot of users would be hit by this problem.

Does podman have the infrastructure to allocate IP addresses (from some private network), or did it always rely on other components for that? If so we should be able to re-use that along with the DNS specific NAT options to handle resolv.conf and name resolution. But we need to get an IP from somewhere, and pasta doesn't have enough view of the surrounding network to really do so. This is kind of the inevitable tradeoff for avoiding NAT in most cases.

Exactly, that is the problem. For rootful we just assume 10.88.0.0/16 is free (if not, the user has to change it in the config manually); slirp4netns uses its default of 10.0.2.0/24 (which can also be changed in the config). Both are obviously not great, as they will not work out of the box if you already use those subnets.
And now that I think of it, the resolv.conf problem of potentially adding IPs that are not reachable would exist there too.

With pasta we have the unique advantage that we only lose a single IP, which is much better and I love that. Certainly we could just define a specific IP, and I assume we could set --dns-forward by default to implement that? If we assign an IP we must keep backwards compatibility in mind. But I think we could just pick an IP from a reserved range such as 169.254.0.0/16, which should not cause problems for users?
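For illustration, a small Python sketch of validating such a candidate before handing it to pasta. Both the helper and the specific address 169.254.1.1 are assumptions for the example, not anything podman actually uses; --dns-forward is the pasta option discussed above.

```python
import ipaddress

def dns_forward_args(candidate="169.254.1.1"):
    """Check that the candidate address really is IPv4 link-local
    (169.254.0.0/16, RFC 3927), so it cannot clash with a routed
    subnet, then build the extra pasta command-line arguments.
    Hypothetical helper for illustration only."""
    addr = ipaddress.ip_address(candidate)
    if not addr.is_link_local:
        raise ValueError(f"{candidate} is not link-local")
    return ["--dns-forward", str(addr)]
```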

@dgibson
Collaborator

dgibson commented Jul 19, 2023

Was there a specific rationale for invoking pasta by default with --no-map-gw? With that option, there's pretty fundamentally no way to access the host from the container. We hope to make that a bit more flexible with the future forwarding model we have in mind, but that might be a while before it's implemented.

I think my concern was (still is) that a container must never have access to processes listening on 127.0.0.1 in the host ns, at least by default. That decision requires user opt-in (i.e. allow_host_loopback=true for slirp4netns). As I understand it, in pasta the gateway IP is by default mapped to localhost on the host, so it bypasses that guarantee.

Your understanding is correct, so, yes, that constraint absolutely rules out map-gw in its present form.

If you could map it to the actual host ip then I would not have any problems with it because this one can be accessed by all the other network modes as well.

So, we want to allow that, but it's harder than it sounds. There are, alas, some assumptions about where things are mapped that influence how the port tracking stuff in the UDP code works. Sorting that out is definitely planned, but it's not that easy.

But then keep in mind that means we can no longer connect to the actual network gw and at least on common home network setups the home router will set itself as dns server which means a lot of users would be hit by this problem.

Right.

Does podman have the infrastructure to allocate IP addresses (from some private network), or did it always rely on other components for that? If so we should be able to re-use that along with the DNS specific NAT options to handle resolv.conf and name resolution. But we need to get an IP from somewhere, and pasta doesn't have enough view of the surrounding network to really do so. This is kind of the inevitable tradeoff for avoiding NAT in most cases.

Exactly, that is the problem. For rootful we just assume 10.88.0.0/16 is free (if not, the user has to change it in the config manually); slirp4netns uses its default of 10.0.2.0/24 (which can also be changed in the config). Both are obviously not great, as they will not work out of the box if you already use those subnets. And now that I think of it, the resolv.conf problem of potentially adding IPs that are not reachable would exist there too.

In the short to medium term, my inclination here would be to allocate a fake DNS server from the 10.88.0.0/16 range, and pass that to the --dns-forward option. Obviously that can break down if that subnet is in use, but it seems like that's probably a better option than preventing either the host, or the (real) local gateway from being the DNS server.

With pasta we have the unique advantage that we only lose a single IP, which is much better and I love that. Certainly we could just define a specific IP, and I assume we could set --dns-forward by default to implement that? If we assign an IP we must keep backwards compatibility in mind. But I think we could just pick an IP from a reserved range such as 169.254.0.0/16, which should not cause problems for users?

Well... it depends what "reserved" means, exactly. Obviously 10.0.0.0/8 or 192.168.0.0/16 can fail easily if those are used for a private network on the host. Something like 192.0.2.0/24 would probably work in practice, but not if you're trying to run this inside an example environment already using that range - and it's not really what RFC 5737 says you should use it for. Most of the other reserved ranges have similar issues.

The link local range, 169.254.0.0/16, specifically is an interesting case. Because it's link-local we can potentially use it safely even if it's also in use on the host side. This then comes down to a general question of how to handle link local addresses (both IPv4 and IPv6) in pasta. One option is to treat the "link" as purely between the guest/container and pasta, in which case we can freely assign and use link-local addresses - but it means anything only accessible to the host via link-local addressing is not accessible to the guest/container. Another is for pasta to act as though it's a window out onto one of the host's link-local spaces. At present, we're a bit of an unholy mix of the two. My long term plan is to allow either of these options - there's actually a bunch of other curly edge cases where it becomes clearer what to do when we explicitly choose one of these two options. But, again, that will require a fair bit of work to reach.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Aug 22, 2023

@Luap99 Any update on this?

@Luap99
Member Author

Luap99 commented Aug 23, 2023

No

@github-actions

github-actions bot commented Oct 4, 2023

A friendly reminder that this issue had no activity for 30 days.

@Luap99
Member Author

Luap99 commented Mar 5, 2024

@mheon @dgibson @sbrivio-rh I lost track of this one. I think we need to take a look here again and figure out the best way to fix this.

For /etc/hosts it is not as problematic, but there is still the host.containers.internal entry, which should point to the host IP. Right now this is just the first non-localhost IP we find. By default this will very often be the same IP used by pasta, so again you would not actually reach the host but stay in the container netns. Many applications use host.containers.internal to connect to services running on the host, so with pasta this will not work. This may also be impacted by the pasta option --address.

This is still an issue. Today we already look up the pasta IP by checking the interface inside the netns, so one easy fix would be to keep looking for another IP on the host if it is the same as the one pasta uses. However, this only works if the system has more than one non-localhost IP address, which may not be common. Keep in mind that we pass --no-map-gw to pasta, so using the gw address is not possible; and even if we wanted to use it, it is not suitable for us, as it remaps to localhost on the host, which we consider insecure and as such a non-starter.
So what we would ideally need from pasta is the possibility to remap some IP in the container netns to the host IP that pasta uses, without it connecting to localhost.
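The "keep looking for another IP" idea can be sketched in Python like this (a hypothetical helper for illustration, not podman's actual Go code):

```python
import ipaddress

def pick_host_ip(host_ips, pasta_ip):
    """Walk the host's addresses and return the first one that is
    neither loopback nor the address pasta reuses inside the netns,
    for use as host.containers.internal. Returns None when the host
    has no other usable address - the case noted above as possibly
    common. Hypothetical helper, not podman code."""
    for ip in host_ips:
        addr = ipaddress.ip_address(ip)
        if addr.is_loopback or ip == pasta_ip:
            continue
        return ip
    return None
```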

I am not sure where the passt/pasta work on this stands, but I think you were talking about working on some form of generic IP remapping. So maybe this is something we could implement today? It is totally fine if podman needs to pass a new option.

And then, if we can do that, we could reuse it when the nameserver is the host IP, because then podman could just write the remapped IP into the container's resolv.conf and it should just work.

The alternative, of course, is that podman could pass --address/--gateway with the old slirp4netns addresses to force NAT, but I don't think that is what any of us want.

@sbrivio-rh
Collaborator

So what we would ideally need from pasta is the possibility to remap some IP in the container netns to the host IP that pasta uses, without it connecting to localhost.

Can --dns-forward ADDR help? You would tell pasta to map ADDR to the first configured resolver, where ADDR should match whatever Podman configures in the container's /etc/resolv.conf.

I am not sure where the passt/pasta work on this stands, but I think you were talking about working on some form of generic IP remapping. So maybe this is something we could implement today? It is totally fine if podman needs to pass a new option.

This is still work in progress, I think we're quite far from having options that could be used now, unless @dgibson sees a way to do that.

@Luap99
Member Author

Luap99 commented Mar 5, 2024

So what we would ideally need from pasta is the possibility to remap some IP in the container netns to the host IP that pasta uses, without it connecting to localhost.

Can --dns-forward ADDR help? You would tell pasta to map ADDR to the first configured resolver, where ADDR should match whatever Podman configures in the container's /etc/resolv.conf.

This option only handles DNS remapping, so it does not fix the generic host.containers.internal issue.
Also, because we use --no-map-gw, this will not work with only localhost resolvers (systemd-resolved). Or at least that is what I think from reading the code, because it throws Couldn't get any nameserver address; however, it seems to work in this case, which totally confuses me.

$ grep nameserver /etc/resolv.conf 
nameserver 127.0.0.53
$ pasta --config-net --no-map-gw  --dns-forward 192.168.0.1 nslookup google.com 192.168.0.1
No routable interface for IPv6: IPv6 is disabled
Couldn't get any nameserver address
Server:		192.168.0.1
Address:	192.168.0.1#53

Non-authoritative answer:
Name:	google.com
Address: 216.58.212.142
Name:	google.com
Address: 2a00:1450:4001:82a::200e


So it is certainly an option to fix some of the problems; however, it has the same problem in that it maps to 127.0.0.1, so it will not work in the case where the host IP is used as nameserver and the resolver only listens on that IP (e.g. on eth0) but not on localhost.

@sbrivio-rh
Collaborator

So what we would ideally need from pasta is the possibility to remap some IP in the container netns to the host IP that pasta uses, without it connecting to localhost.

Can --dns-forward ADDR help? You would tell pasta to map ADDR to the first configured resolver, where ADDR should match whatever Podman configures in the container's /etc/resolv.conf.

This option only handles DNS remapping, so it does not fix the generic host.containers.internal issue. Also, because we use --no-map-gw, this will not work with only localhost resolvers (systemd-resolved). Or at least that is what I think from reading the code, because it throws Couldn't get any nameserver address; however, it seems to work in this case, which totally confuses me.

I have to admit it confuses me as well. It might be a side effect of commit bad252687271 ("conf, udp: Allow any loopback address to be used as resolver"). I need to look into this a bit further.

So it is certainly an option to fix some of the problems; however, it has the same problem in that it maps to 127.0.0.1, so it will not work in the case where the host IP is used as nameserver and the resolver only listens on that IP (e.g. on eth0) but not on localhost.

...is this case actually a thing? I've never seen systemd-resolved or dnsmasq binding to a specific address or interface.

@Luap99
Member Author

Luap99 commented Mar 5, 2024

So it is certainly an option to fix some of the problems; however, it has the same problem in that it maps to 127.0.0.1, so it will not work in the case where the host IP is used as nameserver and the resolver only listens on that IP (e.g. on eth0) but not on localhost.

...is this case actually a thing? I've never seen systemd-resolved or dnsmasq binding to a specific address or interface.

Yeah, not for systemd-resolved/dnsmasq when used as local resolvers. However, one place where I have done this is running a DNS server in podman. I put my eth0 IP in resolv.conf because I want to use it from within all my other containers, and podman has to skip localhost resolvers. Now, I could bind all addresses (0.0.0.0), but there is a catch with that as well: podman uses aardvark-dns, which by default listens on the bridge IP on port 53 to offer name resolution for container names. So this would fail if there were already a DNS server running on 0.0.0.0.
I am well aware that this is a totally obscure example; maybe nobody besides me has ever done that, and there are plenty of ways to work around it or set things up differently so that it would work, so I do not really worry about it personally.

I'm just saying this because technically you ignore the nameserver IP in this specific case and remap it to 127.0.0.1, which I find weird.

@dgibson dgibson self-assigned this Mar 6, 2024
@dgibson
Collaborator

dgibson commented Mar 6, 2024

Both pasta and slirp4netns need to make a design tradeoff to deal with the fact that they don't have the capacity to allocate a genuinely new IP for their guest. Each has chosen a different option, and to some extent this issue is a fundamental consequence of that choice:

  • slirp4netns chooses to put the guest on its own NATted subnetwork. That makes things simple for internal address handling, but it has the usual problems of NAT.
  • pasta chooses to avoid NAT and instead have the guest share the host IP. This has a number of advantages, but the cost is that it's now impossible to directly address the host from the guest.

The gw mapping option is pasta's attempt to mitigate the trade-off. It allows access to the host, but at the cost of not allowing access to the original gateway. It's also inflexible, in that it doesn't allow the user to control what address is mapped to the host, or to control which host port it maps to.

I'm intending to make this NAT special case more flexible, allowing the user (podman in this case) to choose some arbitrary address which can be mapped to the host, or even several different addresses which can be mapped to different host addresses. However, implementing this sanely has a fair bit of prerequisite work. I'm gradually getting there, but it's a pretty long road.

@sbrivio-rh
Collaborator

sbrivio-rh commented Mar 6, 2024

This option only handles DNS remapping, so it does not fix the generic host.containers.internal issue. Also, because we use --no-map-gw, this will not work with only localhost resolvers (systemd-resolved).

Doesn't aardvark-dns resolve host.containers.internal to a non-local address for a host interface? Because even with --no-map-gw, one can do this:

$ ip -4 ad sh dev enp9s0
2: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 88.198.0.164/27 brd 88.198.0.191 scope global enp9s0
       valid_lft forever preferred_lft forever
$ pasta --config-net --no-map-gw
# grep host\.containers\.internal /etc/hosts
88.198.0.164	host.containers.internal
# ping -nc1 host.containers.internal
PING host.containers.internal (88.198.0.164) 56(84) bytes of data.
64 bytes from 88.198.0.164: icmp_seq=1 ttl=64 time=0.032 ms

--- host.containers.internal ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms

Or at least that is what I think from reading the code, because it throws Couldn't get any nameserver address; however, it seems to work in this case, which totally confuses me.

$ grep nameserver /etc/resolv.conf 
nameserver 127.0.0.53
$ pasta --config-net --no-map-gw  --dns-forward 192.168.0.1 nslookup google.com 192.168.0.1
No routable interface for IPv6: IPv6 is disabled
Couldn't get any nameserver address
Server:		192.168.0.1
Address:	192.168.0.1#53

Non-authoritative answer:
Name:	google.com
Address: 216.58.212.142
Name:	google.com
Address: 2a00:1450:4001:82a::200e

I think this works because if --dns-forward is given, we set dns_match in conf.c, and not dns_host, so that's 0.0.0.0. At that point, we'll just use 0.0.0.0 as destination, which means "this host".
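The "this host" behaviour of 0.0.0.0 as a destination can be demonstrated on Linux with a short Python snippet. This is unrelated to pasta's own C code; it only illustrates the kernel behaviour the explanation above relies on.

```python
import socket

# Bind a receiver on loopback, then send a datagram to 0.0.0.0:<port>.
# On Linux, a destination of 0.0.0.0 is treated as "this host", so the
# packet is delivered locally - which is why pasta can still reach a
# local resolver when dns_host is left at 0.0.0.0.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(2)
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"query", ("0.0.0.0", port))
data, _ = recv.recvfrom(64)
```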

So it is certainly an option to fix some of the problems; however, it has the same problem in that it maps to 127.0.0.1, so it will not work in the case where the host IP is used as nameserver and the resolver only listens on that IP (e.g. on eth0) but not on localhost.

We can also pass another address, not 127.0.0.1, with the --dns option.

If this works, would there be any remaining issue?

@Luap99
Member Author

Luap99 commented Mar 6, 2024

This option only handles DNS remapping, so it does not fix the generic host.containers.internal issue. Also, because we use --no-map-gw, this will not work with only localhost resolvers (systemd-resolved).

Doesn't aardvark-dns resolve host.containers.internal to a non-local address for a host interface? Because even with --no-map-gw, one can do this:

aardvark-dns is not in the picture here; we do not use it for host.containers.internal. This entry is added to /etc/hosts in the container. I don't understand what you're trying to show in your example: the host IP 88.198.0.164 would of course be pingable inside the container, as it is assigned locally inside the netns. host.containers.internal has to resolve to an IP that reaches the host side; if a container connects to that address, it must be able to talk to services listening on the host. Or are you saying enp9s0 is not the default interface used by pasta? Then yes, using another IP on the host will work, but that requires that such another IP exists.

So it is certainly an option to fix some of the problems; however, it has the same problem in that it maps to 127.0.0.1, so it will not work in the case where the host IP is used as nameserver and the resolver only listens on that IP (e.g. on eth0) but not on localhost.

We can also pass another address, not 127.0.0.1, with the --dns option.

That means we would read resolv.conf on our side? Seems kinda silly considering that pasta already does that.
I think just using --dns-forward would work if pasta didn't throw a warning that it cannot use localhost resolvers with --no-map-gw, as this is clearly not the case.

@Luap99
Member Author

Luap99 commented Mar 6, 2024

I'm intending to make this NAT special case more flexible, allowing the user (podman in this case) to choose some arbitrary address which can be mapped to the host, or even several different addresses which can be mapped to different host addresses. However, implementing this sanely has a fair bit of prerequisite work. I'm gradually getting there, but it's a pretty long road.

Yeah, that does sound useful to me for mapping to the actual host. We would need to choose some arbitrary address inside the netns, but this shouldn't be a problem.

@sbrivio-rh
Collaborator

That means we would read resolv.conf on our side? Seems kinda silly considering that pasta already does that.

Hmm, yes, right.

I think just using --dns-forward would work if pasta didn't throw a warning that it cannot use localhost resolvers with --no-map-gw, as this is clearly not the case.

Okay, so I can prepare a patch for pasta that avoids the warning in that case. Then we need to pass another option in Podman. Should I try to make that change as well? (I'd rather leave it to you or somebody else at the moment, if possible)

@Luap99
Member Author

Luap99 commented Mar 7, 2024

Then we need to pass another option in Podman. Should I try to make that change as well? (I'd rather leave it to you or somebody else at the moment, if possible)

I can do that.

Luap99 added a commit to Luap99/common that referenced this issue Mar 13, 2024
This reverts commit 92784a2.
I plan on using --dns-forward now so we do not want to disable dns by
default, see [1].

[1] containers/podman#19213

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
hswong3i pushed a commit to alvistack/passt-top-passt that referenced this issue Mar 19, 2024
Starting from commit 3a2afde ("conf, udp: Drop mostly duplicated
dns_send arrays, rename related fields"), we won't add to c->ip4.dns
and c->ip6.dns nameservers that can't be used by the guest or
container, and we won't advertise them.

However, the fact that we don't advertise any nameserver doesn't mean
that we didn't find any, and we should warn only if we couldn't find
any.

This is particularly relevant in case both --dns-forward and
--no-map-gw are passed, and a single loopback address is listed in
/etc/resolv.conf: we'll forward queries directed to the address
specified by --dns-forward to the loopback address we found, we
won't advertise that address, so we shouldn't warn: this is a
perfectly legitimate usage.

Reported-by: Paul Holzinger <pholzing@redhat.com>
Link: containers/podman#19213
Fixes: 3a2afde ("conf, udp: Drop mostly duplicated dns_send arrays, rename related fields")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Tested-by: Paul Holzinger <pholzing@redhat.com>
@chris42

chris42 commented Apr 25, 2024

So I stumbled upon this while setting up my own little server and struggling with not being able to set 127.0.0.1 via the --dns option. The only way it worked for me was to edit resolv.conf.
Reading this, I am not sure if you have/had the same problem and how the setup looks now? Can one of you explain, please?

Why am I trying to set 127.0.0.1: I am running my own mailserver. Certain spam functionality (e.g. DNS blocklist lookups at Spamhaus) relies on my own little DNS server, as using the "big ones" out there does not allow them to monitor the number of requests and hence allow fair use for small servers. Hence I have a DNS server running in my pod, which makes it run on 127.0.0.1.

Edit: I am running 4.9.3

@Luap99
Member Author

Luap99 commented Apr 25, 2024

@chris42 This is not related to this issue. I suggest you file a new one or open a discussion (https://github.com/containers/podman/discussions) so we can better understand what you're doing. Maybe you are talking about #20562 (comment)?

@chris42

chris42 commented Apr 25, 2024

Not sure; I got triggered by your comment at the beginning, that you filter localhost out of the configured DNS servers:

"There are some basic problems with our hosts and resolv.conf handling when the pasta network mode is used.

For resolv.conf: unless custom DNS servers are specified via config or CLI, podman reads the host's resolv.conf and copies its entries into the container's resolv.conf, except that we filter out localhost addresses because they will not be reachable from within the container netns."

@SebTM

SebTM commented Apr 26, 2024

Hey, I just wanted to report another use-case which is affected by this when upgrading from podman v4 to v5 - using e.g. xDebug in the container with your IDE on the host.

For now I added this to my containers.conf:

[network]
pasta_options = ["-a", "10.0.2.0", "-n", "24", "-g", "10.0.2.2", "--dns-forward", "10.0.2.3"]

but that's not the best solution, as it requires users to set this up manually - or can this be set project-wide via a compose file somehow?

@Luap99
Member Author

Luap99 commented Apr 26, 2024

Hey, I just wanted to report another use-case which is affected by this when upgrading from podman v4 to v5 - using e.g. xDebug in the container with your IDE on the host.

For now I added this to my containers.conf:

[network]
pasta_options = ["-a", "10.0.2.0", "-n", "24", "-g", "10.0.2.2", "--dns-forward", "10.0.2.3"]

but that's not the best solution, as it requires users to set this up manually - or can this be set project-wide via a compose file somehow?

It depends what you are doing: if you use named (user-defined) networks (the default in compose), then the only way to set it is in containers.conf. If you use the pasta network mode, then something like network_mode: pasta:-a,10.0.2.0,... should work in the compose file, but that means no inter-container connectivity.
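For reference, a minimal compose sketch of that second approach. The service and image names are placeholders; the pasta options mirror the slirp4netns-style addresses quoted earlier in the thread.

```yaml
# Hypothetical compose fragment: per-container pasta options.
# network_mode bypasses named networks, so services in this file
# cannot reach each other by container name.
services:
  app:
    image: docker.io/library/alpine
    network_mode: "pasta:-a,10.0.2.0,-n,24,-g,10.0.2.2,--dns-forward,10.0.2.3"
```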

@SebTM

SebTM commented Apr 26, 2024

if you use named (user-defined) networks (default in compose)

This. Are there plans to change this, or to restore the functionality of host.containers.internal some other way?

@bmenant

bmenant commented Apr 30, 2024

@SebTM consider reverting your default networking tool to slirp4netns instead of pasta as described here: https://blog.podman.io/2024/03/podman-5-0-breaking-changes-in-detail/

@SebTM

SebTM commented Apr 30, 2024

@bmenant Thanks for the suggestion, I read the article - but that does not solve my issue: the required setup (e.g. in a project's compose files) isn't self-contained anymore, as each user has to set this up in their system containers.conf. That's what I'm trying to avoid, since (besides the xDebug use-case) there are less technical users, for whom I ship the project with a compose setup and a single command to run it so far ✌🏻

@bmenant

bmenant commented Apr 30, 2024

@SebTM Maybe set the network mode to slirp4netns from your compose file? https://docs.podman.io/en/latest/markdown/podman-pod-create.1.html#network-mode-net

Pasta works fine in a similar stack of mine (pod setup, not compose) as soon as there’s another ip address on the host to assign to host.containers.internal (podman 5.0.2 picks up a virtual network interface on my workstation for instance), as mentioned here: #22502 (comment)

@wangmaster

@SebTM consider reverting your default networking tool to slirp4netns instead of pasta as described here: https://blog.podman.io/2024/03/podman-5-0-breaking-changes-in-detail/

Thanks for posting that link. I'd missed that on the podman blog. The second option (assigning an alternate IP address for the containers) worked to provide access to the host. What I'm having a hard time finding is good documentation clarifying what the ramifications are (and why this isn't the default behavior, as it seems more like the slirp4netns behavior). The pasta man page is... about as clear as mud to me, probably because I haven't found the time to understand pasta.

@dgibson
Collaborator

dgibson commented May 1, 2024

@SebTM consider reverting your default networking tool to slirp4netns instead of pasta as described here: https://blog.podman.io/2024/03/podman-5-0-breaking-changes-in-detail/

Thanks for posting that link. I'd missed that on the podman blog. The second option (assigning an alternate IP address for the containers) worked to provide access to the host. What I'm having a hard time finding is good documentation clarifying what the ramifications are (and why this isn't the default behavior, as it seems more like the slirp4netns behavior). The pasta man page is... about as clear as mud to me, probably because I haven't found the time to understand pasta.

Both pasta and slirp4netns need to deal with the fact that we can't allocate "real" IP addresses. pasta has chosen a different approach here, which we think is better in more cases, but it's an unavoidable tradeoff, so there are some situations where pasta's approach causes trouble.

slirp's approach is to NAT the guest/container - it sees a private (usually 10.0.2.0/24) network, but packets sent from the container appear from the outside to have come from (one of) the host's IPs. This approach means there's a private IP range where we can allocate the guest's address, and anything else we need. However, NAT means that the address the container sees isn't an address that's meaningful to anything outside, so anything which tries to communicate its IP out to the world will fail.

pasta instead chooses not to NAT; the container sees the IP from which its packets will appear on the outside - (one of) the host's IPs. This both simplifies the logic and avoids the problems above. The downside is that, since it shares the host's IP, there's no easy way for the two to communicate with each other. Standalone pasta, by default, implements a special-case NAT to handle that, but it's rather limited and involves some other tradeoffs. podman disables that by default (you can re-enable it with --map-gw). We're aiming to have a more flexible setup for these special-case NATs which should be usable in more situations, but it's still a ways off.
