How to access container from the LAN? #1343
More information about target system.
I have no idea what |
IshwarKanse
commented
Nov 24, 2015
|
Step 1: Create a bridge on your host; follow your distribution's guide for this. Here is an example configuration from my machine (a sketch of the whole setup follows these steps). I'm using Ubuntu 15.10.
Step 2: Create a new profile, or edit the default profile.
Step 3: Edit the profile and add your bridge to it.
Step 4: When launching new containers you can use this profile, or you can apply it to an existing container.
Restart the container if you're applying it to an existing container. Step 5: You'll need to assign a static IP to your container if you don't have DHCP in your network. |
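A minimal sketch of these steps, assuming a Debian/Ubuntu-style host whose physical NIC is eth0 and a profile named `bridged` (names and addresses are illustrative, not the exact configuration used above). The bridge goes into /etc/network/interfaces and needs the bridge-utils package:

```
# /etc/network/interfaces (host) -- replaces the existing eth0 stanza
auto br0
iface br0 inet dhcp        # or "inet static" with your LAN address
    bridge_ports eth0      # enslave the physical NIC to the bridge
    bridge_fd 0
    bridge_maxwait 0
```

Then the profile and how to use it, with the LXD commands of that era (`lxc profile apply` is the same command used later in this thread):

```
lxc profile create bridged
lxc profile device add bridged eth0 nic nictype=bridged parent=br0

# new container using the profile, or apply to an existing one and restart
lxc launch ubuntu:14.04 web -p default -p bridged
lxc profile apply existing-container default,bridged
lxc restart existing-container
```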
My
This changes the meaning for Note also that host |
|
On Tue, Nov 24, 2015 at 01:45:57AM -0800, anatoly techtonik wrote:
Is there an /etc/network/interfaces.d/eth0? Is network-manager running? lxcbr0 is created by the init job 'lxc-net' (either /etc/init/lxc-net.conf |
No.
Probably, because the host internet connection is up. How do I check?
I see reference to |
|
@stgraber https://linuxcontainers.org/lxd/news/#lxd-024-release-announcement-8th-of-december-2015 says we now have macvlan available. Wouldn't this solve this issue? |
|
Probably. Still need to figure out how to use it. My use case:
|
|
@stgraber Is there some documentation on the new macvlan functionality? |
|
Well, the various fields are documented in specs/configuration.md. It's basically:
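A sketch of what that macvlan device configuration looks like, assuming the host NIC is eth0 and a profile called `mvlan` (the same profile name that appears later in this thread):

```
# profile whose eth0 is a macvlan interface on top of the host's eth0
lxc profile create mvlan
lxc profile device add mvlan eth0 nic nictype=macvlan parent=eth0

# apply it to a container (or pass -p mvlan at launch time), then restart
lxc profile apply mycontainer mvlan
lxc restart mycontainer
```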
Note that it cannot work with WiFi networks (which is why we've never made it the default in LXC or LXD) and similarly cannot be used on links that do per-MAC 802.1X authentication. |
Thanks a lot. That clarifies it for me. @techtonik Considering all pieces, I would still go with the routing (DHCP) solution. |
|
Yep. It will be a pain if it doesn't work through WiFi. |
|
Seems like this issue is settled, @techtonik ? |
|
@srkunze, not really. So far I see no clear recipe in this thread. The answer needs to be summarized, ideally with some pictures. |
I have no idea how to do this for all types of routers. UIs change too quickly, and all routers/DHCP servers can be configured differently. @stgraber Maybe there is another, even easier solution? |
|
@srkunze summarizing up to the point where it is clear why you need a router, and where, is sufficient for now. But note that there are three possible cases:
I am actually thinking about the 3rd variant: why not use the already open channel to fetch traffic to and from the running container? With netcat, for example. |
Annakan
commented
Feb 25, 2016
|
I would like to pitch in, because every time I tried to use LXC/LXD I encountered this problem without a clear and "simple" solution.
Explanation and digression (skip this if too long)
If I may suggest, I think there are 3 or 4 kinds of configuration that are useful and should be easy to set up with a simple profile choice (at least the first two would be game changing):
End of digression
Having two profiles after install (
Back to the issue
I tried both ways.
First: @IshwarKanse's way
I added a bridge to my /etc/network/interfaces but was puzzled at what IP I should use here,
given that my host IP is DHCP-configured (fixed, but the lease needs to be kept ...).
Second: @stgraber's way with macvlan
I did exactly this: Stop
`lxc profile apply containername mvlan`
Start `containername`
But the container did not get any IP at start-up; the network interface is there with a MAC, but the associated dhclient can't get an IP. I had a look at https://www.flockport.com/lxc-macvlan-networking/ and even if I know LXC and LXD are slightly different beasts, the lxc way seemed to also set up a dedicated bridge on the host and a |
|
@techtonik What's wrong with plain old routing? At least it solves this issue: accessing a container from the LAN. I don't see much use of port forwarding right now. :) @Annakan Don't you think this is the other way round? This issue here is about how to access a container FROM the LAN. Given that the routing of the LAN is properly configured, that just works. |
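For reference, "plain old routing" amounts to something like the following; the addresses are made up (192.168.0.2 being the LXD host's LAN address and 10.0.3.0/24 the lxcbr0 subnet):

```
# on the LXD host: forward traffic between lxcbr0 and the LAN interface
sudo sysctl -w net.ipv4.ip_forward=1

# on another Linux machine in the LAN: send the container subnet via the LXD host
sudo ip route add 10.0.3.0/24 via 192.168.0.2
```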
Annakan
commented
Feb 25, 2016
|
Thanks for your answer. A computer crash made me lose my long answer, so you will be spared it ;)
I don't think it is the other way round, since that means you have to manage on the host something that concerns the container. You can't use a container without doing at least some port or IP mapping, and that's something you have to do with the IP of the container. Thus, you have to retrieve that IP and expose it on the host, a sure sign that it is something that should be managed by the container manager and not manually on the host. Or else, you have to keep tabs manually, on the host, of the rules you create for the container. Container migration is also complicated, because you have to find a way to reapply the rules on the target host. Ipchains, as far as I know, does not offer a way to tag or group rules, which makes this even more complicated and relies on the IP of the containers and the "exact identity" of rules to manage them. Besides, as far as I was able to see, the official documentation does not offer a template of such rules, and the ones I googled seemed really awkward, with strange uses of "nat"; but I confess I am not an ipchains expert, and they did not work for me in a reliable way.
The larger "view"
Isolated containers and complex service discovery, transfer of rules, total independence from the file system and automatic orchestration are a fine theoretical nirvana, but it concerns only 0.001% of the people and companies out there, the ones who dynamically span 1000s of containers across multiple data-centers. But it starts with being able to spawn an LX container and have it grab an IP from the host.
How simple I wish it to be ;)
I really think that passes by being able to write:
And get a container that is reachable from the network. Simple, useful, and immediately rewarding. |
|
That's quite some explanation. Thanks :-) So the argument goes that in order to do the "routing config" step, one needs to know the container's IP in the first place. Quite true. Manually doable, but automatic would be better. Which brings me to my next question: the to-be-configured DHCP server does not necessarily run on the host but on another network-centric host. How should LXD authenticate there to add routes? |
Annakan
commented
Feb 26, 2016
|
Yes, I would make it even more precise by saying that only the container knows its purpose and thus the connectivity and ports it needs to expose, so however you see it, providing it with the resources (port mapping, IP) is something you need to query it to achieve, and that might be problematic if it is not yet running. As for the last part of your answer, I suspect we have a misunderstanding (unless you are talking about the last, 4th, case of my long answer, which is more open thinking than the first 2). My "only" wish is either/both: 1. to have a way to make port mapping and routing a part of the container (either its definition, a launch-time value or a profile definition; I suspect a launch/definition-time value would be best), and have run/launch take care of firewall rules and bridge configuration. The various answers in this thread (from @IshwarKanse, through routing, and @stgraber, through macvlan) are supposed to give just that, except that I (and the OP, it seems) were not able to get them working manually, and I wish they could be automatically set up by either a profile or a launch configuration. Unless you are talking about DHCP security through option 82? PS: I edited my previous post to clear things up. |
|
I think I got it now. :) Well, that's something for @stgraber to decide. :) |
|
@Annakan did you try using macvlan with the parent set to the host interface? |
|
Oh, I see you mentioned it earlier. macvlan should do basically what you want; the one catch, though, is that your container can't talk to the host then, so if your host is the DHCP server, that'd be a problem. |
Annakan
commented
Feb 26, 2016
|
Thanks for the answers
Container startup fails
Yields :
|
|
Can you paste "lxc config show --expanded OpenResty"? |
Annakan
commented
Feb 26, 2016
|
I assumed you meant the "show" subcommand
hum .. parent: brwan ? |
|
ok, what about "lxc config show OpenResty" (no expanded)? |
Annakan
commented
Feb 26, 2016
|
I might have got it. I used the same container while experimenting with @IshwarKanse's solution, and at that point I tried to set up a secondary bridge (hence the brwan name of the profile). I suspect some previous profile configurations are lingering on. Or some dependency I don't understand yet. Right? |
|
Yeah, my guess is that you have local network configuration set on that container with the same device name of "eth0". Container specific config takes precedence over whatever came from profiles, so your change to the profile was effectively ignored. |
|
@stgraber How come that "macvlan should do basically what you want, the one catch though is that your container can't talk to the host then"? |
Annakan
commented
Feb 26, 2016
with
gives
so it works. SORRY for the trouble. I don't understand precisely your sentence:
When you say "local network" do you mean in the container? The fact that both the inside and the outside NIC are named eth0? |
|
@srkunze it's an odd property of macvlan: macvlan interfaces can talk to the outside and between themselves, but cannot talk to the parent device, so the host in this case. |
|
@Annakan What I mean is that if you look at "lxc config show CONTAINER_NAME", you'll most likely find an "eth0" device listed there. When building a container's configuration, LXD applies all the profiles first (in the order they were specified) and then applies the local container configuration, so if your profiles have "eth0" as a device name and your container does too in its local config, then the container's entry will override whatever came from the profiles. In other words:
At that point, the container has an eth0 device coming from its profile, which is a bridged interface.
After that, the container ignores the eth0 device coming from its profile and instead uses a macvlan interface. |
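One way to check and fix this, assuming the container is called OpenResty and the stale local device is eth0, as in this exchange:

```
# merged view: profiles plus container-local config
lxc config show --expanded OpenResty

# container-local devices only; an eth0 listed here overrides the profile's eth0
lxc config device list OpenResty

# drop the local device so the profile's definition applies again
lxc config device remove OpenResty eth0
lxc restart OpenResty
```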
|
@stgraber Great. -.- |
markc
commented
Feb 27, 2016
|
FWIW my simplistic SOHO non-enterprise approach to exposing containers to my LAN and/or public IPs is to disable the default lxcbr0 setup (10.0.3.0) and create my own lxcbr0 bridge on the host with my LAN IP (192.168.0.0). I then install the standard |
IshwarKanse
commented
Feb 27, 2016
|
I've been using LXD in production for 5 months now. I work as a System and Network Administrator at the University of Mumbai. Many services in our University, like OpenNMS for network monitoring, OTRS for helpdesk, Owncloud for file sharing, websites of many departments, Zabbix for server monitoring etc., are running in LXD containers. As mentioned in my previous post, I've created a bridge br0 on all the container hosts and created a profile called bridged which tells LXD to use the br0 bridge when launching containers. Those of you who have worked with KVM are familiar with bridges. When launching new containers, these containers get an IP address from the DHCP in our network, but I usually change it to a static address. Migration is also easy: just move the container to another host and it works, no need to change anything on the host. One of my friends recently joined a company called RKSV as a Senior Solutions Architect. It is a financial company; they provide an online trading platform. One of the applications they were working on is Upstox, a mobile app to buy and sell stocks. Being a financial app it pulls in lots of stock data in real time, so they needed a platform which provided high performance to deploy their application backend with. My friend started testing the application on various virtualization platforms. He used XenServer, which was not working for the application. Some of the developers were fans of Docker and insisted on running their application with it. Running an application on your laptop using Docker is easy, but running it in production is a different thing. You need to think about logging, monitoring, high availability, multi-host networking and all that stuff. Traditional solutions don't really work; you need services that are developed to work with Docker. In the last company that I worked for, I was working on deployment of various solutions with Docker. It was fun working with small multi-container applications, but for large complicated apps we usually preferred KVM. In my friend's case their application was experiencing very high latency in network performance with Docker. They use multicast in their network, but they couldn't get the containers to work with multicast. I suggested my friend try LXD for their application. We used Ubuntu 14.04 for the base machines and Ubuntu 14.04 as container guests. Deployed the application with all its dependency services: nodejs, mongodb, redis etc. Used Netflix Vector and other tools to test the performance. The performance was really great; we were basically running the application on bare metal. We used the bridged method. Multicast was working inside the containers. Traditional monitoring and logging solutions were working. They later switched their QA to LXD. After a month of testing they were using LXD in production. They had a few problems with their Mongo instances, but switching them to macvlan solved the problem. They are now building images with Ansible, using Jenkins + Ansible for automated deployment and Gitlab for source code management. Everything running with LXD. Whenever I meet my friend I always ask how LXD is working for them. He says they have never had a problem. Whenever a new LXD version is released, we test it for a few days and then do the upgrade on our machines. |
|
Thanks @Annakan for taking it from where I left it. I especially like this part:
So it looks like people in this thread have found the right recipe (even two) that works as a solution. Now the only thing left is to write short instructions targeted at folks like web component designers, who have never had to deal with the network stack deeper than HTTP connections. |
Annakan
commented
Feb 29, 2016
|
I am willing to draft such a short how-to.
I would then write a "how-to" document to cover all those options for a person starting with LXC/D, whether coming from Docker or not. |
|
The only way I'm aware of to overcome the macvlan issue is by having the host itself use a macvlan interface for its IP, leaving the parent device (e.g. eth0) without any IP. |
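A rough sketch of that workaround on the host (interface names are illustrative, and doing this over SSH will briefly drop your connection):

```
# host gets its address on a macvlan interface instead of eth0 itself
sudo ip link add link eth0 name mvlan0 type macvlan mode bridge
sudo ip link set mvlan0 up
sudo ip addr flush dev eth0     # parent device keeps no address of its own
sudo dhclient mvlan0            # or configure the host's static address here
```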
|
Sounds suboptimal. At least regarding the convenience all other participants of this thread strive to accomplish. |
|
There's only so much we can do with the kernel interfaces being offered to us, I'm sure there would be a lot of interested people if you were to develop a new mode for the macvlan kernel driver which acts like the bridge mode but without its one drawback. |
|
@stgraber where is the bug tracker for this |
|
The upstream kernel doesn't really have bug trackers, when someone has a fix to contribute, the patch is just sent to a mailing-list for review. Well, they have https://bugzilla.kernel.org/ but it's not very actively looked at. |
Annakan
commented
Mar 1, 2016
|
Ok, I am now trying to understand the "simple bridged" solution to get containers on the general network, the way @IshwarKanse tried to explain at the top of this thread. When I look at the "code/recipe" I see him build a new bridge with the
Unless I am mistaken, that means the
He also assigns a non-routable IP address to the bridge, but I suspect that IP class should be the same as the external
Second, he sets up a new profile that is technically identical to the default profile but points to the new bridge. So the only difference I see is that this new bridge is not managed by the
Is that a moderately good analysis or am I way off base? |
markc
commented
Mar 1, 2016
|
You could try something like this, assuming a recent Ubuntu host where the main router is 192.168.0.1, this host will be 192.168.0.2, and containers will be assigned 192.168.0.3 to 192.168.0.99...
. set USE_LXC_BRIDGE="false" in /etc/default/lxc-net
. install the regular dnsmasq package
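A minimal /etc/dnsmasq.conf consistent with this description might look like the following (interface name and ranges match the assumptions above):

```
# /etc/dnsmasq.conf (sketch)
interface=lxcbr0                          # the host bridge on the 192.168.0.0/24 LAN
bind-interfaces
domain=example.lan
dhcp-range=192.168.0.3,192.168.0.99,12h   # comment out to rely on a remote DHCP server
dhcp-option=option:router,192.168.0.1
```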
You may have to disable the DHCP server on your main router so that there is only a single DHCP server on this 192.168.0.* network segment, but that is okay because any other (perhaps wifi) hosts on this network will now also be allocated IPs from the above /etc/dnsmasq.conf file. Now try
Very simple: no multiple bridges, no iptables, no vlans, no fancy routing, no special container profiles. The trickiest part is making sure the host only has "nameserver 192.168.0.2" in /etc/resolv.conf, and so does any container or other DHCP client (if you want to take advantage of local DNS caching and customised example.lan resolution). In my case I set my first container (192.168.0.3) as the DMZ on my router, so that example.lan is a real domain accessible from the outside world on the router's external IP as well as on 192.168.0.3 internally. Update: just to clarify the point @Annakan made about relying on the original DHCP server: my strategy here works fine with a remote DHCP server, by commenting out the DHCP section in /etc/dnsmasq.conf (which effectively removes DHCP functionality from dnsmasq) and getting IPs allocated from the remote/router DHCP server. I happen to use this particular method because I find it convenient to manage ALL DHCP and DNS requests via this single host, and (because this dnsmasq server's DHCP/DNS ports are visible to the entire 192.168.0.0/24 network) it also works for all other devices I care to use, including wifi devices that connect directly to my router with the disabled DHCP server. Those devices get an IP from my host's dnsmasq server because it is now the only one on the network, and that therefore gives me a simpler way to manage ALL DHCP/DNS for any machine on this LAN and any LXD container running on multiple LXD hosts, as long as everything is on the same network segment. |
Annakan
commented
Mar 3, 2016
|
Thanks a lot for your detailed contribution. The goal in this thread is to be able to reuse a currently existing DHCP server on the network and make some containers full thin machines on the host network, without precluding the use of the host for isolated containers, and that means without destroying the base
The working solution so far: use the
Other method considered
Ideally we would like to be able to set up two bridges, one for "normal" containers which get a non-routable IP from the LXC/D managed
But one NIC can't belong to two bridges, so I see no way of doing this. Real VLANs? Can we configure the network inside some containers to get an IP on the host's external network? I don't see how for now, since the lxcbr0 bridge sits on the 10.0.3.x network.
Other situation to document
Last, we need to document the template firewall rules to expose port xxx in container "toto" on port yyy on the host, for the use case of isolated containers "à la" docker. |
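For that last point, a template of such firewall rules with purely hypothetical values (host port 8080 forwarded to port 80 of a container at 10.0.3.10):

```
# forward host port 8080 to port 80 inside the container
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
    -j DNAT --to-destination 10.0.3.10:80
# allow the forwarded traffic through
iptables -A FORWARD -p tcp -d 10.0.3.10 --dport 80 -j ACCEPT
```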
melato
commented
Jul 17, 2016
|
I use shorewall and pound to forward incoming http requests to the appropriate container. In addition, I have a reverse proxy "pound" container (which could also run directly on the host itself, but why not put it in a container). |
melato
commented
Jul 17, 2016
|
I also use shorewall to set up a separate external ssh port for each container. For example, I ssh to port 7011 of the host, which is forwarded to port 22 of the container with internal IP 10.x.x.11. Here's the configuration line in /etc/shorewall/rules for this:
`DNAT net lxc:10.17.92.11:22 tcp 7011` |
zeroware
commented
Aug 31, 2016
|
Hello, Just started to use LXD and so far it's awesome.
|
|
On Wed, Aug 31, 2016 at 12:18:02PM -0700, Zero wrote:
Sure, you can create a private bridge which doesn't have any outgoing |
emprovements
commented
Dec 18, 2016
|
@hallyn would you mind giving me some hints on how to do that? I am fairly new to Linux networking and I have started to dig into LXD. I often have only a wlan network accessible, so in order to keep it as simple as possible, keep internet access for the containers and host (keep lxdbr0 untouched), but still be able to reach the containers from the host, I think this private bridge is a perfect idea. Then I would attach my eth0 to the bridge so I can have internet over wlan0 and reach container services in a private "subnetwork" over eth0. |
|
@emprovements what is your distro/release? |
stgraber added the Discussion label Mar 8, 2017
psheets
commented
Mar 29, 2017
|
I ran into an issue getting containers to retrieve IPs from the local DNS using all of these solutions. The issue had to do with the VMware vSwitch the OS was connected to. I was able to get it to work by changing promiscuous mode to accept on the vSwitch. It is outlined here: |
Remigius2011
commented
Apr 6, 2017
|
I don't know whether this helps anybody, but in my setup (lxd 2.12 on xenial, upgraded from 2.0 installed as ubuntu package) I have launched a single test container with a bridged network named lxdbr0, then all I had to do was add a static route (currently on my windows machine, but I'll add it to my firewall):
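On a Windows machine the static route in question is added roughly like this (same addresses as described below; add -p to keep it across reboots):

```
route add 10.205.0.0 mask 255.255.255.0 192.168.1.99
```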
where 10.205.0.0/24 is the bridge network and 192.168.1.99 is the LXD host (having a second IP 10.205.0.1 for adapter lxdbr0). Assuming the container has IP 10.205.0.241, you can now ping the host and the container:
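For example, from the machine with the route in place:

```
ping 10.205.0.1      # the LXD host's address on lxdbr0
ping 10.205.0.241    # the container itself
```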
(or at least I could...). This means the LXD host acts as a gateway to the internal bridged network, more or less out of the box. |
|
After 1.5 years I finally managed to ping my container from the LAN using
|
techtonik closed this May 3, 2017
DougBrunelle
commented
May 8, 2017
|
Ubuntu containers are a very cool technology which frees us from having to use other virtualization techniques such as VirtualBox, but I perceive that there are still some usability issues that the developers might address for those of us trying to test or implement it. Perhaps there needs to be a more comprehensive 'lxd init'. My experience after days of trying various configurations and reinstallations of lxc/lxd is that I can access services on a container, such as apache2, etc., from my host machine, but not from other machines on my LAN. A suggestion for an expanded version of lxd setup follows. I am assuming that most people will want their containers accessible/networkable not only from the local machine hosting the containers, but also from other machines on their LAN, for prototyping services, testing, etc. I think that the options should be additive, in the following manner:
1. containers reachable only from the host machine;
2. containers reachable from the host and from other machines on the LAN;
3. containers reachable from the host, the LAN and the outside world.
It seems obvious that if we're going to make containers visible outside the local LAN, we should retain the capability of networking to them from the LAN as well as the host machine, for maintenance. Maybe these options already exist and can be configured, if you understand the intricacies of lxc/lxd and networking on ubuntu, but for the casual user who wants to learn more about container usage and how to configure them for use outside of the host machine, these options would definitely be helpful. It would also help sell the technology to those who are just sticking their toes into the water to see how it feels. |
dann1
commented
May 8, 2017
|
LXD is awesome as it is; if you want to configure the network, then you need to know at least basic networking. That happens with VMs too, the difference is that VBox has a GUI. |
DougBrunelle
commented
May 9, 2017
|
I think you're probably right, dann1. When in doubt, rtfm. I think what I need is a step-by-step linux networking manual that will take me from beginner to guru in five easy steps. :-) |
markc
commented
May 9, 2017
|
@DougBrunelle depends on what you want to do but if it's Option: 3 then I find the easiest way is to create a bridge on the host called lxdbr0* (duh) then during
|
Remigius2011
commented
May 9, 2017
|
@DougBrunelle, in my experience, the default when saying yes to creating a bridged network is 2; except that the computers in the network don't know how to reach it by default, as the address range of the assigned IPs is outside the address range of your local network. This means you need to establish a static route, either from the PC you're sitting at or from the default gateway of the network it is connected to. For option 3, the best is probably to have a public-facing reverse proxy, like nginx or HAProxy, which distributes requests to the right endpoints. Of course, there's some learning curve to get there, but the internet is full of help and nginx is easy to configure (compared to apache httpd). |
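For option 3, such a reverse-proxy entry could look roughly like this; the hostname and container IP are made up:

```
# /etc/nginx/sites-available/app.example.lan (sketch)
server {
    listen 80;
    server_name app.example.lan;

    location / {
        proxy_pass http://10.205.0.241:80;   # the container behind the proxy
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```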
root-prasanna
commented
Jul 8, 2017
|
@stgraber how do I get remote access to a container which is inside VirtualBox (i.e. an Ubuntu machine inside VirtualBox)? The VirtualBox VM and the container can ping each other, but the container and the remote machine cannot ping each other. All the machines have the same IP range. |
techtonik commented Nov 24, 2015
Suppose you configured your LXD server for remote access and can now manage containers on a remote machine. How do you actually run a web server in your container and access it from the network?
First, let's say that your container is already able to access the network through the `lxcbr0` interface created automatically on the host by LXC. But this interface is allocated for NAT (which is for one-way, outgoing connections), so to be able to listen for incoming connections, you need to create another interface like `lxcbr0` (called a bridge) and link it to the network card (eth0) where you want to listen for incoming traffic. So the final setup should be:
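Schematically, the setup being described is roughly this (interface names as used above):

```
container eth0 ── br0 (bridge on the host) ── host eth0 ── LAN / router
                        │
        the container gets a LAN address (DHCP or static),
        while lxcbr0 keeps serving the NAT-only containers
```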
The target system is Ubuntu 15.10