
How to access container from the LAN? #1343

Closed
techtonik opened this issue Nov 24, 2015 · 61 comments

Comments

@techtonik (Contributor) commented Nov 24, 2015

Suppose you have configured your LXD server for remote access and can now manage containers on a remote machine. How do you actually run a web server in your container and access it from the network?

First, let's note that your container can already reach the network through the lxcbr0 interface that LXC creates automatically on the host. But that interface is set up for NAT (outbound connections only), so to accept incoming connections you need to create another bridge interface like lxcbr0 and attach it to the network card (eth0) on which you want to listen for incoming traffic.

So the final setup should be:

  • lxcbr0 - mapped to eth0 on the guest - NAT
  • lxcbr1 - mapped to eth1 on the guest - LAN; gets an address from the LAN DHCP server and listens for incoming connections

The target system is Ubuntu 15.10
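
For concreteness, a minimal sketch (not from the thread) of what attaching such a second bridge to a container looks like in LXD, assuming a host bridge named lxcbr1 already exists and the container is called mycontainer (both names are illustrative):

# Hypothetical names: lxcbr1 is a host bridge attached to the physical eth0,
# mycontainer is the container; the device shows up as eth1 inside the guest.
lxc config device add mycontainer eth1 nic nictype=bridged parent=lxcbr1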

@techtonik (Contributor Author) commented Nov 24, 2015

More information about the target system.

$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

$ ls -la /etc/network/interfaces.d/
total 8
drwxr-xr-x 2 root root 4096 Apr 16  2015 .
drwxr-xr-x 7 root root 4096 Aug 20 00:42 ..

$ ip addr
1: lo: ...
2: eth0: ...
3: lxcbr0: ...
4: vethKWL1L8: ...

I have no idea what vethKWL1L8 is and why /etc/network/interfaces is empty.

@IshwarKanse commented Nov 24, 2015

Step 1: Create a bridge on your host; follow your distribution's guide for this. Here is an example configuration from my machine. I'm using Ubuntu 15.10.

sudo vim /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 172.31.31.35
    netmask 255.255.255.0
    gateway 172.31.31.2
    dns-nameservers 8.8.8.8 8.8.4.4
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

Step 2: Create a new profile (or edit the default profile).

lxc profile create bridged

Step 3: Edit the profile and add your bridge to it.

lxc profile edit bridged
name: bridged
config: {}
devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic

Step 4: Use this profile when launching new containers, or apply it to an existing container.

lxc launch trusty -p bridged newcontainer

or

lxc profile apply containername bridged 

Restart the container if you're applying the profile to an existing one.

Step 5: Assign a static IP to your container if you don't have DHCP on your network.
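
(For illustration, a static configuration inside an Ubuntu 14.04/15.10 container would go into the container's own /etc/network/interfaces; the addresses below simply reuse the example subnet from Step 1 and are assumptions:)

auto eth0
iface eth0 inet static
    address 172.31.31.50
    netmask 255.255.255.0
    gateway 172.31.31.2
    dns-nameservers 8.8.8.8 8.8.4.4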

@techtonik (Contributor Author) commented Nov 24, 2015

Step 1: Create a bridge on your host...

My /etc/network/interfaces is empty, but I already have eth0 and lxcbr0 configured. Where does this happen?
What are the other configuration differences between my current lxcbr0 and the proposed br0?
The address for the host's eth0 is handled dynamically by the local DHCP server, and I want the same for the guest.

Step 3: Edit the profile and add your bridge to it.

This changes the meaning of eth0 on the guest, whereas I need a new eth1 interface on the guest that is attached to the LAN. I edited the issue to clarify that I do have DHCP running on the network.

Note also that the host's eth0 already acts as the NAT interface for lxcbr0 (if I understand correctly, host eth0 is already bridged to lxcbr0), and it should also serve as the LAN interface.

@hallyn (Member) commented Nov 26, 2015

On Tue, Nov 24, 2015 at 01:45:57AM -0800, anatoly techtonik wrote:

Step 1: Create a bridge on your host...

My /etc/network/interfaces is empty, but I already have eth0 and lxcbr0 configured. Where does this happen?

Is there an /etc/network/interfaces.d/eth0? Is network-manager running?

lxcbr0 is created by the init job 'lxc-net' (either /etc/init/lxc-net.conf
or /lib/systemd/system/lxc-net.service)

@techtonik (Contributor Author) commented Nov 27, 2015

Is there /etc/network/interfaces.d/eth0?

No. The /etc/network/interfaces.d directory is empty.

is network-manager running?

Probably, because the host's internet connection is up. How do I check?
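
(For reference, one way to check on Ubuntu 15.10:)

systemctl status NetworkManager   # is the network-manager service running?
nmcli device status               # which interfaces NetworkManager currently manages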

lxcbr0 is created by the init job 'lxc-net' (either /etc/init/lxc-net.conf
or /lib/systemd/system/lxc-net.service)

I see a reference to lxc-net start, but I don't see where the configuration for lxcbr0 lives.
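
(For reference, on Ubuntu the lxc-net job reads its settings from /etc/default/lxc-net; the stock defaults look roughly like this:)

USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"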

@srkunze (Contributor) commented Dec 11, 2015

@stgraber https://linuxcontainers.org/lxd/news/#lxd-024-release-announcement-8th-of-december-2015 says we now have macvlan available.

Wouldn't this solve this issue?

@techtonik (Contributor Author) commented Dec 13, 2015

Probably. I still need to figure out how to use it. My use case:

  1. init a remote container
  2. log in to the remote
  3. check out the website on the remote
  4. run a web server on 0.0.0.0 on the remote
  5. access the web server from the local machine

@srkunze (Contributor) commented Dec 14, 2015

@stgraber Is there some documentation on the new macvlan functionality?

@stgraber (Member) commented Dec 14, 2015

Well, the various fields are documented in specs/configuration.md

It's basically:

type=nic
nictype=macvlan
parent=eth0

Note that it cannot work with WiFi networks (which is why we've never made it the default in LXC or LXD) and similarly cannot be used on links that do per-MAC 802.1X authentication.
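
(A quick way to try this on a single container, rather than through a profile, is a device add; the container name is illustrative:)

lxc config device add mycontainer eth0 nic nictype=macvlan parent=eth0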

@srkunze (Contributor) commented Dec 14, 2015

Note that it cannot work with WiFi networks (which is why we've never made it the default in LXC or LXD) and similarly cannot be used on links that do per-MAC 802.1X authentication.

Thanks a lot. That clarifies it for me.

@techtonik Considering all pieces, I would still go with the routing (DHCP) solution.

@techtonik (Contributor Author) commented Dec 15, 2015

Yep. It will be a pain if it doesn't work through WiFi.

@srkunze (Contributor) commented Dec 16, 2015

Seems like this issue is settled, @techtonik?

@techtonik (Contributor Author) commented Jan 5, 2016

@srkunze, not really. So far I see no clear recipe in this thread. The answer needs to be summarized, ideally with some pictures.

@srkunze (Contributor) commented Jan 5, 2016

The answer needs to be summarized, ideally with some pictures.

I have no idea how to do this for all types of routers. The UI changes too quickly, and every router/DHCP server can be configured differently.

@stgraber Maybe there is another, even easier solution?

@techtonik (Contributor Author) commented Jan 7, 2016

@srkunze summarizing up to the point where it is clear why you need a router, and where, is sufficient for now. But note that there are three possible cases:

  1. routing (1: only with the remote host, 2: with an external router)
  2. port forwarding on the remote host
  3. port forwarding through the channel LXC already provides

I'm actually thinking about the third variant: why not use the already-open channel to ferry traffic to and from the running container? With netcat, for example.
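
(For the second variant, a minimal sketch of host-side port forwarding with iptables; the container address 10.0.3.10 and the ports are made-up examples:)

# Forward TCP 8080 arriving on the host's eth0 to port 80 in the container
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.0.3.10:80
iptables -A FORWARD -d 10.0.3.10 -p tcp --dport 80 -j ACCEPT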

@Annakan commented Feb 25, 2016

I would like to chime in, because every time I tried to use LXC/LXD I ran into this problem without a clear and "simple" solution.

Explanation and digression

skip this if too long
Basically it is dead simple to launch an LXD container, connect to it, and install nginx, an LDAP server or a database. Cool, but then... there is no clear way to access it.
That's a need the Docker team "solved" from day one with port forwarding, and I feel it is really missing in LXD.
One of the strengths of LXD is that it can be used either as an isolated container, akin to Docker if you are of that religion, or as a very thin and "toolable" virtual machine, and that is an invaluable thing. And a big selling point for LXD.
Except I was never able to have an LXC/LXD container simply request an IP from the DHCP server available to the host and simply sit on the host network that way.
And let's face it, iptables is a mess to configure (just a quick look at pf and you'll see what I mean ;) ) and has no way to group/tag rules. So manually adding the right chains (if one can even figure them out) and keeping them updated with the IPs of the containers can quickly become nightmarish.
Again, I am not saying this to rant but to highlight something that the core developers, being well versed in the Linux network stack, might overlook as a "first contact" issue.
I do understand the theory and basics of networking, as I suspect many developers do, but that falls short of understanding the intricacies of the stack, debugging a complex configuration efficiently, and following all the moving parts that LXD introduces to do its job (a dedicated dnsmasq, a bridge with interfaces added dynamically, various kinds of NIC and networking types, etc.).

If I may suggest, I think there are 3 or 4 kinds of configuration that are useful and should be easy to set up with a simple profile choice (at least the first two would be game changing):

  1. Isolated container:
    The network configuration is expected to be obtained from the host, usually through dnsmasq (the current state). We need tools that at least mirror the port-forwarding capabilities of Docker, ideally dynamically (even at run time).
  2. Thin VM:
    The network configuration is expected to be obtained through the host, usually via DHCP. The container is a full-standing citizen on the host's external network.
  3. Thin VM / host-managed network:
    The goal is to have the network configuration done by the host, but inside a range that is itself within the host's network range and using the host's gateway, so that IPs are allocated from a range configured on the LXD host while the resulting containers remain visible from the host's external network. That might be tricky to do in a general way, but I would love to know how to do it. The use case is to allocate sub-ranges to the LXD hosts and be able to spawn thin VMs / containers on them that are first-class network citizens. And the reason for that need is that, with a tool like consul, one could then have a lot of dynamic configuration without heavy orchestration tools to dynamically manage ports in and out of the containers of a host and between container hosts.

End of digression

Having two profiles after install (defaultAsContainer and defaultAsThinVM) that could provide either (1) an isolated container with port forwarding or (2) a "thin VM" available on the host network (provided a DHCP server or IP range is available) would completely change the "first hour experience" of LXD.

Back to the issue

I tried both ways:

First, @IshwarKanse's way

I added a bridge to my /etc/network/interfaces but was puzzled about what IP I should use here:

    address 172.31.31.35
    netmask 255.255.255.0
    gateway 172.31.31.2

Given that my host IP is DHCP-configured (fixed, but the lease needs to be kept alive...).
Second, it seems that as soon as I bring the bridge up (sudo service network-manager restart) I lose the connection on the host, and the container can't get an IP at startup.
I don't doubt that @IshwarKanse's way works, but more explanation would be nice to place these configuration parameters in the context of the host's more general network setup.

Second, @stgraber's way with macvlan

I did exactly this:

Stop containername
lxc profile edit mvlan

type=nic
nictype=macvlan
parent=eth0
lxc profile apply containername mvlan
Start containername

But the container did not get any IP at startup; the network interface is there with a MAC address, but the associated dhclient can't obtain an IP.

I had a look at https://www.flockport.com/lxc-macvlan-networking/ and, even though I know LXC and LXD are slightly different beasts, the LXC way seemed to also set up a dedicated bridge on the host and lxc.network.macvlan.mode = bridge in the container config.
Is something like that the missing piece?

@srkunze (Contributor) commented Feb 25, 2016

@techtonik What's wrong with plain old routing? At least it solves this issue: accessing a container from the LAN. I don't see much use of port forwarding right now. :)

@Annakan Don't you think this is the other way round? This issue here is about how to access a container FROM the LAN. Given that the routing of the LAN is properly configured, that just works.

@Annakan commented Feb 25, 2016

Thanks for your answer

A computer crash made me lose my long answer, so you will be spared it ;)

I don't think it is the other way round, since that means managing on the host something that concerns the container. You can't use a container without doing at least some port or IP mapping, and that is something you have to do with the container's IP. Thus, you have to retrieve that IP and expose it on the host, a sure sign that this is something that should be managed by the container manager and not manually on the host.

Or else you have to keep tabs manually, on the host, of the rules you create for the container.
You have to update and delete them, and that means you have to create complicated mechanisms to keep them in sync.

Container migration is also complicated, because you have to find a way to reapply the rules on the target host.
On the other hand, if the container profile contains the network model (like: I use my host's DHCP, or I expose ports X and Y to my host or through my host, which are different situations), then it is simple to migrate containers, activate them, or shut them down.

iptables, as far as I know, does not offer a way to tag or group rules, making this even more complicated and forcing you to rely on the IPs of the containers and the "exact identity" of rules to manage them.
It is, honestly, a mess of a packet-filter language.

Besides, as far as I was able to see, the official documentation does not offer a template of such rules, and the ones I googled seemed really awkward, with strange uses of "nat". But I confess I am not an iptables expert; they did not work for me in a reliable way.

The larger "view"

Isolated containers with complex service discovery and transfer of rules, total independence from the file system, and automatic orchestration are a fine theoretical nirvana, but they concern only 0.001% of the people and companies out there, the ones who dynamically span thousands of containers across multiple data centers.
This is the use case of Docker, and it is a very narrow target. LXC/LXD has a real card to play by being able to scale from a "thin VM" that can be spawned by code to a "by the book" immutable container, and to offer a path for companies to go from one point to the other.

But it starts with being able to spawn an LXD container, have it grab an IP from the host DHCP [edit for clarity: the same DHCP server as the host, or whatever DHCP server is available], and be useful right away.
Then one can add configuration management (Salt/Puppet etc.), dynamic configuration (consul, zookeeper), and then evaluate the cost of abstracting the filesystem and database and making those containers immutable and idempotent. Docker is the religion of the immutable container; LXC/LXD can offer something much more flexible and address a much broader market.

How simple I wish it to be ;)

I really think that starts with being able to write:

lxc remote add images images.linuxcontainers.org
lxc launch images:centos/7/amd64 centos  -p AnyDHCPAvaliableToHostNetworkProfile

And get a container that is reachable from the network. Simple, useful, and immediately rewarding.

@srkunze (Contributor) commented Feb 26, 2016

That's quite some explanation. Thanks :-)

So the argument goes that, in order to do the "routing config" step, one needs to know the container's IP in the first place. Quite true. Manually doable, but automatic would be better.

Which brings me to my next question: the to-be-configured DHCP server does not necessarily run on the host, but on another, network-centric host. How should LXD authenticate there to add routes?

@Annakan commented Feb 26, 2016

Yes, and I would make it even more precise by saying that only the container knows its purpose, and thus the connectivity and ports it needs to expose. So however you look at it, providing it with the resources (port mapping, IP) is something you need to query it for, and that might be problematic if it is not yet running.
Better to make that part of its definition and have the environment set up as automatically as possible from there; my understanding is that's what profiles are for, to make the junction between launch time and run time.

As for the last part of your answer, I suspect we have a misunderstanding (unless you are talking about the last, 4th, case of my long answer, which is more open thinking than the first two).

My "only" wish is either/both

1. To have a way to make port mapping and routing part of the container (either its definition, a launch-time value, or a profile definition; I suspect a launch/definition-time value would be best), and have run/launch take care of firewall rules and bridge configuration.
2. To have a way to launch a container grabbing its IP and stack configuration from a DHCP server outside the host (the same one the host potentially got its IP from), basically having the bridge and port configuration pass the DHCP offer through to the container and letting the dhclient in it take it from there.

The various answers in this thread (from @IshwarKanse, through routing, and @stgraber, through macvlan) are supposed to give just that, except that I (and the OP, it seems) were not able to get them working manually, and I wish they could be set up automatically by either a profile or a launch configuration.

Unless you are talking about DHCP security through option 82?

PS: I edited my previous post to clear things up.

@srkunze (Contributor) commented Feb 26, 2016

I think I got it now. :)

Well, that's something for @stgraber to decide. :)

@stgraber (Member) commented Feb 26, 2016

@Annakan did you try using macvlan with the parent set to the host interface?

@stgraber (Member) commented Feb 26, 2016

Oh, I see you mentioned it earlier. macvlan should do basically what you want; the one catch, though, is that your container can't talk to the host, so if your host is the DHCP server, that'd be a problem.

@Annakan commented Feb 26, 2016

Thanks for the answers
I did exactly this :

> Stop OpenResty
> lxc profile edit mvlan

type=nic
nictype=macvlan
parent=eth0


> lxc profile apply OpenResty mvlan
> Start OpenResty

lxc profile edit brwan gives exactly this

###
### Note that the name is shown but cannot be changed

name: brwan
config: {}
devices:
  eth0:
    nictype: macvlan
    parent: eth0
    type: nic

Container startup fails

lxc info --show-log OpenResty

Yields:
lxc 20160226161814.349 INFO lxc_seccomp - seccomp.c:parse_config_v2:449 - Adding compat rule for delete_module action 327681
lxc 20160226161814.349 INFO lxc_seccomp - seccomp.c:parse_config_v2:456 - Merging in the compat seccomp ctx into the main one
lxc 20160226161814.349 INFO lxc_conf - conf.c:run_script_argv:367 - Executing script '/usr/bin/lxd callhook /var/lib/lxd 4 start' for container 'OpenResty', config section 'lxc'
lxc 20160226161814.349 INFO lxc_start - start.c:lxc_check_inherited:247 - closed inherited fd 3
lxc 20160226161814.349 INFO lxc_start - start.c:lxc_check_inherited:247 - closed inherited fd 8
lxc 20160226161814.360 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:178 - using monitor sock name lxc/d78a9d7e97b4b375//var/lib/lxd/containers
lxc 20160226161814.375 DEBUG lxc_start - start.c:setup_signal_fd:285 - sigchild handler set
lxc 20160226161814.375 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161814.375 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161814.375 DEBUG lxc_console - console.c:lxc_console_peer_default:524 - no console peer
lxc 20160226161814.375 INFO lxc_start - start.c:lxc_init:484 - 'OpenResty' is initialized
lxc 20160226161814.376 DEBUG lxc_start - start.c:__lxc_start:1247 - Not dropping cap_sys_boot or watching utmp
lxc 20160226161814.377 INFO lxc_start - start.c:resolve_clone_flags:944 - Cloning a new user namespace
lxc 20160226161814.399 ERROR lxc_conf - conf.c:instantiate_veth:2590 - failed to attach 'veth2FKB5C' to the bridge 'brwan': Operation not permitted
lxc 20160226161814.414 ERROR lxc_conf - conf.c:lxc_create_network:2867 - failed to create netdev
lxc 20160226161814.414 ERROR lxc_start - start.c:lxc_spawn:1011 - failed to create the network
lxc 20160226161814.414 ERROR lxc_start - start.c:__lxc_start:1274 - failed to spawn 'OpenResty'
lxc 20160226161814.414 INFO lxc_conf - conf.c:run_script_argv:367 - Executing script '/usr/share/lxcfs/lxc.reboot.hook' for container 'OpenResty', config section 'lxc'
lxc 20160226161814.918 INFO lxc_conf - conf.c:run_script_argv:367 - Executing script '/usr/bin/lxd callhook /var/lib/lxd 4 stop' for container 'OpenResty', config section 'lxc'
lxc 20160226161814.993 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_init_pid failed to receive response
lxc 20160226161814.993 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_init_pid failed to receive response
lxc 20160226161814.994 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161814.994 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161815.001 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161815.001 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161815.003 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161858.875 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161858.875 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161858.883 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161858.887 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161858.887 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161858.889 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161858.897 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161922.688 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161922.688 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161922.690 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161922.694 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161922.694 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161922.696 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161922.697 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161932.011 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161932.011 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161932.013 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161932.016 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161932.016 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161932.025 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161932.027 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226165637.738 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226165637.738 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226165637.747 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226165637.751 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226165637.751 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226165637.759 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226165637.761 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error

It seems that LXD tries to link the macvlan to a bridge named after the profile (brwan) and not to the host interface (eth0 in my case), unless the error message is misleading. Or do I need to create a separate bridge named after the profile to receive the virtual interfaces? (But then I would need to remove the eth0 host interface from the lxcbr0 bridge, right? And thus lose the other containers' connectivity?)

@stgraber (Member) commented Feb 26, 2016

Can you paste "lxc config show --expanded OpenResty"?

@Annakan commented Feb 26, 2016

I assumed you meant the "show" subcommand
lxc config show --expanded OpenResty

name: OpenResty
profiles:
- brwan
config:
  volatile.base_image: 4dfde108d4e03643816ce2b649799dd3642565ca81a147c9153ca34c151b42ea
  volatile.eth0.hwaddr: 00:16:3e:8a:3a:e1
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":310000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":310000,"Nsid":0,"Maprange":65536}]'
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: brwan
    type: nic
  root:
    path: /
    type: disk
ephemeral: false

Hmm... parent: brwan?

@stgraber (Member) commented Feb 26, 2016

ok, what about "lxc config show OpenResty" (no expanded)?

@Annakan commented Feb 26, 2016

I might have got it: I used the same container when experimenting with @IshwarKanse's solution, and at that point I tried to set up a secondary bridge (hence the brwan name of the profile).

I suspect some previous profile configuration is lingering, or some dependency I don't understand yet.
I shall try with a completely fresh container; I should not have reused my previous one.

Right?

@srkunze (Contributor) commented Feb 26, 2016

@stgraber Great. -.-
So, what's the solution here? A custom wrapper deciding whether to talk to the host directly or use macvlan? Or is there a standard solution to this?

@markc commented Feb 27, 2016

FWIW my simplistic SOHO non-enterprise approach to exposing containers to my LAN and/or public IPs is to disable the default lxcbr0 setup (10.0.3.0) and create my own lxcbr0 bridge on the host with my LAN IP (192.168.0.0). I then install the standard dnsmasq package on my Kubuntu host, disable DHCP on my LAN router, and use a single /etc/dnsmasq.conf config file to manage all DHCP and DNS queries for all my containers and other hosts on my LAN.

@IshwarKanse commented Feb 27, 2016

I've been using LXD in production for 5 months now. I work as a System and Network Administrator at the University of Mumbai. Many services in our university, like OpenNMS for network monitoring, OTRS for helpdesk, ownCloud for file sharing, the websites of many departments, Zabbix for server monitoring, etc., are running in LXD containers. As mentioned in my previous post, I've created a bridge br0 on all the container hosts and created a profile called bridged which tells LXD to use the br0 bridge when launching containers. Those of you who have worked with KVM are familiar with bridges. When launching new containers, they get an IP address from the DHCP server in our network, but I usually change it to a static address. Migration is also easy: just move the container to another host and it works, no need to change anything on the host.

One of my friends recently joined a company called RKSV as a Senior Solutions Architect. It is a financial company; they provide an online trading platform. One of the applications they were working on is Upstox, a mobile app to buy and sell stocks. Being a financial app, it pulls in lots of stock data in real time, so they needed a platform that provided high performance for their application backend. My friend started testing the application on various virtualization platforms. He used XenServer, which was not working for the application. Some of the developers were fans of Docker and insisted on running the application with it. Running an application on your laptop using Docker is easy, but running it in production is a different thing: you need to think about logging, monitoring, high availability, multi-host networking and all that stuff. Traditional solutions don't really work; you need services that are developed to work with Docker. In the last company I worked for, I handled deployments of various solutions with Docker. It was fun working with small multi-container applications, but for large, complicated apps we usually preferred KVM. In my friend's case, their application was experiencing very high network latency with Docker. They use multicast in their network, but they couldn't get the containers to work with multicast. I suggested that my friend try LXD for their application. We used Ubuntu 14.04 for the base machines and Ubuntu 14.04 as the container guests, deployed the application with all its dependency services (nodejs, mongodb, redis, etc.), and used Netflix Vector and other tools to test the performance. The performance was really great; we were basically running the application on bare metal. We used the bridged method. Multicast was working inside the containers. Traditional monitoring and logging solutions were working. They later switched their QA to LXD, and after a month of testing they were using LXD in production. They had a few problems with their Mongo instances, but switching those to macvlan solved the problem. They are now building images with Ansible, using Jenkins + Ansible for automated deployment and GitLab for source code management. Everything running with LXD.

Whenever I meet my friend I always ask how LXD is working for them. He says they never had a problem. Whenever a new LXD version is released, we test it for a few days and then upgrade our machines.

@techtonik (Contributor Author) commented Feb 28, 2016

Thanks @Annakan for taking it from where I left it. I especially like this part:

(2) a "thin VM" available on the host network (provided a DHCP server or IP range is available) would completely change the "first hour experience" of LXD.

So it looks like people in this thread have found the right recipe (even two) that works as a solution. Now the only thing left is to write a short set of instructions targeted at folks like web component designers who have never had to deal with the network stack any deeper than HTTP connections.

@Annakan commented Feb 29, 2016

I am willing to draft such a short how-to.
I would just love to document this subject completely, covering:

  • whether it is possible to overcome the "can't reach the container IP from the LXD host" macvlan drawback with some clever routing magic (but I suspect it is not, because one NIC now has two MAC addresses and there is no way to route below the IP level; I might be wrong though)
  • the @IshwarKanse way with a dedicated bridge
  • and ultimately, to be complete, a template of the proper iptables rules for a standard container that exposes only some ports.

I would then write a "how to" document covering all those options for a person starting with LXC/LXD, whether coming from Docker or not.
I'll come back later to investigate @IshwarKanse's solution and try to get it working too.

@stgraber (Member) commented Feb 29, 2016

The only way I'm aware of to overcome the macvlan issue is by having the host itself use a macvlan interface for its IP, leaving the parent device (e.g. eth0) without any IP.
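
(A rough sketch of that workaround with iproute2; the name macvlan0 is arbitrary, eth0 is the physical NIC, and the host's address/DHCP client moves onto the macvlan interface:)

ip link add link eth0 name macvlan0 type macvlan mode bridge
ip link set macvlan0 up
dhclient macvlan0    # or configure a static address; eth0 itself keeps no IP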

@srkunze (Contributor) commented Feb 29, 2016

Sounds suboptimal, at least regarding the convenience all the other participants in this thread are striving for.

@stgraber (Member) commented Feb 29, 2016

There's only so much we can do with the kernel interfaces offered to us. I'm sure there would be a lot of interested people if you were to develop a new mode for the macvlan kernel driver which acts like bridge mode but without its one drawback.

@techtonik (Contributor Author) commented Mar 1, 2016

@stgraber where is the bug tracker for this macvlan? I wonder if somebody has already reported the issue.

@stgraber (Member) commented Mar 1, 2016

The upstream kernel doesn't really have bug trackers; when someone has a fix to contribute, the patch is just sent to a mailing list for review.

Well, they have https://bugzilla.kernel.org/ but it's not very actively looked at.

@Annakan commented Mar 1, 2016

OK, I am now trying to understand the "simple bridged" solution to get containers on the general network, the way @IshwarKanse tried to explain at the top of this thread.

When I look at the "code/recipe" I see him build a new bridge containing the host's eth0 interface.

Unless I am mistaken, that means the eth0 interface will leave the base lxcbr0 bridge, am I right? (Or can a physical interface be part of two different bridges? When I tried this I lost connectivity to the outside world on the eth0 interface, but that might also be a routing problem.)

He also assigns a non-routable IP address to the bridge, but I suspect that IP range should be the same as the external DHCP host's.

Second, he sets up a new profile that is technically identical to the default profile but points to the new bridge.

So the only difference I see is that this new bridge is not managed by the dnsmasq daemon.

Is that a moderately good analysis, or am I way off base?

@markc commented Mar 1, 2016

You could try something like this, assuming a recent Ubuntu host, a main router at 192.168.0.1, this host at 192.168.0.2, and containers assigned 192.168.0.3 to 192.168.0.99...

. set USE_LXC_BRIDGE="false" in /etc/default/lxc-net
. make sure /etc/lxc/lxc-usernet has something like "YOUR_USERNAME veth lxcbr0 10"
. add this to /etc/network/interfaces (change IP and devices to match your needs)...

auto eth0
iface eth0 inet manual

auto lxcbr0
iface lxcbr0 inet static
  address 192.168.0.2
  netmask 255.255.255.0
  gateway 192.168.0.1
  dns-nameserver 192.168.0.2
  dns-search example.lan
  bridge_ports eth0
  bridge_stp off

. install the regular dnsmasq package
. create a /etc/dnsmasq.conf file something like this (change example.lan to your domain)...

domain-needed
bogus-priv
no-resolv
no-poll
expand-hosts
log-queries
log-dhcp
cache-size=10000
no-negcache
local-ttl=60
log-async=10
dns-loop-detect
except-interface=eth0
listen-address=192.168.0.2
server=8.8.8.8
server=8.8.4.4
domain=example.lan
local=/example.lan/

host-record=gw.example.lan,192.168.0.1
host-record=host.example.lan,192.168.0.2
host-record=example.lan,192.168.0.3
host-record=c3.example.lan,192.168.0.3
host-record=c4.example.lan,192.168.0.4
host-record=c5.example.lan,192.168.0.5
ptr-record=example.lan,192.168.0.3
ptr-record=host.example.lan,192.168.0.2
mx-host=example.lan,example.lan,10
txt-record=example.lan,"v=spf1 mx -all"
cname=www.example.lan,example.lan

# DHCP

dhcp-range=192.168.0.3,192.168.0.99,255.255.255.0
dhcp-option=option:domain-search,example.lan
dhcp-option=3,192.168.0.1

dhcp-host=c3,192.168.0.3
dhcp-host=c4,192.168.0.4
dhcp-host=c5,192.168.0.5

You may have to disable the DHCP server on your main router so that there is only a single DHCP server on this 192.168.0.* network segment but that is okay because any other (perhaps wifi) hosts on this network will now also be allocated IPs from the above /etc/dnsmasq.conf file.

Now try lxc launch YOUR_IMAGE c3 (or c4, c5, etc.) and the default image will get an IP from your host's dnsmasq DHCP server according to whatever is mapped at the end of the above /etc/dnsmasq.conf.

Very simple, no multiple bridges, no iptables, no vlans, no fancy routing, no special container profiles.

The trickiest part is making sure the host has only "nameserver 192.168.0.2" in /etc/resolv.conf, as should any container or other DHCP client (if you want to take advantage of local DNS caching and customised example.lan resolution). In my case I set my first container (192.168.0.3) as the DMZ on my router, so that example.lan is a real domain accessible from the outside world on the router's external IP as well as on 192.168.0.3 internally.

Update: just to clarify the point @Annakan made about relying on the original DHCP server. My strategy here works fine with a remote DHCP server: comment out everything below the DHCP section in /etc/dnsmasq.conf, which effectively removes DHCP functionality from dnsmasq, and let IPs be allocated by the remote/router DHCP server. I happen to use this particular method because I find it convenient to manage ALL DHCP and DNS requests via this single host, and (because this dnsmasq server's DHCP/DNS ports are visible to the entire 192.168.0.0/24 network) it also works for all other devices I care to use, including wifi devices that connect directly to my router with its DHCP server disabled. Those devices get an IP from my host's dnsmasq server because it's now the only one on the network, and that gives me a simpler way to manage ALL DHCP/DNS for any machine on this LAN and any LXD container running on multiple LXD hosts, as long as everything is on the same network segment.

@Annakan commented Mar 3, 2016

Thanks a lot for your detailed contribution.
Your solution is only doable in an environment where you can basically "take over" the network and make your LXD/LXC machine the new DHCP server (it might be more useful to reuse the DHCP server already on the network, especially if you want more than one LXC/LXD container host; but then you would answer the need expressed in this thread exactly).

The goal in this thread is to be able to reuse an existing DHCP server on the network and make some containers full thin machines on the host network, without precluding the use of the host for isolated containers, and that means without destroying the base lxcbr0 bridge.
It is basically to be able to do with LXC/LXD what VMware/VBox do natively by letting you choose a network configuration of bridged, NAT or host-only *per VM*.
The whole configuration should not rely on you having control of anything beyond the host, because in most "enterprise" situations one doesn't... And besides, anything that relies on some LXC/LXD-specific configuration beyond the host (on another machine than the host) has a poor chance of scaling well (what you did on another machine or network component to accommodate your LXC/LXD host will either not work or have to be duplicated for another LXC/LXD host).

The working solution so far:

Use the macvlan option.
Benefit: your VM stands on the host network like any other non-container machine on that network.
Drawback: your host can't talk to the container, which might make automation difficult.

Other method considered

Ideally we would like to be able to set up two bridges: one for "normal" containers that get a non-routable IP from the LXC/LXD-managed dnsmasq, the other a "transparent" bridge that would connect the containers designated as "thin VMs" directly to the host's external network and let them get IPs from a DHCP server on that network.

But one NIC can't belong to two bridges, so I see no way of doing this. Real VLANs?

Can we configure the network inside some containers to get an IP on the host's external network? I don't see how for now, since the lxcbr0 bridge sits on the 10.0.3.x network.

Other situation to document

Last, we need to document the template firewall rules to expose port xxx in container "toto" as port yyy on the host, for the use case of isolated containers "à la" Docker.
I'll be digging into the iptables documentation again to achieve that.
Ideally I would love LXD to take care of that and implement the same port and "link" concepts as Docker.
Ports (and links) are useful concepts; volumes are much more dubious the way Docker defines them (dummy ref-counted containers).

@melato commented Jul 17, 2016

I use shorewall and pound to forward incoming http requests to the appropriate container.
With shorewall, you can configure iptables to forward port 80 to a specific container (on port 80 or any other port).

In addition, I have a reverse proxy "pound" container (which could also run directly on the host itself, but why not put it in a container).
Using shorewall, I've configured iptables to map incoming port 80 to pound:8080
pound looks at the Host: header and forwards the request to port 80 of the appropriate container.
The container that has the web server listens to port 80, as usual. It should be configured to log the original ip address of the request, instead of the ip address of the pound container.
I start with the two-interface configuration for shorewall. I've used it with both the lxcbr0 and lxdbr0 interfaces.
I configure my containers with static ip addresses.
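
(For illustration, the port-80 redirect described above is a single line in /etc/shorewall/rules, in the same format as the SSH example in the next comment; the pound container's address here is made up:)

DNAT net lxc:10.0.3.20:8080 tcp 80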

@melato commented Jul 17, 2016

I also use shorewall to set up a separate external SSH port for each container. For example, I ssh to port 7011 of the host, which is forwarded to port 22 of the container with internal IP 10.x.x.11. Here's the configuration line in /etc/shorewall/rules for this:

DNAT net lxc:10.17.92.11:22 tcp 7011

@zeroware commented Aug 31, 2016

Hello,

Just started to use LXD and so far it's awesome.
I was wondering if you could assign a second interface to all the containers.
This interface would act as an internal LAN, local only to the host.
Then you could combine this with the macvlan solution and you'd be able to:

  • Reach your containers on the same LAN the host belongs to
  • Reach your containers from inside the host over the internal LAN, using the secondary interface on the containers

@hallyn (Member) commented Sep 1, 2016

On Wed, Aug 31, 2016 at 12:18:02PM -0700, Zero wrote:

> I was wondering if you could assign a second interface to all the containers,
> acting as an internal LAN local only to the host, combined with the macvlan solution.

Sure, you can create a private bridge which doesn't have any outgoing NICs attached, then add to the default LXD profile a second NIC which is on that bridge.
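
(A minimal sketch of that suggestion, with made-up names and addresses; containers would then need an address on the internal subnet, e.g. statically or from a dnsmasq you run on that bridge:)

# Host side: a bridge with no physical NICs attached
ip link add name lxdbr1 type bridge
ip link set lxdbr1 up
ip addr add 10.10.10.1/24 dev lxdbr1

# LXD side: add it to the default profile as a second NIC (appears as eth1 in containers)
lxc profile device add default eth1 nic nictype=bridged parent=lxdbr1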

@emprovements commented Dec 18, 2016

@hallyn would you mind giving me some hints on how to do that? I am fairly new to Linux networking and have started to dig into LXD. I often have only a WLAN network available, so in order to keep things as simple as possible, keep internet access for the containers and the host (keep lxdbr0 untouched), but still be able to reach the containers from the host, I think this private bridge is the perfect idea. Then I would attach my eth0 to the bridge so I can have internet over wlan0 and reach container services on the private "subnetwork" over eth0.
Thanks!

@hallyn (Member) commented Dec 19, 2016

@emprovements what is your distro/release?

@stgraber added the Discussion label Mar 8, 2017

@psheets commented Mar 29, 2017

I ran into an issue getting containers to retrieve IPs from the local DHCP server using all of these solutions. The issue had to do with the VMware vSwitch the OS was connected to. I was able to get it to work by changing promiscuous mode to Accept on the vSwitch. It is outlined here:
https://medium.com/@philsheets/vmware-lxd-external-dhcp-server-for-containers-2f1470995111

@Remigius2011 commented Apr 6, 2017

I don't know whether this helps anybody, but in my setup (LXD 2.12 on xenial, upgraded from 2.0 installed as an Ubuntu package) I launched a single test container with a bridged network named lxdbr0, and then all I had to do was add a static route (currently on my Windows machine, but I'll add it to my firewall):

$ route add 10.205.0.0 MASK 255.255.255.0 192.168.1.99

where 10.205.0.0/24 is the bridge network and 192.168.1.99 is the LXD host (which has a second IP, 10.205.0.1, on the lxdbr0 adapter). Assuming the container has IP 10.205.0.241, you can now ping both the host and the container:

$ ping 10.205.0.1
$ ping 10.205.0.241

(or at least I could...). This means the LXD host acts as a gateway to the internal bridged network, more or less out of the box.
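
(On a Linux client or router, the equivalent static route would be:)

ip route add 10.205.0.0/24 via 192.168.1.99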

@techtonik (Contributor Author) commented May 3, 2017

After 1.5 years I finally managed to ping my container from the LAN using a macvlan interface. I don't need to access the container from the host machine, so if anybody needs that setup, you are more than welcome to open a separate issue.

The macvlan solution is documented in #3273.

@techtonik closed this May 3, 2017

@DougBrunelle commented May 8, 2017

Ubuntu containers are a very cool technology that frees us from having to use other virtualization techniques such as VirtualBox, but I perceive that there are still some usability issues the developers might address for those of us trying to test or implement it.

Perhaps there needs to be a more comprehensive 'lxd init'. My experience, after days of trying various configurations and reinstallations of lxc/lxd, is that I can access services on a container (such as apache2) from my host machine, but not from other machines on my LAN.

A suggestion for an expanded version of lxd setup follows. I am assuming that most people will want their containers accessible/networkable not only from the local machine hosting the containers but also from other machines on their LAN, for prototyping services, testing, etc. I think the options should be additive, in the following manner:

  1. Option 1: By default, containers are accessible from the host machine only. This seems to be the current situation.
  2. Option 2: Make the containers accessible from other computers on the user's LAN as well as from the host machine.
  3. Option 3: Make the containers visible/usable from the Internet, the user's LAN, and the host machine.
  4. Option 4: A manual networking setup for the container, for network and container gurus.

It seems obvious that if we're going to make containers visible outside the local LAN, we should retain the ability to reach them from the LAN as well as from the host machine, for maintenance.

Maybe these options already exist and can be configured if you understand the intricacies of lxc/lxd and networking on Ubuntu, but for the casual user who wants to learn more about container usage and how to configure containers for use outside of the host machine, these options would definitely be helpful. They would also help sell the technology to those who are just sticking their toes into the water to see how it feels.

@dann1 commented May 8, 2017

LXD is awesome as it is. If you want to configure the network, then you need to know at least basic networking. That happens with VMs too; the difference is that VBox has a GUI.

@DougBrunelle commented May 9, 2017

I think you're probably right, dann1. When in doubt, RTFM. I think what I need is a step-by-step Linux networking manual that will take me from beginner to guru in five easy steps. :-)

@markc commented May 9, 2017

@DougBrunelle it depends on what you want to do, but if it's Option 3 then I find the easiest way is to create a bridge on the host called lxdbr0 (duh), then during lxd init answer no to "Would you like to create a new network bridge (yes/no) [default=yes]?"; your new containers will then pick up a local LAN or public IP from the nearest DHCP server. Once the host bridge is set up, the rest "just works". I've got half a dozen local LAN boxes set up like this. After setting up one container on my laptop as my main local DNS server for local LAN resolution (plus a resolver for upstream caching), I was able to lxc copy it to my NAS, twiddle my DHCP router to give the copied container the same IP, and all my local LAN + upstream DNS resolution kept working. Once lxc move works reliably I'll start pushing containers set up on the laptop in front of me to live public servers.

  • see this previous post above for host bridge setup hints...
    #1343 (comment)

@Remigius2011 commented May 9, 2017

@DougBrunelle, in my experience, the default when you say yes to creating a bridged network is option 2, except that the other computers in the network don't know how to reach the containers by default, as the assigned IPs are outside the address range of your local network. This means you need to establish a static route, either from the PC you're sitting at or from the default gateway of the network it is connected to. For option 3, the best approach is probably a public-facing reverse proxy, like nginx or HAProxy, which distributes requests to the right endpoints. Of course, there's some learning curve to get there, but the internet is full of help and nginx is easy to configure (compared to Apache httpd).
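
(As an illustration of the reverse-proxy idea, a minimal nginx server block on the host could look like this; the hostname and container IP are made up:)

server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://10.205.0.241;        # container IP on the bridged network
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}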

@root-prasanna commented Jul 8, 2017

@stgraber how do I get remote access to a container which is inside VirtualBox (i.e. an Ubuntu machine inside VirtualBox)? The VirtualBox VM and the container can ping each other, but the container and the remote machine cannot ping each other. All the machines have the same IP range.
