
How to access container from the LAN? #1343

Closed · techtonik opened this issue Nov 24, 2015 · 77 comments

@techtonik (Contributor) commented Nov 24, 2015

Suppose you configured your LXD server for remote access and can now manage containers on a remote machine. How do you actually run a web server in your container and access it from the network?

First, let's say your container can already access the network through the lxcbr0 interface that LXC creates automatically on the host. But this interface is set up for NAT (outbound connections only), so to listen for incoming connections you need to create another interface like lxcbr0 (a bridge) and link it to the network card (eth0) on which you want to accept incoming traffic.

So the final setup should be:

  • lxcbr0 - mapped to eth0 on guest - NAT
  • lxcbr1 - mapped to eth1 on guest - LAN bridge that gets an address from the LAN DHCP server and listens for connections

The target system is Ubuntu 15.10
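
For illustration, assuming such a LAN-facing bridge (lxcbr1 here) already exists on the host, attaching it to a container as an extra NIC could look like the sketch below; the container name is a placeholder and the guest still has to bring eth1 up via DHCP itself:

# sketch only: add a second, LAN-facing NIC to an existing container
lxc config device add mycontainer eth1 nic nictype=bridged parent=lxcbr1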

@techtonik (Author) commented Nov 24, 2015

More information about target system.

$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

$ ls -la /etc/network/interfaces.d/
total 8
drwxr-xr-x 2 root root 4096 Apr 16  2015 .
drwxr-xr-x 7 root root 4096 Aug 20 00:42 ..

$ ip addr
1: lo: ...
2: eth0: ...
3: lxcbr0: ...
4: vethKWL1L8: ...

I have no idea what vethKWL1L8 is and why /etc/network/interfaces is empty.

@IshwarKanse commented Nov 24, 2015

Step 1: Create a bridge on your host; follow your distribution's guide for this. Here is an example configuration from my machine. I'm using Ubuntu 15.10.

sudo vim /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 172.31.31.35
    netmask 255.255.255.0
    gateway 172.31.31.2
    dns-nameservers 8.8.8.8 8.8.4.4
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

Step 2: Create a new profile or you can edit the default profile.

lxc profile create bridged

Step 3: Edit the profile and add your bridge to it.

lxc profile edit bridged
name: bridged
config: {}
devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic

Step 4: While launching new containers you can use this profile or you can apply it to an existing container.

lxc launch trusty -p bridged newcontainer

or

lxc profile apply containername bridged 

Restart the container if you're applying it to an existing container.

Step 5: You'll need to assign a static IP to your container if you don't have DHCP in your network.
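
A minimal sketch of such a static setup inside the container, assuming Ubuntu 14.04's ifupdown-style /etc/network/interfaces; the addresses below are placeholders that must match your LAN:

# /etc/network/interfaces inside the container (placeholder addresses)
auto eth0
iface eth0 inet static
    address 172.31.31.50
    netmask 255.255.255.0
    gateway 172.31.31.2
    dns-nameservers 8.8.8.8 8.8.4.4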

@techtonik (Author) commented Nov 24, 2015

Step 1: Create a bridge on your host...

My /etc/network/interfaces is empty, but I already have eth0 and lxcbr0 configured. Where does this happen?
What are other configuration differences between my current lxcbr0 and proposed br0?
The address for host eth0 is handled dynamically by local DHCP server and I want the same for guest.

Step 3: Edit the profile and add your bridge to it.

This changes the meaning of eth0 on the guest, whereas I need a new interface eth1 on the guest that is LAN-attached. I edited the issue to clarify that I have DHCP running in the network.

Note also that host eth0 already serves as the NAT uplink for lxcbr0 (if I understand correctly, host eth0 is already bridged to lxcbr0), and it should also be the LAN interface.

@hallyn (Member) commented Nov 26, 2015

On Tue, Nov 24, 2015 at 01:45:57AM -0800, anatoly techtonik wrote:

Step 1: Create a bridge on your host...

My /etc/network/interfaces is empty, but I already have eth0 and lxcbr0 configured. Where does this happen?

Is there an /etc/network/interfaces.d/eth0? Is network-manager running?

lxcbr0 is created by the init job 'lxc-net' (either /etc/init/lxc-net.conf
or /lib/systemd/system/lxc-net.service)
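
For reference, on Ubuntu the bridge parameters read by that job usually live in /etc/default/lxc-net, so one way to inspect the lxcbr0 setup (paths may differ per distribution) is:

systemctl status lxc-net
cat /etc/default/lxc-net    # USE_LXC_BRIDGE, LXC_BRIDGE, LXC_ADDR, LXC_DHCP_RANGE, ...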

@techtonik (Author) commented Nov 27, 2015

Is there /etc/network/interfaces.d/eth0?

No. /etc/network/interfaces.d is empty.

is network-manager running?

Probably, because host internet connection is up. How to check?

lxcbr0 is created by the init job 'lxc-net' (either /etc/init/lxc-net.conf
or /lib/systemd/system/lxc-net.service)

I see a reference to lxc-net start, but I don't see where the configuration for lxcbr0 lives.

@srkunze (Contributor) commented Dec 11, 2015

@stgraber https://linuxcontainers.org/lxd/news/#lxd-024-release-announcement-8th-of-december-2015 says we now have macvlan available.

Wouldn't this solve this issue?

@techtonik (Author) commented Dec 13, 2015

Probably. Still need to figure out how to use it. My use case:

  1. init remote container
  2. login into remote
  3. checkout website on remote
  4. run webserver on 0.0.0.0 remote
  5. access webserver from local

@srkunze (Contributor) commented Dec 14, 2015

@stgraber Is there some documentation on the new macvlan functionality?

@stgraber (Member) commented Dec 14, 2015

Well, the various fields are documented in specs/configuration.md

It's basically:

type=nic
nictype=macvlan
parent=eth0

Note that it cannot work with WiFi networks (which is why we've never made it the default in LXC or LXD) and similarly cannot be used on links that do per-MAC 802.1X authentication.
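
Put differently, a sketch of applying those fields as a profile from the CLI (profile name, container name, and image alias are placeholders; a device defined in a later profile overrides the same-named device from an earlier one):

lxc profile create macvlan
lxc profile device add macvlan eth0 nic nictype=macvlan parent=eth0
lxc launch ubuntu:16.04 c1 -p default -p macvlan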

@srkunze (Contributor) commented Dec 14, 2015

Note that it cannot work with WiFi networks (which is why we've never made it the default in LXC or LXD) and similarly cannot be used on links that do per-MAC 802.1X authentication.

Thanks a lot. That clarifies it for me.

@techtonik Considering all pieces, I would still go with the routing (DHCP) solution.

@techtonik (Author) commented Dec 15, 2015

Yep. It will be a pain if it doesn't work through WiFi.

@srkunze (Contributor) commented Dec 16, 2015

Seems like this issue is settled, @techtonik ?

@techtonik (Author) commented Jan 5, 2016

@srkunze, not really. So far I see no clear recipe in this thread. The answer needs to be summarized, ideally with some pictures.

@srkunze (Contributor) commented Jan 5, 2016

The answer needs to be summarized, ideally with some pictures.

I have no idea how to do this for all types of routers. UI changes too quickly and all routers/DHCP servers can be configured differently.

@stgraber Maybe, there is another even easier solution?

@techtonik (Author) commented Jan 7, 2016

@srkunze summarizing up to the point where it is clear why you need a router, and where, is sufficient for now. But note that there are three possible cases:

  1. routing (1:only with remote host, 2:with external router)
  2. port forwarding on remote host
  3. port forwarding through LXC provided channel

I am actually thinking about the 3rd variant: why not use the already-open channel to carry traffic to and from the running container? With netcat, for example.
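
For variant 2, a rough sketch of what port forwarding on the remote host could look like with plain iptables, assuming the container sits behind NAT on lxcbr0 (the container address 10.0.3.100 and port 80 are placeholders):

# forward incoming TCP 80 on the host's eth0 to the container (placeholder IP)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.3.100:80
iptables -A FORWARD -p tcp -d 10.0.3.100 --dport 80 -j ACCEPT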

@Annakan commented Feb 25, 2016

I would like to chime in, because every time I tried to use LXC/LXD I encountered this problem without a clear and "simple" solution.

Explanation and digression

skip this if too long
Basically it is dead simple to launch an LXD container, connect into it and install nginx, an LDAP server or a database, cool, but then ... there is no clear way to access it.
That's a need the docker team "solved" from day one with port forwarding, and I feel it is really missing in LXD.
One of the strengths of LXD is the ability to be used as an isolated container, akin to docker, if you are of that religion, or as a very thin and "toolable" 'virtual machine', and that is an invaluable thing. And a big selling point for LXD.
Except I was never able to have an lxc/d container simply request an IP from the DHCP available to the host and simply sit on the host network that way.
And let's face it, iptables is a mess to configure (just a quick look at pf and you'll see what I mean ;) ) and has no way to group/tag rules. So manually adding the right chains (if one can figure out which ones) and keeping them updated with the IPs of the containers can quickly become nightmarish.
Again, I am not saying that to rant, but to highlight something the core developers, being well versed in the Linux network stack, might overlook as a "first contact" issue.
I do understand the theory and basics of networking, as I suspect many developers do, but that falls short of understanding the intricacies of the stack and debugging a complex configuration efficiently, with all the moving parts that LXD introduces to do its job (special dnsmasq, a bridge with interfaces added dynamically, various kinds of nic and networking types, etc.).

If I may suggest, I think there are 3 or 4 kinds of configuration that are useful and should be easy to set up with a simple profile choice (at least the first two would be game changing):

  1. Isolated container:
    The network configuration is expected to be obtained from the host, usually through dnsmasq (current state). We need some tools that at least mirror the port forwarding capabilities of docker, ideally dynamically (even at run time).
  2. Thin VM:
    The network configuration is expected to be obtained through the host, usually via DHCP. The LX container is a full-standing citizen on the host's external network.
  3. Thin VM / host-managed network:
    The goal is to have the network configuration done by the host, but inside a range that is itself within the host's network range and using the host's gateway, so that the IPs are allocated from a range configured on the LXD host while the resulting containers remain visible from the host's external network. That might be tricky to do in a general way, but I would love to know how to do it. The use case is to allocate sub-ranges to the LXD hosts and be able to spawn thin VMs / containers on them that are first-class network citizens. And the reason for that need is that with a tool like consul one could then have a lot of dynamic configuration, without heavy orchestration tools, to dynamically manage ports in and out of the containers of a host and between container hosts.

End of digression

Having two profiles after install (defaultAsContainer and defaultAsThinVM) that could provide either (1) [an isolated container with port forwarding] or (2) [a "thin VM" available on the host network provided a DHCP server or IP range is available] would completely change the "first hour experience" of LXD.

Back to the issue

I tried both ways :

First, @IshwarKanse's way

I added a bridge to my /etc/network/interfaces but was puzzled about what IP I should use here

    address 172.31.31.35
    netmask 255.255.255.0
    gateway 172.31.31.2

Given that my host IP is DHCP-configured (fixed, but the lease needs to be kept ...).
Second, it seems that as soon as I bring the bridge up (sudo service network-manager restart) I lose the connection on the host, and the container can't get an IP at startup.
I don't doubt that @IshwarKanse's way works, but more explanation would be nice to place the configuration parameters within the more general network parameters of the host.
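
One possibility when the host itself gets its address via DHCP is to let the bridge (not eth0) run DHCP, roughly along these lines (an untested sketch in the same /etc/network/interfaces style as above):

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0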

Second, @stgraber's way with macvlan

I did exactly this :

Stop containername
lxc profile edit mvlan

type=nic
nictype=macvlan
parent=eth0
lxc profile apply containername mvlan
Start containername

But the container did not get any IP at startup; the network interface is there with a MAC, but the associated dhclient can't get an IP.

I had a look at https://www.flockport.com/lxc-macvlan-networking/. Even if I know LXC and LXD are slightly different beasts, the lxc way seemed to also set up a dedicated bridge on the host and lxc.network.macvlan.mode = bridge in the container config.
Is something like that the missing piece?

@srkunze (Contributor) commented Feb 25, 2016

@techtonik What's wrong with plain old routing? At least it solves this issue: accessing a container from the LAN. I don't see much use of port forwarding right now. :)

@Annakan Don't you think this is the other way round? This issue here is about how to access a container FROM the LAN. Given the routing of the LAN is properly configured that just works.

@Annakan commented Feb 25, 2016

Thanks for your answer

A computer crash made me lose my long answer, so you will be spared it ;)

I don't think it is the other way round since that means you have to manage on the host something that concerns the container. You can't use a container without doing at least some port or IP mapping and that's something you have to do with the IP of the container. Thus, you have to retrieve that IP and expose it on the host, a sure sign that it is something that should be managed by the container manager and not manually on the host.

Or else, you have to keep tabs manually on the host of the rules you create for the container.
You have to update, delete them and that means you have to create complicated mechanisms to keep them in sync.

Container migration is also complicated because you have to find a way to reapply the rules on the target host.
On the other hand, if the container profile contains the network model (like: I use my host's DHCP, or I expose ports X and Y to my host or through my host (different situations)), then it is simple to migrate containers, activate them, or shut them down.

iptables, as far as I know, does not offer a way to tag or group rules, making this even more complicated and reliant on the IPs of the containers and the "exact identity" of rules to manage them.
It is, honestly, a mess of a packet filter language.

Besides, as far as I was able to see, the official documentation does not offer a template of such rules, and the ones I googled seemed really awkward, with strange uses of "nat"; but I confess I am not an iptables expert, and they did not work for me in a reliable way.

The larger "view"

Isolated containers with complex service discovery, transfer of rules, total independence from the file-system and automatic orchestration are a fine theoretical nirvana, but they concern only 0.001% of the people and companies out there, the ones who dynamically spawn thousands of containers across multiple data-centers.
This is the use case of docker, and it is a very narrow target; LXC/D has a true card to play by being able to scale from a "thin VM" that can be spawned by code, to a "by the book immutable container", and to offer a path for companies to go from one point to the other.

But it starts by being able to spawn an LX container and have it grab an IP from the host's DHCP [edit for clarity: the same DHCP as the host, or the available DHCP] and be useful right away.
Then one can add configuration management (Salt/Puppet etc.), dynamic configuration (consul, zookeeper), and then evaluate the cost of abstracting the filesystem and database and making those containers immutable and idempotent. Docker is the religion of the immutable container; LXC/D can offer something much more flexible and address a much broader market.

How simple I wish it to be ;)

I really think that comes down to being able to write:

lxc remote add images images.linuxcontainers.org
lxc launch images:centos/7/amd64 centos  -p AnyDHCPAvaliableToHostNetworkProfile

And get a container that is reachable from the network. Simple, useful, and immediately rewarding.

@srkunze (Contributor) commented Feb 26, 2016

That's quite some explanation. Thanks :-)

So, the argument goes that in order to do the "routing config" step, one needs to know the container's IP in the first place. Quite true. Manually doable but automatically would be better.

Which brings me to my next question: the to-be-configured DHCP server does not run on the host necessarily but on another network-centric host. How should LXD authenticate there to add routes?

@Annakan commented Feb 26, 2016

Yes, I would make it even more precise by saying that only the container knows its purpose, and thus the connectivity and ports it needs to expose; so however you see it, providing it with the resources (port mapping, IP) is something you need to query it to achieve, and that might be problematic if it is not yet running.
Better to make that a part of its definition and have the environment set up as automatically as possible from there; my understanding is that's what profiles are for, making the junction between launch time and run time.

As for the last part of your answer, I suspect we have a misunderstanding (unless you are talking about the last, 4th, case of my long answer, which is more open thinking than the first two).

My "only" wish is either/both

1. To have a way to make port mapping and routing a part of the container (either its definition, a launch-time value or a profile definition; I suspect a launch/definition-time value would be best), and have run/launch take care of firewall rules and bridge configuration.
2. To have a way to launch a container grabbing its IP and stack configuration from a DHCP server outside the host (the same one the host potentially got its IP from), basically having the bridge and port configuration pass the DHCP offer to the container and letting the dhclient in it take it from there.

The various answers in this thread (from @IshwarKanse, through routing, and @stgraber, through macvlan) are supposed to give just that, except I (and the OP, it seems) were not able to get them working manually, and I wish they could be automatically set up by either a profile or a launch configuration.

Unless you are talking about DHCP security through option 82 ?

PS : I edited my previous post to clear things up

@srkunze (Contributor) commented Feb 26, 2016

I think I got it now. :)

Well, that's something for @stgraber to decide. :)

@stgraber (Member) commented Feb 26, 2016

@Annakan did you try using macvlan with the parent set to the host interface?

@stgraber (Member) commented Feb 26, 2016

Oh, I see you mentioned it earlier. macvlan should do basically what you want; the one catch, though, is that your container can't talk to the host then, so if your host is the DHCP server, that'd be a problem.
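
As an aside, a commonly used workaround for that limitation is to give the host its own macvlan interface on the same parent, so the host and containers can reach each other; a rough, untested sketch (the interface name and address are placeholders):

# on the host: a macvlan interface alongside the containers' macvlan NICs
ip link add mvlan0 link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/24 dev mvlan0    # placeholder LAN address
ip link set mvlan0 up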

@Annakan commented Feb 26, 2016

Thanks for the answers
I did exactly this :

> Stop OpenResty
> lxc profile edit mvlan

type=nic
nictype=macvlan
parent=eth0


> lxc profile apply OpenResty mvlan
> Start OpenResty

lxc profile edit brwan gives exactly this

###
### Note that the name is shown but cannot be changed

name: brwan
config: {}
devices:
  eth0:
    nictype: macvlan
    parent: eth0
    type: nic

Container startup fails

lxc info --show-log OpenResty

Yields:
lxc 20160226161814.349 INFO lxc_seccomp - seccomp.c:parse_config_v2:449 - Adding compat rule for delete_module action 327681
lxc 20160226161814.349 INFO lxc_seccomp - seccomp.c:parse_config_v2:456 - Merging in the compat seccomp ctx into the main one
lxc 20160226161814.349 INFO lxc_conf - conf.c:run_script_argv:367 - Executing script '/usr/bin/lxd callhook /var/lib/lxd 4 start' for container 'OpenResty', config section 'lxc'
lxc 20160226161814.349 INFO lxc_start - start.c:lxc_check_inherited:247 - closed inherited fd 3
lxc 20160226161814.349 INFO lxc_start - start.c:lxc_check_inherited:247 - closed inherited fd 8
lxc 20160226161814.360 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:178 - using monitor sock name lxc/d78a9d7e97b4b375//var/lib/lxd/containers
lxc 20160226161814.375 DEBUG lxc_start - start.c:setup_signal_fd:285 - sigchild handler set
lxc 20160226161814.375 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161814.375 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161814.375 DEBUG lxc_console - console.c:lxc_console_peer_default:524 - no console peer
lxc 20160226161814.375 INFO lxc_start - start.c:lxc_init:484 - 'OpenResty' is initialized
lxc 20160226161814.376 DEBUG lxc_start - start.c:__lxc_start:1247 - Not dropping cap_sys_boot or watching utmp
lxc 20160226161814.377 INFO lxc_start - start.c:resolve_clone_flags:944 - Cloning a new user namespace
lxc 20160226161814.399 ERROR lxc_conf - conf.c:instantiate_veth:2590 - failed to attach 'veth2FKB5C' to the bridge 'brwan': Operation not permitted
lxc 20160226161814.414 ERROR lxc_conf - conf.c:lxc_create_network:2867 - failed to create netdev
lxc 20160226161814.414 ERROR lxc_start - start.c:lxc_spawn:1011 - failed to create the network
lxc 20160226161814.414 ERROR lxc_start - start.c:__lxc_start:1274 - failed to spawn 'OpenResty'
lxc 20160226161814.414 INFO lxc_conf - conf.c:run_script_argv:367 - Executing script '/usr/share/lxcfs/lxc.reboot.hook' for container 'OpenResty', config section 'lxc'
lxc 20160226161814.918 INFO lxc_conf - conf.c:run_script_argv:367 - Executing script '/usr/bin/lxd callhook /var/lib/lxd 4 stop' for container 'OpenResty', config section 'lxc'
lxc 20160226161814.993 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_init_pid failed to receive response
lxc 20160226161814.993 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_init_pid failed to receive response
lxc 20160226161814.994 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161814.994 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161815.001 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161815.001 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161815.003 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161858.875 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161858.875 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161858.883 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161858.887 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161858.887 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161858.889 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161858.897 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161922.688 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161922.688 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161922.690 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161922.694 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161922.694 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161922.696 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161922.697 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161932.011 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161932.011 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161932.013 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161932.016 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226161932.016 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226161932.025 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226161932.027 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226165637.738 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226165637.738 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226165637.747 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226165637.751 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type u nsid 0 hostid 310000 range 65536
lxc 20160226165637.751 INFO lxc_confile - confile.c:config_idmap:1495 - read uid map: type g nsid 0 hostid 310000 range 65536
lxc 20160226165637.759 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 20160226165637.761 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error

It seems that LXD tries to link the macvlan to a bridge named after the profile (brwan) and not to the host interface (eth0 in my case), unless the error message is misleading. Or is it that I need to create a separate bridge named after the profile to receive the virtual interfaces? (But then I would need to remove the eth0 host interface from the lxcbr0 bridge, right? And thus lose other container connectivity?)

@stgraber (Member) commented Feb 26, 2016

Can you paste "lxc config show --expanded OpenResty"?

@Annakan commented Feb 26, 2016

I assumed you meant the "show" subcommand
lxc config show --expanded OpenResty

name: OpenResty
profiles:
- brwan
config:
  volatile.base_image: 4dfde108d4e03643816ce2b649799dd3642565ca81a147c9153ca34c151b42ea
  volatile.eth0.hwaddr: 00:16:3e:8a:3a:e1
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":310000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":310000,"Nsid":0,"Maprange":65536}]'
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: brwan
    type: nic
  root:
    path: /
    type: disk
ephemeral: false

hum .. parent: brwan ?

@stgraber (Member) commented Feb 26, 2016

ok, what about "lxc config show OpenResty" (no expanded)?

@Annakan commented Feb 26, 2016

I might have got it: I used the same container while experimenting with @IshwarKanse's solution, and at that point I tried to set up a secondary bridge (hence the brwan name of the profile).

I suspect some previous profile configuration is lingering on. Or some dependency I don't understand yet.
I shall try with a completely fresh container; I should not have reused my previous one

right ?
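
For reference, the expanded config above still shows the profile's eth0 device as nictype: bridged with parent: brwan, which matches the failed bridge attach in the log. After lxc profile edit brwan, a macvlan version of that device would presumably look like this (assuming the host interface is eth0):

name: brwan
config: {}
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eth0
    type: nic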

@hallyn (Member) commented Sep 1, 2016

On Wed, Aug 31, 2016 at 12:18:02PM -0700, Zero wrote:

Hello,

Just started to use LXD and so far it's awesome.
I was wondering if you could assign a second interface to all the containers.
This interface would act as an internal LAN local to the host only.
Then you could combine this with the macvlan solution and you'd be able to:

  • Reach your containers on the same LAN the host belongs to
  • Reach your containers from inside the host over the internal LAN, using the secondary interface on the containers

Sure, you can create a private bridge which doesn't have any outgoing
nics attached, then add to the default lxd profile a second nic which
is on that bridge.
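
A sketch of one way to do that on a reasonably recent LXD (bridge name, subnet and device name are placeholders; on older LXD versions you would create the bridge with OS tooling instead of lxc network create):

# host-only bridge with no NAT/uplink, exposed to all containers via the default profile
lxc network create lxdbr1 ipv4.address=10.100.100.1/24 ipv4.nat=false ipv6.address=none
lxc profile device add default eth1 nic nictype=bridged parent=lxdbr1 name=eth1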

@emprovements commented Dec 18, 2016

@hallyn would you mind giving me some hints on how to do that? I am fairly new to linux networking and I have started to dig into LXD. I often have only a wlan network accessible, so in order to keep it as simple as possible, keep internet access for the containers and host (keep lxdbr0 untouched), but still be able to reach the containers from the host, I think this private bridge is a perfect idea. Then I would attach my eth0 to the bridge so I can have internet over wlan0 and reach container services in the private "subnetwork" over eth0.
Thanks!

@hallyn (Member) commented Dec 19, 2016

@emprovements what is your distro/release?

@psheets commented Mar 29, 2017

I ran into an issue getting containers to retrieve IPs from the local DHCP server using all of these solutions. The issue had to do with the VMware vSwitch the OS was connected to. I was able to get it to work by changing promiscuous mode to accept on the vSwitch. It is outlined here:
https://medium.com/@philsheets/vmware-lxd-external-dhcp-server-for-containers-2f1470995111

@Remigius2011 commented Apr 6, 2017

I don't know whether this helps anybody, but in my setup (lxd 2.12 on xenial, upgraded from 2.0 installed as ubuntu package) I have launched a single test container with a bridged network named lxdbr0, then all I had to do was add a static route (currently on my windows machine, but I'll add it to my firewall):

$ route add 10.205.0.0 MASK 255.255.255.0 192.168.1.99

where 10.205.0.0/24 is the bridge network and 192.168.1.99 is the LXD host (having a second IP 10.205.0.1 for adapter lxdbr0). Assuming the container has IP 10.205.0.241, you can now ping the host and the container:

$ ping 10.205.0.1
$ ping 10.205.0.241

(or at least I could...). This means, the lxd host acts as a gateway to the internal bridged network - more or less out of the box.
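
In case it helps, the route above can be made to survive reboots on Windows by adding -p (untested sketch, reusing the addresses above):

route -p add 10.205.0.0 MASK 255.255.255.0 192.168.1.99

On a Linux client or gateway, the one-off equivalent would be ip route add 10.205.0.0/24 via 192.168.1.99, made permanent via that system's usual network configuration.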

@techtonik (Author) commented May 3, 2017

After 1.5 years I finally managed to ping my container from the LAN using macvlan interface. I don't need to access container from the host machine, so if anybody needs that setup, you are more than welcome to open a separate issue.

macvlan solution is documented here - #3273

@techtonik closed this May 3, 2017

@DougBrunelle commented May 8, 2017

Ubuntu containers are a very cool technology which frees us from having to use other virtualization techniques such as VirtualBox, but I perceive that there are still some usability issues that the developers might address for those of us trying to test or implement it.

Perhaps there needs to be a more comprehensive 'lxd init'. My experience after days of trying various configurations and reinstallations of lxc/lxd is that I can access services on a container such as apache2, etc. from my host machine, but not from other machines on my LAN.

Suggestion for an expanded version of lxd setup follows. I am assuming that most people will want their containers accessible/networkable from not only their local machine hosting the containers, but also from other machines on their LAN, for prototyping services, testing, etc.. I think that the options should be additive, in the following manner:

  1. Option 1: By default, containers are accessible from the host machine only. Which seems to be the current situation.
  2. Option 2 would be to make the containers accessible from other computers on the user's LAN and the host machine.
  3. Option 3: make the containers visible/usable from the Internet, the user's LAN, and the host machine.
  4. Option 4 would be a manual networking setup for the container, for network and container gurus.

It seems obvious that if we're going to make containers visible outside the local LAN, we should retain the capability of networking to them from the LAN as well as the host machine, for maintenance.

Maybe these options already exist and can be configured, if you understand the intricacies of lxc/lxd and networking on ubuntu, but for the casual user who wants to learn more about container usage and how to configure them for use outside of the host machine, these options would definitely be helpful. It would also help sell the technology to those who are just sticking their toes into the water to see how it feels.

@dann1 commented May 8, 2017

LXD is awesome as it is; if you want to configure the network, then you need to know at least basic networking. That happens with VMs too, the difference is that VBox has a GUI.

@DougBrunelle commented May 9, 2017

I think you're probably right, dann1. When in doubt, rtfm. I think what I need is a step-by-step linux networking manual that will take me from beginner to guru in five easy steps. :-)

@markc commented May 9, 2017

@DougBrunelle depends on what you want to do, but if it's Option 3 then I find the easiest way is to create a bridge on the host called lxdbr0 (duh), then during lxd init answer no to "Would you like to create a new network bridge (yes/no) [default=yes]?". Your new containers will then pick up the local LAN or public IP from the nearest DHCP server. Once the host bridge is set up, the rest "just works". I've got a half dozen local LAN boxes set up like this, and after setting up one container as my main local DNS server for local LAN resolution (plus a resolver for upstream caching) on my laptop, I was then able to lxc copy it to my NAS, twiddled my DHCP router to give the copied container the same IP, and all my local LAN + upstream DNS resolution kept working. Once lxc move works reliably I'll start pushing containers set up on my laptop in front of me to live public servers.

  • see this previous post above for host bridge setup hints...
    #1343 (comment)
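
On newer Ubuntu releases that use netplan instead of /etc/network/interfaces, an equivalent host bridge might look roughly like this (the interface name, file name and the choice of DHCP are assumptions):

# /etc/netplan/01-lxdbr0.yaml (sketch), then: sudo netplan apply
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: no
  bridges:
    lxdbr0:
      interfaces: [eno1]
      dhcp4: yes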

@Remigius2011 commented May 9, 2017

@DougBrunelle, in my experience, the default when saying yes to creating a bridged network is option 2, except that the computers in the network don't know how to reach it by default, as the address range of the assigned IPs is outside the address range of your local network. This means you need to establish a static route, either from the PC you're sitting at or from the default gateway of the network it is connected to. For option 3, the best is probably to have a public-facing reverse proxy, like nginx or HAProxy, which distributes requests to the right endpoints. Of course, there's some learning curve to get there, but the internet is full of help and nginx is easy to configure (compared to apache httpd).
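
To illustrate the reverse-proxy idea for option 3, a minimal nginx server block forwarding public traffic to one container might look like this (the hostname, container IP and port are placeholders):

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://10.205.0.241:8080;
    }
}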

@root-prasanna commented Jul 8, 2017

@stgraber how do I get access to a container which is inside VirtualBox (i.e. an Ubuntu machine inside VirtualBox) remotely? The VirtualBox machine and the container can ping each other, but the container and the remote machine cannot ping each other. All the machines have the same IP range.

@derekmahar commented May 6, 2020

type=nic
nictype=macvlan
parent=eth0

Note that it cannot work with WiFi networks (which is why we've never made it the default in LXC or LXD) and similarly cannot be used on links that do per-MAC 802.1X authentication.

Does the macvlan NIC type work on a virtual "wired" network interface (Intel PRO/1000 MT Desktop (82540EM)) inside a VirtualBox host on a Windows 10 laptop using a WiFi interface?

@stgraber (Member) commented May 6, 2020

It should, though we've sometimes seen odd behavior with specific drivers and NICs, so it may end up depending on the virtual implementation inside of the VirtualBox code.

@tomponline (Member) commented May 6, 2020

You will certainly need to enable promiscuous mode on the nic in the virtualbox config to allow the vm to use other Mac addresses.
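
For reference, that setting can also be changed from the command line with VBoxManage while the VM is powered off (the VM name and adapter number are placeholders):

VBoxManage modifyvm "lxd-guest" --nicpromisc1 allow-all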

@derekmahar commented May 6, 2020

You will certainly need to enable promiscuous mode on the nic in the virtualbox config to allow the vm to use other Mac addresses.

Which VirtualBox network adapter promiscuous mode, "Allow VMs" or "Allow All"?

@tomponline (Member) commented May 6, 2020

I would suggest you try allow VMS and if that doesn't work try allow all.

@derekmahar commented May 6, 2020

I would suggest you try allow VMS and if that doesn't work try allow all.

How can I force a container to drop its IP address (release its DHCP lease)? lxc exec container1 -- dhclient -r -v doesn't work. Does the container cache the IP address that it obtains from the DHCP server? Even when I disable the DHCP server of the VirtualBox host-only network to which the VM guest belongs and restart the LXD container inside this guest, the container still retains its previous IP address. Even restarting the VM guest didn't force the container to drop its IP address.
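
For what it's worth, one way to force a fresh lease in an Ubuntu 20.04 container, which uses netplan with systemd-networkd rather than a standalone dhclient (hence dhclient -r having no effect), might be this untested sketch:

lxc exec container1 -- ip addr flush dev eth0
lxc exec container1 -- systemctl restart systemd-networkd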

@derekmahar commented May 6, 2020

I would suggest you try allow VMS and if that doesn't work try allow all.

How can I force a container to drop its IP address (release its DHCP lease)? lxc exec container1 -- dhclient -r -v doesn't work. Does the container cache the IP address that it obtains from the DHCP server? Even when I disable the DHCP server of the VirtualBox host-only network to which the VM guest belongs and restart the LXD container inside this guest, the container still retains its previous IP address. Even restarting the VM guest didn't force the container to drop its IP address.

Nevermind. Recreating and starting the container forces it to retrieve a new DHCP lease and different IP address. To my surprise, though, it obtained this IP address even after I disabled the DHCP server of the VirtualBox host-only network and set promiscuous mode of the VM Host-only Adapter to "Deny". Where might the container be getting its IP address? I will observe the behaviour with a VirtualBox Bridged Adapter instead of a Host-only Adapter.

@derekmahar commented May 6, 2020

I confirmed that an LXD 4.0.1 container using a macvlan adapter, running inside a VirtualBox 6.1.6 Ubuntu Server 20.04 guest using only a Bridged Adapter, in any promiscuous mode (including "Deny"), could retrieve its IP address from the DHCP server of my wireless LAN router.

Network configuration:

VirtualBox Ubuntu Server 20.04 guest (LXD host) network adapter:

NIC 2:
  MAC: 080027F0CFD0
  Attachment: Bridged Interface 'Intel(R) Wireless-AC 9560 160MHz'
  Cable connected: on,
  Trace: off (file: none),
  Type: 82540EM
  Reported speed: 0 Mbps
  Boot priority: 0
  Promisc Policy: deny
  Bandwidth group: none

Note that the VirtualBox Bridged Adapter is attached to the wireless adapter in my laptop.

VirtualBox guest Netplan configuration and network interface IP address:

derek@derek-ubuntu:~$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  version: 2
  ethernets:
    # Bridged adapter
    enp0s8:
      dhcp4: no
      dhcp6: no
      addresses:
        - 192.168.0.10/24
      gateway4: 192.168.0.1
      nameservers:
        addresses:
          - 192.168.0.1
          - 1.1.1.1
          - 1.0.0.1
derek@derek-ubuntu:~$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f0:cf:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.10/24 brd 192.168.0.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef0:cfd0/64 scope link
       valid_lft forever preferred_lft forever

LXD container IP address:

derek@derek-ubuntu:~$ lxc list
+------------+---------+----------------------+------+-----------+-----------+
|    NAME    |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+------------+---------+----------------------+------+-----------+-----------+
| container1 | RUNNING | 192.168.0.104 (eth0) |      | CONTAINER | 0         |
+------------+---------+----------------------+------+-----------+-----------+
derek@derek-ubuntu:~$ lxc exec container1 -- ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f1:18:75 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.0.104/24 brd 192.168.0.255 scope global dynamic eth0
       valid_lft 6867sec preferred_lft 6867sec
    inet6 fe80::216:3eff:fef1:1875/64 scope link
       valid_lft forever preferred_lft forever

LXD container Netplan configuration:

derek@derek-ubuntu:~$ lxc exec container1 -- sh -c 'cat /etc/netplan/10-lxc.yaml'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp-identifier: mac

LXD container profile:

derek@derek-ubuntu:~$ lxc profile show macvlan_bridged
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: macvlan
    parent: enp0s8
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: macvlan_bridged
used_by:
- /1.0/instances/container1

Note that the parent of the macvlan interface refers to interface enp0s8 in the VirtualBox guest.

@derekmahar commented May 6, 2020

After 1.5 years I finally managed to ping my container from the LAN using macvlan interface. I don't need to access container from the host machine, so if anybody needs that setup, you are more than welcome to open a separate issue.

@techtonik, didn't @stgraber explain that the macvlan interface blocks external access from the host? Did he mean only the LXD host? In my configuration, I could not ping the container from the VirtualBox host (Windows 10) or guest (LXD host).

@derekmahar commented May 6, 2020

I don't know whether this helps anybody, but in my setup (lxd 2.12 on xenial, upgraded from 2.0 installed as ubuntu package) I have launched a single test container with a bridged network named lxdbr0, then all I had to do was add a static route (currently on my windows machine, but I'll add it to my firewall):

@Remigius2011, would you describe your static route configuration in more detail? On what device did you enter the static route? How did you persist this route so that it survives restarts?

@derekmahar commented May 6, 2020

@stgraber how to get access to a container which is inside virutal box (i.e. ubuntu machine inside virtual box) remotely. Virtual box and container can ping each other but container and remote machine cannot ping each other. All the machine have same ip range.

@root-prasanna, while three years later you have likely already solved your problem, others still struggling with the same problem might find the YouTube video LXD 2.0 on Ubuntu 16.04 within VirtualBox for easy LAN access - 10 minutes guide as helpful as I did.

@derekmahar commented May 6, 2020

After 1.5 years I finally managed to ping my container from the LAN using macvlan interface. I don't need to access container from the host machine, so if anybody needs that setup, you are more than welcome to open a separate issue.

@techtonik, didn't @stgraber explain that the macvlan interface blocks external access from the host? Did he mean only the LXD host? In my configuration, I could not ping the container from the VirtualBox host (Windows 10) or guest (LXD host).

@techtonik, Sorry, I misread @stgraber's comment. He said that the containers can't contact the host, not that the host cannot contact the containers.

@derekmahar commented May 6, 2020

I confirmed that an LXD 4.0.1 container using a macvlan adapter, running inside a VirtualBox 6.1.6 Ubuntu Server 20.04 guest using only a Bridged Adapter, in any promiscuous mode (including "Deny"), could retrieve its IP address from the DHCP server of my wireless LAN router.

I think the following note in 6.5. Bridged Networking may explain why promiscuous mode had no effect in my test:

Bridging to a wireless interface is done differently from bridging to a wired interface, because most wireless adapters do not support promiscuous mode.

@tomponline (Member) commented May 7, 2020

@derekmahar glad you got it working.

To summarise:

  • macvlan does not allow the LXD host (i.e the VM guest in this case) to communicate with the container or vice versa.
  • Virtualbox VM config will need to have a bridged adaptor to connect to the wider network. Using host-only config means that Virtualbox provides a local DHCP server (so that explains why you kept getting the same IP allocated when using host-only mode).
  • Generally speaking promiscuous mode is required to be enabled because otherwise Virtualbox doesn't allow the LXD macvlan interface to receive Ethernet frames destined for MAC addresses not belonging to the LXD host.
  • If you are using bridging onto the wider network then static routes shouldn't be needed.

It's interesting what you found about not needing promiscuous mode when the bridge parent is a wifi adaptor, thanks for confirming that.

@derekmahar commented May 7, 2020

1. Option 1: By default, containers are accessible from the host machine only. Which seems to be the current situation.
2. Option 2 would be to make the containers accessible from other computers on the user's LAN and the host machine.
3. Option 3: make the containers visible/usable from the Internet, the user's LAN, and the host machine.
4. Option 4 would be a manual networking setup for the container, for network and container gurus.

These would be useful options. I would call them Container Network Visibility Profiles. These might be analogous to the networking modes that VirtualBox offers:

[image: VirtualBox networking modes]

@derekmahar commented May 7, 2020

@derekmahar glad you got it working.

@tomponline, actually, I think the only positive result was that the macvlan interface of the container acquired an IP address. Even using a VirtualBox Bridged Adapter, my Windows 10 host still couldn't access the container inside the VirtualBox VM.

This morning, I realized that I had omitted the gateway address in my original VirtualBox guest (LXD host) Netplan configuration (which I've since corrected), which prevented it from accessing the Internet; adding it did not change the effect that promiscuous mode had on the container macvlan interface acquiring an IP address. However, even with the gateway, the container still could not access the Internet. Should it be able to?

Here are the test results after adding the gateway to the VirtualBox guest:

derek@derek-ubuntu:~$ ip addr show dev enp0s8
2: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f0:cf:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.10/24 brd 192.168.0.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef0:cfd0/64 scope link
       valid_lft forever preferred_lft forever
derek@derek-ubuntu:~$ ping -c 4 google.com
PING google.com (172.217.13.206) 56(84) bytes of data.
64 bytes from yul03s05-in-f14.1e100.net (172.217.13.206): icmp_seq=1 ttl=51 time=1569 ms
64 bytes from yul03s05-in-f14.1e100.net (172.217.13.206): icmp_seq=2 ttl=51 time=1278 ms
64 bytes from yul03s05-in-f14.1e100.net (172.217.13.206): icmp_seq=3 ttl=51 time=1131 ms
64 bytes from yul03s05-in-f14.1e100.net (172.217.13.206): icmp_seq=4 ttl=51 time=921 ms

--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 920.674/1224.625/1569.044/235.881 ms, pipe 2
derek@derek-ubuntu:~$ lxc list
+------------+---------+----------------------+------+-----------+-----------+
|    NAME    |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+------------+---------+----------------------+------+-----------+-----------+
| container1 | RUNNING | 192.168.0.100 (eth0) |      | CONTAINER | 0         |
+------------+---------+----------------------+------+-----------+-----------+
derek@derek-ubuntu:~$ ping -c 4 192.168.0.100
PING 192.168.0.100 (192.168.0.100) 56(84) bytes of data.
From 192.168.0.10 icmp_seq=1 Destination Host Unreachable
From 192.168.0.10 icmp_seq=2 Destination Host Unreachable
From 192.168.0.10 icmp_seq=3 Destination Host Unreachable
From 192.168.0.10 icmp_seq=4 Destination Host Unreachable

--- 192.168.0.100 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3059ms
pipe 4
derek@derek-ubuntu:~$ lxc exec container1 -- ip addr show dev eth0
4: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:91:80:e3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.0.100/24 brd 192.168.0.255 scope global dynamic eth0
       valid_lft 4413sec preferred_lft 4413sec
    inet6 fe80::216:3eff:fe91:80e3/64 scope link
       valid_lft forever preferred_lft forever
derek@derek-ubuntu:~$ lxc exec container1 -- ping -c 4 192.168.0.10
PING 192.168.0.10 (192.168.0.10) 56(84) bytes of data.
From 192.168.0.100 icmp_seq=1 Destination Host Unreachable
From 192.168.0.100 icmp_seq=2 Destination Host Unreachable
From 192.168.0.100 icmp_seq=3 Destination Host Unreachable
From 192.168.0.100 icmp_seq=4 Destination Host Unreachable

--- 192.168.0.10 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3066ms
pipe 4
derek@derek-ubuntu:~$ lxc exec container1 -- ping -c 4 google.com
ping: google.com: Temporary failure in name resolution

macvlan does not allow the LXD host (i.e the VM guest in this case) to communicate with the container or vice versa.

Shouldn't the VirtualBox host or some other host on the same network be able to contact an LXD macvlan container as @techtonik had observed? In my case, Windows 10 was unable to contact the container (192.168.0.100), despite the LXD parent interface in the VirtualBox guest (192.168.0.10) being attached to a VirtualBox Bridged Adapter and having promiscuous mode Allow All.

derek@DESKTOP-2F2F59O:~$ ping -c 4 192.168.0.100
PING 192.168.0.100 (192.168.0.100) 56(84) bytes of data.

--- 192.168.0.100 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3006ms

Virtualbox VM config will need to have a bridged adaptor to connect to the wider network. Using host-only config means that Virtualbox provides a local DHCP server (so that explains why you kept getting the same IP allocated when using host-only mode).

Right, which is why in my second test, I used a VirtualBox Bridged Adapter.

Generally speaking promiscuous mode is required to be enabled because otherwise Virtualbox doesn't allow the LXD macvlan interface to receive Ethernet frames destined for MAC addresses not belonging to the LXD host.

I'll keep this in mind the next time I create an LXD macvlan interface for a container on a host that has a wired Ethernet adapter.

If you are using bridging onto the wider network then static routes shouldn't be needed.

Nevertheless, I'd like to learn how to use static routes with the more constrained VirtualBox networking modes. VirtualBox and LXD are really expanding my Linux networking knowledge!

It's interesting what you found about not needing promiscuous mode when the bridge parent is a wifi adaptor, thanks for confirming that.

I'm very curious to repeat my test using a wired adapter, but unfortunately, my Windows laptop doesn't have one.
