Allow setting of raw.lxc.network #1259

Closed
cybe opened this Issue Nov 2, 2015 · 29 comments

Comments

10 participants

cybe commented Nov 2, 2015

Disallowing setting raw.lxc.network (introduced in 799471f) breaks my current setup. I rely on using lxc.network for type, name, link, ipv4 and ipv4.gateway and completely skip network configuration through lxd.

Please re-allow setting raw.lxc.network, at least until lxd supports those configurations directly.

bradenwright commented Nov 2, 2015

I've been struggling with automating a way to set up networking, primarily setting a static IP address.

Passing in LXC configuration like you mention above seemed to set my IP, but I believe I was having some kind of issue with it that I can't remember exactly right now. Then I saw that it wasn't going to be supported (#1246), so I was looking for other methods.

So I tried writing /etc/network/interfaces.d/eth0.cfg inside the Ubuntu container, but the /etc/init.d/networking restart command does not work. I then tried restarting other services and issuing commands like "ifconfig eth0 10.0.3.99/24 up" and "route add default gw 10.0.3.1", but after the container had been running for a little while the IP address was resetting. Then I tried to suck it up and just restart the container, but I'm trying to automate the process in Ruby (a Test-Kitchen driver for Chef) and it hangs on restarts; I've been able to stop it from hanging by adding a 5-second sleep after it starts and after it restarts. But I really don't like this solution (part of the reason I want to use LXC is speed).

So I guess I'm fine with using the above style of raw.lxc.network, or whatever method, but how can I set up a static IP in an automated way? I would prefer a method that (1) isn't OS-specific, meaning I'd rather not write files inside the container as I last tried, because that's Ubuntu/Debian-specific, and (2) doesn't require the container to be restarted.

bradenwright commented Nov 3, 2015

Oh, by the way, I tried out the raw.lxc network options to see what my issue(s) were.

  1. The main issue was that /etc/network/interfaces.d/eth0.cfg was set up for DHCP on the Ubuntu image, so when I set up a static IP the node had both a static IP and a DHCP IP, unless I deleted/changed the eth0.cfg file and restarted the container.

  2. The other issue was that I had trouble figuring out how to automate setting raw.lxc.network. I tried different ways of using commands like:

lxc config set web1-ubuntu-1404 raw.lxc "network.type = veth\nnetwork.name = eth0\nnetwork.link = lxcbr0\nnetwork.ipv4 = 10.0.3.32/24\nlxc.network.ipv4.gateway = 10.0.3.1\nnetwork.flags = up"

The only way I was able to get the configuration to work was by using lxc config edit and making it look like:

config:
  raw.lxc: |-
    lxc.network.type = veth
    lxc.network.name = eth0
    lxc.network.link = lxcbr0
    lxc.network.ipv4 = 10.0.3.32/24
    lxc.network.ipv4.gateway = 10.0.3.1
    lxc.network.flags = up

But I tried a lot of different formats and couldn't figure out how to automate/do this via the CLI. So when I saw it was no longer going to be supported I looked at other options. But I just wanted to clarify, since I wasn't really having an issue with the lxc.network options; they work great for me!

stgraber commented Nov 3, 2015

You can set multi-line keys with "cat input | lxc config set container-name raw.lxc -"
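As a sketch of that stdin form (the container name web1 and the addresses are placeholders taken from the earlier comments), a quoted heredoc avoids the echo -e quoting issues:

```shell
# Build the multi-line raw.lxc value with a quoted heredoc (no escape handling needed).
raw_lxc=$(cat <<'EOF'
lxc.network.type = veth
lxc.network.name = eth0
lxc.network.link = lxcbr0
lxc.network.ipv4 = 10.0.3.32/24
lxc.network.ipv4.gateway = 10.0.3.1
lxc.network.flags = up
EOF
)

# "-" tells lxc to read the value from stdin (uncomment to apply):
# echo "$raw_lxc" | lxc config set web1 raw.lxc -
echo "$raw_lxc"
```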

bradenwright commented Nov 3, 2015

Thanks @stgraber 👍 that worked perfectly for me. If you have any recommendations for disabling the DHCP interface on boot I'd be really interested. Right now I have tested the following and it seems to work without a reboot (including disabling DHCP without a reboot).

container_name="mycontainer"
image_name="ubuntu-14.04"
ip="10.0.3.7/24"

lxd-images import ubuntu trusty --alias $image_name

lxc init $image_name $container_name

echo -e "lxc.network.type = veth\nlxc.network.name = eth0\nlxc.network.link = lxcbr0\nlxc.network.ipv4 = $ip\nlxc.network.ipv4.gateway = auto\nlxc.network.flags = up" | lxc config set $container_name raw.lxc -

lxc start $container_name && lxc exec $container_name -- sed -i 's/dhcp/manual/g' /etc/network/interfaces.d/eth0.cfg

So I would have to agree with @cybe 👍
"Please re-allow setting raw.lxc.network, at least until lxd supports those configurations directly."

bradenwright commented Nov 4, 2015

So just an FYI, I did notice that if I supply a bad config option LXD/LXC will hang and LXC stops responding. But when supplying proper config options I haven't had any trouble. I've been running a lot of tests while writing a driver plugin for Chef/Test-Kitchen (kitchen-lxc_cli); my OS is Ubuntu 15.10, running LXD/LXC version 0.20.

tych0 commented Nov 4, 2015

On Tue, Nov 03, 2015 at 06:24:40PM -0800, bradenwright wrote:

So just an FYI, I did notice that if I supply a bad config option LXD/LXC will hang and LXC stops responding. But when supplying proper config options I haven't had any trouble. I've been running a lot of tests while writing a driver plugin for Chef/Test-Kitchen (kitchen-lxc_cli); my OS is Ubuntu 15.10, running LXD/LXC version 0.20.

Can you give an example of the input that causes things to hang?



bradenwright commented Nov 4, 2015

So I'll make sure I try some more tonight or in the next couple of days, but I can't for the life of me reproduce the problem at this point. The other day, every single time it returned "error: not found", lxc commands would hang and I had to restart my computer.

The only thing I will mention is that for a little while I was running 0.3 (I think, or something really close) and I noticed much newer versions were out, but 0.3 was what was installed by default on a clean Ubuntu 15.10 install for me. Maybe the hanging issues were on the earlier version. Sorry, I don't remember the exact order of events.

I'll try to reproduce the error, though. If it happens, are there any specific logs or anything like that you want (like the logs for that container in /var/log/lxc/**/lxc.log)?

When it did happen it was exactly as described here: #1246

tych0 commented Nov 5, 2015

On Wed, Nov 04, 2015 at 03:02:20PM -0800, bradenwright wrote:

So I'll make sure I try some more tonight or in the next couple of days, but I can't for the life of me reproduce the problem at this point. The other day, every single time it returned "error: not found", lxc commands would hang and I had to restart my computer.

The only thing I will mention is that for a little while I was running 0.3 (I think, or something really close) and I noticed much newer versions were out, but 0.3 was what was installed by default on a clean Ubuntu 15.10 install for me. Maybe the hanging issues were on the earlier version. Sorry, I don't remember the exact order of events.

Hmm. 0.3 is quite old, and was never offered in any official Ubuntu
release. Can you show apt-cache showpkg lxd for that package?

I'll try to reproduce the error, though. If it happens, are there any specific logs or anything like that you want (like the logs for that container in /var/log/lxc/**/lxc.log)?

Just a reproducer on a recent version of LXD is probably enough, but
including the /var/log/lxd/* logs would potentially be useful.

Thanks!

When it did happen it was exactly as described here #1246


bradenwright commented Nov 13, 2015

FYI, I have tried on multiple occasions since and have not been able to reproduce the issue. I also did a clean install on another box with Ubuntu 15.10 to see if that would reproduce it, but it installed LXD 0.20 and I couldn't reproduce it there either.

The only thing I can think of is that on the laptop where I had the hanging issue, I still have some stuff like VirtualBox, Vagrant, etc., and I think I had Docker installed. Maybe there was a dependency that kept it from installing LXD 0.20. Anyway, if it ever comes up again I'll be sure to report it.

But thanks to everyone involved; it's been a pleasure working with LXD so far!

bradenwright commented Nov 13, 2015

Any update on whether the change blocking lxc.network through raw.lxc will be lifted? Or on how setting a static IP is going to be handled moving forward? I did see this ticket as well: #1250

Basically, until there is a better solution for setting up things like static IPs in the container, can there at least be an option to override the block? It works great for me, and I'd prefer not to fork the project just to allow this setting ;)

I'm starting to mess with lxc config device add <container> <name> disk, and wanted to upgrade to the latest before filing a ticket, but my setup requires the use of lxc.network until a better solution is available, so I'm just trying to decide whether I should be patient or fork the project.

Thanks.

justindthomas commented Nov 18, 2015

This is driving me a little crazy. I use consistent MAC addresses (mapped at my DHCP server) to set consistent IP addresses where necessary. Not being able to set a container's MAC address is really irksome.

stgraber commented Nov 18, 2015

If you look at specs/configuration.md, you'll notice that "nic" type devices have a "hwaddr" attribute that you can set. You absolutely do not have to use raw.lxc to change that.
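As a sketch, the corresponding fragment under lxc config edit would look something like this (the device name, bridge, and MAC below are placeholders, not values from this thread):

```yaml
devices:
  eth0:
    type: nic
    nictype: bridged
    parent: lxcbr0
    hwaddr: 00:16:3e:aa:bb:cc
```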

justindthomas commented Nov 18, 2015

I'll check that out - thanks @stgraber.

bradenwright commented Dec 13, 2015

Any update on this? Either allowing the settings, or whether there is a plan for network settings moving forward.

blakedot commented Dec 21, 2015

I'd like to second that we really need the option to configure lxc.network. This is holding up further development towards my company's production use of LXD.

Could at least a config override knob be added so that we can turn off this check? Even if it's "unsupported" a lot of people I've talked to are using "interesting" LXC network configs that are more complicated than a simple "dhcp on eth0" setup.

Thanks very much & best regards.

stgraber commented Dec 21, 2015

So looking at LXC's config keys, there are:

  • lxc.network.type => All supported in LXD except for "none" which we won't support and VLAN which can trivially be done using macvlan or bridge.
  • lxc.network.flags => Not supported in LXD, interface is simply always brought up.
  • lxc.network.link => Supported by LXD
  • lxc.network.mtu => Supported by LXD
  • lxc.network.name => Supported by LXD
  • lxc.network.hwaddr => Supported by LXD
  • lxc.network.ipv4 => Not supported by LXD, IP configuration must be done from inside the container. Most distros flush any pre-existing kernel network configuration when they boot, so this pretty much never works anyway.
  • lxc.network.ipv4.gateway => Not supported by LXD, IP configuration must be done from inside the container. Most distros flush any pre-existing kernel network configuration when they boot, so this pretty much never works anyway.
  • lxc.network.ipv6 => Not supported by LXD, IP configuration must be done from inside the container. Most distros flush any pre-existing kernel network configuration when they boot, so this pretty much never works anyway.
  • lxc.network.ipv6.gateway => Not supported by LXD, IP configuration must be done from inside the container. Most distros flush any pre-existing kernel network configuration when they boot, so this pretty much never works anyway.
  • lxc.network.script.up => Not supported by LXD. Executing a host-side script as root is a security issue and also isn't compatible with live migration. By design, LXD never runs any user-provided code on the host.
  • lxc.network.script.down => Not supported by LXD. Executing a host-side script as root is a security issue and also isn't compatible with live migration. By design, LXD never runs any user-provided code on the host.

And that's it for everything which LXC offers. So simply put:

  • If you need the .ipv4. or .ipv6. stuff, just do the setup from inside the container instead of from outside of it. You can even use template files and set those values as user configuration keys if you want.
  • If you need the up and down script stuff, then you're indeed out of luck because we have no intention to implement those in LXD. However if you tell us what your scripts do, maybe it's a feature that could be added to LXD itself.
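As a minimal sketch of that inside-the-container approach (assuming an Ubuntu/Debian image using /etc/network/interfaces.d and a container named c1; both names and addresses are placeholders):

```shell
# Write a static-IP stanza on the host first.
cat > eth0.cfg <<'EOF'
auto eth0
iface eth0 inet static
    address 10.0.3.32
    netmask 255.255.255.0
    gateway 10.0.3.1
EOF

# Then push it into the container and cycle the interface (uncomment to apply):
# lxc file push eth0.cfg c1/etc/network/interfaces.d/eth0.cfg
# lxc exec c1 -- ifdown eth0
# lxc exec c1 -- ifup eth0
```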
bradenwright commented Dec 21, 2015

I don't know how others feel, but setting up IPs inside the container is a pain. I'm trying to automate things with Chef/Test-Kitchen, and having the IP switch means I have to put in sleeps or have some way to verify that the IP has been updated before I can run Chef/Test-Kitchen. Not ideal for automation! I would much prefer the container be set up on the correct IP initially than deal with changing it.

Also, I do use lxc.network.script.down! I use it for Open vSwitch, because when a container gets destroyed the Open vSwitch port isn't released (which I believe is a bug, but that's the workaround I've found online).

Is there a reason these settings are being blocked? It just seems like something people find useful is being restricted, and there isn't a good workaround for it.

Ultimately I really like the flexibility of LXD, but the network settings situation has been a pretty big headache for me.

blakedot commented Dec 22, 2015

Thanks for getting back to us, Stéphane. Let me give you an example of a production LXC config we use:

lxc.network.type = vlan
lxc.network.vlan.id = 51
lxc.network.flags = up
lxc.network.link = veth1
lxc.network.ipv4 = xx.xxx.x.51/32 255.255.255.255
lxc.network.ipv4.gateway = 8.8.8.8
lxc.network.mtu = 1500

On the "hypervisor", veth1's peer interface is veth0, which is bound directly as a physical interface on a Juniper VSRX MPLS L3VPN router running as a KVM VM. All the network config is done with a Python script that speaks JunOS PyEz & spits out YAML for the LXC configs. The VSRX has one vlan subinterface per container, and each container has a single public IP with a /32 mask. The VSRX is configured to proxy-arp for 8.8.8.8 so that every container on the network can use the same default gateway, so the only network config we need to worry about is the ad-hoc IP on the container and the static host route to the container's subinterface on the Juniper.

This allows for complete VM IP mobility & subnet independence without messing around with any crazy networking protocols (except proxy-arp, which is pretty old & well understood). Since it uses individual vlans on a pre-configured veth (which is brought up in the KVM config of the VSRX) it works just like vlans on a real ethernet interface, and the host's forwarding performance is about as good as it gets. It's also very simple to understand & troubleshoot as it works just like a real router plugged into a physical server with one vlan per container.

Configuring this inside the container is a non-starter, as the whole point of LXC & LXD is to avoid scripts & hacks & stuff and do everything from a central automated point of control with a unified configuration file.

If there's another way to do it without enabling lxc.network we'd certainly be open to that. Otherwise I guess we'll have to redevelop our container config provisioning to use DHCP for IP provisioning...

Again, just a knob to modify the default behavior would be all that's needed. Thanks very much for your help.

edit: I guess your earlier solution of "If you need the .ipv4. or .ipv6. stuff, just do the setup from inside the container instead of from outside of it. You can even use template files and set those values as user configuration keys if you want." is pretty much the supported way to do this...
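A minimal sketch of that "do it from inside the container" route, for readers landing here: the container name, key name, address, and gateway below are illustrative, not from the thread, and this assumes an ifupdown-based image.

```shell
# Hypothetical sketch: store the desired address as a user.* key on the
# container, then render it into the container's interfaces file.
lxc config set web0 user.ipv4.address "10.0.3.99/24"
ADDR=$(lxc config get web0 user.ipv4.address)
lxc exec web0 -- sh -c "printf 'auto eth0\niface eth0 inet static\n    address %s\n    gateway 10.0.3.1\n' '$ADDR' > /etc/network/interfaces.d/eth0.cfg"
```

The user.* namespace is free-form in LXD, so it can carry whatever per-container values a provisioning script wants to consume.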

schammy commented Feb 23, 2016

I just wanted to put my support behind this ticket.

There absolutely needs to be a proper supported method to define static networking at container creation time. Without it, I can't use LXD because I need to be able to automate this. This is disappointing because there's a lot of great functionality in LXD that I'm going to miss out on.

justindthomas commented Feb 23, 2016

@schammy for what it's worth, I find using static mappings on my DHCP server a natural fit. Modify hwaddr to specify a predictable address for your container and then map that to a static address on your DHCP server.

More complex than just using a static IP address, but it's more flexible in my experience; all my address settings come from one place and one configuration. Makes it easy to avoid collisions.
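For illustration, the DHCP static-mapping approach might look like the sketch below; the container name, MAC address, IP, and dnsmasq config path are all placeholders, not details given in the thread.

```shell
# Pin the container's MAC in LXD so the DHCP server can recognize it:
lxc config set web0 volatile.eth0.hwaddr 00:16:3e:aa:bb:cc
# ...then map that MAC to a fixed address in the host's dnsmasq config:
echo "dhcp-host=00:16:3e:aa:bb:cc,10.0.3.50" >> /etc/dnsmasq.d/lxd-static.conf
```

With this split, the container image stays generic and all addressing lives in one place on the DHCP server, which is the property being argued for above.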

schammy commented Feb 23, 2016

@justindthomas Thanks. I consider that more of a workaround than a solution though, and it adds complexity to something that should be dead simple (like it is in LXC). I'm glad that's working for you but even if we used DHCP in our server environment (we don't), this isn't the solution I want.

justindthomas commented Feb 23, 2016

Sure. To each their own. I find it valuable to keep the IP address configuration separate from my container. I suppose I just got used to that (and came to prefer it) after using AWS for a few years.

bradenwright commented Feb 24, 2016

I guess while there is activity on this thread again, I'll comment again. @justindthomas I agree that method is useful, but I also agree with @schammy that it doesn't necessarily resolve or negate the need for things like static ips.

I guess ultimately I'd at least like an explanation for why it's being blocked... especially since most of the lxc.network configuration options are allowed. But I like using static IPs for certain things, and the method of setting them with lxc.network.ipv4 works great.

I mean if there is a reason to block it, I'm all ears; if not, then it would be nice to open it back up, or at least have an override option. Right now I've pinned my install to 0.20, but I'm considering forking the project essentially to comment out an if statement.

stgraber added a commit to stgraber/lxd that referenced this issue Feb 24, 2016

Allow setting lxc.network.X.ipv{4,6}[.gateway]
This is absolutely unsupported (just like anything through raw.lxc) but
when restricted to only numbered interface and only those two keys, this
shouldn't conflict with LXD's one network handling.

Note that finding the right interface index is left to the user to
figure out, LXD doesn't in any way guarantee LXC configuration ordering
to be consistent across restarts.

Closes lxc#1259

Signed-off-by: Stéphane Graber <stgraber@ubuntu.com>
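Given that commit, setting the newly allowed keys through raw.lxc might look like the sketch below; the container name and addresses are illustrative, and the interface index 0 is a guess, since as the commit message notes LXD doesn't guarantee LXC interface ordering across restarts.

```shell
# Unsupported, per the commit above: inject per-interface IP config via raw.lxc.
lxc config set web0 raw.lxc "lxc.network.0.ipv4 = 10.0.3.99/24
lxc.network.0.ipv4.gateway = 10.0.3.1"
```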

@hallyn hallyn closed this in #1645 Feb 24, 2016

schammy commented Feb 24, 2016

Yay :)

I do wish it was "supported" but I'll take what I can get for now.

blakedot commented Mar 7, 2016

Awesome, thank you @stgraber ! I'll have our devs test this in the coming week or so.

bradmwalker commented Jun 16, 2017

A use case for lxc.network.type = none: open-iscsi's iscsid apparently works only when run in the host's network namespace due to a NETLINK socket (and passing thru and recreating device nodes, mounting sysfs:rw, and preloading modules...)

Contributor

alejandro-perez commented Nov 17, 2017

I also want to be able to create a LXC's lxc.network.type = none in LXD, but I don't see the way. I was hoping that raw.lxc would do it, but it complains saying Config parsing error: Only interface-specific ipv4/ipv6 lxc.net. keys are allowed

Member

brauner commented Nov 17, 2017

@alejandro-perez that's not something that's going to work nicely for unprivileged containers. In essence, if you are an unprivileged container and you request lxc.net.0.type = none then you are effectively instructing liblxc to share the network namespace with the host. This will mean the kernel will not allow you to mount sysfs for security reasons since it would allow you to access network interfaces your user namespace does not own. And if there's no sysfs then your init system (systemd especially) will be very unhappy. You could share the user namespace with the host but that only works if the host and the container share the same uid/gid mapping otherwise you won't be able to boot since the container's rootfs will belong to an unprivileged uid/gid and you're not writing a mapping. So the only way to get this to behave correctly is by sharing the network and user namespace with the host being a privileged container.

Contributor

alejandro-perez commented Nov 17, 2017

@brauner Oh, yes I know that it won't work with unprivileged containers. I was asking how can I do it with LXD privileged containers (I know how to make it work with LXC, but I see no way of doing it with LXD).
