LXD Static IP configuration - clear + working documentation seems scarce #2534
Comments
Seems like the most robust way to accomplish this is to...
Change host /etc/network/interfaces + reboot (works).
lxc config set template-yakkety raw.lxc 'lxc.network.0.ipv4 = 144.217.33.224'
lxc config set template-yakkety raw.lxc 'lxc.network.0.ipv4.link = br0'
It appears the only outstanding issue is how to associate the container's IP address with the host's br0 interface. Suggestions?
lxc network device add template-yakkety eth0 nic nictype=bridged parent=br0 name=eth0
Though, note that the preferred way to do this is through your Linux distribution's own configuration mechanism rather than pre-configuring things through raw.lxc. For Ubuntu, that'd be through some cloud-init configuration of some sort; that said, if raw.lxc works for you, that's fine too :)
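For what it's worth, one way to do that with cloud-init would be roughly the following (a sketch only: it assumes the image ships cloud-init and reads the user.network-config key, and it reuses the addresses discussed later in this issue; an OVH-style gateway outside the /27 may need extra route entries):

cat <<'EOF' | lxc config set template-yakkety user.network-config -
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        address: 144.217.33.224/27
        gateway: 149.56.27.254
EOF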
I believe the first problem to resolve is destroying lxdbr0 + unfortunately dpkg-reconfigure no longer works... meaning the networking configuration stage never starts:
dpkg-reconfigure -p medium lxd
Short of purging/reinstalling LXD completely (I've removed all my containers), let me know how to reconfigure LXD to no longer use lxdbr0. Thanks.
lxc profile edit default
lxc profile edit default == only changes a text file. lxdbr0 is still running...
Also, ifconfig still shows lxdbr0... service lxd/lxc-containers restart has no effect on dnsmasq or the ifconfig listing. Let me know how to disentangle dnsmasq from attempting to handle lxdbr0 + how to delete lxdbr0 so it's completely gone... while leaving LXD intact/running. Thanks.
Ah right, to destroy the bridge, you'll want "lxc network delete lxdbr0"
Just so the exact process is documented. If you have currently defined containers + wish to remove the associated bridge interface...
lxc profile edit default -> change lxdbr0 to br0
lxc network delete lxdbr0
At this point... the lxdbr0 interface is destroyed (pruned from ifconfig output + all networking tables) + dnsmasq also terminates. Whew... stgraber == my hero.
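For reference, after that profile edit the relevant bit of the default profile ends up looking roughly like this (a sketch; the exact layout varies a little between LXD versions):

devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic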
Cool, sounds like you got it all sorted out. Closing.
My version of lxc has no "lxc device add" syntax. Let me know if this looks correct...
lxc network attach br0 yakkety eth0
Which looks to mean... attach the host's br0 to the yakkety container's eth0.
@davidfavor it's "lxc config device add". And yes, your "lxc network attach" is fine; you need LXD 2.3 or higher for that, but I'm assuming that's what you have.
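Spelled out, the two roughly equivalent ways of wiring the container to br0 that are being referenced here would be (the second form needs LXD 2.3+):

lxc config device add yakkety eth0 nic nictype=bridged parent=br0 name=eth0
lxc network attach br0 yakkety eth0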
Using LXD-2.4.1. Hum... After issuing the above command, now...
lxc config get --verbose yakkety raw.lxc
So I can't get or set raw.lxc anymore.
@davidfavor "lxc config show yakkety"
net12 # lxc config show yakkety
Ugh... inotifywait shows that raw.lxc seems to live in lxd.db, so this data lives in sqlite3 land.
ah right, you wouldn't be able to show the container config either. It's a bug I fixed a few days ago; it shouldn't have let you set an invalid raw.lxc to begin with...
sudo sqlite3 /var/lib/lxd/lxd.db "SELECT value FROM containers_config WHERE key='raw.lxc';"
net12 # sqlite3 /var/lib/lxd/lxd.db .dump | grep raw
net12 # sqlite3 /var/lib/lxd/lxd.db "SELECT value FROM containers_config WHERE key='raw.lxc';"
Looks correct. Let me know if I can just delete this line or if that will screw up something else.
ok, that looks fine, so long as the container has a network device attached to it, otherwise those two entries will be invalid since they apply to a network device that's not defined... Does running:
lxc network attach-profile br0 default eth0
Fix the problem? That should add the br0 bridge to your container by adding it to the profile it depends on, which should avoid the error you've been getting so far.
net12 # lxc network attach-profile br0 default eth0
Profile seems correct...
net12 # lxc profile show default
hmm, alright, well, let's just fix the DB then:
Then paste the output of:
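(The exact commands weren't captured above; judging from the follow-ups, "After the sqlite3 deletion..." and the --expanded output below, they were presumably along these lines:)

sudo sqlite3 /var/lib/lxd/lxd.db "DELETE FROM containers_config WHERE key='raw.lxc';"
lxc config show yakkety --expanded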
net12 # lxc config show yakkety --expanded
Hum... If you know the Markdown to use to format this in pre tags, let me know.
Hmm, so that all looks correct. What do you have in /var/log/lxd/yakkety/lxc.log*? With a bit of luck one of the log files will tell you what the parser thought was wrong with your raw.lxc.
My guess is that it may be upset about the gateway being outside of your IP's mask, hopefully it logs that kind of problem. |
Okay... After the sqlite3 deletion...
echo -e "lxc.network.0.ipv4 = 144.217.33.224\nlxc.network.0.ipv4.gateway = 149.56.27.254\n" | lxc config set yakkety raw.lxc -
lxc start yakkety -> works + IP assigned + IP isn't pingable. Maybe I have to do some of the network attach magic again. Maybe this?
lxc network attach-profile br0 default yakkety eth0
"ip -4 route show" and "ip -4 addr show" in the container would probably help figure out what's going on.
net12 # ip -4 route show
net12 # ip -4 addr show
I think you missed the "in the container" part :)
Maybe this is the problem. Seems like 28: eth0@if29 is wrong.
net12 # lxc exec yakkety -- ip -4 route show
net12 # lxc exec yakkety -- ip -4 addr show
Seems to match what you requested LXC to do |
Hmm, so that got ignored somehow, weird |
Hum... I think the container's /etc/network/interfaces has to have something to include the /etc/network/interfaces.d/* files. Let me know how you do this.
Change the container's /etc/network/interfaces (currently just "auto eth0") to have a last line of...
source /etc/network/interfaces.d/*.cfg
Sound right? Yes?
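Roughly, the end state being described would look like this (a sketch only; the interfaces.d filename is arbitrary, the address comes from earlier in this thread, and an off-subnet OVH-style gateway typically needs extra post-up route lines on top of this):

# /etc/network/interfaces (in the container)
auto eth0
source /etc/network/interfaces.d/*.cfg

# /etc/network/interfaces.d/eth0.cfg (in the container)
iface eth0 inet static
    address 144.217.33.224
    netmask 255.255.255.224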
net12 # lxc restart yakkety
Now produces...
net12 # lxc list
Then...
ip -4 route add 144.217.33.0/24 dev lxdbr0
To make 144.217.33.0/27 addresses pingable...
Oh right, I thought cloud-init would add that source statement automatically, not sure why you're missing it.
Hmm, that ip route is a bit wrong, you want your /27 instead of that /24. So:
ip -4 route add 144.217.33.0/27 dev lxdbr0
Whoa! Appears... Host can ping container. Container can ping host. Container IP pingable from outside machine. Geez! Might be working.
(The /24 will obviously work, but you're also accidentally routing a bunch of IPs that don't belong to you :))
Got it... So for manual/static routes, there will be one static route per IP... of the form...
ip -4 route add 144.217.33.$addr/27 dev lxdbr0
Where $addr is each active IP address. Yes?
nope, just "ip -4 route add 144.217.33.0/27 dev lxdbr0" will cover your whole subnet, no need to do it per container.
Right, .0 rather than .IP will get them all. Got it. Dude! You're a life saver! I'm hosting 100s of high traffic client sites + I'd like to start converting them all from LXC to LXD. Thanks for your huge investment of time today. After I roll all this info into a simple step-by-step guide, I'll drop the link here.
Oh, btw, my command earlier was wrong, you want:
ip -4 route add 144.217.33.224/27 dev lxdbr0
Since .224 is the first address. If you use .0/27, then it will cover from 144.217.33.0 to 144.217.33.31, which is not what you want :)
Right... BASE-IP/27. Thanks.
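To make the boundary arithmetic explicit: a /27 spans 32 addresses, so 144.217.33.224/27 covers .224 through .255, while 144.217.33.0/27 would cover .0 through .31. One quick way to double-check a block (assuming python3 is available on the host):

python3 -c "import ipaddress; n = ipaddress.ip_network('144.217.33.224/27'); print(n[0], '-', n[-1])"
# prints: 144.217.33.224 - 144.217.33.255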
Testing shows all packet flow working as expected. One final question about runtime IPs. For each container supporting a static/public IP, the container will have two IPs: one internal + one external (static/public). Let me know if this looks correct to you, based on your OVH setup. Thanks.
net12 # lxc list
Looks like you already answered my question above... where you said... Confirm it's got both a private (10.x.x.x) address and its public IP listed in "lxc list"
Problem source identified. Request for the best way to resolve it.
At boot time, /etc/rc.local runs + executes the following command to route a public IP range to the LXD bridge interface:
ip -4 route add 144.217.33.224/27 dev lxdbr0
During a host-level upgrade of LXD this additional route is somehow lost. Running the above command on the command line fixes the problem. The question is: what's the best way to associate the above command with lxdbr0 interface stops/restarts? For physical interfaces, the host-level /etc/network/interfaces is where post-up commands are added. Someone let me know the correct way to associate a post-up command with the LXD interface. Thanks.
Hmm, so I think the best way to deal with this would be through a new LXD network config key. Until then, you could define a systemd unit which starts "After=lxd.service" and runs the command you need. That way, whenever the daemon is started/restarted, your command is run again.
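A minimal sketch of such a unit, reusing the route and bridge name from earlier in this thread (the file name, e.g. /etc/systemd/system/lxdbr0-route.service, is arbitrary; PartOf= is one way to have it re-run when lxd.service is restarted, and it still needs systemctl daemon-reload + systemctl enable):

[Unit]
Description=Re-add the public /27 route to lxdbr0
After=lxd.service
PartOf=lxd.service

[Service]
Type=oneshot
RemainAfterExit=yes
# leading '-' tells systemd to ignore the error if the route already exists
ExecStart=-/sbin/ip -4 route add 144.217.33.224/27 dev lxdbr0

[Install]
WantedBy=multi-user.target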
It seems like every 6 months you are invalidating your own documentation and not updating it afterward. I am constantly following instructions in your insights pages only to find important commands and config files completely missing. I will read through this to re-learn how to set up LXD on 16.10 with a bridge to the LAN for my controller node, since the instructions I followed 2 weeks ago for 16.04 no longer work and the /etc/defaults/lxd-bridge file has been completely removed now.

My current state: Ubuntu 16.10 server with a br0 bridge, static IP assignment, a MaaS rack controller running as a local LXD instance, a static controller IP assignment, a static MaaS rack controller IP assignment, and the LXD br0 bridge taking over the machine's bridge that is configured in /etc/network/interfaces with a static class C RFC1918 address and replacing it at boot time with dynamic class A RFC1918 addresses. I am reserving the first 10 IP addresses in MaaS for core services (MaaS rack and region controllers, Juju bootstrap, Cloudify, and network devices) and providing a dynamic range for OpenStack core services via the rack controller's DHCP service on this LAN.

I should probably not need multiple LXD network config sets just to run the current version of OpenStack without backports. I can't help but feel that this project would benefit from having more discipline as to what can be ripped out/replaced versus what should be immutable, to prevent users from having to replace all their documentation and config automations every 6 months or so.
@stgraber Is there something I'm missing from #2534 (comment)? I verified that I can attach any IP from the subnet to my host using:
lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
lxc network attach-profile testbr0 default eth0
Then I push a file to the container's /etc/network/interfaces.d/eth0-static.cfg containing:
and then I add the route to the LXD bridge, after which I restart my container. 169.53.244.32 is the first IP in the subnet.
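(The pushed file's contents didn't survive in the quote above; a stanza of roughly this shape would match the description. The address, netmask and gateway below are placeholders loosely based on the 169.53.244.32 portable subnet mentioned, not values taken from the original comment.)

# /etc/network/interfaces.d/eth0-static.cfg, sketch only
auto eth0
iface eth0 inet static
    address 169.53.244.34
    netmask 255.255.255.224
    gateway 169.53.244.33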
@CarltonSemple that should be okay, though note that the route may get flushed when LXD restarts (during upgrade or such). That's why I added ipv4.routes a few releases back.
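For reference, ipv4.routes is a config key on the managed bridge itself, so the extra subnet is re-applied by LXD rather than depending on a manually added route. Something along these lines, reusing the /27 from earlier in this thread as an example (it needs a LXD release recent enough to have the key):

lxc network set lxdbr0 ipv4.routes 144.217.33.224/27
lxc network show lxdbr0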
Nope, you should only have the route set up on the host, not have the address defined there. A firewall could be blocking incoming packets for those IPs, or the return traffic from those containers may be getting NATed somehow, breaking things. The best way to debug those kinds of issues is to run tcpdump inside the container and see what's making it there.
@stgraber I'm wondering if it could be the different infrastructure. I'm using Softlayer, with a "routed to VLAN" portable IP block.
With a tcpdump inside the container, nothing appears to reach the container. I see the correct
@davidfavor Is this step-by-step guide available? I'd suggest this issue is still relevant. I'm having no success assigning an IP address of my own choosing to a container. I have several questions that I'm not able to answer for myself from this interesting discussion:
I'm hoping to find answers to these questions in generic terms, not Ubuntu-centric, so the information is equally valid across the spectrum of Linux hosts.
http://cloudinit.readthedocs.io/en/latest/topics/network-config.html#network-configuration-sources
Issue description
Goal is to have LXD containers with static IPs which can communicate with the host + other containers.
Steps to reproduce
Simplest approach seems to be setting LXD_CONFILE in /etc/default/lxd-bridge to a file of container,IP pairs + Ubuntu 16.10 seems to have removed this file.
I have 100s of LXC container,IP pairs to port to LXD + prefer a solution that avoids the old iptables NAT rule approach.
None of the #2083 approaches seem to produce useful results.
The following comes close, as my test container does end up with the correct IP assigned:
echo -e "lxc.network.0.ipv4 = 144.217.33.224\nlxc.network.0.ipv4.gateway = 149.56.27.254\n" | lxc config set template-yakkety raw.lxc -
Maybe this is the correct approach, along with setting up the host base interface (eth2 in my case) to use br0, rather than eth2, + somehow bridging lxdbr0 to br0.
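For what it's worth, a host-side bridge of that shape is usually declared along these lines in the host's /etc/network/interfaces (a sketch only; it assumes the bridge-utils package is installed, and the angle-bracket placeholders are the host's own values, not ones taken from this issue):

auto br0
iface br0 inet static
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
    address <host-public-ip>
    netmask <host-netmask>
    gateway <host-gateway>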
Suggestions appreciated, as all the Ubuntu docs seem wrong + the LXD 2.0 Introduction series seems to be missing basic networking examples for large scale LXD deployments.
Once I have a working approach, I'll publish all steps back here, so others can accomplish this easier.
Thanks.