From eec07148545c4a021c22913abf186f48aee22af1 Mon Sep 17 00:00:00 2001 From: Steven Spencer Date: Sun, 7 Aug 2022 07:09:06 -0500 Subject: [PATCH] Move lxd server to books * Separate out all of the sections into logical chapters * removed link page jumps * saved to books/lxd_server * in the process, tested for Rocky Linux 9.0 (OK) * modified the .pages navigation for "containers" so that LXD Server redirects to the books section * modified .pages in the books section to add navigation for LXD Server * removed the old stand-alone documents for lxd_server including any translations This move was made because the document best fit in books, rather than in the guides section. --- docs/books/.pages | 1 + docs/books/lxd_server/00-toc.md | 56 + docs/books/lxd_server/01-install.md | 243 ++++ docs/books/lxd_server/02-zfs_setup.md | 62 + docs/books/lxd_server/03-lxdinit.md | 167 +++ docs/books/lxd_server/04-firewall.md | 187 +++ docs/books/lxd_server/05-lxd_images.md | 111 ++ docs/books/lxd_server/06-profiles.md | 365 ++++++ docs/books/lxd_server/07-configurations.md | 133 ++ docs/books/lxd_server/08-snapshots.md | 95 ++ docs/books/lxd_server/09-snapshot_server.md | 185 +++ docs/books/lxd_server/10-automating.md | 70 ++ docs/guides/containers/.pages | 4 + docs/guides/containers/lxd_server.it.md | 1234 ------------------ docs/guides/containers/lxd_server.md | 1239 ------------------- 15 files changed, 1679 insertions(+), 2473 deletions(-) create mode 100644 docs/books/lxd_server/00-toc.md create mode 100644 docs/books/lxd_server/01-install.md create mode 100644 docs/books/lxd_server/02-zfs_setup.md create mode 100644 docs/books/lxd_server/03-lxdinit.md create mode 100644 docs/books/lxd_server/04-firewall.md create mode 100644 docs/books/lxd_server/05-lxd_images.md create mode 100644 docs/books/lxd_server/06-profiles.md create mode 100644 docs/books/lxd_server/07-configurations.md create mode 100644 docs/books/lxd_server/08-snapshots.md create mode 100644 docs/books/lxd_server/09-snapshot_server.md create mode 100644 docs/books/lxd_server/10-automating.md create mode 100644 docs/guides/containers/.pages delete mode 100644 docs/guides/containers/lxd_server.it.md delete mode 100644 docs/guides/containers/lxd_server.md diff --git a/docs/books/.pages b/docs/books/.pages index f8b1fa1353..b77b5b87ea 100644 --- a/docs/books/.pages +++ b/docs/books/.pages @@ -5,5 +5,6 @@ nav: - Learning Ansible: learning_ansible - Learning Bash: learning_bash - Learning Rsync: learning_rsync + - LXD Production Server: lxd_server - DISA STIG: disa_stig - ... diff --git a/docs/books/lxd_server/00-toc.md b/docs/books/lxd_server/00-toc.md new file mode 100644 index 0000000000..65e7ff9ccd --- /dev/null +++ b/docs/books/lxd_server/00-toc.md @@ -0,0 +1,56 @@ +--- +title: LXD Server +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6 +tags: + - lxd + - enterprise +--- + +# Creating a full LXD Server + +!!! note "A note about Rocky Linux 9.0" + + This procedure works just fine for 9.0. The exception is the ZFS procedure, as the latest version supported in their repository is 8.6. This will probably change going forward, but for now, if you want to use ZFS storage pools, consider staying on Rocky Linux 8.6. The chapter dealing with ZFS has been marked as specific to 8.6. 
+ +## Introduction + +LXD is best described on the [official website](https://linuxcontainers.org/lxd/introduction/), but think of it as a container system that provides the benefits of virtual servers in a container, or a container on steroids. + +It is very powerful, and with the right hardware and set up, can be leveraged to run a lot of server instances on a single piece of hardware. If you pair that with a snapshot server, you also have a set of containers that you can spin up almost immediately in the event that your primary server goes down. + +(You should not think of this as a traditional backup. You still need a regular backup system of some sort, like [rsnapshot](../backup/rsnapshot_backup.md).) + +The learning curve for LXD can be a bit steep, but this book will attempt to give you a wealth of knowledge at your fingertips, to help you deploy and use LXD on Rocky Linux. + +For those wanting to use LXD as a lab environment on their own laptops or workstations, see **Appendix A: Workstation Setup**. + +## Prerequisites And Assumptions + +* One Rocky Linux server, nicely configured. You should consider a separate hard drive for ZFS disk space (you have to if you are using ZFS) in a production environment. And yes, we are assuming this is a bare metal server, not a VPS. +* This should be considered an advanced topic, but we have tried our best to make it as easy to understand as possible for everyone. That said, knowing a few basic things about container management will take you a long way. +* You should be very comfortable at the command line on your machine(s), and fluent in a command line editor. (We are using _vi_ throughout this example, but you can substitute in your favorite editor.) +* You need to be an unprivileged user for the bulk of the LXD processes. Except where noted, enter LXD commands as your unprivileged user. We are assuming that you are logged in as a user named "lxdadmin" for LXD commands. The bulk of the set up _is_, done as root until you get past the LXD initialization. We will have you create the "lxdadmin" user later in the process. +* For ZFS, make sure that UEFI secure boot is NOT enabled. Otherwise, you will end up having to sign the ZFS module in order to get it to load. +* We are using Rocky Linux-based containers for the most part. + +## Synopsis + +* **Chapter 1: Install and Configuration** deals with the installation of the primary server. In general, the proper way to do LXD in production is to have both a primary server and a snapshot server. +* **Chapter 2: (8.6 Only) ZFS Setup** deals with the setup and configuration of the ZFS. ZFS is an open-source logical volume manager and file system created by Sun Microsystems, originally for its Solaris operating system. It is technically possible for you to build ZFS from source for 9.0, however ZFS is complicated, so if you really want to use it on 9.0, your best bet is to wait for the ZFS repository to be updated. +* **Chapter 3: LXD Initialization and User Setup** Deals with the base initialization and options and covers both Rocky Linux 8.6 and 9.0. It also deals with the setup of our unprivileged user that we will use throughout most of the rest of the process. +* **Chapter 4: Firewall Setup** deals with both `iptables` and `firewalld` setup options, but we recommend that you use `firewalld`for both 8.6 and 9.0. +* **Chapter 5: Setting Up and Managing Images** describes the process for installing OS images to a container and configuring them. 
It discusses the challenges of using `macvlan` for IP addressing on 9.0 and outlines a workaround procedure for doing so. +* **Chapter 6: Profiles** deals with adding profiles and applying them to containers and particularly covers macvlan and its importance for IP addressing on your LAN or WAN +* **Chapter 7: Container Configuration Options** briefly covers some of the basic configuration options for containers and offers some pros and cons for modifying configuration options. +* **Chapter 8: Container Snapshots** details the snapshot process for containers on the primary server. +* **Chapter 9: The Snapshot Server** covers the setup and configuration of the snapshot server and how to create the symbiotic relationship between the primary and snapshot server. +* **Chapter 10: Automating Snapshots** covers the automation of snapshot creation and populating the snapshot server with new snapshots. +* **Appendix A: Workstation Setup** not technically a part of the production server documents, but offers a solution for people who want a simple way to build a lab of LXD containers on their personal laptop or workstation. + +## Conclusions + +You can use these chapters to effectively setup an enterprise-level primary and snapshot LXD server. In the process, you will learn a great deal about LXD, and we *have* touched on a lot of features. Just be aware that there is much more to learn, and treat these documents as a starting point. + +The greatest advantage of LXD is that it is economical to use on a server, allows you to spin up OS installs quickly, can be used for multiple stand-alone application servers running on a single piece of bare metal, all of which properly leverages that hardware for maximum use. diff --git a/docs/books/lxd_server/01-install.md b/docs/books/lxd_server/01-install.md new file mode 100644 index 0000000000..7ce0661e4b --- /dev/null +++ b/docs/books/lxd_server/01-install.md @@ -0,0 +1,243 @@ +--- +title: 1 Install and Configuration +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6, 9.0 +tags: + - lxd + - enterprise + - lxd install +--- + +# Chapter 1: Install and Configuration + +Throughout this section you will need to be the root user or you will need to be able to _sudo_ to root. + +## Install EPEL and OpenZFS (8.6 Only) Repositories + +LXD requires the EPEL (Extra Packages for Enterprise Linux) repository, which is easy to install using: + +``` +dnf install epel-release +``` + +Once installed, check for updates: + +``` +dnf update +``` + +If you're using ZFS, install the OpenZFS repository with: + +``` +dnf install https://zfsonlinux.org/epel/zfs-release.el8_6.noarch.rpm +``` + +We also need the GPG key, so use this command to get that: + +``` +gpg --import --import-options show-only /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux +``` + +If there were kernel updates during the update process above, reboot your server + +## Install snapd, dkms, vim, and kernel-devel + +LXD must be installed from a snap for Rocky Linux. For this reason, we need to install `snapd` (and a few other useful programs) with: + +``` +dnf install snapd dkms vim kernel-devel +``` + +And now enable and start snapd: + +``` +systemctl enable snapd +``` + +And then run: + +``` +systemctl start snapd +``` + +Reboot the server before continuing here. + +## Install LXD + +Installing LXD requires the use of the snap command. 
At this point, we are just installing it, we are not doing the set up: + +``` +snap install lxd +``` + +## Install OpenZFS (8.6 Only) + +``` +dnf install zfs +``` + +## Environment Set up + +Most server kernel settings are not sufficient to run a large number of containers. If we assume from the beginning that we will be using our server in production, then we need to make these changes up front to avoid errors such as "Too many open files" from occurring. + +Luckily, tweaking the settings for LXD is easy with a few file modifications and a reboot. + +### Modifying limits.conf + +The first file we need to modify is the limits.conf file. This file is self-documented, so look at the explanations in the file as to what this file does. To make our modifications type: + +``` +vi /etc/security/limits.conf +``` + +This entire file is remarked/commented out and, at the bottom, shows the current default settings. In the blank space above the end of file marker (#End of file) we need to add our custom settings. The end of the file will look like this when you are done: + +``` +# Modifications made for LXD + +* soft nofile 1048576 +* hard nofile 1048576 +root soft nofile 1048576 +root hard nofile 1048576 +* soft memlock unlimited +* hard memlock unlimited +``` + +Save your changes and exit. (`SHIFT:wq!` for _vi_) + +### Modifying sysctl.conf With 90-lxd.override.conf + +With _systemd_, we can make changes to our system's overall configuration and kernel options *without* modifying the main configuration file. Instead, we'll put our settings in a separate file that will simply override the particular settings we need. + +To make these kernel changes, we are going to create a file called _90-lxd-override.conf_ in /etc/sysctl.d. To do this type: + +``` +vi /etc/sysctl.d/90-lxd-override.conf +``` + +Place the following content in that file. Note that if you are wondering what we are doing here, the file content below is self-documenting: + +``` +## The following changes have been made for LXD ## + +# fs.inotify.max_queued_events specifies an upper limit on the number of events that can be queued to the corresponding inotify instance + - (default is 16384) + +fs.inotify.max_queued_events = 1048576 + +# fs.inotify.max_user_instances This specifies an upper limit on the number of inotify instances that can be created per real user ID - +(default value is 128) + +fs.inotify.max_user_instances = 1048576 + +# fs.inotify.max_user_watches specifies an upper limit on the number of watches that can be created per real user ID - (default is 8192) + +fs.inotify.max_user_watches = 1048576 + +# vm.max_map_count contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of cal +ling malloc, directly by mmap and mprotect, and also when loading shared libraries - (default is 65530) + +vm.max_map_count = 262144 + +# kernel.dmesg_restrict denies container access to the messages in the kernel ring buffer. Please note that this also will deny access t +o non-root users on the host system - (default is 0) + +kernel.dmesg_restrict = 1 + +# This is the maximum number of entries in ARP table (IPv4). You should increase this if you create over 1024 containers. + +net.ipv4.neigh.default.gc_thresh3 = 8192 + +# This is the maximum number of entries in ARP table (IPv6). You should increase this if you plan to create over 1024 containers.Not nee +ded if not using IPv6, but... 
+ +net.ipv6.neigh.default.gc_thresh3 = 8192 + +# This is a limit on the size of eBPF JIT allocations which is usually set to PAGE_SIZE * 40000. + +net.core.bpf_jit_limit = 3000000000 + +# This is the maximum number of keys a non-root user can use, should be higher than the number of containers + +kernel.keys.maxkeys = 2000 + +# This is the maximum size of the keyring non-root users can use + +kernel.keys.maxbytes = 2000000 + +# This is the maximum number of concurrent async I/O operations. You might need to increase it further if you have a lot of workloads th +at use the AIO subsystem (e.g. MySQL) + +fs.aio-max-nr = 524288 +``` + +Save your changes and exit. + +At this point you should reboot the server. + +### Checking _sysctl.conf_ Values + +Once the reboot has been completed, log back in as to the server. We need to spot check that our override file has actually done the job. + +This is easy to do. There's no need to check every setting unless you want to, but checking a few will verify that the settings have been changed. This is done with the _sysctl_ command: + +``` +sysctl net.core.bpf_jit_limit +``` + +Which should show you: + +``` +net.core.bpf_jit_limit = 3000000000 +``` + +Do the same with a few other settings in the override file (above) to verify that changes have been made. + +### Enabling ZFS And Setting Up The Pool (8.6 Only) + +If you have UEFI secure boot turned off, this should be fairly easy. First, load the ZFS module with modprobe: + +``` +/sbin/modprobe zfs +``` + +This should not return an error, it should simply return to the command prompt when done. If you get an error, stop now and begin troubleshooting. Again, make sure that secure boot is off as that will be the most likely culprit. + +Next we need to take a look at the disks on our system, determine what has the OS loaded on it, and what is available to use for the ZFS pool. We will do this with _lsblk_: + +``` +lsblk +``` + +Which should return something like this (your system will be different!): + +``` +AME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT +loop0 7:0 0 32.3M 1 loop /var/lib/snapd/snap/snapd/11588 +loop1 7:1 0 55.5M 1 loop /var/lib/snapd/snap/core18/1997 +loop2 7:2 0 68.8M 1 loop /var/lib/snapd/snap/lxd/20037 +sda 8:0 0 119.2G 0 disk +├─sda1 8:1 0 600M 0 part /boot/efi +├─sda2 8:2 0 1G 0 part /boot +├─sda3 8:3 0 11.9G 0 part [SWAP] +├─sda4 8:4 0 2G 0 part /home +└─sda5 8:5 0 103.7G 0 part / +sdb 8:16 0 119.2G 0 disk +├─sdb1 8:17 0 119.2G 0 part +└─sdb9 8:25 0 8M 0 part +sdc 8:32 0 149.1G 0 disk +└─sdc1 8:33 0 149.1G 0 part +``` + +In this listing, we can see that */dev/sda* is in use by the operating system, so we are going to use */dev/sdb* for our zpool. Note that if you have multiple free hard drives, you may wish to consider using raidz (a software raid specifically for ZFS). + +That falls outside the scope of this document, but should definitely be a consideration for production, as it offers better performance and redundancy. For now, let's create our pool on the single device we have identified: + +``` +zpool create storage /dev/sdb +``` + +What this says is to create a pool called "storage" that is ZFS on the device */dev/sdb*. + +Once the pool is created, it's a good idea to reboot the server again at this point. 
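+
+If you want to double-check the pool before that reboot, ZFS can report on it directly. This is just a quick sanity check, and it assumes the pool name "storage" used above:
+
+```
+zpool status storage
+zfs list storage
+```
+
+The status output should show the pool ONLINE, with the device you gave it (*/dev/sdb* in this example) as its only member.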
diff --git a/docs/books/lxd_server/02-zfs_setup.md b/docs/books/lxd_server/02-zfs_setup.md new file mode 100644 index 0000000000..762dddc812 --- /dev/null +++ b/docs/books/lxd_server/02-zfs_setup.md @@ -0,0 +1,62 @@ +--- +title: 2 ZFS Setup (8.6 Only) +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6 +tags: + - lxd + - enterprise + - lxd zfs +--- + +# Chapter 2: ZFS Setup (8.6 Only) + +If you are using Rocky Linux 8.6 and have already installed ZFS, this section will walk you through ZFS setup. + +## Enabling ZFS and setting Up the pool + +First, enter this command: + +``` +/sbin/modprobe zfs +``` + +This should not return an error, it should simply return to the command prompt when done. If you get an error, stop now and begin troubleshooting. Again, make sure that secure boot is off as that will be the most likely culprit. + +Next we need to take a look at the disks on our system, determine what has the OS loaded on it, and what is available to use for the ZFS pool. We will do this with _lsblk_: + +``` +lsblk +``` + +Which should return something like this (your system will be different!): + +``` +AME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT +loop0 7:0 0 32.3M 1 loop /var/lib/snapd/snap/snapd/11588 +loop1 7:1 0 55.5M 1 loop /var/lib/snapd/snap/core18/1997 +loop2 7:2 0 68.8M 1 loop /var/lib/snapd/snap/lxd/20037 +sda 8:0 0 119.2G 0 disk +├─sda1 8:1 0 600M 0 part /boot/efi +├─sda2 8:2 0 1G 0 part /boot +├─sda3 8:3 0 11.9G 0 part [SWAP] +├─sda4 8:4 0 2G 0 part /home +└─sda5 8:5 0 103.7G 0 part / +sdb 8:16 0 119.2G 0 disk +├─sdb1 8:17 0 119.2G 0 part +└─sdb9 8:25 0 8M 0 part +sdc 8:32 0 149.1G 0 disk +└─sdc1 8:33 0 149.1G 0 part +``` + +In this listing, we can see that */dev/sda* is in use by the operating system, so we are going to use */dev/sdb* for our zpool. Note that if you have multiple free hard drives, you may wish to consider using raidz (a software raid specifically for ZFS). + +That falls outside the scope of this document, but should definitely be a consideration for production, as it offers better performance and redundancy. For now, let's create our pool on the single device we have identified: + +``` +zpool create storage /dev/sdb +``` + +What this says is to create a pool called "storage" that is ZFS on the device */dev/sdb*. + +Once the pool is created, it's a good idea to reboot the server again at this point. diff --git a/docs/books/lxd_server/03-lxdinit.md b/docs/books/lxd_server/03-lxdinit.md new file mode 100644 index 0000000000..ab9a76c4d7 --- /dev/null +++ b/docs/books/lxd_server/03-lxdinit.md @@ -0,0 +1,167 @@ +--- +title: 3 LXD Initialization and User Setup +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6, 9.0 +tags: + - lxd + - enterprise + - lxd initialization + - lxd setup +--- + +# Chapter 3: LXD Initialization and User Setup + +There are separate procedures for Rocky Linux 8.6 and 9.0 below, with the 8.6 version assuming that you are using a ZFS storage pool. + +##LXD Initialization + +Now that the environment is all set up, we are ready to initialize LXD. This is an automated script that asks a series of questions to get your LXD instance up and running: + +``` +lxd init +``` + +### For Rocky Linux 8.6 + +Here are the questions and our answers for the script, with a little explanation where warranted: + +``` +Would you like to use LXD clustering? 
(yes/no) [default=no]: +``` + +If you are interested in clustering, do some additional research on that [here](https://lxd.readthedocs.io/en/latest/clustering/) + +``` +Do you want to configure a new storage pool? (yes/no) [default=yes]: +``` + +This may seem counter-intuitive, since we have already created our ZFS pool, but it will be resolved in a later question. Accept the default. + +``` +Name of the new storage pool [default=default]: storage +``` + +You could leave this as default if you wanted to, but we have chosen to use the same name we gave our ZFS pool. + +``` +Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: +``` + +Obviously we want to accept the default. + +``` +Create a new ZFS pool? (yes/no) [default=yes]: no +``` + +Here's where the earlier question about creating a storage pool is resolved. + +``` +Name of the existing ZFS pool or dataset: storage +Would you like to connect to a MAAS server? (yes/no) [default=no]: +``` + +Metal As A Service (MAAS) is outside the scope of this document. + +``` +Would you like to create a new local network bridge? (yes/no) [default=yes]: +What should the new bridge be called? [default=lxdbr0]: +What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: +What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none +``` + +If you want to use IPv6 on your LXD containers, you can turn on this option. That is up to you. + +``` +Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes +``` + +This is necessary to snapshot the server, so answer "yes" here. + +``` +Address to bind LXD to (not including port) [default=all]: +Port to bind LXD to [default=8443]: +Trust password for new clients: +Again: +``` + +This trust password is how you will connect to the snapshot server or back from the snapshot server, so set this with something that makes sense in your environment. Save this entry to a secure location, such as a password manager. + +``` +Would you like stale cached images to be updated automatically? (yes/no) [default=yes] +Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: +``` + +### For Rocky Linux 9.0 + +Here are the questions and our answers for the script, with a little explanation where warranted: + +``` +Would you like to use LXD clustering? (yes/no) [default=no]: +``` + +If you are interested in clustering, do some additional research on that [here](https://lxd.readthedocs.io/en/latest/clustering/) + +``` +Do you want to configure a new storage pool? (yes/no) [default=yes]: +Name of the new storage pool [default=default]: storage +``` + +Optionally, you can accept the default. Since we aren't using ZFS, it really is just a choice. + +``` +Name of the storage backend to use (btrfs, dir, lvm, ceph) [default=btrfs]: dir +``` + +Note that dir is somewhat slower than btrfs. If you have the forsight to leave a disk empty, you can use that device (example: /dev/sdb) as the btrfs device and then choose btrfs. dir will work fine + +``` +Would you like to connect to a MAAS server? (yes/no) [default=no]: +``` + +Metal As A Service (MAAS) is outside the scope of this document. + +``` +Would you like to create a new local network bridge? (yes/no) [default=yes]: +What should the new bridge be called? [default=lxdbr0]: +What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: +What IPv6 address should be used? 
(CIDR subnet notation, “auto” or “none”) [default=auto]: none +``` + +If you want to use IPv6 on your LXD containers, you can turn on this option. That is up to you. + +``` +Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes +``` + +This is necessary to snapshot the server, so answer "yes" here. + +``` +Address to bind LXD to (not including port) [default=all]: +Port to bind LXD to [default=8443]: +Trust password for new clients: +Again: +``` + +This trust password is how you will connect to the snapshot server or back from the snapshot server, so set this with something that makes sense in your environment. Save this entry to a secure location, such as a password manager. + +``` +Would you like stale cached images to be updated automatically? (yes/no) [default=yes] +Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: +``` + +## Setting Up User Privileges + +Before we continue on, we need to create our "lxdadmin" user and make sure that it has the privileges it needs. We need the "lxdadmin" user to be able to _sudo_ to root and we need it to be a member of the lxd group. To add the user and make sure it is a member of both groups do: + +``` +useradd -G wheel,lxd lxdadmin +``` + +Then set the password: + +``` +passwd lxdadmin +``` + +As with the other passwords, save this to a secure location. diff --git a/docs/books/lxd_server/04-firewall.md b/docs/books/lxd_server/04-firewall.md new file mode 100644 index 0000000000..f4a4090dfd --- /dev/null +++ b/docs/books/lxd_server/04-firewall.md @@ -0,0 +1,187 @@ +--- +title: 4 Firewall Setup +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6, 9.0 +tags: + - lxd + - enterprise + - lxd security +--- + +# Chapter 4: Firewall Setup + +As with any server, you need to make sure that it is secured from the outside world and on your LAN. While our example server only has a LAN interface, it is totally possible to have two interfaces, one each facing your LAN and WAN networks. While we cover `iptables` rules in this procedure, we **highly** recommend using the `firewalld` procedure instead (see the note below). + +## Firewall Set Up - iptables + +!!! note "A note regarding Rocky Linux 9.0" + + Starting with Rocky Linux 9.0, `iptables` and all of the associated utilities are officially deprecated. This means that in future versions of the OS, perhaps as early as 9.1, they will disappear altogether. For this reason, you should skip down to the `firewalld` procedure below before continuing. + +Before continuing, you will want a firewall set up on your server. This example is using _iptables_ and [this procedure](../security/enabling_iptables_firewall.md) to disable _firewalld_. If you prefer to use _firewalld_, simply substitute in _firewalld_ rules using the instructions below this section. + +Create your firewall.conf script: + +``` +vi /etc/firewall.conf +``` + +We are assuming an LXD server on a LAN network of 192.168.1.0/24 below. Note, too, that we are accepting all traffic from our bridged interface. This is important if you want your containers to get IP addresses from the bridge. + +This firewall script makes no other assumptions about the network services needed. There is an SSH rule to allow our LAN network IP's to SSH into the server. You can very easily have many more rules needed here, depending on your environment. Later, we will be adding a rule for bi-directional traffic between our production server and the snapshot server. 
+ +``` +#!/bin/sh +# +#IPTABLES=/usr/sbin/iptables + +# Unless specified, the defaults for OUTPUT is ACCEPT +# The default for FORWARD and INPUT is DROP +# +echo " clearing any existing rules and setting default policy.." +iptables -F INPUT +iptables -P INPUT DROP +iptables -A INPUT -i lxdbr0 -j ACCEPT +iptables -A INPUT -p tcp -m tcp -s 192.168.1.0/24 --dport 22 -j ACCEPT +iptables -A INPUT -i lo -j ACCEPT +iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT +iptables -A INPUT -p tcp -j REJECT --reject-with tcp-reset +iptables -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable + +/usr/sbin/service iptables save +``` + +## Firewall Set Up - firewalld + +For _firewalld_ rules, we need to use [this basic procedure](../security/firewalld.md) or be familiar with those concepts. Our assumptions are the same as with the _iptables_ rules above: LAN network of 192.168.1.0/24 and a bridge named lxdbr0. To be clear, you might have multiple interfaces on your LXD server, with one perhaps facing your WAN as well. We are also going to create a zone for the bridged and local networks. This is just for zone clarity sake, as the other names do not really apply. The below assumes that you already know the basics of _firewalld_. + +``` +firewall-cmd --new-zone=bridge --permanent +``` + +You need to reload the firewall after adding a zone: + +``` +firewall-cmd --reload +``` + +We want to allow all traffic from the bridge, so let's just add the interface, and then change the target from "default" to "ACCEPT" and we will be done: + +!!! attention + + Changing the target of a firewalld zone *must* be done with the --permanent option, so we might as well just enter that flag in our other commands as well and forgo the --runtime-to-permanent option. + +!!! Note + + If you need to create a zone that you want to allow all access to the interface or source, but do not want to have to specify any protocols or services, then you *must* change the target from "default" to ACCEPT. The same is true of DROP and REJECT for a particular IP block that you have custom zones for. To be clear, the "drop" zone will take care of that for you as long as you aren't using a custom zone. + +``` +firewall-cmd --zone=bridge --add-interface=lxdbr0 --permanent +firewall-cmd --zone=bridge --set-target=ACCEPT --permanent +``` +Assuming no errors and everything is still working just do a reload: + +``` +firewall-cmd --reload +``` +If you list out your rules now with `firewall-cmd --zone=bridge --list-all` you should see something like the following: + +``` +bridge (active) + target: ACCEPT + icmp-block-inversion: no + interfaces: lxdbr0 + sources: + services: + ports: + protocols: + forward: no + masquerade: no + forward-ports: + source-ports: + icmp-blocks: + rich rules: +``` +Note from the _iptables_ rules, that we also want to allow our local interface. 
Again, we do not like the included zones for this, so create a new zone and use the source IP range for the local interface to make sure you have access:
+
+```
+firewall-cmd --new-zone=local --permanent
+firewall-cmd --reload
+```
+
+Then we just need to add the source IP's for the local interface, change the target to "ACCEPT" and we are done with this as well:
+
+```
+firewall-cmd --zone=local --add-source=127.0.0.1/8 --permanent
+firewall-cmd --zone=local --set-target=ACCEPT --permanent
+firewall-cmd --reload
+```
+
+Go ahead and list out the "local" zone to make sure your rules are there with `firewall-cmd --zone=local --list-all`, which should show you something like this:
+
+```
+local (active)
+  target: ACCEPT
+  icmp-block-inversion: no
+  interfaces:
+  sources: 127.0.0.1/8
+  services:
+  ports:
+  protocols:
+  forward: no
+  masquerade: no
+  forward-ports:
+  source-ports:
+  icmp-blocks:
+  rich rules:
+```
+
+Next we want to allow SSH from our trusted network. We will use the source IP's here, just like in our _iptables_ example, and the built-in "trusted" zone. The target for this zone is already "ACCEPT" by default.
+
+```
+firewall-cmd --zone=trusted --add-source=192.168.1.0/24
+```
+
+Then add the service to the zone:
+
+```
+firewall-cmd --zone=trusted --add-service=ssh
+```
+
+And if everything is working, move your rules to permanent and reload the rules:
+
+```
+firewall-cmd --runtime-to-permanent
+firewall-cmd --reload
+```
+
+Listing out your "trusted" zone should now show you something like this:
+
+```
+trusted (active)
+  target: ACCEPT
+  icmp-block-inversion: no
+  interfaces:
+  sources: 192.168.1.0/24
+  services: ssh
+  ports:
+  protocols:
+  forward: no
+  masquerade: no
+  forward-ports:
+  source-ports:
+  icmp-blocks:
+  rich rules:
+```
+
+By default, the "public" zone is enabled and has SSH allowed. We don't want this. Make sure that your zones are correct and that the access you are getting to the server is via one of the LAN IP's (in the case of our example) and is allowed to SSH. You could lock yourself out of the server if you don't verify this before continuing. Once you've made sure you have access from the correct interface, remove SSH from the "public" zone:
+
+```
+firewall-cmd --zone=public --remove-service=ssh
+```
+
+Test access and make sure you are not locked out. If everything still works, move your rules to permanent, reload, and list out zone "public" to be sure that SSH is removed:
+
+```
+firewall-cmd --runtime-to-permanent
+firewall-cmd --reload
+firewall-cmd --zone=public --list-all
+```
+
+There may be other interfaces on your server to consider. You can use built-in zones where appropriate, but if you don't like the names (they don't appear logical, etc.), you can definitely add zones. Just remember that if you have no services or protocols that you need to allow or reject specifically, then you will need to modify the zone target. If it works to use interfaces, as we've done with the bridge, you can do that. If you need more granular access to services, use source IP's instead.
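+
+Once all of the zones are in place, it is worth taking one last look at the overall layout. These commands are read-only, so they are a safe way to confirm what we configured above:
+
+```
+firewall-cmd --get-active-zones
+firewall-cmd --list-all-zones | less
+```
+
+You should see the "bridge", "local", and "trusted" zones active with the interfaces and sources we assigned to them, and the "public" zone without the ssh service.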
diff --git a/docs/books/lxd_server/05-lxd_images.md b/docs/books/lxd_server/05-lxd_images.md new file mode 100644 index 0000000000..208141b708 --- /dev/null +++ b/docs/books/lxd_server/05-lxd_images.md @@ -0,0 +1,111 @@ +--- +title: 5 Setting Up and Managing Images +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6, 9.0 +tags: + - lxd + - enterprise + - lxd images +--- + +# Chapter 5: Setting Up and Managing Images + +Throughout this chapter and from here on out unless otherwise noted, you will be running commands as your unprivileged user. ("lxdadmin" if you are following along with these documents). + +## List Available Images + +Once you have your server environment set up, you'll probably be itching to get started with a container. There are a _lot_ of container OS possibilities. To get a feel for how many possibilities, enter this command: + +``` +lxc image list images: | more +``` + +Hit the space bar to page through the list. This list of containers and virtual machines continues to grow. For now, we are sticking with containers. + +The last thing you want to do is to page through looking for a container image to install, particularly if you know the image that you want to create. Let's modify the command above to show only Rocky Linux install options: + +``` +lxc image list images: | grep rocky +``` + +This brings up a much more manageable list: + +``` +| rockylinux/8 (3 more) | 0ed2f148f7c6 | yes | Rockylinux 8 amd64 (20220805_02:06) | x86_64 | CONTAINER | 128.68MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/8 (3 more) | 6411a033fdf1 | yes | Rockylinux 8 amd64 (20220805_02:06) | x86_64 | VIRTUAL-MACHINE | 643.15MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/8/arm64 (1 more) | e677777306cf | yes | Rockylinux 8 arm64 (20220805_02:29) | aarch64 | CONTAINER | 124.06MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/8/cloud (1 more) | 3d2fe303afd3 | yes | Rockylinux 8 amd64 (20220805_02:06) | x86_64 | CONTAINER | 147.04MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/8/cloud (1 more) | 7b37619bf333 | yes | Rockylinux 8 amd64 (20220805_02:06) | x86_64 | VIRTUAL-MACHINE | 659.58MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/8/cloud/arm64 | 21c930b2ce7d | yes | Rockylinux 8 arm64 (20220805_02:06) | aarch64 | CONTAINER | 143.17MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/9 (3 more) | 61b0171b7eca | yes | Rockylinux 9 amd64 (20220805_02:07) | x86_64 | VIRTUAL-MACHINE | 526.38MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/9 (3 more) | e7738a0e2923 | yes | Rockylinux 9 amd64 (20220805_02:07) | x86_64 | CONTAINER | 107.80MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/9/arm64 (1 more) | 917b92a54032 | yes | Rockylinux 9 arm64 (20220805_02:06) | aarch64 | CONTAINER | 103.81MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/9/cloud (1 more) | 16d3f18f2abb | yes | Rockylinux 9 amd64 (20220805_02:06) | x86_64 | CONTAINER | 123.52MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/9/cloud (1 more) | 605eadf1c512 | yes | Rockylinux 9 amd64 (20220805_02:06) | x86_64 | VIRTUAL-MACHINE | 547.39MB | Aug 5, 2022 at 12:00am (UTC) | +| rockylinux/9/cloud/arm64 | db3ce70718e3 | yes | Rockylinux 9 arm64 (20220805_02:06) | aarch64 | CONTAINER | 119.27MB | Aug 5, 2022 at 12:00am (UTC) | +``` + +## Installing, Renaming, And Listing Images + +For the first container, we are going to choose rockylinux/8. To install it, we *could* use: + +``` +lxc launch images:rockylinux/8 rockylinux-test-8 +``` + +That will create a Rocky Linux-based containter named "rockylinux-test-8". 
You can rename a container after it has been created, but you first need to stop the container, which starts automatically when it is launched. + +To start the container manually, use: + +``` +lxc start rockylinux-test-8 +``` + +To rename this image (we aren't going to do this here, but this is how it is done) first stop the container: + +``` +lxc stop rockylinux-test-8 +``` + +Then simply move the container to a new name: + +``` +lxc move rockylinux-test-8 rockylinux-8 +``` + +If you followed this instruction anyway, stop the container and move it back to the original name to continue to follow along. + +For the purposes of this guide, go ahead and install two more images for now: + +``` +lxc launch images:rockylinux/9 rockylinux-test-9 +``` + +and + +``` +lxc launch images:ubuntu/22.04 ubuntu-test +``` + +Now let's take a look at what we have so far by listing our images: + +``` +lxc list +``` + +which should return something like this: + +``` ++-------------------+---------+----------------------+------+-----------+-----------+ +| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | ++-------------------+---------+----------------------+------+-----------+-----------+ +| rockylinux-test-8 | RUNNING | 10.146.84.179 (eth0) | | CONTAINER | 0 | ++-------------------+---------+----------------------+------+-----------+-----------+ +| rockylinux-test-9 | RUNNING | 10.146.84.180 (eth0) | | CONTAINER | 0 | ++-------------------+---------+----------------------+------+-----------+-----------+ +| ubuntu-test | RUNNING | 10.146.84.181 (eth0) | | CONTAINER | 0 | ++-------------------+---------+----------------------+------+-----------+-----------+ + +``` + diff --git a/docs/books/lxd_server/06-profiles.md b/docs/books/lxd_server/06-profiles.md new file mode 100644 index 0000000000..94f0852c57 --- /dev/null +++ b/docs/books/lxd_server/06-profiles.md @@ -0,0 +1,365 @@ +--- +title: 6 Profiles +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6, 9.0 +tags: + - lxd + - enterprise + - lxd profiles +--- + +# Chapter 6: Profiles + +You get a default profile when you install LXD, and this profile cannot be removed or modified. That said, you can use the default profile to create new profiles to use with your containers. + +If you look at our container listing you will notice that the IP address in each case is assigned from the bridged interface. In a production environment, you may want to use something else. This might be a DHCP assigned address from your LAN interface or even a statically assigned address from your WAN. + +If you configure your LXD server with two interfaces, and assign each an IP on your WAN and LAN, then it is possible to assign your containers IP addresses based on which interface the container needs to be facing. + +As of version 9.0 of Rocky Linux (and really any bug for bug copy of Red Hat Enterprise Linux) the method for assigning IP addresses statically or dynamically using the profiles below, is broken out of the gate. + +There are ways to get around this, but it is annoying. This appears to have something to do with changes that have been made to Network Manager that affect macvlan. macvlan allows you to create multiple interfaces with different Layer 2 addresses. + +For now, just be aware that what we are going to suggest next has drawbacks when choosing container images based on RHEL. 
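+
+Before creating anything new, it can also be helpful to see what the default profile already provides, since every container in this guide uses it alongside whatever we add. This is purely informational:
+
+```
+lxc profile list
+lxc profile show default
+```
+
+You should see a root disk device pointing at the storage pool you chose during `lxd init` and an eth0 NIC attached to the lxdbr0 bridge, which is why the containers so far have picked up their addresses from the bridge.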
+ +## Creating A macvlan Profile And Assigning It + +To create our macvlan profile, simply use this command: + +``` +lxc profile create macvlan +``` + +Keep in mind that if we were on a multi-interface machine and wanted more than one macvlan template based on which network we wanted to reach, we could use "lanmacvlan" or "wanmacvlan" or any other name that we wanted to use to identify the profile. In other words, using "macvlan" in our profile create statement is totally up to you. + +Now we want to modify the macvlan interface, but before we do, we need to know what the parent interface is for our LXD server. This will be the interface that has a LAN (in this case) assigned IP. To determine which interface that is, use: + +``` +ip addr +``` + +And then look for the interface with the LAN IP assignment in the 192.168.1.0/24 network: + +``` +2: enp3s0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 + link/ether 40:16:7e:a9:94:85 brd ff:ff:ff:ff:ff:ff + inet 192.168.1.106/24 brd 192.168.1.255 scope global dynamic noprefixroute enp3s0 + valid_lft 4040sec preferred_lft 4040sec + inet6 fe80::a308:acfb:fcb3:878f/64 scope link noprefixroute + valid_lft forever preferred_lft forever +``` + +So in this case, the interface would be "enp3s0". + +Now let's modify the profile: + +``` +lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp3s0 +``` + +This command adds all of the necessary parameters to the macvlan profile so that it can be used. + +To take a look at what this command created, use the command: + +``` +lxc profile show macvlan +``` + +Which will give you output similar to this: + + +``` +config: {} +description: "" +devices: + eth0: + nictype: macvlan + parent: enp3s0 + type: nic +name: macvlan +used_by: [] +``` + +Obviously, you can use profiles for lots of other things, but assigning a static IP to a container, or using your own DHCP server as a source for an address are very common needs. + +To assign the macvlan profile to rockylinux-test-8 we need to do the following: + +``` +lxc profile assign rockylinux-test-8 default,macvlan +``` + +Let's also do the same thing for rockylinux-test-9: + +``` +lxc profile assign rockylinux-test-9 default,macvlan +``` + +This simply says, we want the default profile, and then we want to apply the macvlan profile as well. + +## Rocky Linux macvlan + +The upstream has been playing with the Network Manager implementation to the point of frustration. Early on with Rocky Linux 8, the macvlan profile was broken. With Rocky Linux 8.6, the macvlan profile worked again, and then with 9.0, it was once again broken. Technically, +macvlan is part of the kernel, and the profile does assign correctly in 9.0, but an IP address from the LAN interface network is never assigned. There are work arounds to fix this, but none is very pretty-particularly if you are wanting to assign a static IP address. + +Keep in mind that none of this has anything to do with Rocky Linux particularly, but with the upstream package implementation. + +Simply put, if you want to run Rocky Linux containers and use macvlan to assign an IP address from your LAN or WAN networks, then the process is different based on which container version of the OS you are using (8.6 or 9.0). + +### Rocky Linux 9.0 macvlan - The DHCP Fix + +First, let's illustrate what happens when we stop and restart the two containers after assigning the macvlan profile. + +Having the profile assigned, however, doesn't change the default configuration, which is set to DHCP by default. 
+
+To test this, simply do the following:
+
+```
+lxc restart rockylinux-test-8
+lxc restart rockylinux-test-9
+```
+
+Now list your containers again and note that rockylinux-test-9 no longer has an IP address:
+
+```
+lxc list
+```
+
+```
++-------------------+---------+----------------------+------+-----------+-----------+
+|       NAME        |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
++-------------------+---------+----------------------+------+-----------+-----------+
+| rockylinux-test-8 | RUNNING | 192.168.1.114 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+| rockylinux-test-9 | RUNNING |                      |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+| ubuntu-test       | RUNNING | 10.146.84.181 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+
+```
+
+As you can see, our Rocky Linux 8.6 container received an IP address from the LAN interface, whereas the Rocky Linux 9.0 container did not.
+
+To further demonstrate the problem here, we need to execute `dhclient` on the Rocky Linux 9.0 container. This will show us that the macvlan profile *is* in fact applied:
+
+```
+lxc exec rockylinux-test-9 dhclient
+```
+
+A new listing using `lxc list` now shows the following:
+
+```
++-------------------+---------+----------------------+------+-----------+-----------+
+|       NAME        |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
++-------------------+---------+----------------------+------+-----------+-----------+
+| rockylinux-test-8 | RUNNING | 192.168.1.114 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+| rockylinux-test-9 | RUNNING | 192.168.1.113 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+| ubuntu-test       | RUNNING | 10.146.84.181 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+```
+
+That should have happened with a simple stop and start of the container, but it did not. Assuming that we want to use a DHCP assigned IP address every time, we can fix this with a simple crontab entry. To do this, we need to gain shell access to the container by entering:
+
+```
+lxc exec rockylinux-test-9 bash
+```
+
+Next, let's determine the complete path to `dhclient`. Because this container was built from a minimal image, you will need to install `which` first:
+
+```
+dnf install which
+```
+
+then run:
+
+```
+which dhclient
+```
+
+which should return:
+
+```
+/usr/sbin/dhclient
+```
+
+Next, let's modify root's crontab:
+
+```
+crontab -e
+```
+
+And add this line:
+
+```
+@reboot /usr/sbin/dhclient
+```
+
+The crontab command entered above uses _vi_, so to save your changes and exit, simply use:
+
+```
+SHIFT:wq!
+``` + +Now exit the container and restart rockylinux-test-9: + +``` +lxc restart rockylinux-test-9 +``` + +A new listing will reveal that the container has been assigned the DHCP address: + +``` ++-------------------+---------+----------------------+------+-----------+-----------+ +| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | ++-------------------+---------+----------------------+------+-----------+-----------+ +| rockylinux-test-8 | RUNNING | 192.168.1.114 (eth0) | | CONTAINER | 0 | ++-------------------+---------+----------------------+------+-----------+-----------+ +| rockylinux-test-9 | RUNNING | 192.168.1.113 (eth0) | | CONTAINER | 0 | ++-------------------+---------+----------------------+------+-----------+-----------+ +| ubuntu-test | RUNNING | 10.146.84.181 (eth0) | | CONTAINER | 0 | ++-------------------+---------+----------------------+------+-----------+-----------+ + +``` + +### Rocky Linux 9.0 macvlan - The Static IP Fix + +To statically assign an IP address, things get even more convoluted. Since `network-scripts` is now deprecated in Rocky Linux 9.0, the only way to do this is through static assignment, and because of the way the containers use the network, you're not going to be able to set the route with a normal `ip route` statement either. The fix is to allow the container to get a dynamically allocated IP from your router, and then to script the addition of the static IP along with the removal of the dynamically assigned one. We have already run `dhclient` and have seen the dynamically assigned IP, so we can use this (192.168.1.113) as the IP to delete. IF your router decides to hand out a different IP, the worst that can happen is that your container will end up with two IP's on the same network. + +To do this, we need to gain shell access to the container again: + +``` +lxc exec rockylinux-test-9 bash +``` + +Next, we are going to create a bash script in `/usr/local/sbin` called "static" + +``` +vi /usr/local/sbin/static +``` + +The contents of this script are simple: + +``` +#!/usr/bin/env bash + +/usr/sbin/dhclient +sleep 3 +/usr/sbin/ip addr add 192.168.1.151/24 dev eth0 +sleep 3 +/usr/sbin/ip addr del 192.168.1.113/24 dev eth0 +``` + +So what are we doing here? First, we run `dhclient` because we need the route that is created automatically when we do this. Deleting the IP will not delete the route. Second, we assign the new static IP that we have allocated for our container. In this case 192.168.1.151. and last, we delete the ip that was dynamically assigned. The sleep commands between lines gives each command time to complete before moving on to the next one. It is particularly important that `dhclient` has time to run so that the route will be added before we do the rest of the steps. 
+
+Make our script executable with:
+
+```
+chmod +x /usr/local/sbin/static
+```
+
+Finally, exit the container and restart it:
+
+```
+lxc restart rockylinux-test-9
+```
+
+Wait a few seconds and then list out the containers again:
+
+```
+lxc list
+```
+
+Which should show you success:
+
+```
++-------------------+---------+----------------------+------+-----------+-----------+
+|       NAME        |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
++-------------------+---------+----------------------+------+-----------+-----------+
+| rockylinux-test-8 | RUNNING | 192.168.1.114 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+| rockylinux-test-9 | RUNNING | 192.168.1.151 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+| ubuntu-test       | RUNNING | 10.146.84.181 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+```
+
+## Ubuntu macvlan
+
+Luckily, in Ubuntu's implementation of Network Manager, the macvlan stack is NOT broken, so it is much easier to deploy!
+
+First, just like with our rockylinux-test-9 container, we need to assign the template to our container:
+
+```
+lxc profile assign ubuntu-test default,macvlan
+```
+
+That should be all that is necessary to get a DHCP assigned address. To find out, stop and then start the container again:
+
+```
+lxc restart ubuntu-test
+```
+
+Then list the containers again:
+
+```
++-------------------+---------+----------------------+------+-----------+-----------+
+|       NAME        |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
++-------------------+---------+----------------------+------+-----------+-----------+
+| rockylinux-test-8 | RUNNING | 192.168.1.114 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+| rockylinux-test-9 | RUNNING | 192.168.1.151 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+| ubuntu-test       | RUNNING | 192.168.1.132 (eth0) |      | CONTAINER | 0         |
++-------------------+---------+----------------------+------+-----------+-----------+
+```
+
+Success!
+
+Configuring a static IP is just a little different, but not at all hard. We need to modify the netplan .yaml file associated with the container's connection (10-lxc.yaml in this case). For this static IP, we will use 192.168.1.201:
+
+```
+vi /etc/netplan/10-lxc.yaml
+```
+
+And change what is there to the following:
+
+```
+network:
+  version: 2
+  ethernets:
+    eth0:
+      dhcp4: false
+      addresses: [192.168.1.201/24]
+      gateway4: 192.168.1.1
+      nameservers:
+        addresses: [8.8.8.8,8.8.4.4]
+```
+
+Save your changes (`SHIFT:wq!`) and exit the container.
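+
+If you would rather test the new configuration before restarting anything, netplan can also apply it in place. This is optional, and it assumes netplan behaves inside the container the way it does on a full Ubuntu install; the restart shown next accomplishes the same thing:
+
+```
+lxc exec ubuntu-test -- netplan apply
+```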
+ +Now restart the container: + +``` +lxc restart ubuntu-test +``` + +When you list your containers again, you should see our new static IP: + +``` ++-------------------+---------+----------------------+------+-----------+-----------+ +| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | ++-------------------+---------+----------------------+------+-----------+-----------+ +| rockylinux-test-8 | RUNNING | 192.168.1.114 (eth0) | | CONTAINER | 0 | ++-------------------+---------+----------------------+------+-----------+-----------+ +| rockylinux-test-9 | RUNNING | 192.168.1.151 (eth0) | | CONTAINER | 0 | ++-------------------+---------+----------------------+------+-----------+-----------+ +| ubuntu-test | RUNNING | 192.168.1.201 (eth0) | | CONTAINER | 0 | ++-------------------+---------+----------------------+------+-----------+-----------+ + +``` + +Success! + +In the examples used in this chapter, we have intentionally chosen a hard container to configure, and two easy ones. There are obviously many more versions of Linux available in the image listing. If you have a favorite, try installing it, assigning the macvlan template, and setting IP's. diff --git a/docs/books/lxd_server/07-configurations.md b/docs/books/lxd_server/07-configurations.md new file mode 100644 index 0000000000..8e9b086aca --- /dev/null +++ b/docs/books/lxd_server/07-configurations.md @@ -0,0 +1,133 @@ +--- +title: 7 Container Configuration Options +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6, 9.0 +tags: + - lxd + - enterprise + - lxd configuration +--- + +# Chapter 7: Container Configuration Options + +There are a wealth of options for configuring the container once you have it installed. Before we get into how to see those, however, let's take a look at the info command for a container. In this example, we will use the ubuntu-test container: + +``` +lxc info ubuntu-test +``` + +This shows something like the following: + +``` +Name: ubuntu-test +Location: none +Remote: unix:// +Architecture: x86_64 +Created: 2021/04/26 15:14 UTC +Status: Running +Type: container +Profiles: default, macvlan +Pid: 584710 +Ips: + eth0: inet 192.168.1.201 enp3s0 + eth0: inet6 fe80::216:3eff:fe10:6d6d enp3s0 + lo: inet 127.0.0.1 + lo: inet6 ::1 +Resources: + Processes: 13 + Disk usage: + root: 85.30MB + CPU usage: + CPU usage (in seconds): 1 + Memory usage: + Memory (current): 99.16MB + Memory (peak): 110.90MB + Network usage: + eth0: + Bytes received: 53.56kB + Bytes sent: 2.66kB + Packets received: 876 + Packets sent: 36 + lo: + Bytes received: 0B + Bytes sent: 0B + Packets received: 0 + Packets sent: 0 +``` + +There's a lot of good information there, from the profiles applied, to the memory in use, disk space in use, and more. + +### A Word About Configuration And Some Options + +By default, LXD will allocate the required system memory, disk space, CPU cores, etc., to the container. But what if we want to be more specific? That is totally possible. + +There are trade-offs to doing this, though. For instance, if we allocate system memory and the container doesn't actually use it all, then we have kept it from another container that might actually need it. The reverse, though, can happen. If a container is a complete pig on memory, then it can keep other containers from getting enough, thereby pinching their performance. + +Just keep in mind that every action you make to configure a container _can_ have negative effects somewhere else. 
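+
+Before setting anything, it is a good habit to look at what is already configured on a container. The `--expanded` flag folds in everything inherited from its profiles, so you see the effective configuration rather than just the local overrides:
+
+```
+lxc config show ubuntu-test
+lxc config show ubuntu-test --expanded
+```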
+ +Rather than run through all of the options for configuration, use the tab auto-complete to see the options available: + +``` +lxc config set ubuntu-test +``` + +and then hit TAB. + +This shows you all of the options for configuring a container. If you have questions about what one of the configuration options does, head up to the [official documentation for LXD](https://lxd.readthedocs.io/en/stable-4.0/instances/) and do a search for the configuration parameter, or Google the entire string, such as "lxc config set limits.memory" and take a look at the results of the search. + +We will look at a few of the most used configuration options. For example, if you want to set the max amount of memory that a container can use: + +``` +lxc config set ubuntu-test limits.memory 2GB +``` + +That says that as long as the memory is available to use, in other words there is 2GB of memory free, then the container can actually use more than 2GB if it's available. It's a soft limit, in other words. + +``` +lxc config set ubuntu-test limits.memory.enforce 2GB +``` + +That says that the container can never use more than 2GB of memory, whether it's currently available or not. In this case it's a hard limit. + +``` +lxc config set ubuntu-test limits.cpu 2 +``` + +That says to limit the number of cpu cores that the container can use to 2. + +Remember when we set up our storage pool in the ZFS chapter? We named the pool "storage," but we could have named it anything. If we want to look at this, we can use this command, which works equally well for any of the other pool types too (as shown for dir): + +``` +lxc storage show storage +``` + + +This shows the following: + +``` +config: + source: /var/snap/lxd/common/lxd/storage-pools/storage +description: "" +name: storage +driver: dir +used_by: +- /1.0/instances/rockylinux-test-8 +- /1.0/instances/rockylinux-test-9 +- /1.0/instances/ubuntu-test +- /1.0/profiles/default +status: Created +locations: +- none +``` + +This shows that all of our containers are using our dir storage pool. When using ZFS, you can also set a disk quota on a container. Here's what that would look like setting a 2GB disk quota on the ubuntu-test container. You do this with: + +``` +lxc config device override ubuntu-test root size=2GB +``` + +As stated earlier, you should use configuration options sparingly, unless you've got a container that wants to use way more than its share of resources. LXD, for the most part, will manage the environment well on its own. + +There are, of course, many more options that may be of interest to some people. You should do your own research to find out if any of those are of value in your environment. + diff --git a/docs/books/lxd_server/08-snapshots.md b/docs/books/lxd_server/08-snapshots.md new file mode 100644 index 0000000000..26085f801b --- /dev/null +++ b/docs/books/lxd_server/08-snapshots.md @@ -0,0 +1,95 @@ +--- +title: 8 Container Snapshots +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6, 9.0 +tags: + - lxd + - enterprise + - lxd snapshots +--- + +# Chapter 8: Container Snapshots + +Container snapshots, along with a snapshot server (which we will get to more later), are probably the most important aspect of running a production LXD server. Snapshots ensure quick recovery, and can be used for safety when you are, say, updating the primary software that runs on a particular container. 
If something happens during the update that breaks that application, you simply restore the snapshot and you are back up and running with only a few seconds worth of downtime. + +The author used LXD containers for PowerDNS public facing servers, and the process of updating those applications became so much more worry-free, since you can snapshot the container first before continuing. + +You can even snapshot a container while it is running. + +## The snapshot process + +We'll start by getting a snapshot of the ubuntu-test container by using this command: + +``` +lxc snapshot ubuntu-test ubuntu-test-1 +``` + +Here, we are calling the snapshot "ubuntu-test-1", but it can be called anything you like. To make sure that you have the snapshot, do an "lxc info" of the container: + +``` +lxc info ubuntu-test +``` + +We've looked at an info screen already, so if you scroll to the bottom, you should see: + +``` +Snapshots: + ubuntu-test-1 (taken at 2021/04/29 15:57 UTC) (stateless) +``` + +Success! Our snapshot is in place. + +Now, get into the ubuntu-test container: + +``` +lxc exec ubuntu-test bash +``` + +And create an empty file with the _touch_ command: + +``` +touch this_file.txt +``` + +Now exit the container. + +Before we restore the container as it was prior to creating the file, the safest way to restore a container, particularly if there have been a lot of changes, is to stop it first: + +``` +lxc stop ubuntu-test +``` + +Then restore it: + +``` +lxc restore ubuntu-test ubuntu-test-1 +``` + +Then start the container again: + +``` +lxc start ubuntu-test +``` + +If you get back into the container again and look, our "this_file.txt" that we created is now gone. + +Once you don't need a snapshot anymore, you can delete it: + +``` +lxc delete ubuntu-test/ubuntu-test-1 +``` + +!!! important + + You should always delete snapshots with the container running. Why? Well the _lxc delete_ command also works to delete the entire container. If we had accidentally hit enter after "ubuntu-test" in the command above, AND, if the container was stopped, the container would be deleted. No warning is given, it simply does what you ask. + + If the container is running, however, you will get this message: + + ``` + Error: The instance is currently running, stop it first or pass --force + ``` + + So always delete snapshots with the container running. + +The process of creating snapshots automatically, setting expiration of the snapshot so that it goes away after a certain length of time, and auto refreshing the snapshots to the snapshot server will be covered in detail in the following chapters. diff --git a/docs/books/lxd_server/09-snapshot_server.md b/docs/books/lxd_server/09-snapshot_server.md new file mode 100644 index 0000000000..8ee7c9eadd --- /dev/null +++ b/docs/books/lxd_server/09-snapshot_server.md @@ -0,0 +1,185 @@ +--- +title: 9 Snapshot Server +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6, 9.0 +tags: + - lxd + - enterprise + - lxd snapshot server +--- + +# Chapter 9: Snapshot Server + +As noted at the beginning, the snapshot server for LXD should be a mirror of the production server in every way possible. The reason is that you may need to take it to production in the event of a hardware failure, and having not only backups, but a quick way to bring up production containers, keeps those systems administrator panic phone calls and text messages to a minimum. THAT is ALWAYS good! + +So the process of building the snapshot server is exactly like the production server. 
To fully emulate our production server setup, do all of Chapters 1-4 again on the snapshot server, and when completed, return to this spot.

You're back! Congratulations, this must mean that you have successfully completed the basic install for the snapshot server. That's great news!

## Setting Up The Primary and Snapshot Server Relationship

We've got some housekeeping to do before we continue. First, if you are running in a production environment, you probably have access to a DNS server that you can use for setting up IP-to-name resolution.

In our lab, we don't have that luxury. Perhaps you are in the same situation. For this reason, we are going to add both servers' IP addresses and names to the /etc/hosts file on BOTH the primary and the snapshot server. You'll need to do this as your root (or _sudo_) user.

In our lab, the primary LXD server is running on 192.168.1.106 and the snapshot LXD server is running on 192.168.1.141. We will SSH into both servers and add the following to the /etc/hosts file:

```
192.168.1.106 lxd-primary
192.168.1.141 lxd-snapshot
```

Next, we need to allow all traffic between the two servers. Do this with whichever firewall you set up in Chapter 4.

### IPTables - Rocky Linux 8.6 and below only

Add the rules to the /etc/firewall.conf script. On the lxd-primary server, add this line:

```
iptables -A INPUT -s 192.168.1.141 -j ACCEPT
```

And on the lxd-snapshot server, add this line:

```
iptables -A INPUT -s 192.168.1.106 -j ACCEPT
```

This allows bi-directional traffic of all types to travel between the two servers.

### Firewalld - Rocky Linux 9.0 (also works with 8.x)

On the lxd-primary server, add this rule:

```
firewall-cmd --zone=trusted --add-source=192.168.1.141 --permanent
```

and on the snapshot server, add this rule:

```
firewall-cmd --zone=trusted --add-source=192.168.1.106 --permanent
```

then reload:

```
firewall-cmd --reload
```

## Setting Up The Primary and Snapshot Server Relationship (continued)

Next, as the "lxdadmin" user, we need to set the trust relationship between the two machines. This is done by executing the following on lxd-primary:

```
lxc remote add lxd-snapshot
```

This will display the certificate to accept. Do that, and it will then prompt for your password. This is the "trust password" that you set up when doing the LXD initialization step. Hopefully, you are securely keeping track of all of these passwords. Once you enter the password, you should receive this:

```
Client certificate stored at server: lxd-snapshot
```

It does not hurt to have this done in reverse as well. In other words, set the trust relationship on the lxd-snapshot server so that, if needed, snapshots can be sent back to the lxd-primary server. Simply repeat the steps and substitute in "lxd-primary" for "lxd-snapshot."

### Migrating Our First Snapshot

Before we can migrate our first snapshot, we need to have any profiles created on lxd-snapshot that we have created on the lxd-primary. In our case, this is the "macvlan" profile.

You'll need to create this for lxd-snapshot, so go back to [Chapter 6](06-profiles.md) and create the "macvlan" profile on lxd-snapshot. 
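Once the profile exists on lxd-snapshot, you can confirm it from lxd-primary. A quick check, assuming the remote was added as "lxd-snapshot" as shown above:

```
# List the profiles that exist on the snapshot server
lxc profile list lxd-snapshot:

# Display the profile to confirm the parent interface matches that server's NIC
lxc profile show lxd-snapshot:macvlan
```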
If your two servers have identical parent interface names ("enp3s0" for example) then you can copy the "macvlan" profile over to lxd-snapshot without recreating it: + +``` +lxc profile copy macvlan lxd-snapshot +``` + +Now that we have all of the relationships and profiles set up, the next step is to actually send a snapshot from lxd-primary over to lxd-snapshot. If you've been following along exactly, you've probably deleted all of your snapshots, so let's create a new one: + +``` +lxc snapshot rockylinux-test-9 rockylinux-test-9-snap1 +``` + +If you run the "info" sub-command for lxc, you can see the new snapshot on the bottom of our listing: + +``` +lxc info rockylinux-test-9 +``` + +Which will show something like this at the bottom: + +``` +rockylinux-test-9-snap1 at 2021/05/13 16:34 UTC) (stateless) +``` + +OK, fingers crossed! Let's try to migrate our snapshot: + +``` +lxc copy rockylinux-test-9/rockylinux-test-9-snap1 lxd-snapshot:rockylinux-test-9 +``` + +What this command says is, that within the container rockylinux-test-9, we want to send the snapshot, rockylinux-test-9-snap1 over to lxd-snapshot and copy it as rockylinux-test-9. + +After a short period of time has expired, the copy will be complete. Want to find out for sure? Do an "lxc list" on the lxd-snapshot server. Which should return the following: + +``` ++-------------------+---------+------+------+-----------+-----------+ +| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | ++-------------------+---------+------+------+-----------+-----------+ +| rockylinux-test-9 | STOPPED | | | CONTAINER | 0 | ++-------------------+---------+------+------+-----------+-----------+ +``` + +Success! Now let's try starting it. Because we are starting it on the lxd-snapshot server, we need to stop it first on the lxd-primary server: + +``` +lxc stop rockylinux-test-9 +``` + +And on the lxd-snapshot server: + +``` +lxc start rockylinux-test-9 +``` + +Assuming all of this works without error, stop the container on lxd-snapshot and start it again on lxd-primary. + +## Setting boot.autostart To Off For Containers + +The snapshots copied to lxd-snapshot will be down when they are migrated, but if you have a power event or need to reboot the snapshot server because of updates or something, you will end up with a problem as those containers will attempt to start on the snapshot server. + +To eliminate this, we need to set the migrated containers so that they will not start on reboot of the server. For our newly copied rockylinux-test-9 container, this is done with the following: + +``` +lxc config set rockylinux-test-9 boot.autostart 0 +``` + +Do this for each snapshot on the lxd-snapshot server. + +## Automating The Snapshot Process + +Ok, so it's great that you can create snapshots when you need to, and sometimes you _do_ need to manually create a snapshot. You might even want to manually copy it over to lxd-snapshot. BUT, once you've got things going and you've got 25 to 30 containers or more running on your lxd-primary machine, the very last thing you want to do is spend an afternoon deleting snapshots on the snapshot server, creating new snapshots and sending them over. + +The first thing we need to do is schedule a process to automate snapshot creation on lxd-primary. This has to be done for each container on the lxd-primary server, but once it is set up, it will take care of itself. This is done with the following syntax. 
Note the similarities to a crontab entry for the timestamp:

```
lxc config set [container_name] snapshots.schedule "50 20 * * *"
```

This says to take a snapshot of the named container every day at 8:50 PM.

To apply this to our rockylinux-test-9 container:

```
lxc config set rockylinux-test-9 snapshots.schedule "50 20 * * *"
```

We also want the snapshot name to include a meaningful date. LXD uses UTC everywhere, so our best bet for keeping track of things is to set the snapshot name to a date/time stamp in a more understandable format:

```
lxc config set rockylinux-test-9 snapshots.pattern "rockylinux-test-9{{ creation_date|date:'2006-01-02_15-04-05' }}"
```

GREAT, but we certainly don't want a new snapshot every day without getting rid of an old one, right? We'd fill up the drive with snapshots. So next we run:

```
lxc config set rockylinux-test-9 snapshots.expiry 1d
```

diff --git a/docs/books/lxd_server/10-automating.md b/docs/books/lxd_server/10-automating.md new file mode 100644 index 0000000000..27fc9f8f0e --- /dev/null +++ b/docs/books/lxd_server/10-automating.md @@ -0,0 +1,70 @@ +--- +title: 10 Automating Snapshots +author: Steven Spencer +contributors: Ezequiel Bruni +tested with: 8.5, 8.6, 9.0 +tags: + - lxd + - enterprise + - lxd automation +--- + +# Chapter 10: Automating Snapshots 

Automating the snapshot process makes things a whole lot easier.

## Automating The Snapshot Copy Process

This process is performed on lxd-primary. The first thing we need to do is create a script in /usr/local/sbin called "refreshcontainers.sh" that will be run by cron:

```
sudo vi /usr/local/sbin/refreshcontainers.sh
```

The script is pretty simple:

```
#!/bin/bash
# This script is for doing an lxc copy --refresh against each container, copying
# and updating them to the snapshot server.

for x in $(/var/lib/snapd/snap/bin/lxc ls -c n --format csv)
do
    echo "Refreshing $x"
    /var/lib/snapd/snap/bin/lxc copy --refresh "$x" lxd-snapshot:"$x"
done
```

Make it executable:

```
sudo chmod +x /usr/local/sbin/refreshcontainers.sh
```

Change the ownership of this script to your lxdadmin user and group:

```
sudo chown lxdadmin.lxdadmin /usr/local/sbin/refreshcontainers.sh
```

Set up the crontab for the lxdadmin user to run this script, in this case at 10 PM:

```
crontab -e
```

And your entry will look like this:

```
00 22 * * * /usr/local/sbin/refreshcontainers.sh > /home/lxdadmin/refreshlog 2>&1
```

Save your changes and exit.

This will create a log in lxdadmin's home directory called "refreshlog", which will tell you whether the process worked or not. Very important!

The automated procedure will fail sometimes. This generally happens when a particular container fails to refresh. You can manually re-run the refresh with the following command (using rockylinux-test-9 as our container here):

```
lxc copy --refresh rockylinux-test-9 lxd-snapshot:rockylinux-test-9
```
diff --git a/docs/guides/containers/.pages b/docs/guides/containers/.pages new file mode 100644 index 0000000000..7d93ebba77 --- /dev/null +++ b/docs/guides/containers/.pages @@ -0,0 +1,4 @@ +--- +nav: + - LXD Server: https://docs.rockylinux.org/books/lxd_server/00-toc.md/ + - ... 
diff --git a/docs/guides/containers/lxd_server.it.md b/docs/guides/containers/lxd_server.it.md deleted file mode 100644 index 54960bb878..0000000000 --- a/docs/guides/containers/lxd_server.it.md +++ /dev/null @@ -1,1234 +0,0 @@ ---- -title: Server LXD -author: Steven Spencer, Franco Colussi -contributors: Ezequiel Bruni, Franco Colussi -tested with: 8.5, 8.6 -tags: - - lxd - - enterprise ---- - -# Creare un server LXD completo - -## Introduzione - -LXD è meglio descritto sul [sito web ufficiale](https://linuxcontainers.org/lxd/introduction/), ma consideratelo come un sistema di container che offre i vantaggi dei server virtuali in un container, o un container con gli steroidi. - -È molto potente e, con l'hardware e la configurazione giusta, può essere sfruttato per eseguire molte istanze di server su un singolo pezzo di hardware. Se lo si abbina a un server snapshot, si ha anche una serie di container che possono essere avviati quasi immediatamente nel caso in cui il server primario si guasti. - -(Non si deve pensare a questo come a un backup tradizionale. È comunque necessario un sistema di backup regolare di qualche tipo, come [rsnapshot](../backup/rsnapshot_backup.md)) - -La curva di apprendimento di LXD può essere un po' ripida, ma questo documento cercherà di fornire un bagaglio di conoscenze a portata di mano, per aiutarvi a distribuire e utilizzare LXD su Rocky Linux. - -## Prerequisiti E Presupposti - -* Un server Linux Rocky, ben configurato. In un ambiente di produzione si dovrebbe considerare un disco rigido separato per lo spazio su disco ZFS (è necessario se si usa ZFS). E sì, si presume che si tratti di un server bare metal, non di un VPS. -* Questo dovrebbe essere considerato un argomento avanzato, ma abbiamo fatto del nostro meglio per renderlo il più semplice possibile da capire per tutti. Detto questo, conoscere alcune nozioni di base sulla gestione dei container vi porterà lontano. -* Dovete essere a vostro agio con la riga di comando del vostro computer e saper usare con disinvoltura un editor da riga di comando. (In questo esempio utilizziamo _vi_, ma potete sostituirlo con il vostro editor preferito) -* È necessario essere un utente non privilegiato per la maggior parte dei processi LXD. Tranne quando indicato, inserire i comandi LXD come utente non privilegiato. Si presume che per i comandi LXD si sia connessi come utente "lxdadmin". La maggior parte della configurazione _viene_ eseguita come root fino a quando non si supera l'inizializzazione di LXD. L'utente "lxdadmin" verrà creato più avanti nel processo. -* Per ZFS, assicurarsi che l'avvio UEFI secure boot NON sia abilitato. Altrimenti, si finirà per dover firmare il modulo ZFS per poterlo caricare. -* Per il momento utilizzeremo contenitori basati su CentOS, poiché LXC non dispone ancora di immagini Rocky Linux. Rimanete sintonizzati per gli aggiornamenti, perché è probabile che questo cambierà con il tempo. - -!!! Note "Nota" - - La situazione è cambiata! Negli esempi che seguono, potete sostituire i contenitori Rocky Linux con altri. - -## Parte 1: Preparazione dell'ambiente - -Per tutta la "Parte 1" dovrete essere l'utente root o dovrete essere in grado di fare _sudo_ a root. 
- -### Installare i repository EPEL e OpenZFS - -LXD richiede il repository EPEL (Extra Packages for Enterprise Linux), che è facile da installare: - -`dnf install epel-release` - -Una volta installato, controllate gli aggiornamenti: - -`dnf update` - -Se si utilizza ZFS, installare il repository OpenZFS con: - -`dnf installa https://zfsonlinux.org/epel/zfs-release.el8_3.noarch.rpm` - -Abbiamo bisogno anche della chiave GPG, per cui utilizziamo questo comando per ottenerla: - -`gpg --import --import-options show-only /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux` - -Se sono stati eseguiti aggiornamenti del kernel durante il processo di aggiornamento di cui sopra, riavviare il server - -### Installare snapd, dkms e vim - -LXD deve essere installato da uno snap per Rocky Linux. Per questo motivo, è necessario installare snapd (e alcuni altri programmi utili) con: - -`dnf install snapd dkms vim` - -Ora abilitate e avviate snapd: - -`systemctl enable snapd` - -E poi eseguire: - -`systemctl start snapd` - -Riavviare il server prima di continuare. - -### Installare LXD - -L'installazione di LXD richiede l'uso del comando snap. A questo punto, stiamo solo installando, non stiamo facendo alcuna configurazione: - -`sudo snap install lxd` - -### Installare OpenZFS - -`dnf install kernel-devel zfs` - -### Impostazione dell'Ambiente - -La maggior parte delle impostazioni del kernel del server non sono sufficienti per eseguire un gran numero di container. Se si presume fin dall'inizio che il server verrà utilizzato in produzione, è necessario apportare queste modifiche in anticipo per evitare errori come "Too many open files". - -Fortunatamente, modificare le impostazioni di LXD è facile con alcune modifiche ai file e un riavvio. - -#### Modifica di limits.conf - -Il primo file da modificare è il file limits.conf. Questo file è autodocumentato, quindi si consiglia di consultare le spiegazioni contenute nel file per sapere cosa fa. Per apportare le nostre modifiche digitate: - -`vi /etc/security/limits.conf` - -L'intero file è commentato e, in fondo, mostra le impostazioni predefinite attuali. Nello spazio vuoto sopra il marcatore di fine file (#End of file) dobbiamo aggiungere le nostre impostazioni personalizzate. Al termine, il file avrà questo aspetto: - -``` -# Modifications made for LXD - -* soft nofile 1048576 -* hard nofile 1048576 -root soft nofile 1048576 -root hard nofile 1048576 -* soft memlock unlimited -* hard memlock unlimited -``` - -Salvare le modifiche e uscire. (`SHIFT:wq!` per _vi_) - -#### Modifica di sysctl.conf con 90-lxd.override.conf - -Con _systemd_, si possono apportare modifiche alla configurazione generale del sistema e alle opzioni del kernel *senza* modificare il file di configurazione principale. Invece, metteremo le nostre impostazioni in un file separato che semplicemente sovrascriverà le impostazioni particolari di cui abbiamo bisogno. - -Per apportare queste modifiche al kernel, creeremo un file chiamato _90-lxd-override.conf_ in /etc/sysctl.d. Per farlo, digitare: - -`vi /etc/sysctl.d/90-lxd-override.conf` - -Inserite il seguente contenuto nel file. 
Se vi state chiedendo cosa stiamo facendo qui, il contenuto del file sottostante è autodocumentante: - -``` -## The following changes have been made for LXD ## - -# fs.inotify.max_queued_events specifies an upper limit on the number of events that can be queued to the corresponding inotify instance - - (default is 16384) - -fs.inotify.max_queued_events = 1048576 - -# fs.inotify.max_user_instances This specifies an upper limit on the number of inotify instances that can be created per real user ID - -(default value is 128) - -fs.inotify.max_user_instances = 1048576 - -# fs.inotify.max_user_watches specifies an upper limit on the number of watches that can be created per real user ID - (default is 8192) - -fs.inotify.max_user_watches = 1048576 - -# vm.max_map_count contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of cal -ling malloc, directly by mmap and mprotect, and also when loading shared libraries - (default is 65530) - -vm.max_map_count = 262144 - -# kernel.dmesg_restrict denies container access to the messages in the kernel ring buffer. Please note that this also will deny access t -o non-root users on the host system - (default is 0) - -kernel.dmesg_restrict = 1 - -# This is the maximum number of entries in ARP table (IPv4). You should increase this if you create over 1024 containers. - -net.ipv4.neigh.default.gc_thresh3 = 8192 - -# This is the maximum number of entries in ARP table (IPv6). You should increase this if you plan to create over 1024 containers.Not nee -ded if not using IPv6, but... - -net.ipv6.neigh.default.gc_thresh3 = 8192 - -# This is a limit on the size of eBPF JIT allocations which is usually set to PAGE_SIZE * 40000. - -net.core.bpf_jit_limit = 3000000000 - -# This is the maximum number of keys a non-root user can use, should be higher than the number of containers - -kernel.keys.maxkeys = 2000 - -# This is the maximum size of the keyring non-root users can use - -kernel.keys.maxbytes = 2000000 - -# This is the maximum number of concurrent async I/O operations. You might need to increase it further if you have a lot of workloads th -at use the AIO subsystem (e.g. MySQL) - -fs.aio-max-nr = 524288 -``` - -A questo punto è necessario riavviare il server. - -#### Controllo dei valori di _sysctl.conf_ - -Una volta completato il riavvio, accedere nuovamente al server. Dobbiamo verificare che il nostro file di override abbia effettivamente svolto il suo compito. - -È facile da fare. Non è necessario controllare tutte le impostazioni, a meno che non lo si voglia fare, ma controllarne alcune consente di verificare che le impostazioni siano state modificate. Questo viene fatto con il comando _sysctl_: - -`sysctl net.core.bpf_jit_limit` - -Il che dovrebbe dimostrarlo: - -`net.core.bpf_jit_limit = 3000000000` - -Fate lo stesso con alcune delle altre impostazioni del file di override (sopra) per verificare che le modifiche siano state apportate. - -### Abilitazione di ZFS e Impostazione del Pool - -Se l'avvio UEFI secure boot è disattivato, dovrebbe essere abbastanza facile. Per prima cosa, caricare il modulo ZFS con modprobe: - -`/sbin/modprobe zfs` - -Questa operazione non dovrebbe restituire un errore, ma semplicemente tornare al prompt dei comandi una volta terminata. Se si verifica un errore, interrompere subito e iniziare la risoluzione dei problemi. Anche in questo caso, assicuratevi che il secure boot sia disattivato, in quanto è la causa più probabile. 
- -Successivamente dobbiamo esaminare i dischi del nostro sistema, determinare quali sono quelli su cui è caricato il sistema operativo e quali sono disponibili per il pool ZFS. Lo faremo con _lsblk_: - -`lsblk` - -Il quale dovrebbe restituire qualcosa di simile (il vostro sistema sarà diverso!): - -``` -AME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT -loop0 7:0 0 32,3M 1 loop /var/lib/snapd/snap/snapd/11588 -loop1 7:1 0 55,5M 1 loop /var/lib/snapd/snap/core18/1997 -loop2 7:2 0 68,8M 1 loop /var/lib/snapd/snap/lxd/20037 -sda 8:0 0 119,2G 0 disco -├─sda1 8:1 0 600M 0 parte /boot/efi -├─sda2 8:2 0 1G 0 parte /boot -├─sda3 8:3 0 11.9G 0 part [SWAP] -├─sda4 8:4 0 2G 0 parte /home -└─sda5 8:5 0 103,7G 0 parte / -sdb 8:16 0 119,2G 0 disco -├─sdb1 8:17 0 119,2G 0 parte -└─sdb9 8:25 0 8M 0 parte -sdc 8:32 0 149,1G 0 disco -└─sdc1 8:33 0 149,1G 0 parte -``` - -In questo elenco, possiamo vedere che */dev/sda* è utilizzato dal sistema operativo, quindi useremo */dev/sdb* per il nostro zpool. Si noti che se si dispone di più dischi rigidi liberi, si può prendere in considerazione l'uso di raidz (un software raid specifico per ZFS). - -Questo non rientra nell'ambito di questo documento, ma dovrebbe essere preso in considerazione per la produzione, in quanto offre migliori prestazioni e ridondanza. Per ora, creiamo il nostro pool sul singolo dispositivo che abbiamo identificato: - -`zpool create storage /dev/sdb` - -Questo dice di creare un pool chiamato "storage" che è ZFS sul dispositivo */dev/sdb*. - -Una volta creato il pool, a questo punto è bene riavviare il server. - -### Inizializzazione LXD - -Ora che l'ambiente è stato configurato, siamo pronti a inizializzare LXD. Si tratta di uno script automatico che pone una serie di domande per rendere operativa l'istanza LXD: - -`lxd init` - -Ecco le domande e le nostre risposte per lo script, con una piccola spiegazione dove necessario: - -`Would you like to use LXD clustering? (yes/no) [default=no]:` - -Se siete interessati al clustering, fate qualche ricerca aggiuntiva su questo argomento [qui](https://lxd.readthedocs.io/en/latest/clustering/) - -`Do you want to configure a new storage pool? (yes/no) [default=yes]:` - -Questo può sembrare controintuitivo, dato che abbiamo già creato il nostro pool ZFS, ma sarà risolto in una domanda successiva. Accetta il predefinito. - -`Name of the new storage pool [default=default]: storage` - -Si potrebbe lasciare questo nome come predefinito, ma noi abbiamo scelto di usare lo stesso nome che abbiamo dato al nostro pool ZFS. - -`Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]:` - -Ovviamente vogliamo accettare l'impostazione predefinita. - -`Create a new ZFS pool? (yes/no) [default=yes]: no` - -Qui si risolve la domanda precedente sulla creazione di un pool di storage. - -`Name of the existing ZFS pool or dataset: storage` - -`Would you like to connect to a MAAS server? (yes/no) [default=no]:` - -Metal As A Service (MAAS) non rientra nel campo di applicazione del presente documento. - -`Would you like to create a new local network bridge? (yes/no) [default=yes]:` - -`What should the new bridge be called? [default=lxdbr0]:` - -`What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:` - -`What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none` - -Se si desidera utilizzare IPv6 sui propri contenitori LXD, è possibile attivare questa opzione. Questo dipende da voi. 
- -`Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes` - -È necessario per eseguire lo snapshot del server, quindi rispondere "yes". - -`Address to bind LXD to (not including port) [default=all]:` - -`Port to bind LXD to [default=8443]:` - -`Trust password for new clients:` - -`Again:` - -Questa password di fiducia è il modo in cui ci si connetterà al server snapshot o al suo ritorno, quindi va impostata con qualcosa che abbia senso nel vostro ambiente. Salvare questa voce in un luogo sicuro, ad esempio in un gestore di password. - -`Would you like stale cached images to be updated automatically? (yes/no) [default=yes]` - -`Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:` - -#### Impostazione dei Privilegi degli Utenti - -Prima di continuare, dobbiamo creare l'utente "lxdadmin" e assicurarci che abbia i privilegi necessari. Abbiamo bisogno che l'utente "lxdadmin" sia in grado di fare il _sudo_ a root e che sia membro del gruppo lxd. Per aggiungere l'utente e assicurarsi che sia membro di entrambi i gruppi, procedere come segue: - -`useradd -G wheel,lxd lxdadmin` - -Quindi impostare la password: - -`passwd lxdadmin` - -Come per le altre password, salvatela in un luogo sicuro. - -### Impostazione del Firewall - iptables - -Prima di continuare, è necessario impostare un firewall sul server. Questo esempio utilizza _iptables_ e [questa procedura](../security/enabling_iptables_firewall.md) per disabilitare _firewalld_. Se si preferisce usare _firewalld_, è sufficiente sostituire le regole di _firewalld_ con le istruzioni riportate in questa sezione. - -Creare lo script firewall.conf: - -`vi /etc/firewall.conf` - -Si ipotizza un server LXD su una rete LAN 192.168.1.0/24 di seguito riportata. Si noti inoltre che stiamo accettando tutto il traffico dalla nostra interfaccia bridged. Questo è importante se si vuole che i container ricevano indirizzi IP dal bridge. - -Questo script del firewall non fa altre ipotesi sui servizi di rete necessari. Esiste una regola SSH che consente agli IP della nostra rete LAN di accedere al server tramite SSH. È possibile che siano necessarie molte più regole, a seconda dell'ambiente. In seguito, aggiungeremo una regola per il traffico bidirezionale tra il server di produzione e il server snapshot. - -``` -#!/bin/sh -# -#IPTABLES=/usr/sbin/iptables - -# Unless specified, the defaults for OUTPUT is ACCEPT -# The default for FORWARD and INPUT is DROP -# -echo " clearing any existing rules and setting default policy.." -iptables -F INPUT -iptables -P INPUT DROP -iptables -A INPUT -i lxdbr0 -j ACCEPT -iptables -A INPUT -p tcp -m tcp -s 192.168.1.0/24 --dport 22 -j ACCEPT -iptables -A INPUT -i lo -j ACCEPT -iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -iptables -A INPUT -p tcp -j REJECT --reject-with tcp-reset -iptables -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable - -/usr/sbin/service iptables save -``` -### Impostazione del Firewall - firewalld - -Per le regole di _firewalld_, è necessario utilizzare [questa procedura di base](../security/firewalld.md) o avere familiarità con questi concetti. Le nostre ipotesi sono le stesse delle regole _iptables_ di cui sopra: Rete LAN 192.168.1.0/24 e un bridge chiamato lxdbr0. Per essere chiari, si potrebbero avere più interfacce sul server LXD, con una forse rivolta anche verso la WAN. Creeremo anche una zona per le reti bridged e locali. Questo è solo per chiarezza di zona, dato che gli altri nomi non sono applicabili. 
Quanto segue presuppone che si conoscano già le basi di _firewalld_. - -``` -firewall-cmd --new-zone=bridge --permanent -``` - -È necessario ricaricare il firewall dopo aver aggiunto una zona: - -``` -firewall-cmd --reload -``` - -Vogliamo consentire tutto il traffico dal bridge, quindi aggiungiamo l'interfaccia e cambiamo il target da "default" ad "ACCEPT" e avremo finito: - -!!! attention "Attenzione" - - La modifica della destinazione di una zona firewalld deve essere fatta con l'opzione --permanent, quindi è meglio inserire questo flag anche negli altri comandi e rinunciare all'opzione --runtime-to-permanent. - -!!! Note "Nota" - - Se si deve creare una zona in cui si vuole consentire l'accesso all'interfaccia o alla sorgente, ma non si vuole specificare alcun protocollo o servizio, è necessario modificare l'obiettivo da "default" ad ACCEPT. Lo stesso vale per DROP e REJECT per un particolare blocco IP per il quale sono state create zone personalizzate. Per essere chiari, la zona "drop" si occuperà di questo aspetto, a patto che non si utilizzi una zona personalizzata. - -``` -firewall-cmd --zone=bridge --add-interface=lxdbr0 --permanent -firewall-cmd --zone=bridge --set-target=ACCEPT --permanent -``` -Supponendo che non ci siano errori e che tutto funzioni ancora, è sufficiente ricaricare il sistema: - -``` -firewall-cmd --reload -``` -Se ora si elencano le regole con `firewall-cmd --zone=bridge --list-all`, si dovrebbe vedere qualcosa di simile a quanto segue: - -``` -bridge (active) - target: ACCEPT - icmp-block-inversion: no - interfaces: lxdbr0 - sources: - services: - ports: - protocols: - forward: no - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` -Dalle regole di _iptables_ si nota che vogliamo consentire anche la nostra interfaccia locale. Anche in questo caso, non mi piacciono le zone incluse, quindi creare una nuova zona e utilizzare l'intervallo IP di origine per l'interfaccia locale per assicurarsi di avere accesso: - -``` -firewall-cmd --new-zone=local --permanent -firewall-cmd --reload -``` -Quindi è sufficiente aggiungere gli IP di origine per l'interfaccia locale, cambiare il target in "ACCEPT" e anche in questo caso abbiamo finito: - -``` -firewall-cmd --zone=local --add-source=127.0.0.1/8 --permanent -firewall-cmd --zone=local --set-target=ACCEPT --permanent -firewall-cmd --reload -``` -Procedere con l'elenco della zona "locale" per assicurarsi che le regole siano presenti con `firewall-cmd --zone=local --list all` che dovrebbe mostrare qualcosa di simile: - -``` -local (active) - target: ACCEPT - icmp-block-inversion: no - interfaces: - sources: 127.0.0.1/8 - services: - ports: - protocols: - forward: no - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` - -Poi vogliamo consentire SSH dalla nostra rete fidata. Utilizzeremo qui gli IP di origine, proprio come nell'esempio di _iptables_, e la zona "trusted" incorporata. L'obiettivo di questa zona è già "ACCEPT" per impostazione predefinita. 
- -``` -firewall-cmd --zone=trusted --add-source=192.168.1.0/24 -``` -Quindi aggiungere il servizio alla zona: - -``` -firewall-cmd --zone=trusted --add-service=ssh -``` -Se tutto funziona, spostare le regole in modo permanente e ricaricarle: - -``` -firewall-cmd --runtime-to-permanent -firewall-cmd --reload -``` -L'elenco delle zone "trusted" dovrebbe ora mostrare qualcosa di simile: - -``` -trusted (active) - target: ACCEPT - icmp-block-inversion: no - interfaces: - sources: 192.168.1.0/24 - services: ssh - ports: - protocols: - forward: no - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` -Per impostazione predefinita, la zona "pubblica" è abilitata e consente l'uso di SSH. Non vogliamo questo. Assicurarsi che le zone siano corrette e che l'accesso al server avvenga tramite uno degli IP della LAN (nel nostro esempio) e che sia consentito l'SSH. Se non lo si verifica prima di continuare, si rischia di rimanere esclusi dal server. Dopo essersi assicurati di avere accesso dall'interfaccia corretta, rimuovere SSH dalla zona "pubblica": - -``` -firewall-cmd --zone=public --remove-service=ssh -``` -Verificate l'accesso e assicuratevi di non essere bloccati. In caso contrario, spostare le regole su permanenti, ricaricare ed elencare la zona "public" per essere sicuri che SSH sia stato rimosso: - -``` -firewall-cmd --runtime-to-permanent -firewall-cmd --reload -firewall-cmd --zone=public --list-all -``` -Potrebbero esserci altre interfacce da considerare sul vostro server. È possibile utilizzare le zone integrate, se necessario, ma se i nomi non piacciono (non sembrano logici, ecc.), è possibile aggiungere zone. Ricordate che se non ci sono servizi o protocolli che dovete consentire o rifiutare in modo specifico, dovrete modificare il target di zona. Se è possibile utilizzare le interfacce, come abbiamo fatto con il bridge, è possibile farlo. Se avete bisogno di un accesso più granulare ai servizi, utilizzate invece gli IP di origine. - -Questo completa la Parte 1. È possibile proseguire con la Parte 2 o tornare al [menu](#menu). Se state lavorando sul server snapshot, potete passare ora alla [Parte 5](#part5). - -## Parte 2: Impostazione e Gestione delle Immagini - -Per tutta la Parte 2, e da qui in avanti se non diversamente indicato, si eseguiranno i comandi come utente non privilegiato. ("lxdadmin" se state seguendo questo documento). - -### Elenco delle Immagini Disponibili - -Una volta configurato l'ambiente del server, probabilmente non vedrete l'ora di iniziare a usare un container. Ci sono _molte_ possibilità per i sistemi operativi container. Per avere un'idea del numero di possibilità, inserite questo comando: - -`lxc image list images: | more` - -Premete la barra spaziatrice per scorrere l'elenco. Questo elenco di container e macchine virtuali continua a crescere. Per ora ci atteniamo ai containers. - -L'ultima cosa che si vuole fare è cercare un'immagine del container da installare, soprattutto se si conosce l'immagine che si vuole creare. 
Modifichiamo il comando precedente per mostrare solo le opzioni di installazione di CentOS Linux: - -`lxc image list immagini: | grep centos/8` - -In questo modo si ottiene un elenco molto più gestibile: - -``` -| centos/8 (3 more) | 98b4dbef0c29 | yes | Centos 8 amd64 (20210427_07:08) | x86_64 | VIRTUAL-MACHINE | 517.44MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8 (3 more) | 0427669ebee4 | yes | Centos 8 amd64 (20210427_07:08) | x86_64 | CONTAINER | 125.58MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream (3 more) | 961170f8934f | yes | Centos 8-Stream amd64 (20210427_07:08) | x86_64 | VIRTUAL-MACHINE | 586.44MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream (3 more) | e507fdc8935a | yes | Centos 8-Stream amd64 (20210427_07:08) | x86_64 | CONTAINER | 130.33MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/arm64 (1 more) | e5bf98409ac6 | yes | Centos 8-Stream arm64 (20210427_10:33) | aarch64 | CONTAINER | 126.56MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/cloud (1 more) | 5751ca14bf8f | yes | Centos 8-Stream amd64 (20210427_07:08) | x86_64 | CONTAINER | 144.75MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/cloud (1 more) | ccf0bb20b0ca | yes | Centos 8-Stream amd64 (20210427_07:08) | x86_64 | VIRTUAL-MACHINE | 593.31MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/cloud/arm64 | db3d915d12fd | yes | Centos 8-Stream arm64 (20210427_07:08) | aarch64 | CONTAINER | 140.60MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/cloud/ppc64el | 11aa2ab878b2 | yes | Centos 8-Stream ppc64el (20210427_07:08) | ppc64le | CONTAINER | 149.45MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/ppc64el (1 more) | a27665203e47 | yes | Centos 8-Stream ppc64el (20210427_07:08) | ppc64le | CONTAINER | 134.52MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/arm64 (1 more) | d64396d47fa7 | yes | Centos 8 arm64 (20210427_07:08) | aarch64 | CONTAINER | 121.83MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/cloud (1 more) | 84803ca6e32d | yes | Centos 8 amd64 (20210427_07:08) | x86_64 | CONTAINER | 140.42MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/cloud (1 more) | c98196cd9eec | yes | Centos 8 amd64 (20210427_07:08) | x86_64 | VIRTUAL-MACHINE | 536.00MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/cloud/arm64 | 9d06684a9a4e | yes | Centos 8 arm64 (20210427_10:33) | aarch64 | CONTAINER | 136.49MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/cloud/ppc64el | 18c13c448349 | yes | Centos 8 ppc64el (20210427_07:08) | ppc64le | CONTAINER | 144.66MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/ppc64el (1 more) | 130c1c83c36c | yes | Centos 8 ppc64el (20210427_07:08) | ppc64le | CONTAINER | 129.53MB | Apr 27, 2021 at 12:00am (UTC) | -``` - -### Installare, Rinominare ed Elencare le Immagini - -Per il primo container, sceglieremo centos/8. Per installarlo, *si può* usare: - -`lxc launch images:centos/8 centos-test` - -Questo creerà un container basato su CentOS chiamato "centos-test". È possibile rinominare un container dopo che è stato creato, ma prima è necessario arrestare il container, che si avvia automaticamente quando viene lanciato. 
- -Per avviare manualmente il container, utilizzare: - -`lxc start centos-test` - -Ai fini di questa guida, per ora installate un'altra immagine: - -`lxc launch images:ubuntu/20.10 ubuntu-test` - -Ora diamo un'occhiata a ciò che abbiamo finora, elencando le nostre immagini: - -`lxc list` - -che dovrebbe restituire qualcosa di simile: - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 10.199.182.72 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -### Profili LXD - -Quando si installa LXD si ottiene un profilo predefinito, che non può essere rimosso o modificato. Detto questo, è possibile utilizzare il profilo predefinito per creare nuovi profili da utilizzare con i propri container. - -Se si osserva l'elenco dei nostri container (sopra), si noterà che l'indirizzo IP in ogni caso è assegnato dall'interfaccia bridged. In un ambiente di produzione, si potrebbe voler usare qualcos'altro. Potrebbe trattarsi di un indirizzo assegnato via DHCP dall'interfaccia LAN o anche di un indirizzo assegnato staticamente dalla WAN. - -Se si configura il server LXD con due interfacce e si assegna a ciascuna un IP sulla WAN e sulla LAN, è possibile assegnare ai container indirizzi IP in base all'interfaccia verso cui il container deve essere rivolto. - -A partire dalla versione 8 di Rocky Linux (e in realtà qualsiasi copia di Red Hat Enterprise Linux, come CentOS nel nostro elenco precedente) il metodo per assegnare gli indirizzi IP in modo statico o dinamico utilizzando i profili sottostanti, è stato interrotto. - -Ci sono modi per aggirare questo problema, ma è fastidioso, perché la funzione che non funziona _dovrebbe essere_ parte del kernel Linux. Questa funzione è macvlan. Macvlan consente di creare più interfacce con indirizzi Layer 2 diversi. - -Per ora, sappiate che ciò che stiamo per suggerire ha degli svantaggi quando si scelgono immagini di container basate su RHEL. - -#### Creazione di un Profilo macvlan e sua Assegnazione - -Per creare il nostro profilo macvlan, basta usare questo comando: - -`lxc profile create macvlan` - -Si tenga presente che, se si dispone di una macchina con più interfacce e si desidera più di un modello macvlan in base alla rete che si desidera raggiungere, si può usare "lanmacvlan" o "wanmacvlan" o qualsiasi altro nome che si desidera usare per identificare il profilo. In altre parole, l'uso di "macvlan" nella dichiarazione di creazione del profilo dipende totalmente da voi. - -Una volta creato il profilo, è necessario modificarlo per ottenere i risultati desiderati. Per prima cosa, dobbiamo assicurarci che l'editor predefinito del server sia quello che vogliamo usare. Se non si esegue questo passaggio, l'editor sarà quello predefinito. Abbiamo scelto _vim_ come editor: - -`export EDITOR=/usr/bin/vim` - -Ora vogliamo modificare l'interfaccia macvlan, ma prima dobbiamo sapere qual è l'interfaccia principale del nostro server LXD. Si tratta dell'interfaccia che ha un IP assegnato alla LAN (in questo caso). 
Per determinare di quale interfaccia si tratta, utilizzare: - -`ip addr` - -Quindi cercare l'interfaccia con l'assegnazione dell'IP LAN nella rete 192.168.1.0/24: - -``` -2: enp3s0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 - link/ether 40:16:7e:a9:94:85 brd ff:ff:ff:ff:ff:ff - inet 192.168.1.106/24 brd 192.168.1.255 scope global dynamic noprefixroute enp3s0 - valid_lft 4040sec preferred_lft 4040sec - inet6 fe80::a308:acfb:fcb3:878f/64 scope link noprefixroute - valid_lft forever preferred_lft forever -``` - -In questo caso, l'interfaccia sarebbe "enp3s0". - -Ora modifichiamo il profilo: - -`lxc profile edit macvlan` - -Questo file sarà auto-documentato all'inizio. È necessario modificare il file come segue, sotto la sezione commentata: - -``` -config: {} -description: "" -devices: - eth0: - name: eth0 - nictype: macvlan - parent: enp3s0 - type: nic -name: macvlan -used_by: [] -``` - -Ovviamente si possono usare i profili per molte altre cose, ma l'assegnazione di un IP statico a un container o l'uso del proprio server DHCP come fonte per un indirizzo sono esigenze molto comuni. - -Per assegnare il profilo macvlan a centos-test è necessario procedere come segue: - -`lxc profile assign centos-test default,macvlan` - -Questo dice semplicemente che vogliamo il profilo predefinito e che vogliamo applicare anche il profilo macvlan. - -#### CentOS macvlan - -Nell'implementazione CentOS di Network Manager, sono riusciti a interrompere la funzionalità di macvlan nel kernel, o almeno nel kernel applicato alla loro immagine LXD. È così da quando è stato rilasciato CentOS 8 e nessuno sembra preoccuparsi di trovare una soluzione. - -In poche parole, se si vogliono eseguire container CentOS 8 (o qualsiasi altra release di RHEL 1-for-1, come Rocky Linux), bisogna fare i salti mortali per far funzionare macvlan. macvlan fa parte del kernel, quindi dovrebbe funzionare anche senza le correzioni seguenti, ma non è così. - -##### CentOS macvlan - La Soluzione DHCP - -L'assegnazione del profilo, tuttavia, non modifica la configurazione predefinita, che è impostata su DHCP. - -Per verificarlo, è sufficiente eseguire la seguente operazione: - -`lxc stop centos-test` - -E poi: - -`lxc start centos-test` - -Ora elencate nuovamente i vostri container e notate che centos-test non ha più un indirizzo IP: - -`lxc list` - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -Per dimostrare ulteriormente il problema, dobbiamo eseguire `dhclient` sul container. 
È possibile farlo con: - -`lxc exec centos-test dhclient` - -Un nuovo elenco utilizzando `lxc list` mostra ora quanto segue: - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 192.168.1.138 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -Questo sarebbe dovuto accadere con un semplice arresto e avvio del container, ma non è così. Supponendo di voler utilizzare sempre un indirizzo IP assegnato da DHCP, si può risolvere il problema con una semplice voce di crontab. Per farlo, è necessario ottenere l'accesso al container tramite shell, inserendo: - -`lxc exec centos-test bash` - -Quindi, determiniamo il percorso completo di `dhclient`: - -`which dhclient` - -che dovrebbe restituire: - -`/usr/sbin/dhclient` - -Quindi, modifichiamo il crontab di root: - -`crontab -e` - -E aggiungere questa riga: - -`@reboot /usr/sbin/dhclient` - -Il comando crontab inserito sopra utilizza _vi_, quindi per salvare le modifiche e uscire è sufficiente utilizzare: - -`SHIFT:wq!` - -Ora uscire dal container e arrestare centos-test: - -`lxc stop centos-test` - -e poi riavviarlo: - -`lxc start centos-test` - -Un nuovo elenco rivelerà che al container è stato assegnato l'indirizzo DHCP: - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 192.168.1.138 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -##### CentOS macvlan - La soluzione per l'IP Statico - -Per assegnare staticamente un indirizzo IP, le cose si fanno ancora più complicate. Il processo di impostazione di un indirizzo IP statico su un container CentOS avviene tramite gli script di rete, che verranno eseguiti ora. L'IP che cercheremo di assegnare è 192.168.1.200. - -Per farlo, dobbiamo ottenere di nuovo l'accesso al container: - -`lxc exec centos-test bash` - -La prossima cosa da fare è modificare manualmente l'interfaccia denominata "eth0" e impostare il nostro indirizzo IP. 
Per modificare la configurazione, procedere come segue: - -`vi /etc/sysconfig/network-scripts/ifcfg-eth0` - -Che restituirà questo: - -``` -DEVICE=eth0 -BOOTPROTO=dhcp -ONBOOT=yes -HOSTNAME=centos-test -TYPE=Ethernet -MTU= -DHCP_HOSTNAME=centos-test -IPV6INIT=yes -``` - -Dobbiamo modificare questo file in modo che abbia il seguente aspetto: - -``` -DEVICE=eth0 -BOOTPROTO=none -ONBOOT=yes -IPADDR=192.168.1.200 -PREFIX=24 -GATEWAY=192.168.1.1 -DNS1=8.8.8.8 -DNS2=8.8.4.4 -HOSTNAME=centos-test -TYPE=Ethernet -MTU= -DHCP_HOSTNAME=centos-test -IPV6INIT=yes -``` - -Questo dice che vogliamo impostare il protocollo di avvio su nessuno (usato per le assegnazioni IP statiche), impostare l'indirizzo IP su 192.168.1.200, che questo indirizzo fa parte di una CLASSE C (PREFIX=24), che il gateway per questa rete è 192.168.1.1 e che vogliamo usare i server DNS aperti di Google per la risoluzione dei nomi. - -Salvare il file`(SHIFT:wq!`). - -Dobbiamo anche rimuovere il crontab per root, perché non è quello che vogliamo per un IP statico. Per farlo, è sufficiente `crontab -e` e sottolineare la riga @reboot con un "#", salvare le modifiche e uscire dal container. - -Fermare il container con: - -`lxc stop centos-test` - -e riavviarlo: - -`lxc start centos-test` - -Proprio come l'indirizzo assegnato da DHCP, l'indirizzo assegnato staticamente non verrà assegnato quando si elenca il container: - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -Per risolvere questo problema è necessario interrompere Network Manager sul container. La seguente soluzione funziona, almeno per ora: - -`lxc exec centos-test dhclient` - -Poi entrate nel container: - -`lxc exec centos-test bash` - -Installare i vecchi script di rete: - -`dnf install network-scripts` - -Nuke Network Manager: - -`systemctl stop NetworkManager` `systemctl disable NetworkManager` - -Attivare il vecchio servizio di rete: - -`systemctl enable network.service` - -Uscire dal container, quindi arrestare e avviare nuovamente il container: - -`lxc stop centos-test` - -E poi eseguire: - -`lxc start centos-test` - -All'avvio del container, un nuovo elenco mostrerà l'IP staticamente assegnato corretto: - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 192.168.1.200 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -Il problema con macvlan mostrato in entrambi gli esempi è direttamente correlato ai container basati su Red Hat Enterprise Linux (Centos 8, Rocky Linux 8). - -#### Ubuntu macvlan - -Fortunatamente, nell'implementazione di Ubuntu di Network Manager, lo stack macvlan NON è mancante, quindi è molto più facile da distribuire! 
- -Per prima cosa, proprio come nel caso del container centos-test, dobbiamo assegnare il template al nostro container: - -`lxc profile assign ubuntu-test default,macvlan` - -Questo dovrebbe essere tutto ciò che è necessario per ottenere un indirizzo assegnato da DHCP. Per scoprirlo, fermate e riavviate il container: - -`lxc stop ubuntu-test` - -E poi eseguire: - -`lxc start ubuntu-test` - -Quindi elencare nuovamente i container: - -``` -+-------------+---------+----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 192.168.1.200 (eth0) | | CONTAINER | 0 | -+-------------+---------+----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 192.168.1.139 (eth0) | | CONTAINER | 0 | -+-------------+---------+----------------------+------+-----------+-----------+ -``` - -Success! - -La configurazione dell'IP statico è leggermente diversa, ma non è affatto difficile. Occorre modificare il file .yaml associato alla connessione del contenitore (/10-lxc.yaml). Per questo IP statico, utilizzeremo 192.168.1.201: - -`vi /etc/netplan/10-lxc.yaml` - -E cambiare quello che c'è con il seguente: - -``` -network: - version: 2 - ethernets: - eth0: - dhcp4: false - addresses: [192.168.1.201/24] - gateway4: 192.168.1.1 - nameservers: - addresses: [8.8.8.8,8.8.4.4] -``` - -Salvare le modifiche`(SHFT:wq!`) e uscire dal container. - -Ora fermate e avviate il container: - -`lxc stop ubuntu-test` - -E poi eseguire: - -`lxc start ubuntu-test` - -Quando si elencano nuovamente i container, si dovrebbe vedere il nuovo IP statico: - -``` -+-------------+---------+----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 192.168.1.200 (eth0) | | CONTAINER | 0 | -+-------------+---------+----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 192.168.1.201 (eth0) | | CONTAINER | 0 | -+-------------+---------+----------------------+------+-----------+-----------+ -``` - -Success! - -Negli esempi utilizzati nella Parte 2, abbiamo scelto intenzionalmente un container difficile da configurare e uno facile. Ci sono ovviamente molte altre versioni di Linux disponibili nell'elenco delle immagini. Se ce n'è uno preferito, provare a installarlo, assegnando il modello macvlan e impostando gli IP. - -Questo completa la Parte 2. È possibile proseguire con la Parte 3 o tornare al [menu](#menu). - -## Parte 3: Opzioni di Configurazione del Container - -Ci sono molte opzioni per configurare il container una volta installato. Prima di vedere come visualizzano, però, diamo un'occhiata al comando info per un container. 
In questo esempio, utilizzeremo il container ubuntu-test: - -`lxc info ubuntu-test` - -Il risultato è simile al seguente: - -``` -Name: ubuntu-test -Location: none -Remote: unix:// -Architecture: x86_64 -Created: 2021/04/26 15:14 UTC -Status: Running -Type: container -Profiles: default, macvlan -Pid: 584710 -Ips: - eth0: inet 192.168.1.201 enp3s0 - eth0: inet6 fe80::216:3eff:fe10:6d6d enp3s0 - lo: inet 127.0.0.1 - lo: inet6 ::1 -Resources: - Processes: 13 - Disk usage: - root: 85.30MB - CPU usage: - CPU usage (in seconds): 1 - Memory usage: - Memory (current): 99.16MB - Memory (peak): 110.90MB - Network usage: - eth0: - Bytes received: 53.56kB - Bytes sent: 2.66kB - Packets received: 876 - Packets sent: 36 - lo: - Bytes received: 0B - Bytes sent: 0B - Packets received: 0 - Packets sent: 0 -``` - -Ci sono molte informazioni utili, dai profili applicati, alla memoria in uso, allo spazio su disco in uso e altro ancora. - -#### Informazioni sulla Configurazione e su Alcune Opzioni - -Per impostazione predefinita, LXD alloca al container la memoria di sistema, lo spazio su disco, i core della CPU e così via. Ma se volessimo essere più specifici? È assolutamente possibile. - -Tuttavia, questo comporta degli svantaggi. Per esempio, se si alloca la memoria di sistema e il container non la usa tutta, allora l'abbiamo sottratta a un altro container che potrebbe averne bisogno. Tuttavia, può accadere anche il contrario. Se un container ha una consumo esagerato in fatto di memoria, può impedire agli altri container di averne a sufficienza, riducendo così le loro prestazioni. - -Tenete presente che ogni azione compiuta per configurare un container _può_ avere effetti negativi da qualche altra parte. - -Piuttosto che scorrere tutte le opzioni di configurazione, utilizzare il completamento automatico delle schede per visualizzare le opzioni disponibili: - -`lxc config set ubuntu-test` e poi premere TAB. - -Mostra tutte le opzioni per la configurazione di un container. Se avete domande su cosa fa una delle opzioni di configurazione, consultate la [documentazione ufficiale di LXD](https://lxd.readthedocs.io/en/stable-4.0/instances/) e fate una ricerca per il parametro di configurazione, oppure cercate su Google l'intera stringa, ad esempio "lxc config set limits.memory" e date un'occhiata ai risultati della ricerca. - -Vediamo alcune delle opzioni di configurazione più utilizzate. Ad esempio, se si vuole impostare la quantità massima di memoria che un container può utilizzare: - -`lxc config set ubunt-test limits.memory 2GB` - -Questo dice che finché la memoria è disponibile per l'uso, in altre parole ci sono 2 GB di memoria libera, il contenitore può usare più di 2 GB se è disponibile. In altre parole, si tratta di un limite variabile. - -`lxc config set ubuntu-test limits.memory.enforce 2GB` - -Ciò significa che il contenitore non può mai utilizzare più di 2 GB di memoria, indipendentemente dal fatto che sia attualmente disponibile o meno. In questo caso si tratta di un limite rigido. - -`lxc config set ubuntu-test limits.cpu 2` - -Questo dice di limitare a 2 il numero di core della CPU che il container può utilizzare. - -Ricordate quando abbiamo impostato il nostro pool di archiviazione nella sezione [Abilitazione di zfs e Impostazione del Pool](#zfssetup) di cui sopra? Abbiamo chiamato il pool "storage", ma avremmo potuto chiamarlo in qualsiasi modo. 
-
-Remember when we set up our storage pool in the [Enabling ZFS And Setting Up The Pool](#zfssetup) section above? We named the pool "storage," but we could have named it anything. If we want to take a look at it, we can use this command:
-
-`lxc storage show storage`
-
-This shows the following:
-
-```
-config:
-  source: storage
-  volatile.initial_source: storage
-  zfs.pool_name: storage
-description: ""
-name: storage
-driver: zfs
-used_by:
-- /1.0/images/0cc65b6ca6ab61b7bc025e63ca299f912bf8341a546feb8c2f0fe4e83843f221
-- /1.0/images/4f0019aee1515c109746d7da9aca6fb6203b72f252e3ee3e43d50b942cdeb411
-- /1.0/images/9954953f2f5bf4047259bf20b9b4f47f64a2c92732dbc91de2be236f416c6e52
-- /1.0/instances/centos-test
-- /1.0/instances/ubuntu-test
-- /1.0/profiles/default
-status: Created
-locations:
-- none
-```
-
-This shows that all of our containers are using the ZFS storage pool. When using ZFS, you can also set a disk quota on a container. Let's do this by setting a 2GB disk quota on the ubuntu-test container. You do this with:
-
-`lxc config device override ubuntu-test root size=2GB`
-
-As stated earlier, you should use configuration options sparingly, unless you have a container that wants to use far more than its share of resources. LXD, for the most part, will manage the environment well on its own.
-
-There are, of course, many more options that may be of interest to some people. You should do your own research to find out whether any of them are of value in your environment.
-
-This completes Part 3. You can either continue on to Part 4, or return to the [menu](#menu).
-
-## Part 4: Container Snapshots
-
-Container snapshots, along with a snapshot server (which we will get to later), are probably the most important aspect of running a production LXD server. Snapshots ensure quick recovery, and can be used for safety when you are, say, updating the primary software that runs on a particular container. If something happens during the update that breaks that application, you simply restore the snapshot and you are back up and running with only a few seconds' worth of downtime.
-
-The author has used LXD containers for public-facing PowerDNS servers, and the process of updating those applications became much simpler, since you can snapshot the container before continuing.
-
-You can even snapshot a container while it is running. We will start by getting a snapshot of the ubuntu-test container using this command:
-
-`lxc snapshot ubuntu-test ubuntu-test-1`
-
-Here we are calling the snapshot "ubuntu-test-1", but it can be called anything you like. To make sure that you have the snapshot, do an "lxc info" of the container:
-
-`lxc info ubuntu-test`
-
-We have already seen an info screen, so if you scroll to the bottom, you should see:
-
-```
-Snapshots:
-  ubuntu-test-1 (taken at 2021/04/29 15:57 UTC) (stateless)
-```
-
-Success! Our snapshot is in place.
-
-Now, get into the ubuntu-test container:
-
-`lxc exec ubuntu-test bash`
-
-And create an empty file with the _touch_ command:
-
-`touch this_file.txt`
-
-Now exit the container.
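-
-If you would rather not open a shell just to check on the test file, `lxc exec` can run a single command directly. A quick sketch; the path is an assumption, since it depends on the working directory the shell above was using (typically root's home directory):
-
-```
-# confirm the test file exists without entering the container
-# adjust the path if the file was created elsewhere
-lxc exec ubuntu-test -- ls -l /root/this_file.txt
-```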
-
-Before we restore the container as it was prior to creating the file: the safest way to restore a container, particularly if there have been a lot of changes, is to stop it first:
-
-`lxc stop ubuntu-test`
-
-Then restore it:
-
-`lxc restore ubuntu-test ubuntu-test-1`
-
-Then start the container again:
-
-`lxc start ubuntu-test`
-
-If you get back into the container and look, the "this_file.txt" file that we created is now gone.
-
-When you no longer need a snapshot, you can delete it:
-
-`lxc delete ubuntu-test/ubuntu-test-1`
-
-**Important:** You should always delete snapshots with the container running. Why? The _lxc delete_ command also works to delete the entire container. If we had accidentally hit enter after "ubuntu-test" in the command above, AND the container had been stopped, the container would have been deleted. No warning is given; it simply does what you ask.
-
-If the container is running, however, you will get this message:
-
-`Error: The instance is currently running, stop it first or pass --force`
-
-So always delete snapshots with the container running.
-
-The process of creating snapshots automatically, setting the expiration of a snapshot so that it goes away after a certain length of time, and automatically refreshing the snapshots to the snapshot server will be covered in detail in the section dealing with the snapshot server.
-
-This completes Part 4. You can either continue on to Part 5, or return to the [menu](#menu).
-
-## Part 5: The Snapshot Server
-
-As noted at the beginning, the snapshot server for LXD should be a mirror of the production server in every way possible. The reason is that you may need to take it to production in the event of a hardware failure, and having not only backups, but also a quick way to bring the production containers back up, keeps those panicked phone calls and text messages from systems administrators to a minimum. THAT is ALWAYS a good thing!
-
-So the process of building the snapshot server is exactly like that of the production server. To fully emulate our production server setup, do all of [Part 1](#part1) again, and when it is complete, return to this point.
-
-You're back!! Congratulations, this must mean that you have successfully completed Part 1 for the snapshot server. That's great news!!
-
-### Setting Up The Primary And Snapshot Server Relationship
-
-We have some housekeeping to do before we continue. First, if you are operating in a production environment, you probably have access to a DNS server that you can use to set up IP-to-name resolution.
-
-In our lab, we do not have that luxury. Perhaps you have the same scenario. For this reason, we are going to add both servers' IP addresses and names to the /etc/hosts file on BOTH the primary and the snapshot server. You will need to do this as the root (or _sudo_) user.
-
-In our lab, the primary LXD server is running on 192.168.1.106 and the snapshot LXD server is running on 192.168.1.141. SSH into both servers and add the following to the /etc/hosts file:
-
-```
-192.168.1.106 lxd-primary
-192.168.1.141 lxd-snapshot
-```
-Next, we need to allow all traffic between the two servers. To do this, we will modify the /etc/firewall.conf file as follows. First, on the lxd-primary server, add this line:
-
-`IPTABLES -A INPUT -s 192.168.1.141 -j ACCEPT`
-
-And on the lxd-snapshot server, add this line:
-
-`IPTABLES -A INPUT -s 192.168.1.106 -j ACCEPT`
-
-This allows bi-directional traffic of all types to travel between the two servers.
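-
-If you chose the _firewalld_ setup from Part 1 instead of _iptables_, a rough equivalent (a sketch only) is to add each server's IP to the "trusted" zone on the opposite machine. With the lab addresses above this is already covered by the 192.168.1.0/24 source added in Part 1, but if your servers live on different networks you would need something like:
-
-```
-# on lxd-primary, trust the snapshot server
-firewall-cmd --zone=trusted --add-source=192.168.1.141 --permanent
-firewall-cmd --reload
-
-# on lxd-snapshot, trust the primary server
-firewall-cmd --zone=trusted --add-source=192.168.1.106 --permanent
-firewall-cmd --reload
-```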
-
-Next, as the "lxdadmin" user, we need to set up the trust relationship between the two machines. To do this, execute the following on lxd-primary:
-
-`lxc remote add lxd-snapshot`
-
-This will display the certificate to accept, so do that, and then you will be prompted for the password. This is the "trust password" that you set during the [LXD initialization](#lxdinit) step. Hopefully you are keeping track of all of these passwords securely. Once you enter the password, you should receive this message:
-
-`Client certificate stored at server: lxd-snapshot`
-
-It does not hurt to do this in reverse as well. In other words, set up the trust relationship on the lxd-snapshot server so that, if needed, snapshots can be sent back to the lxd-primary server. Simply repeat the steps, substituting "lxd-primary" for "lxd-snapshot".
-
-### Migrating Our First Snapshot
-
-Before we can migrate our first snapshot, the profiles that we created on lxd-primary need to exist on lxd-snapshot as well. In our case, this is the "macvlan" profile.
-
-You will need to create this profile on lxd-snapshot, so go back to [LXD Profiles](#profiles) and create the "macvlan" profile on lxd-snapshot. If your two servers have identical parent interface names ("enp3s0", for example), you can copy the "macvlan" profile over to lxd-snapshot without recreating it:
-
-`lxc profile copy macvlan lxd-snapshot`
-
-Now that we have all of the relationships and profiles set up, the next step is to actually send a snapshot from lxd-primary over to lxd-snapshot. If you have been following along exactly, you have probably deleted all of your snapshots, so let's create a new one:
-
-`lxc snapshot centos-test centos-snap1`
-
-If you run the "info" sub-command for lxc, you can see the new snapshot at the bottom of the listing:
-
-`lxc info centos-test`
-
-Which will show something like this at the bottom:
-
-`centos-snap1 (taken at 2021/05/13 16:34 UTC) (stateless)`
-
-OK, fingers crossed! Let's try to migrate our snapshot:
-
-`lxc copy centos-test/centos-snap1 lxd-snapshot:centos-test`
-
-What this command says is that, within the container centos-test, we want to send the snapshot centos-snap1 over to lxd-snapshot and copy it as centos-test.
-
-After a short period of time, the copy will be complete. Want to find out for sure? Do an "lxc list" on the lxd-snapshot server, which should return the following:
-
-```
-+-------------+---------+------+------+-----------+-----------+
-|    NAME     |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
-+-------------+---------+------+------+-----------+-----------+
-| centos-test | STOPPED |      |      | CONTAINER | 0         |
-+-------------+---------+------+------+-----------+-----------+
-```
-
-Success! Now let's try starting it. Since we are starting it on the lxd-snapshot server, we first need to stop it on the lxd-primary server:
-
-`lxc stop centos-test`
-
-And on the lxd-snapshot server:
-
-`lxc start centos-test`
-
-Assuming all of this works without errors, stop the container on lxd-snapshot and start it again on lxd-primary.
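-
-Because the remote was added on lxd-primary, you can also drive that last step from lxd-primary itself by prefixing the container name with the remote. A small sketch, assuming the trust relationship set up above:
-
-```
-# stop the copy that is running on the snapshot server
-lxc stop lxd-snapshot:centos-test
-
-# start the original again on the primary server
-lxc start centos-test
-```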
-
-### The Snapshot Server - Setting boot.autostart To Off For The Containers
-
-The snapshots copied over to lxd-snapshot will be off when they are migrated, but if a power event occurs, or if you need to reboot the snapshot server because of updates or anything else, you will have a problem, because the containers will attempt to start on the snapshot server.
-
-To eliminate this problem, you need to set the migrated containers so that they do not start when the server reboots. For our newly copied centos-test container, this is done with:
-
-`lxc config set centos-test boot.autostart 0`
-
-Do this for each snapshot on the lxd-snapshot server.
-
-### Automating The Snapshot Process
-
-OK, it is great to be able to create snapshots when you need to, and sometimes you _will need_ to create a snapshot manually. You could also copy it over to lxd-snapshot manually. BUT, once things are working and you have 25-30 containers or more running on your lxd-primary machine, the last thing you want to do is spend an afternoon deleting snapshots on the snapshot server, creating new snapshots, and sending them over.
-
-The first thing to do is to schedule a process to automate snapshot creation on lxd-primary. This has to be done for each container on the lxd-primary server, but once it is set up, it will take care of itself. The syntax is as follows. Note the similarity to a crontab entry for the timestamp:
-
-`lxc config set [container_name] snapshots.schedule "50 20 * * *"`
-
-This says to take a snapshot of the named container every day at 8:50 PM.
-
-To apply this to our centos-test container:
-
-`lxc config set centos-test snapshots.schedule "50 20 * * *"`
-
-We also want to set the snapshot name so that it is meaningful for our date. LXD uses UTC everywhere, so the best way to keep track of things is to set the snapshot name with a date/time in a more understandable format:
-
-`lxc config set centos-test snapshots.pattern "centos-test-{{ creation_date|date:'2006-01-02_15-04-05' }}"`
-
-GREAT, but we certainly do not want a new snapshot every day without getting rid of the old one, right? We would fill up the disk with snapshots. So next we run:
-
-`lxc config set centos-test snapshots.expiry 1d`
-
-### Automating The Snapshot Copy Process
-
-Again, this is done on lxd-primary. The first thing to do is to create a script that will be run by cron in /usr/local/sbin, called "refreshcontainers.sh":
-
-`sudo vi /usr/local/sbin/refreshcontainers.sh`
-
-The script is pretty simple:
-
-```
-#!/bin/bash
-# This script is for doing an lxc copy --refresh against each container, copying
-# and updating them to the snapshot server.
-
-for x in $(/var/lib/snapd/snap/bin/lxc ls -c n --format csv)
-   do echo "Refreshing $x"
-   /var/lib/snapd/snap/bin/lxc copy --refresh $x lxd-snapshot:$x
-   done
-
-```
-
-Make it executable:
-
-`sudo chmod +x /usr/local/sbin/refreshcontainers.sh`
-
-Change the ownership of this script to the lxdadmin user and group:
-
-`sudo chown lxdadmin.lxdadmin /usr/local/sbin/refreshcontainers.sh`
-
-Set up the crontab for the lxdadmin user to run this script, in this case at 10 PM:
-
-`crontab -e`
-
-The entry will look like this:
-
-`00 22 * * * /usr/local/sbin/refreshcontainers.sh > /home/lxdadmin/refreshlog 2>&1`
-
-Save your changes and exit.
-
-This will create a log in lxdadmin's home directory called "refreshlog", which will let you know whether the process worked or not. Very important!
-
-The automated procedure does fail sometimes. This generally happens when a particular container fails to refresh. You can manually re-run the refresh with the following command (assuming centos-test is the container):
-
-`lxc copy --refresh centos-test lxd-snapshot:centos-test`
-
-## Conclusion
-
-There is a lot to installing and using LXD effectively. It is certainly possible to install it on your laptop or workstation without much trouble, as it makes a great development and testing platform. If you want a more serious approach with production containers in use, your best bet is the primary and snapshot server approach described here.
-
-Even though we have touched on a lot of features and settings, we have only scratched the surface of what you can do with LXD. The best way to learn this system is to install it and try it with the things you actually use. If you find LXD useful, consider installing it in the way described in this document to make the most of your hardware for Linux containers. Rocky Linux works very well for this!
-
-At this point you can exit the document, or return to the [menu](#menu). If you like.
diff --git a/docs/guides/containers/lxd_server.md b/docs/guides/containers/lxd_server.md
deleted file mode 100644
index 5548ea7a37..0000000000
--- a/docs/guides/containers/lxd_server.md
+++ /dev/null
@@ -1,1239 +0,0 @@
----
-title: LXD Server
-author: Steven Spencer
-contributors: Ezequiel Bruni
-tested with: 8.5, 8.6
-tags:
-  - lxd
-  - enterprise
----
-
-# Creating a full LXD Server
-
-## Introduction
-
-LXD is best described on the [official website](https://linuxcontainers.org/lxd/introduction/), but think of it as a container system that provides the benefits of virtual servers in a container, or a container on steroids.
-
-It is very powerful, and with the right hardware and set up, can be leveraged to run a lot of server instances on a single piece of hardware. If you pair that with a snapshot server, you also have a set of containers that you can spin up almost immediately in the event that your primary server goes down.
-
-(You should not think of this as a traditional backup. You still need a regular backup system of some sort, like [rsnapshot](../backup/rsnapshot_backup.md).)
-
-The learning curve for LXD can be a bit steep, but this document will attempt to give you a wealth of knowledge at your fingertips, to help you deploy and use LXD on Rocky Linux.
-
-## Prerequisites And Assumptions
-
-* One Rocky Linux server, nicely configured.
You should consider a separate hard drive for ZFS disk space (you have to if you are using ZFS) in a production environment. And yes, we are assuming this is a bare metal server, not a VPS. -* This should be considered an advanced topic, but we have tried our best to make it as easy to understand as possible for everyone. That said, knowing a few basic things about container management will take you a long way. -* You should be very comfortable at the command line on your machine(s), and fluent in a command line editor. (We are using _vi_ throughout this example, but you can substitute in your favorite editor.) -* You need to be an unprivileged user for the bulk of the LXD processes. Except where noted, enter LXD commands as your unprivileged user. We are assuming that you are logged in as a user named "lxdadmin" for LXD commands. The bulk of the set up _is_, done as root until you get past the LXD initialization. We will have you create the "lxdadmin" user later in the process. -* For ZFS, make sure that UEFI secure boot is NOT enabled. Otherwise, you will end up having to sign the ZFS module in order to get it to load. -* We will, for the moment, be using CentOS-based containers, as LXC does not yet have Rocky Linux images. Stay tuned for updates, because this will likely change with time. - -!!! Note - - This has changed! Feel free to substitute in Rocky Linux containers in the examples below. - -## Part 1 : Getting The Environment Ready - -Throughout "Part 1" you will need to be the root user or you will need to be able to _sudo_ to root. - -### Install EPEL and OpenZFS Repositories - -LXD requires the EPEL (Extra Packages for Enterprise Linux) repository, which is easy to install using: - -`dnf install epel-release` - -Once installed, check for updates: - -`dnf update` - -If you're using ZFS, install the OpenZFS repository with: - -`dnf install https://zfsonlinux.org/epel/zfs-release.el8_3.noarch.rpm` - -We also need the GPG key, so use this command to get that: - -`gpg --import --import-options show-only /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux` - -If there were kernel updates during the update process above, reboot your server - -### Install snapd, dkms And vim - -LXD must be installed from a snap for Rocky Linux. For this reason, we need to install snapd (and a few other useful programs) with: - -`dnf install snapd dkms vim` - -And now enable and start snapd: - -`systemctl enable snapd` - -And then run: - -`systemctl start snapd` - -Reboot the server before continuing here. - -### Install LXD - -Installing LXD requires the use of the snap command. At this point, we are just installing it, we are doing no set up: - -`sudo snap install lxd` - -### Install OpenZFS - -`dnf install kernel-devel zfs` - -### Environment Set up - -Most server kernel settings are not sufficient to run a large number of containers. If we assume from the beginning that we will be using our server in production, then we need to make these changes up front to avoid errors such as "Too many open files" from occurring. - -Luckily, tweaking the settings for LXD is easy with a few file modifications and a reboot. - -#### Modifying limits.conf - -The first file we need to modify is the limits.conf file. This file is self-documented, so look at the explanations in the file as to what this file does. To make our modifications type: - -`vi /etc/security/limits.conf` - -This entire file is remarked/commented out and, at the bottom, shows the current default settings. 
In the blank space above the end of file marker (#End of file) we need to add our custom settings. The end of the file will look like this when you are done: - -``` -# Modifications made for LXD - -* soft nofile 1048576 -* hard nofile 1048576 -root soft nofile 1048576 -root hard nofile 1048576 -* soft memlock unlimited -* hard memlock unlimited -``` - -Save your changes and exit. (`SHIFT:wq!` for _vi_) - -#### Modifying sysctl.conf With 90-lxd.override.conf - -With _systemd_, we can make changes to our system's overall configuration and kernel options *without* modifying the main configuration file. Instead, we'll put our settings in a separate file that will simply override the particular settings we need. - -To make these kernel changes, we are going to create a file called _90-lxd-override.conf_ in /etc/sysctl.d. To do this type: - -`vi /etc/sysctl.d/90-lxd-override.conf` - -Place the following content in that file. Note that if you are wondering what we are doing here, the file content below is self-documenting: - -``` -## The following changes have been made for LXD ## - -# fs.inotify.max_queued_events specifies an upper limit on the number of events that can be queued to the corresponding inotify instance - - (default is 16384) - -fs.inotify.max_queued_events = 1048576 - -# fs.inotify.max_user_instances This specifies an upper limit on the number of inotify instances that can be created per real user ID - -(default value is 128) - -fs.inotify.max_user_instances = 1048576 - -# fs.inotify.max_user_watches specifies an upper limit on the number of watches that can be created per real user ID - (default is 8192) - -fs.inotify.max_user_watches = 1048576 - -# vm.max_map_count contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of cal -ling malloc, directly by mmap and mprotect, and also when loading shared libraries - (default is 65530) - -vm.max_map_count = 262144 - -# kernel.dmesg_restrict denies container access to the messages in the kernel ring buffer. Please note that this also will deny access t -o non-root users on the host system - (default is 0) - -kernel.dmesg_restrict = 1 - -# This is the maximum number of entries in ARP table (IPv4). You should increase this if you create over 1024 containers. - -net.ipv4.neigh.default.gc_thresh3 = 8192 - -# This is the maximum number of entries in ARP table (IPv6). You should increase this if you plan to create over 1024 containers.Not nee -ded if not using IPv6, but... - -net.ipv6.neigh.default.gc_thresh3 = 8192 - -# This is a limit on the size of eBPF JIT allocations which is usually set to PAGE_SIZE * 40000. - -net.core.bpf_jit_limit = 3000000000 - -# This is the maximum number of keys a non-root user can use, should be higher than the number of containers - -kernel.keys.maxkeys = 2000 - -# This is the maximum size of the keyring non-root users can use - -kernel.keys.maxbytes = 2000000 - -# This is the maximum number of concurrent async I/O operations. You might need to increase it further if you have a lot of workloads th -at use the AIO subsystem (e.g. MySQL) - -fs.aio-max-nr = 524288 -``` - -At this point you should reboot the server. - -#### Checking _sysctl.conf_ Values - -Once the reboot has been completed, log back in as to the server. We need to spot check that our override file has actually done the job. - -This is easy to do. There's no need to check every setting unless you want to, but checking a few will verify that the settings have been changed. 
This is done with the _sysctl_ command: - -`sysctl net.core.bpf_jit_limit` - -Which should show you: - -`net.core.bpf_jit_limit = 3000000000` - -Do the same with a few other settings in the override file (above) to verify that changes have been made. - -### Enabling ZFS And Setting Up The Pool - -If you have UEFI secure boot turned off, this should be fairly easy. First, load the ZFS module with modprobe: - -`/sbin/modprobe zfs` - -This should not return an error, it should simply return to the command prompt when done. If you get an error, stop now and begin troubleshooting. Again, make sure that secure boot is off as that will be the most likely culprit. - -Next we need to take a look at the disks on our system, determine what has the OS loaded on it, and what is available to use for the ZFS pool. We will do this with _lsblk_: - -`lsblk` - -Which should return something like this (your system will be different!): - -``` -AME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT -loop0 7:0 0 32.3M 1 loop /var/lib/snapd/snap/snapd/11588 -loop1 7:1 0 55.5M 1 loop /var/lib/snapd/snap/core18/1997 -loop2 7:2 0 68.8M 1 loop /var/lib/snapd/snap/lxd/20037 -sda 8:0 0 119.2G 0 disk -├─sda1 8:1 0 600M 0 part /boot/efi -├─sda2 8:2 0 1G 0 part /boot -├─sda3 8:3 0 11.9G 0 part [SWAP] -├─sda4 8:4 0 2G 0 part /home -└─sda5 8:5 0 103.7G 0 part / -sdb 8:16 0 119.2G 0 disk -├─sdb1 8:17 0 119.2G 0 part -└─sdb9 8:25 0 8M 0 part -sdc 8:32 0 149.1G 0 disk -└─sdc1 8:33 0 149.1G 0 part -``` - -In this listing, we can see that */dev/sda* is in use by the operating system, so we are going to use */dev/sdb* for our zpool. Note that if you have multiple free hard drives, you may wish to consider using raidz (a software raid specifically for ZFS). - -That falls outside the scope of this document, but should definitely be a consideration for production, as it offers better performance and redundancy. For now, let's create our pool on the single device we have identified: - -`zpool create storage /dev/sdb` - -What this says is to create a pool called "storage" that is ZFS on the device */dev/sdb*. - -Once the pool is created, it's a good idea to reboot the server again at this point. - -### LXD Initialization - -Now that the environment is all set up, we are ready to initialize LXD. This is an automated script that asks a series of questions to get your LXD instance up and running: - -`lxd init` - -Here are the questions and our answers for the script, with a little explanation where warranted: - -`Would you like to use LXD clustering? (yes/no) [default=no]:` - -If you are interested in clustering, do some additional research on that [here](https://lxd.readthedocs.io/en/latest/clustering/) - -`Do you want to configure a new storage pool? (yes/no) [default=yes]:` - -This may seem counter-intuitive, since we have already created our ZFS pool, but it will be resolved in a later question. Accept the default. - -`Name of the new storage pool [default=default]: storage` - -You could leave this as default if you wanted to, but we have chosen to use the same name we gave our ZFS pool. - -`Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]:` - -Obviously we want to accept the default. - -`Create a new ZFS pool? (yes/no) [default=yes]: no` - -Here's where the earlier question about creating a storage pool is resolved. - -`Name of the existing ZFS pool or dataset: storage` - -`Would you like to connect to a MAAS server? (yes/no) [default=no]:` - -Metal As A Service (MAAS) is outside the scope of this document. 
- -`Would you like to create a new local network bridge? (yes/no) [default=yes]:` - -`What should the new bridge be called? [default=lxdbr0]: ` - -`What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:` - -`What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none` - -If you want to use IPv6 on your LXD containers, you can turn on this option. That is up to you. - -`Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes` - -This is necessary to snapshot the server, so answer "yes" here. - -`Address to bind LXD to (not including port) [default=all]:` - -`Port to bind LXD to [default=8443]:` - -`Trust password for new clients:` - -`Again:` - -This trust password is how you will connect to the snapshot server or back from the snapshot server, so set this with something that makes sense in your environment. Save this entry to a secure location, such as a password manager. - -`Would you like stale cached images to be updated automatically? (yes/no) [default=yes]` - -`Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:` - -#### Setting Up User Privileges - -Before we continue on, we need to create our "lxdadmin" user and make sure that it has the privileges it needs. We need the "lxdadmin" user to be able to _sudo_ to root and we need it to be a member of the lxd group. To add the user and make sure it is a member of both groups do: - -`useradd -G wheel,lxd lxdadmin` - -Then set the password: - -`passwd lxdadmin` - -As with the other passwords, save this to a secure location. - -### Firewall Set Up - iptables - -!!! note "A note regarding Rocky Linux 9.0" - - Starting with Rocky Linux 9.0, `iptables` and all of the associated utilities are officially deprecated. This means that in future versions of the OS, perhaps as early as 9.1, they will disappear altogether. For this reason, you should skip down to the `firewalld` procedure below before continuing. - -Before continuing, you will want a firewall set up on your server. This example is using _iptables_ and [this procedure](../security/enabling_iptables_firewall.md) to disable _firewalld_. If you prefer to use _firewalld_, simply substitute in _firewalld_ rules using the instructions below this section. - -Create your firewall.conf script: - -`vi /etc/firewall.conf` - -We are assuming an LXD server on a LAN network of 192.168.1.0/24 below. Note, too, that we are accepting all traffic from our bridged interface. This is important if you want your containers to get IP addresses from the bridge. - -This firewall script makes no other assumptions about the network services needed. There is an SSH rule to allow our LAN network IP's to SSH into the server. You can very easily have many more rules needed here, depending on your environment. Later, we will be adding a rule for bi-directional traffic between our production server and the snapshot server. - -``` -#!/bin/sh -# -#IPTABLES=/usr/sbin/iptables - -# Unless specified, the defaults for OUTPUT is ACCEPT -# The default for FORWARD and INPUT is DROP -# -echo " clearing any existing rules and setting default policy.." 
-iptables -F INPUT -iptables -P INPUT DROP -iptables -A INPUT -i lxdbr0 -j ACCEPT -iptables -A INPUT -p tcp -m tcp -s 192.168.1.0/24 --dport 22 -j ACCEPT -iptables -A INPUT -i lo -j ACCEPT -iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -iptables -A INPUT -p tcp -j REJECT --reject-with tcp-reset -iptables -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable - -/usr/sbin/service iptables save -``` -### Firewall Set Up - firewalld - -For _firewalld_ rules, we need to use [this basic procedure](../security/firewalld.md) or be familiar with those concepts. Our assumptions are the same as with the _iptables_ rules above: LAN network of 192.168.1.0/24 and a bridge named lxdbr0. To be clear, you might have multiple interfaces on your LXD server, with one perhaps facing your WAN as well. We are also going to create a zone for the bridged and local networks. This is just for zone clarity sake, as the other names do not really apply. The below assumes that you already know the basics of _firewalld_. - -``` -firewall-cmd --new-zone=bridge --permanent -``` - -You need to reload the firewall after adding a zone: - -``` -firewall-cmd --reload -``` - -We want to allow all traffic from the bridge, so let's just add the interface, and then change the target from "default" to "ACCEPT" and we will be done: - -!!! attention - - Changing the target of a firewalld zone *must* be done with the --permanent option, so we might as well just enter that flag in our other commands as well and forgo the --runtime-to-permanent option. - -!!! Note - - If you need to create a zone that you want to allow all access to the interface or source, but do not want to have to specify any protocols or services, then you *must* change the target from "default" to ACCEPT. The same is true of DROP and REJECT for a particular IP block that you have custom zones for. To be clear, the "drop" zone will take care of that for you as long as you aren't using a custom zone. - -``` -firewall-cmd --zone=bridge --add-interface=lxdbr0 --permanent -firewall-cmd --zone=bridge --set-target=ACCEPT --permanent -``` -Assuming no errors and everything is still working just do a reload: - -``` -firewall-cmd --reload -``` -If you list out your rules now with `firewall-cmd --zone=bridge --list-all` you should see something like the following: - -``` -bridge (active) - target: ACCEPT - icmp-block-inversion: no - interfaces: lxdbr0 - sources: - services: - ports: - protocols: - forward: no - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` -Note from the _iptables_ rules, that we also want to allow our local interface. 
Again, the built-in zone names do not really fit here, so create a new zone and use the source IP range for the local interface to make sure you have access:
-
-```
-firewall-cmd --new-zone=local --permanent
-firewall-cmd --reload
-```
-Then we just need to add the source IPs for the local interface, change the target to "ACCEPT", and we are done with this as well:
-
-```
-firewall-cmd --zone=local --add-source=127.0.0.1/8 --permanent
-firewall-cmd --zone=local --set-target=ACCEPT --permanent
-firewall-cmd --reload
-```
-Go ahead and list out the "local" zone to make sure your rules are there with `firewall-cmd --zone=local --list-all`, which should show you something like this:
-
-```
-local (active)
-  target: ACCEPT
-  icmp-block-inversion: no
-  interfaces:
-  sources: 127.0.0.1/8
-  services:
-  ports:
-  protocols:
-  forward: no
-  masquerade: no
-  forward-ports:
-  source-ports:
-  icmp-blocks:
-  rich rules:
-```
-
-Next we want to allow SSH from our trusted network. We will use the source IPs here, just like in our _iptables_ example, and the built-in "trusted" zone. The target for this zone is already "ACCEPT" by default.
-
-```
-firewall-cmd --zone=trusted --add-source=192.168.1.0/24
-```
-Then add the service to the zone:
-
-```
-firewall-cmd --zone=trusted --add-service=ssh
-```
-And if everything is working, move your rules to permanent and reload the rules:
-
-```
-firewall-cmd --runtime-to-permanent
-firewall-cmd --reload
-```
-Listing out your "trusted" zone should now show you something like this:
-
-```
-trusted (active)
-  target: ACCEPT
-  icmp-block-inversion: no
-  interfaces:
-  sources: 192.168.1.0/24
-  services: ssh
-  ports:
-  protocols:
-  forward: no
-  masquerade: no
-  forward-ports:
-  source-ports:
-  icmp-blocks:
-  rich rules:
-```
-By default, the "public" zone is enabled and has SSH allowed. We do not want this. Make sure that your zones are correct and that the access you are getting to the server is via one of the LAN IPs (in the case of our example) and is allowed to SSH. You could lock yourself out of the server if you do not verify this before continuing. Once you have made sure you have access from the correct interface, remove SSH from the "public" zone:
-
-```
-firewall-cmd --zone=public --remove-service=ssh
-```
-Test access and make sure you are not locked out. If not, then move your rules to permanent, reload, and list out zone "public" to be sure that SSH is removed:
-
-```
-firewall-cmd --runtime-to-permanent
-firewall-cmd --reload
-firewall-cmd --zone=public --list-all
-```
-There may be other interfaces on your server to consider. You can use built-in zones where appropriate, but if you do not like the names (they do not appear logical, etc.), you can definitely add zones. Just remember that if you have no services or protocols that you need to allow or reject specifically, then you will need to modify the zone target. If it works to use interfaces, as we have done with the bridge, you can do that. If you need more granular access to services, use source IPs instead.
-
-This completes Part 1. You can either continue on to Part 2, or return to the [menu](#menu). If you are working on the snapshot server, you can head down to [Part 5](#part5) now.
-
-## Part 2 : Setting Up And Managing Images
-
-Throughout Part 2, and from here on out unless otherwise noted, you will be running commands as your unprivileged user. ("lxdadmin" if you are following along with this document).
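-
-A quick sanity check (just a sketch) before you start creating containers: as the unprivileged user, confirm that you are in the `lxd` group and that the `lxc` client answers without a permissions error:
-
-```
-# run as the "lxdadmin" user; "lxd" should appear in the group list
-id -nG
-
-# should print the (currently empty) container list rather than an error
-lxc list
-```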
- -### List Available Images - -Once you have your server environment set up, you'll probably be itching to get started with a container. There are a _lot_ of container OS possibilities. To get a feel for how many possibilities, enter this command: - -`lxc image list images: | more` - -Hit the space bar to page through the list. This list of containers and virtual machines continues to grow. For now, we are sticking with containers. - -The last thing you want to do is to page through looking for a container image to install, particularly if you know the image that you want to create. Let's modify the command above to show only CentOS Linux install options: - -`lxc image list images: | grep centos/8` - -This brings up a much more manageable list: - -``` -| centos/8 (3 more) | 98b4dbef0c29 | yes | Centos 8 amd64 (20210427_07:08) | x86_64 | VIRTUAL-MACHINE | 517.44MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8 (3 more) | 0427669ebee4 | yes | Centos 8 amd64 (20210427_07:08) | x86_64 | CONTAINER | 125.58MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream (3 more) | 961170f8934f | yes | Centos 8-Stream amd64 (20210427_07:08) | x86_64 | VIRTUAL-MACHINE | 586.44MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream (3 more) | e507fdc8935a | yes | Centos 8-Stream amd64 (20210427_07:08) | x86_64 | CONTAINER | 130.33MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/arm64 (1 more) | e5bf98409ac6 | yes | Centos 8-Stream arm64 (20210427_10:33) | aarch64 | CONTAINER | 126.56MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/cloud (1 more) | 5751ca14bf8f | yes | Centos 8-Stream amd64 (20210427_07:08) | x86_64 | CONTAINER | 144.75MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/cloud (1 more) | ccf0bb20b0ca | yes | Centos 8-Stream amd64 (20210427_07:08) | x86_64 | VIRTUAL-MACHINE | 593.31MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/cloud/arm64 | db3d915d12fd | yes | Centos 8-Stream arm64 (20210427_07:08) | aarch64 | CONTAINER | 140.60MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/cloud/ppc64el | 11aa2ab878b2 | yes | Centos 8-Stream ppc64el (20210427_07:08) | ppc64le | CONTAINER | 149.45MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8-Stream/ppc64el (1 more) | a27665203e47 | yes | Centos 8-Stream ppc64el (20210427_07:08) | ppc64le | CONTAINER | 134.52MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/arm64 (1 more) | d64396d47fa7 | yes | Centos 8 arm64 (20210427_07:08) | aarch64 | CONTAINER | 121.83MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/cloud (1 more) | 84803ca6e32d | yes | Centos 8 amd64 (20210427_07:08) | x86_64 | CONTAINER | 140.42MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/cloud (1 more) | c98196cd9eec | yes | Centos 8 amd64 (20210427_07:08) | x86_64 | VIRTUAL-MACHINE | 536.00MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/cloud/arm64 | 9d06684a9a4e | yes | Centos 8 arm64 (20210427_10:33) | aarch64 | CONTAINER | 136.49MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/cloud/ppc64el | 18c13c448349 | yes | Centos 8 ppc64el (20210427_07:08) | ppc64le | CONTAINER | 144.66MB | Apr 27, 2021 at 12:00am (UTC) | -| centos/8/ppc64el (1 more) | 130c1c83c36c | yes | Centos 8 ppc64el (20210427_07:08) | ppc64le | CONTAINER | 129.53MB | Apr 27, 2021 at 12:00am (UTC) | -``` - -### Installing, Renaming, And Listing Images - -For the first container, we are going to choose centos/8. To install it, we *could* use: - -`lxc launch images:centos/8 centos-test` - -That will create a CentOS-based containter named "centos-test". 
You can rename a container after it has been created, but you first need to stop the container, which starts automatically when it is launched. - -To start the container manually, use: - -`lxc start centos-test` - -For the purposes of this guide, go ahead and install one more image for now: - -`lxc launch images:ubuntu/20.10 ubuntu-test` - -Now let's take a look at what we have so far by listing our images: - -`lxc list` - -which should return something like this: - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 10.199.182.72 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -### LXD Profiles - -You get a default profile when you install LXD, and this profile cannot be removed or modified. That said, you can use the default profile to create new profiles to use with your containers. - -If you look at our container listing (above) you will notice that the IP address in each case is assigned from the bridged interface. In a production environment, you may want to use something else. This might be a DHCP assigned address from your LAN interface or even a statically assigned address from your WAN. - -If you configure your LXD server with two interfaces, and assign each an IP on your WAN and LAN, then it is possible to assign your containers IP addresses based on which interface the container needs to be facing. - -As of version 8 of Rocky Linux (and really any bug for bug copy of Red Hat Enterprise Linux, such as CentOS in our listing above) the method for assigning IP addresses statically or dynamically using the profiles below, is broken out of the gate. - -There are ways to get around this, but it is annoying, as the feature that is broken _should be_ part of the Linux kernel. That feature is macvlan. Macvlan allows you to create multiple interfaces with different Layer 2 addresses. - -For now, just be aware that what we are going to suggest next has drawbacks when choosing container images based on RHEL. - -#### Creating A macvlan Profile And Assigning It - -To create our macvlan profile, simply use this command: - -`lxc profile create macvlan` - -Keep in mind that if we were on a multi-interface machine and wanted more than one macvlan template based on which network we wanted to reach, we could use "lanmacvlan" or "wanmacvlan" or any other name that we wanted to use to identify the profile. In other words, using "macvlan" in our profile create statement is totally up to you. - -Once the profile is created, we now need to modify it to do what we want. First, we need to make sure that the server's default editor is what we want to use. If we don't do this step, the editor will be whatever the default editor is. We are choosing _vim_ for our editor here: - -`export EDITOR=/usr/bin/vim` - -Now we want to modify the macvlan interface, but before we do, we need to know what the parent interface is for our LXD server. This will be the interface that has a LAN (in this case) assigned IP. 
To determine which interface that is, use: - -`ip addr` - -And then look for the interface with the LAN IP assignment in the 192.168.1.0/24 network: - -``` -2: enp3s0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 - link/ether 40:16:7e:a9:94:85 brd ff:ff:ff:ff:ff:ff - inet 192.168.1.106/24 brd 192.168.1.255 scope global dynamic noprefixroute enp3s0 - valid_lft 4040sec preferred_lft 4040sec - inet6 fe80::a308:acfb:fcb3:878f/64 scope link noprefixroute - valid_lft forever preferred_lft forever -``` - -So in this case, the interface would be "enp3s0". - -Now let's modify the profile: - -`lxc profile edit macvlan` - -This file will be self-documented at the top. What we need to do is modify the file as follows below the commented section: - -``` -config: {} -description: "" -devices: - eth0: - name: eth0 - nictype: macvlan - parent: enp3s0 - type: nic -name: macvlan -used_by: [] -``` - -Obviously, you can use profiles for lots of other things, but assigning a static IP to a container, or using your own DHCP server as a source for an address are very common needs. - -To assign the macvlan profile to centos-test we need to do the following: - -`lxc profile assign centos-test default,macvlan` - -This simply says, we want the default profile, and then we want to apply the macvlan profile as well. - -#### CentOS macvlan - -In the CentOS implementation of Network Manager, they have managed to break the functionality of macvlan in the kernel, or at least in the kernel applied to their LXD image. This has been this way since CentOS 8 was released and no one appears to be at all concerned about a fix. - -Simply put, if you want to run CentOS 8 containers (or any other RHEL 1-for-1 release, such as Rocky Linux), you've got to jump through some additional hoops to get macvlan to work. macvlan is part of the kernel, so it should work without the below fixes, but it doesn't. - -##### CentOS macvlan - The DHCP Fix - -Having the profile assigned, however, doesn't change the default configuration, which is set to DHCP by default. - -To test this, simply do the following: - -`lxc stop centos-test` - -And then: - -`lxc start centos-test` - -Now list your containers again and note that centos-test does not have an IP address anymore: - -`lxc list` - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -To further demonstrate the problem here, we need to execute `dhclient` on the container. 
You can do this with: - -`lxc exec centos-test dhclient` - -A new listing using `lxc list` now shows the following: - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 192.168.1.138 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -That should have happened with a simple stop and start of the container, but it does not. Assuming that we want to use a DHCP assigned IP address every time, then we can fix this with a simple crontab entry. To do this, we need to gain shell access to the container by entering: - -`lxc exec centos-test bash` - -Next, lets determine the complete path to `dhclient`: - -`which dhclient` - -which should return: - -`/usr/sbin/dhclient` - -Next, let's modify root's crontab: - -`crontab -e` - -And add this line: - -`@reboot /usr/sbin/dhclient` - -The crontab command entered above, uses _vi_ so to save your changes and exit simply use: - -`SHIFT:wq!` - -Now exit the container and stop centos-test: - -`lxc stop centos-test` - -and then start it again: - -`lxc start centos-test` - -A new listing will reveal that the container has been assigned the DHCP address: - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 192.168.1.138 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -##### CentOS macvlan - The Static IP Fix - -To statically assign an IP address, things get even more convoluted. The process of setting a static IP address on a CentOS container is through the network-scripts, which we will do now. The IP we will attempt to assign is 192.168.1.200. - -To do this, we need to gain shell access to the container again: - -`lxc exec centos-test bash` - -The next thing we need to do is to manually modify the interface labelled "eth0", and set our IP address. To modify our configuration, do the following: - -`vi /etc/sysconfig/network-scripts/ifcfg-eth0` - -Which will return this: - -``` -DEVICE=eth0 -BOOTPROTO=dhcp -ONBOOT=yes -HOSTNAME=centos-test -TYPE=Ethernet -MTU= -DHCP_HOSTNAME=centos-test -IPV6INIT=yes -``` - -We need to modify this file so that it looks like this: - -``` -DEVICE=eth0 -BOOTPROTO=none -ONBOOT=yes -IPADDR=192.168.1.200 -PREFIX=24 -GATEWAY=192.168.1.1 -DNS1=8.8.8.8 -DNS2=8.8.4.4 -HOSTNAME=centos-test -TYPE=Ethernet -MTU= -DHCP_HOSTNAME=centos-test -IPV6INIT=yes -``` - -This says we want to set the boot protocol to none (used for static IP assignments), set the IP address to 192.168.1.200, that this address is part of a CLASS C (PREFIX=24) address, that the gateway for this network is 192.168.1.1 and then that we want to use Google's open DNS servers for name resolution. - -Save your file (`SHIFT:wq!`). - -We also need to remove our crontab for root, as this isn't what we want for a static IP. 
To do this, simply `crontab -e` and remark out the @reboot line with a "#", save your changes and exit the container. - -Stop the container with: - -`lxc stop centos-test` - -and start it again: - -`lxc start centos-test` - -Just like our DHCP assigned address, the statically assigned address will not be assigned when we list the container: - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -To fix this requires breaking Network Manager on the container. The following works-at least for now: - -`lxc exec centos-test dhclient` - -Then get into the container: - -`lxc exec centos-test bash` - -Install the old network scripts: - -`dnf install network-scripts` - -Nuke Network Manager: - -`systemctl stop NetworkManager` -`systemctl disable NetworkManager` - -Enable the old Network service: - -`systemctl enable network.service` - -Exit the container and then stop and start the container again: - -`lxc stop centos-test` - -And then run: - -`lxc start centos-test` - -When the container starts, a new listing will show the correct statically assigned IP: - -``` -+-------------+---------+-----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 192.168.1.200 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 10.199.182.236 (eth0) | | CONTAINER | 0 | -+-------------+---------+-----------------------+------+-----------+-----------+ -``` - -The issue with macvlan shown in both of these examples is directly related to containers based on Red Hat Enterprise Linux (Centos 8, Rocky Linux 8). - -#### Ubuntu macvlan - -Luckily, In Ubuntu's implementation of Network Manager, the macvlan stack is NOT broken, so it is much easier to deploy! - -First, just like with our centos-test container, we need to assign the template to our container: - -`lxc profile assign ubuntu-test default,macvlan` - -That should be all that is necessary to get a DHCP assigned address. To find out, stop and then start the container again: - -`lxc stop ubuntu-test` - -And then run: - -`lxc start ubuntu-test` - -Then list the containers again: - -``` -+-------------+---------+----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 192.168.1.200 (eth0) | | CONTAINER | 0 | -+-------------+---------+----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 192.168.1.139 (eth0) | | CONTAINER | 0 | -+-------------+---------+----------------------+------+-----------+-----------+ -``` - -Success! - -Configuring the Static IP is just a little different, but not at all hard. We need to modify the .yaml file associated with the container's connection (/10-lxc.yaml). 
For this static IP, we will use 192.168.1.201: - -`vi /etc/netplan/10-lxc.yaml` - -And change what is there to the following: - -``` -network: - version: 2 - ethernets: - eth0: - dhcp4: false - addresses: [192.168.1.201/24] - gateway4: 192.168.1.1 - nameservers: - addresses: [8.8.8.8,8.8.4.4] -``` - -Save your changes (`SHFT:wq!`) and exit the container. - -Now stop and start the container: - -`lxc stop ubuntu-test` - -And then run: - -`lxc start ubuntu-test` - -When you list your containers again, you should see our new static IP: - -``` -+-------------+---------+----------------------+------+-----------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+----------------------+------+-----------+-----------+ -| centos-test | RUNNING | 192.168.1.200 (eth0) | | CONTAINER | 0 | -+-------------+---------+----------------------+------+-----------+-----------+ -| ubuntu-test | RUNNING | 192.168.1.201 (eth0) | | CONTAINER | 0 | -+-------------+---------+----------------------+------+-----------+-----------+ -``` - -Success! - -In the examples used in Part 2, we have intentionally chosen a hard container to configure, and an easy one. There are obviously many more versions of Linux available in the image listing. If you have a favorite, try installing it, assigning the macvlan template, and setting IP's. - -This completes Part 2. You can either continue on to Part 3, or return to the [menu](#menu). - -## Part 3 : Container Configuration Options - -There are a wealth of options for configuring the container once you have it installed. Before we get into how to see those, however, let's take a look at the info command for a container. In this example, we will use the ubuntu-test container: - -`lxc info ubuntu-test` - -This shows something like the following: - -``` -Name: ubuntu-test -Location: none -Remote: unix:// -Architecture: x86_64 -Created: 2021/04/26 15:14 UTC -Status: Running -Type: container -Profiles: default, macvlan -Pid: 584710 -Ips: - eth0: inet 192.168.1.201 enp3s0 - eth0: inet6 fe80::216:3eff:fe10:6d6d enp3s0 - lo: inet 127.0.0.1 - lo: inet6 ::1 -Resources: - Processes: 13 - Disk usage: - root: 85.30MB - CPU usage: - CPU usage (in seconds): 1 - Memory usage: - Memory (current): 99.16MB - Memory (peak): 110.90MB - Network usage: - eth0: - Bytes received: 53.56kB - Bytes sent: 2.66kB - Packets received: 876 - Packets sent: 36 - lo: - Bytes received: 0B - Bytes sent: 0B - Packets received: 0 - Packets sent: 0 -``` - -There's a lot of good information there, from the profiles applied, to the memory in use, disk space in use, and more. - -#### A Word About Configuration And Some Options - -By default, LXD will allocate the required system memory, disk space, CPU cores, etc., to the container. But what if we want to be more specific? That is totally possible. - -There are trade-offs to doing this, though. For instance, if we allocate system memory and the container doesn't actually use it all, then we have kept it from another container that might actually need it. The reverse, though, can happen. If a container is a complete pig on memory, then it can keep other containers from getting enough, thereby pinching their performance. - -Just keep in mind that every action you make to configure a container _can_ have negative effects somewhere else. - -Rather than run through all of the options for configuration, use the tab auto-complete to see the options available: - -`lxc config set ubuntu-test` and then hit TAB. 
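-
-To see what is already set on an instance, rather than what could be set, `lxc config show` is handy. A short sketch; the `--expanded` flag also folds in the values inherited from profiles:
-
-```
-# configuration keys set directly on the container
-lxc config show ubuntu-test
-
-# the same, with settings inherited from the default and macvlan profiles included
-lxc config show ubuntu-test --expanded
-```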
-
-Hitting TAB shows you all of the options for configuring a container. If you have questions about what one of the configuration options does, head up to the [official documentation for LXD](https://lxd.readthedocs.io/en/stable-4.0/instances/) and do a search for the configuration parameter, or Google the entire string, such as "lxc config set limits.memory", and take a look at the results of the search.
-
-We will look at a few of the most used configuration options. For example, if you want to set the max amount of memory that a container can use:
-
-`lxc config set ubuntu-test limits.memory 2GB`
-
-That says that as long as the memory is available to use, in other words there is 2GB of memory free, then the container can actually use more than 2GB if it's available. It's a soft limit, in other words.
-
-`lxc config set ubuntu-test limits.memory.enforce 2GB`
-
-That says that the container can never use more than 2GB of memory, whether it's currently available or not. In this case it's a hard limit.
-
-`lxc config set ubuntu-test limits.cpu 2`
-
-That says to limit the number of CPU cores that the container can use to 2.
-
-Remember when we set up our storage pool in the [Enabling ZFS And Setting Up The Pool](#zfssetup) section above? We named the pool "storage," but we could have named it anything. If we want to look at this, we can use this command:
-
-`lxc storage show storage`
-
-This shows the following:
-
-```
-config:
-  source: storage
-  volatile.initial_source: storage
-  zfs.pool_name: storage
-description: ""
-name: storage
-driver: zfs
-used_by:
-- /1.0/images/0cc65b6ca6ab61b7bc025e63ca299f912bf8341a546feb8c2f0fe4e83843f221
-- /1.0/images/4f0019aee1515c109746d7da9aca6fb6203b72f252e3ee3e43d50b942cdeb411
-- /1.0/images/9954953f2f5bf4047259bf20b9b4f47f64a2c92732dbc91de2be236f416c6e52
-- /1.0/instances/centos-test
-- /1.0/instances/ubuntu-test
-- /1.0/profiles/default
-status: Created
-locations:
-- none
-```
-
-This shows that all of our containers are using our zfs storage pool. When using ZFS, you can also set a disk quota on a container. Let's do this by setting a 2GB disk quota on the ubuntu-test container. You do this with:
-
-`lxc config device override ubuntu-test root size=2GB`
-
-As stated earlier, you should use configuration options sparingly, unless you've got a container that wants to use way more than its share of resources. LXD, for the most part, will manage the environment well on its own.
-
-There are, of course, many more options that may be of interest to some people. You should do your own research to find out if any of those are of value in your environment.
-
-This completes Part 3. You can either continue on to Part 4, or return to the [menu](#menu).
-
-## Part 4: Container Snapshots
-
-Container snapshots, along with a snapshot server (which we will get to in more detail later), are probably the most important aspect of running a production LXD server. Snapshots ensure quick recovery, and can be used for safety when you are, say, updating the primary software that runs on a particular container. If something happens during the update that breaks that application, you simply restore the snapshot and you are back up and running with only a few seconds worth of downtime.
-
-The author used LXD containers for PowerDNS public facing servers, and the process of updating those applications became so much more worry-free, since you can snapshot the container first before continuing.
-
-You can even snapshot a container while it is running.
We'll start by getting a snapshot of the ubuntu-test container by using this command:

`lxc snapshot ubuntu-test ubuntu-test-1`

Here, we are calling the snapshot "ubuntu-test-1", but it can be called anything you like. To make sure that you have the snapshot, do an "lxc info" of the container:

`lxc info ubuntu-test`

We've looked at an info screen already, so if you scroll to the bottom, you should see:

```
Snapshots:
  ubuntu-test-1 (taken at 2021/04/29 15:57 UTC) (stateless)
```

Success! Our snapshot is in place.

Now, get into the ubuntu-test container:

`lxc exec ubuntu-test bash`

And create an empty file with the _touch_ command:

`touch this_file.txt`

Now exit the container.

Before we restore the container to the state it was in prior to creating the file, note that the safest way to restore a container, particularly if there have been a lot of changes, is to stop it first:

`lxc stop ubuntu-test`

Then restore it:

`lxc restore ubuntu-test ubuntu-test-1`

Then start the container again:

`lxc start ubuntu-test`

If you get back into the container again and look, the "this_file.txt" that we created is now gone.

Once you don't need a snapshot anymore, you can delete it:

`lxc delete ubuntu-test/ubuntu-test-1`

**Important:** You should always delete snapshots with the container running. Why? Well, the _lxc delete_ command also works to delete the entire container. If we had accidentally hit enter after "ubuntu-test" in the command above, and the container was stopped, the container would have been deleted. No warning is given; it simply does what you ask.

If the container is running, however, you will get this message:

`Error: The instance is currently running, stop it first or pass --force`

So always delete snapshots with the container running.

The process of creating snapshots automatically, setting an expiration so that a snapshot goes away after a certain length of time, and auto-refreshing the snapshots to the snapshot server will be covered in detail in the section dealing with the snapshot server.

This completes Part 4. You can either continue on to Part 5, or return to the [menu](#menu).

## Part 5: The Snapshot Server

As noted at the beginning, the snapshot server for LXD should be a mirror of the production server in every way possible. The reason is that you may need to take it to production in the event of a hardware failure, and having not only backups, but also a quick way to bring up production containers, keeps those systems administrator panic phone calls and text messages to a minimum. THAT is ALWAYS good!

So the process of building the snapshot server is exactly like the production server. To fully emulate our production server setup, do all of [Part 1](#part1) again, and when completed, return to this spot.

You're back!! Congratulations, this must mean that you have successfully completed Part 1 for the snapshot server. That's great news!!

### Setting Up The Primary and Snapshot Server Relationship

We've got some housekeeping to do before we continue. First, if you are running in a production environment, you probably have access to a DNS server that you can use for setting up IP-to-name resolution.

In our lab, we don't have that luxury. Perhaps you've got the same scenario running. For this reason, we are going to add both servers' IP addresses and names to the /etc/hosts file on BOTH the primary and the snapshot server. You'll need to do this as your root (or _sudo_) user.
In our lab, the primary LXD server is running on 192.168.1.106 and the snapshot LXD server is running on 192.168.1.141. We will SSH into both servers and add the following to the /etc/hosts file:

```
192.168.1.106 lxd-primary
192.168.1.141 lxd-snapshot
```

Next, we need to allow all traffic between the two servers. To do this, we are going to modify the /etc/firewall.conf file with the following. First, on the lxd-primary server, add this line:

`IPTABLES -A INPUT -s 192.168.1.141 -j ACCEPT`

And on the lxd-snapshot server, add this line:

`IPTABLES -A INPUT -s 192.168.1.106 -j ACCEPT`

This allows bi-directional traffic of all types to travel between the two servers.

Next, as the "lxdadmin" user, we need to set up the trust relationship between the two machines. This is done by executing the following on lxd-primary:

`lxc remote add lxd-snapshot`

This will display the certificate to accept, so do that, and then it will prompt for your password. This is the "trust password" that you set up during the [LXD initialization](#lxdinit) step. Hopefully, you are securely keeping track of all of these passwords. Once you enter the password, you should receive this:

`Client certificate stored at server: lxd-snapshot`

It does not hurt to have this done in reverse as well. In other words, set the trust relationship on the lxd-snapshot server so that, if needed, snapshots can be sent back to the lxd-primary server. Simply repeat the steps and substitute "lxd-primary" for "lxd-snapshot."

### Migrating Our First Snapshot

Before we can migrate our first snapshot, we need to have any profiles on lxd-snapshot that we have created on lxd-primary. In our case, this is the "macvlan" profile.

You'll need to create this for lxd-snapshot, so go back to [LXD Profiles](#profiles) and create the "macvlan" profile on lxd-snapshot. If your two servers have identical parent interface names ("enp3s0", for example), then you can copy the "macvlan" profile over to lxd-snapshot without recreating it:

`lxc profile copy macvlan lxd-snapshot`

Now that we have all of the relationships and profiles set up, the next step is to actually send a snapshot from lxd-primary over to lxd-snapshot. If you've been following along exactly, you've probably deleted all of your snapshots, so let's create a new one:

`lxc snapshot centos-test centos-snap1`

If you run the "info" sub-command for lxc, you can see the new snapshot at the bottom of the listing:

`lxc info centos-test`

Which will show something like this at the bottom:

`centos-snap1 (taken at 2021/05/13 16:34 UTC) (stateless)`

OK, fingers crossed! Let's try to migrate our snapshot:

`lxc copy centos-test/centos-snap1 lxd-snapshot:centos-test`

This command says: take the snapshot centos-snap1 of the container centos-test and copy it over to lxd-snapshot as a container named centos-test.

After a short time, the copy will be complete. Want to find out for sure? Do an "lxc list" on the lxd-snapshot server, which should return the following:

```
+-------------+---------+------+------+-----------+-----------+
|    NAME     |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------------+---------+------+------+-----------+-----------+
| centos-test | STOPPED |      |      | CONTAINER | 0         |
+-------------+---------+------+------+-----------+-----------+
```

Success!
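You can also confirm the copy from lxd-primary itself by prefixing commands with the remote name. This is just a quick sanity check, assuming the trust relationship set up above; the exact output will differ on your system:

```
# Run these on lxd-primary; the "lxd-snapshot:" prefix targets the remote server
lxc list lxd-snapshot:
lxc info lxd-snapshot:centos-test
```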
Now let's try starting it. Because we are starting it on the lxd-snapshot server, we need to stop it first on the lxd-primary server:

`lxc stop centos-test`

And on the lxd-snapshot server:

`lxc start centos-test`

Assuming all of this works without error, stop the container on lxd-snapshot and start it again on lxd-primary.

### The Snapshot Server - Setting boot.autostart To Off For Containers

The containers copied over to lxd-snapshot will be stopped when they are migrated, but if you have a power event, or need to reboot the snapshot server for updates or something else, you will end up with a problem: those containers will attempt to start on the snapshot server as well.

To eliminate this, we need to set the migrated containers so that they will not start on reboot of the server. For our newly copied centos-test container, this is done with the following:

`lxc config set centos-test boot.autostart 0`

Do this for each container copied to the lxd-snapshot server.

### Automating The Snapshot Process

OK, so it's great that you can create snapshots when you need to, and sometimes you _do_ need to manually create a snapshot. You might even want to manually copy it over to lxd-snapshot. BUT, once you've got things going and you've got 25 to 30 containers or more running on your lxd-primary machine, the very last thing you want to do is spend an afternoon deleting snapshots on the snapshot server, creating new snapshots, and sending them over.

The first thing we need to do is schedule a process to automate snapshot creation on lxd-primary. This has to be done for each container on the lxd-primary server, but once it is set up, it will take care of itself. This is done with the following syntax. Note the similarities to a crontab entry for the timestamp:

`lxc config set [container_name] snapshots.schedule "50 20 * * *"`

What this says is: take a snapshot of the named container every day at 8:50 PM.

To apply this to our centos-test container:

`lxc config set centos-test snapshots.schedule "50 20 * * *"`

We also want the snapshot name to be meaningful, based on its date. LXD uses UTC everywhere, so our best bet for keeping track of things is to set the snapshot name with a date/time stamp in a more understandable format:

`lxc config set centos-test snapshots.pattern "centos-test-{{ creation_date|date:'2006-01-02_15-04-05' }}"`

GREAT, but we certainly don't want a new snapshot every day without getting rid of an old one, right? We'd fill up the drive with snapshots. So next we run:

`lxc config set centos-test snapshots.expiry 1d`

### Automating The Snapshot Copy Process

Again, this process is performed on lxd-primary. The first thing we need to do is create a script in /usr/local/sbin, called "refreshcontainers.sh", that will be run by cron:

`sudo vi /usr/local/sbin/refreshcontainers.sh`

The script is pretty simple:

```
#!/bin/bash
# This script is for doing an lxc copy --refresh against each container, copying
# and updating them to the snapshot server.
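#
# Assumptions in this script: the full path to the lxc binary is the snap
# install location used earlier in this guide (adjust it if `which lxc` shows
# something else), and "lxd-snapshot" is the remote added with `lxc remote add`.
# Each container listed by `lxc ls` is refreshed to a container of the same
# name on the snapshot server.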
for x in $(/var/lib/snapd/snap/bin/lxc ls -c n --format csv)
do
    echo "Refreshing $x"
    /var/lib/snapd/snap/bin/lxc copy --refresh "$x" lxd-snapshot:"$x"
done
```

Make it executable:

`sudo chmod +x /usr/local/sbin/refreshcontainers.sh`

Change the ownership of this script to your lxdadmin user and group:

`sudo chown lxdadmin:lxdadmin /usr/local/sbin/refreshcontainers.sh`

Set up the crontab for the lxdadmin user to run this script, in this case at 10 PM:

`crontab -e`

And your entry will look like this:

`00 22 * * * /usr/local/sbin/refreshcontainers.sh > /home/lxdadmin/refreshlog 2>&1`

Save your changes and exit.

This will create a log in lxdadmin's home directory called "refreshlog", which will tell you whether the process worked or not. Very important!

The automated procedure will fail sometimes. This generally happens when a particular container fails to refresh. You can manually re-run the refresh with the following command (assuming centos-test here as our container):

`lxc copy --refresh centos-test lxd-snapshot:centos-test`

## Conclusions

There is a great deal to installing and effectively using LXD. You can certainly install it on your laptop or workstation without all the fuss, as it makes a great development and testing platform. If you want a more serious approach for production containers, then a primary and snapshot server setup is your best bet.

Even though we've touched on a lot of features and settings, we have only scratched the surface of what you can do with LXD. The best way to learn this system is to install it and try it out with things that you will actually use. If you find LXD useful, consider installing it in the fashion described in this document for the best possible leveraging of hardware for Linux containers. Rocky Linux works very well for this!

You can now exit this document, or return to the [menu](#menu). You know, if you want.