diff --git a/doc/backup.md b/doc/backup.md index 0fb8b82311..105d16a0c8 100644 --- a/doc/backup.md +++ b/doc/backup.md @@ -3,14 +3,14 @@ When planning to backup a LXD server, consider all the different objects that are stored/managed by LXD: - - Containers (database records and filesystems) + - Instances (database records and filesystems) - Images (database records, image files and filesystems) - Networks (database records and state files) - Profiles (database records) - Storage volumes (database records and filesystems) -Only backing up the database or only backing up the container filesystem -will not get you a fully functional backup. +Only backing up the database or only backing up the instances will not +get you a fully functional backup. In some disaster recovery scenarios, that may be reasonable but if your goal is to get back online quickly, consider all the different pieces of @@ -30,15 +30,15 @@ directory, restoring the backup and any external dependency it requires. Then start LXD again and check that everything works fine. ## Secondary backup LXD server -LXD supports copying and moving containers and storage volumes between two hosts. +LXD supports copying and moving instances and storage volumes between two hosts. -So with a spare server, you can copy your containers and storage volumes +So with a spare server, you can copy your instances and storage volumes to that secondary server every so often, allowing it to act as either an offline spare or just as a storage server that you can copy your -containers back from if needed. +instances back from if needed. -## Container backups -The `lxc export` command can be used to export containers to a backup tarball. +## Instance backups +The `lxc export` command can be used to export instances to a backup tarball. Those tarballs will include all snapshots by default and an "optimized" tarball can be obtained if you know that you'll be restoring on a LXD server using the same storage pool backend. @@ -47,14 +47,14 @@ Those tarballs can be saved any way you want on any filesystem you want and can be imported back into LXD using the `lxc import` command. ## Disaster recovery -Additionally, LXD maintains a `backup.yaml` file in each container's storage +Additionally, LXD maintains a `backup.yaml` file in each instance's storage volume. This file contains all necessary information to recover a given -container, such as container configuration, attached devices and storage. +instance, such as instance configuration, attached devices and storage. This file can be processed by the `lxd import` command, not to be confused with `lxc import`. -To use the disaster recovery mechanism, you must mount the container's +To use the disaster recovery mechanism, you must mount the instance's storage to its expected location, usually under `storage-pools/NAME-OF-POOL/containers/NAME-OF-CONTAINER`. @@ -64,5 +64,5 @@ any snapshot you want to restore (needed for `dir` and `btrfs`). Once everything is mounted where it should be, you can now run `lxd import NAME-OF-CONTAINER`. If any matching database entry for resources declared in `backup.yaml` is found -during import, the command will refuse to restore the container. This can be +during import, the command will refuse to restore the instance. This can be overridden by passing `--force`. 
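As a concrete sketch of the export/import and disaster-recovery flows described above (the instance name `c1` and pool name `default` are illustrative, not taken from this page):

```bash
# Regular backup: export an instance to a tarball (snapshots included by default)
lxc export c1 c1-backup.tar.gz

# Restore it later, on the same or another LXD server
lxc import c1-backup.tar.gz

# Disaster recovery: with the instance's storage mounted under
# storage-pools/default/containers/c1, rebuild the database entries
# from the backup.yaml stored in the volume (add --force if matching
# database entries already exist)
lxd import c1
```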
diff --git a/doc/cloud-init.md b/doc/cloud-init.md index 92b734eb64..ccec7d12c8 100644 --- a/doc/cloud-init.md +++ b/doc/cloud-init.md @@ -1,9 +1,9 @@ # Custom network configuration with cloud-init -[cloud-init](https://launchpad.net/cloud-init) may be used for custom network configuration of containers. +[cloud-init](https://launchpad.net/cloud-init) may be used for custom network configuration of instances. Before trying to use it, however, first determine which image source you are -about to use as not all container images have cloud-init package installed. +about to use as not all images have cloud-init package installed. At the time of writing, images provided at images.linuxcontainers.org do not have the cloud-init package installed, therefore, any of the configuration options mentioned in this guide will not work. On the contrary, images @@ -17,7 +17,7 @@ and also have a templates directory in their archive populated with and others not related to cloud-init. -Templates provided with container images at cloud-images.ubuntu.com have +Templates provided with images at cloud-images.ubuntu.com have the following in their `metadata.yaml`: ```yaml @@ -28,14 +28,14 @@ the following in their `metadata.yaml`: template: cloud-init-network.tpl ``` -Therefore, either when you create or copy a container it gets a newly rendered +Therefore, either when you create or copy an instance it gets a newly rendered network configuration from a pre-defined template. cloud-init uses the network-config file to render the relevant network configuration on the system using either ifupdown or netplan depending on the Ubuntu release. -The default behavior is to use a DHCP client on a container's eth0 interface. +The default behavior is to use a DHCP client on an instance's eth0 interface. In order to change this you need to define your own network configuration using user.network-config key in the config dictionary which will override @@ -62,7 +62,7 @@ config: address: 10.10.10.254 ``` -A container's rootfs will contain the following files as a result: +An instance's rootfs will contain the following files as a result: * `/var/lib/cloud/seed/nocloud-net/network-config` * `/etc/network/interfaces.d/50-cloud-init.cfg` (if using ifupdown) @@ -102,7 +102,7 @@ config: ``` The template syntax is the one used in the pongo2 template engine. A custom -`config_get` function is defined to retrieve values from a container +`config_get` function is defined to retrieve values from an instance configuration. Options available with such a template structure: diff --git a/doc/clustering.md b/doc/clustering.md index 905da70a24..8ae120e9d9 100644 --- a/doc/clustering.md +++ b/doc/clustering.md @@ -1,6 +1,6 @@ # Clustering -LXD can be run in clustering mode, where any number of LXD instances +LXD can be run in clustering mode, where any number of LXD servers share the same distributed database and can be managed uniformly using the lxc client or the REST API. @@ -10,7 +10,7 @@ Note that this feature was introduced as part of the API extension ## Forming a cluster First you need to choose a bootstrap LXD node. It can be an existing -LXD instance or a brand new one. Then you need to initialize the +LXD server or a brand new one. Then you need to initialize the bootstrap node and join further nodes to the cluster. This can be done interactively or with a preseed file. @@ -39,7 +39,7 @@ network bridge. At this point your first cluster node should be up and available on your network. You can now join further nodes to the cluster. 
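Before joining additional nodes, it can help to confirm that the bootstrap node is healthy; a minimal sketch, where the member name `node1` is only an example:

```bash
# On the bootstrap node: list the cluster members known so far
lxc cluster list

# Show the state of a specific member
lxc cluster show node1
```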
Note however that these -nodes should be brand new LXD instances, or alternatively you should +nodes should be brand new LXD servers, or alternatively you should clear their contents before joining, since any existing data on them will be lost. @@ -166,7 +166,7 @@ if there are still nodes in the cluster that have not been upgraded and that are running an older version. When a node is in the Blocked state it will not serve any LXD API requests (in particular, lxc commands on that node will not work, although any running -container will continue to run). +instance will continue to run). You can see if some nodes are blocked by running `lxc cluster list` on a node which is not blocked. @@ -207,8 +207,8 @@ online. Note that no information has been deleted from the database, in particular all information about the cluster members that you have lost is still there, -including the metadata about their containers. This can help you with further -recovery steps in case you need to re-create the lost containers. +including the metadata about their instances. This can help you with further +recovery steps in case you need to re-create the lost instances. In order to permanently delete the cluster members that you have lost, you can run the command: @@ -220,9 +220,9 @@ lxc cluster remove --force Note that this time you have to use the regular ```lxc``` command line tool, not ```lxd```. -## Containers +## Instances -You can launch a container on any node in the cluster from any node in +You can launch an instance on any node in the cluster from any node in the cluster. For example, from node1: ```bash @@ -231,12 +231,11 @@ lxc launch --target node2 ubuntu:16.04 xenial will launch an Ubuntu 16.04 container on node2. -When you launch a container without defining a target, the container will be -launched on the server which has the lowest number of containers. -If all the servers have the same amount of containers, it will choose one -at random. +When you launch an instance without defining a target, the instance will be +launched on the server which has the lowest number of instances. +If all the servers have the same amount of instances, it will choose one at random. -You can list all containers in the cluster with: +You can list all instances in the cluster with: ```bash lxc list @@ -244,7 +243,7 @@ lxc list The NODE column will indicate on which node they are running. -After a container is launched, you can operate it from any node. For +After an instance is launched, you can operate it from any node. For example, from node1: ```bash diff --git a/doc/configuration.md b/doc/configuration.md index dabf5db6ee..33482269b7 100644 --- a/doc/configuration.md +++ b/doc/configuration.md @@ -2,7 +2,7 @@ Current LXD stores configurations for a few components: - [Server](server.md) -- [Containers](containers.md) +- [Instances](instances.md) - [Network](networks.md) - [Profiles](profiles.md) - [Storage](storage.md) diff --git a/doc/daemon-behavior.md b/doc/daemon-behavior.md index 326de520b7..a5cb13fd7e 100644 --- a/doc/daemon-behavior.md +++ b/doc/daemon-behavior.md @@ -10,27 +10,27 @@ On every start, LXD checks that its directory structure exists. If it doesn't, it'll create the required directories, generate a keypair and initialize the database. -Once the daemon is ready for work, LXD will scan the containers table -for any container for which the stored power state differs from the -current one. If a container's power state was recorded as running and the -container isn't running, LXD will start it. 
+Once the daemon is ready for work, LXD will scan the instances table +for any instance for which the stored power state differs from the +current one. If an instance's power state was recorded as running and the +instance isn't running, LXD will start it. ## Signal handling ### SIGINT, SIGQUIT, SIGTERM For those signals, LXD assumes that it's being temporarily stopped and -will be restarted at a later time to continue handling the containers. +will be restarted at a later time to continue handling the instances. -The containers will keep running and LXD will close all connections and +The instances will keep running and LXD will close all connections and exit cleanly. ### SIGPWR Indicates to LXD that the host is going down. -LXD will attempt a clean shutdown of all the containers. After 30s, it -will kill any remaining container. +LXD will attempt a clean shutdown of all the instances. After 30s, it +will kill any remaining instance. -The container `power_state` in the containers table is kept as it was so -that LXD after the host is done rebooting can restore the containers as +The instance `power_state` in the instances table is kept as it was so +that LXD after the host is done rebooting can restore the instances as they were. ### SIGUSR1 diff --git a/doc/database.md b/doc/database.md index 1630e0bee9..27422e9bb0 100644 --- a/doc/database.md +++ b/doc/database.md @@ -3,19 +3,19 @@ ## Introduction So first of all, why a database? -Rather than keeping the configuration and state within each container's +Rather than keeping the configuration and state within each instance's directory as is traditionally done by LXC, LXD has an internal database which stores all of that information. This allows very quick queries -against all containers configuration. +against all instances' configuration. -An example is the rather obvious question "what containers are using br0?". +An example is the rather obvious question "what instances are using br0?". To answer that question without a database, LXD would have to iterate -through every single container, load and parse its configuration and +through every single instance, load and parse its configuration and then look at what network devices are defined in there. -While that may be quick with a few containers, imagine how many -filesystem access would be required for 2000 containers. Instead with a +While that may be quick with a few instances, imagine how many +filesystem accesses would be required for 2000 instances. Instead with a database, it's only a matter of accessing the already cached database with a pretty simple query. diff --git a/doc/debugging.md b/doc/debugging.md index 0225e8c498..2c5d4bf296 100644 --- a/doc/debugging.md +++ b/doc/debugging.md @@ -1,6 +1,5 @@ # Debugging - -For information on debugging container issues, see [Frequently Asked Questions](faq.md) +For information on debugging instance issues, see [Frequently Asked Questions](faq.md) ## Debugging `lxc` and `lxd` diff --git a/doc/dev-lxd.md b/doc/dev-lxd.md index 00f9cb2fcf..8cf2f453ac 100644 --- a/doc/dev-lxd.md +++ b/doc/dev-lxd.md @@ -1,12 +1,12 @@ -# Communication between container and host +# Communication between instance and host ## Introduction -Communication between the hosted workload (container) and its host while +Communication between the hosted workload (instance) and its host while not strictly needed is a pretty useful feature. In LXD, this feature is implemented through a `/dev/lxd/sock` node which is -created and setup for all LXD containers.
+created and setup for all LXD instances. -This file is a Unix socket which processes inside the container can +This file is a Unix socket which processes inside the instance can connect to. It's multi-threaded so multiple clients can be connected at the same time. @@ -14,18 +14,18 @@ same time. LXD on the host binds `/var/lib/lxd/devlxd/sock` and starts listening for new connections on it. -This socket is then bind-mounted into every single container started by +This socket is then exposed into every single instance started by LXD at `/dev/lxd/sock`. -The bind-mount is required so we can exceed 4096 containers, otherwise, -LXD would have to bind a different socket for every container, quickly +The single socket is required so we can exceed 4096 instances, otherwise, +LXD would have to bind a different socket for every instance, quickly reaching the FD limit. ## Authentication Queries on `/dev/lxd/sock` will only return information related to the -requesting container. To figure out where a request comes from, LXD will +requesting instance. To figure out where a request comes from, LXD will extract the initial socket ucred and compare that to the list of -containers it manages. +instances it manages. ## Protocol The protocol on `/dev/lxd/sock` is plain-text HTTP with JSON messaging, so very @@ -74,12 +74,12 @@ Return value: * Description: List of configuration keys * Return: list of configuration keys URL -Note that the configuration key names match those in the container +Note that the configuration key names match those in the instance config, however not all configuration namespaces will be exported to `/dev/lxd/sock`. -Currently only the `user.*` keys are accessible to the container. +Currently only the `user.*` keys are accessible to the instance. -At this time, there also aren't any container-writable namespace. +At this time, there also aren't any instance-writable namespace. Return value: diff --git a/doc/faq.md b/doc/faq.md index 32059f56f0..6e2997c9d4 100644 --- a/doc/faq.md +++ b/doc/faq.md @@ -129,7 +129,7 @@ safe to do. ### Beware of 'port security' Many switches do *not* allow MAC address changes, and will either drop traffic -with an incorrect MAC, or, disable the port totally. If you can ping a LXD container +with an incorrect MAC, or, disable the port totally. If you can ping a LXD instance from the host, but are not able to ping it from a _different_ host, this could be the cause. The way to diagnose this is to run a tcpdump on the uplink (in this case, eth1), and you will see either 'ARP Who has xx.xx.xx.xx tell yy.yy.yy.yy', with you diff --git a/doc/image-handling.md b/doc/image-handling.md index 9469bd7a88..737c4440c8 100644 --- a/doc/image-handling.md +++ b/doc/image-handling.md @@ -6,20 +6,20 @@ where the user or external tools can import images. Containers are then started from those images. -It's possible to spawn remote containers using local images or local -containers using remote images. In such cases, the image may be cached +It's possible to spawn remote instances using local images or local +instances using remote images. In such cases, the image may be cached on the target LXD. ## Caching -When spawning a container from a remote image, the remote image is +When spawning an instance from a remote image, the remote image is downloaded into the local image store with the cached bit set. 
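A brief aside on the `/dev/lxd/sock` API described in dev-lxd.md above: because the protocol is plain HTTP with JSON, it can be exercised from inside an instance with any HTTP client. A minimal sketch, assuming curl is installed in the instance:

```bash
# Run inside the instance: query the guest-facing API root
curl -s --unix-socket /dev/lxd/sock http://lxd/1.0

# List the configuration keys exposed to this instance (user.* only)
curl -s --unix-socket /dev/lxd/sock http://lxd/1.0/config
```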
The image will be kept locally as a private image until either it's been unused -(no new container spawned) for the number of days set in +(no new instance spawned) for the number of days set in `images.remote_cache_expiry` or until the image's expiry is reached whichever comes first. LXD keeps track of image usage by updating the `last_used_at` image -property every time a new container is spawned from the image. +property every time a new instance is spawned from the image. ## Auto-update LXD can keep images up to date. By default, any image which comes from a @@ -40,31 +40,31 @@ manually copying an image from a remote server. If a new upstream image update is published and the local LXD has the -previous image in its cache when the user requests a new container to be +previous image in its cache when the user requests a new instance to be created from it, LXD will use the previous version of the image rather -than delay the container creation. +than delay the instance creation. This behavior only happens if the current image is scheduled to be auto-updated and can be disabled by setting `images.auto_update_interval` to 0. ## Profiles A list of profiles can be associated with an image using the `lxc image edit` -command. After associating profiles with an image, a container launched +command. After associating profiles with an image, an instance launched using the image will have the profiles applied in order. If `nil` is passed as the list of profiles, only the `default` profile will be associated with the image. If an empty list is passed, then no profile will be associated with the image, not even the `default` profile. An image's associated -profiles can be overridden when launching a container by using the +profiles can be overridden when launching an instance by using the `--profile` and the `--no-profiles` flags to `lxc launch`. ## Image format LXD currently supports two LXD-specific image formats. The first is a unified tarball, where a single tarball -contains both the container rootfs and the needed metadata. +contains both the instance root and the needed metadata. -The second is a split model, using two tarballs instead, one containing -the rootfs, the other containing the metadata. +The second is a split model, using two files instead, one containing +the root, the other containing the metadata. The former is what's produced by LXD itself and what people should be using for LXD-specific images. @@ -99,9 +99,10 @@ The tarball(s) can be compressed using bz2, gz, xz, lzma, tar (uncompressed) or it can also be a squashfs image. ### Content -The rootfs directory (or tarball) contains a full file system tree of what will become the container's `/`. +For containers, the rootfs directory (or tarball) contains a full file system tree of what will become the `/`. +For VMs, this is instead a `root.img` file which becomes the main disk device. -The templates directory contains pongo2-formatted templates of files inside the container. +The templates directory contains pongo2-formatted templates of files inside the instance. `metadata.yaml` contains information relevant to running the image under LXD, at the moment, this contains: @@ -139,24 +140,25 @@ pretty common. 
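To illustrate the unified and split image formats described above, an image can be imported into the local store and used roughly as follows; the file names and aliases are examples only:

```bash
# Unified format: a single tarball containing metadata.yaml and the root
lxc image import my-image.tar.xz --alias my-image

# Split format: separate metadata and root files
lxc image import metadata.tar.xz rootfs.tar.xz --alias my-split-image

# Launch from the imported image; image-associated profiles apply unless overridden
lxc launch my-image c1 --profile default
```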
For templates, the `when` key can be one or more of: - - `create` (run at the time a new container is created from the image) - - `copy` (run when a container is created from an existing one) - - `start` (run every time the container is started) + - `create` (run at the time a new instance is created from the image) + - `copy` (run when an instance is created from an existing one) + - `start` (run every time the instance is started) The templates will always receive the following context: - `trigger`: name of the event which triggered the template (string) - `path`: path of the file being templated (string) - - `container`: key/value map of container properties (name, architecture, privileged and ephemeral) (map[string]string) - - `config`: key/value map of the container's configuration (map[string]string) - - `devices`: key/value map of the devices assigned to this container (map[string]map[string]string) + - `container`: key/value map of instance properties (name, architecture, privileged and ephemeral) (map[string]string) (deprecated in favor of `instance`) + - `instance`: key/value map of instance properties (name, architecture, privileged and ephemeral) (map[string]string) + - `config`: key/value map of the instance's configuration (map[string]string) + - `devices`: key/value map of the devices assigned to this instance (map[string]map[string]string) - `properties`: key/value map of the template properties specified in metadata.yaml (map[string]string) The `create_only` key can be set to have LXD only only create missing files but not overwrite an existing file. As a general rule, you should never template a file which is owned by a package or is otherwise expected to be overwritten by normal operation -of the container. +of the instance. For convenience the following functions are exported to pongo templates: diff --git a/doc/index.md b/doc/index.md index d5d3b1225d..3ae2553695 100644 --- a/doc/index.md +++ b/doc/index.md @@ -1,7 +1,7 @@ [![LXD](https://linuxcontainers.org/static/img/containers.png)](https://linuxcontainers.org/lxd) # LXD -LXD is a next generation system container manager. -It offers a user experience similar to virtual machines but using Linux containers instead. +LXD is a next generation system container and virtual machine manager. +It offers a unified user experience around full Linux systems running inside containers or virtual machines. It's image based with pre-made images available for a [wide number of Linux distributions](https://images.linuxcontainers.org) and is built around a very powerful, yet pretty simple, REST API. @@ -139,8 +139,7 @@ export LD_LIBRARY_PATH="${GOPATH}/deps/sqlite/.libs/:${GOPATH}/deps/dqlite/.libs Now, the `lxd` and `lxc` binaries will be available to you and can be used to set up LXD. The binaries will automatically find and use the dependencies built in `$GOPATH/deps` thanks to the `LD_LIBRARY_PATH` environment variable. ### Machine Setup -You'll need sub{u,g}ids for root, so that LXD can create the unprivileged -containers: +You'll need sub{u,g}ids for root, so that LXD can create the unprivileged containers: ```bash echo "root:1000000:65536" | sudo tee -a /etc/subuid /etc/subgid @@ -154,7 +153,7 @@ sudo -E LD_LIBRARY_PATH=$LD_LIBRARY_PATH $GOPATH/bin/lxd --group sudo ``` ## Security -LXD, similar to other container managers provides a UNIX socket for local communication. +LXD, similar to other container and VM managers provides a UNIX socket for local communication. 
**WARNING**: Anyone with access to that socket can fully control LXD, which includes the ability to attach host devices and filesystems, this should diff --git a/doc/migration.md b/doc/migration.md index ba405f53f9..5767be3c59 100644 --- a/doc/migration.md +++ b/doc/migration.md @@ -1,11 +1,10 @@ # Live Migration in LXD ## Overview - Migration has two pieces, a "source", that is, the host that already has the -container, and a "sink", the host that's getting the container. Currently, +instance, and a "sink", the host that's getting the instance. Currently, in the `pull` mode, the source sets up an operation, and the sink connects -to the source and pulls the container. +to the source and pulls the instance. There are three websockets (channels) used in migration: @@ -13,9 +12,9 @@ There are three websockets (channels) used in migration: 2. the criu images stream 3. the filesystem stream -When a migration is initiated, information about the container, its +When a migration is initiated, information about the instance, its configuration, etc. are sent over the control channel (a full -description of this process is below), the criu images and container +description of this process is below), the criu images and instance filesystem are synced over their respective channels, and the result of the restore operation is sent from the sink to the source over the control channel. @@ -28,11 +27,10 @@ filesystem socket can speak btrfs-send/receive. Additionally, although we do a will happen over the criu socket at some later time. ## Control Socket - Once all three websockets are connected between the two endpoints, the source sends a MigrationHeader (protobuf description found in -`/lxd/migration/migrate.proto`). This header contains the container -configuration which will be added to the new container. +`/lxd/migration/migrate.proto`). This header contains the instance +configuration which will be added to the new instance. There are also two fields indicating the filesystem and criu protocol to speak. For example, if a server is hosted on a btrfs filesystem, it can indicate that it diff --git a/doc/networks.md b/doc/networks.md index 51143a42d5..085147a206 100644 --- a/doc/networks.md +++ b/doc/networks.md @@ -29,7 +29,7 @@ under the `bridge` namespace can be used to configure it. Additionally, LXD can utilize a pre-existing Linux bridge. In this case, the bridge does not need to be created via -`lxd network` and can simply be referenced in a container or +`lxd network` and can simply be referenced in an instance or profile device configuration as follows: ``` diff --git a/doc/production-setup.md b/doc/production-setup.md index 324fadab8e..d7f26a47e9 100644 --- a/doc/production-setup.md +++ b/doc/production-setup.md @@ -52,19 +52,23 @@ Then, reboot the server. [2]: https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt ### Network Bandwidth Tweaking -If you have at least 1GbE NIC on your lxd host with a lot of local activity (container - container connections, or host - container connections), or you have 1GbE or better internet connection on your lxd host it worth play with txqueuelen. These settings work even better with 10GbE NIC. +If you have at least 1GbE NIC on your lxd host with a lot of local +activity (container - container connections, or host - container +connections), or you have 1GbE or better internet connection on your lxd +host it worth play with txqueuelen. These settings work even better with +10GbE NIC. 
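Condensed into commands, the host-side changes described in the following subsections look roughly like this; `eth0` stands in for your real NIC and the values are the ones quoted below, to be validated for your own environment:

```bash
# Temporarily raise txqueuelen on the physical NIC and on the LXD bridge
ip link set eth0 txqueuelen 10000
ip link set lxdbr0 txqueuelen 10000

# Temporarily raise the receive backlog (add the key to /etc/sysctl.conf to persist)
echo 182757 > /proc/sys/net/core/netdev_max_backlog
```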
#### Server Changes - ##### txqueuelen +You need to change the `txqueuelen` of your real NIC to 10000 (not sure +about the best possible value for you), and change the lxdbr0 +interface `txqueuelen` to 10000. -You need to change `txqueuelen` of your real NIC to 10000 (not sure about the best possible value for you), and change and change lxdbr0 interface `txqueuelen` to 10000. In Debian-based distros you can change `txqueuelen` permanently in `/etc/network/interfaces` You can add for ex.: `up ip link set eth0 txqueuelen 10000` to your interface configuration to set txqueuelen value on boot. You could set it txqueuelen temporary (for test purpose) with `ifconfig txqueuelen 10000` ##### /etc/sysctl.conf - You also need to increase `net.core.netdev_max_backlog` value. You can add `net.core.netdev_max_backlog = 182757` to `/etc/sysctl.conf` to set it permanently (after reboot) You set `netdev_max_backlog` temporary (for test purpose) with `echo 182757 > /proc/sys/net/core/netdev_max_backlog` @@ -72,14 +76,16 @@ Note: You can find this value too high, most people prefer set `netdev_max_backl For example I use this values `net.ipv4.tcp_mem = 182757 243679 365514` #### Containers changes - You also need to change txqueuelen value for all you ethernet interfaces in containers. In Debian-based distros you can change txqueuelen permanently in `/etc/network/interfaces` You can add for ex.: `up ip link set eth0 txqueuelen 10000` to your interface configuration to set txqueuelen value on boot. #### Notes regarding this change - -10000 txqueuelen value commonly used with 10GbE NICs. Basically small txqueuelen values used with slow devices with a high latency, and higher with devices with low latency. I personally have like 3-5% improvement with these settings for local (host with container, container vs container) and internet connections. Good thing about txqueuelen value tweak, the more containers you use, the more you can be can benefit from this tweak. And you can always temporary set this values and check this tweak in your environment without lxd host reboot. - - - +A txqueuelen value of 10000 is commonly used with 10GbE NICs. Basically, small +txqueuelen values are used with slow devices with a high latency, and higher +values with devices with low latency. I personally see around a 3-5% improvement +with these settings for local (host to container, container to +container) and internet connections. The good thing about the txqueuelen +tweak is that the more containers you use, the more you can benefit from +it. And you can always set these values temporarily and check this +tweak in your environment without rebooting the lxd host. diff --git a/doc/projects.md b/doc/projects.md index f855d3c31b..626729194d 100644 --- a/doc/projects.md +++ b/doc/projects.md @@ -1,6 +1,6 @@ # Project configuration LXD supports projects as a way to split your LXD server. -Each project holds its own set of containers and may also have its own images and profiles. +Each project holds its own set of instances and may also have its own images and profiles. What a project contains is defined through the `features` configuration keys. When a feature is disabled, the project inherits from the `default` project.
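A short sketch of working with projects; the project name and the `features.*` keys shown here are illustrative assumptions rather than values from this page:

```bash
# Create a project with its own profiles but shared images
lxc project create my-project -c features.images=false -c features.profiles=true

# Switch the client to it; subsequent lxc commands target this project
lxc project switch my-project
lxc list
```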
diff --git a/doc/rest-api.md b/doc/rest-api.md index b9ac07a126..fd31d117ca 100644 --- a/doc/rest-api.md +++ b/doc/rest-api.md @@ -54,7 +54,7 @@ The body is a dict with the following structure: "type": "async", "status": "OK", "status_code": 100, - "operation": "/1.0/containers/", // URL to the background operation + "operation": "/1.0/instances/", // URL to the background operation "metadata": {} // Operation metadata (see below) } ``` @@ -71,7 +71,7 @@ The operation metadata structure looks like: "status_code": 103, // Integer version of the operation's status (use this rather than status) "resources": { // Dictionary of resource types (container, snapshots, images) and affected resources "containers": [ - "/1.0/containers/test" + "/1.0/instances/test" ] }, "metadata": { // Metadata specific to the operation in question (in this case, exec) @@ -192,21 +192,21 @@ won't work and PUT needs to be used instead. * [`/1.0`](#10) * [`/1.0/certificates`](#10certificates) * [`/1.0/certificates/`](#10certificatesfingerprint) - * [`/1.0/containers`](#10containers) - * [`/1.0/containers/`](#10containersname) - * [`/1.0/containers//console`](#10containersnameconsole) - * [`/1.0/containers//exec`](#10containersnameexec) - * [`/1.0/containers//files`](#10containersnamefiles) - * [`/1.0/containers//snapshots`](#10containersnamesnapshots) - * [`/1.0/containers//snapshots/`](#10containersnamesnapshotsname) - * [`/1.0/containers//state`](#10containersnamestate) - * [`/1.0/containers//logs`](#10containersnamelogs) - * [`/1.0/containers//logs/`](#10containersnamelogslogfile) - * [`/1.0/containers//metadata`](#10containersnamemetadata) - * [`/1.0/containers//metadata/templates`](#10containersnamemetadatatemplates) - * [`/1.0/containers//backups`](#10containersnamebackups) - * [`/1.0/containers//backups/`](#10containersnamebackupsname) - * [`/1.0/containers//backups//export`](#10containersnamebackupsnameexport) + * [`/1.0/instances`](#10instances) + * [`/1.0/instances/`](#10instancesname) + * [`/1.0/instances//console`](#10instancesnameconsole) + * [`/1.0/instances//exec`](#10instancesnameexec) + * [`/1.0/instances//files`](#10instancesnamefiles) + * [`/1.0/instances//snapshots`](#10instancesnamesnapshots) + * [`/1.0/instances//snapshots/`](#10instancesnamesnapshotsname) + * [`/1.0/instances//state`](#10instancesnamestate) + * [`/1.0/instances//logs`](#10instancesnamelogs) + * [`/1.0/instances//logs/`](#10instancesnamelogslogfile) + * [`/1.0/instances//metadata`](#10instancesnamemetadata) + * [`/1.0/instances//metadata/templates`](#10instancesnamemetadatatemplates) + * [`/1.0/instances//backups`](#10instancesnamebackups) + * [`/1.0/instances//backups/`](#10instancesnamebackupsname) + * [`/1.0/instances//backups//export`](#10instancesnamebackupsnameexport) * [`/1.0/events`](#10events) * [`/1.0/images`](#10images) * [`/1.0/images/`](#10imagesfingerprint) @@ -442,38 +442,38 @@ Input (none at present): HTTP code for this should be 202 (Accepted). 
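The instance endpoints documented below can be exercised from the command line with `lxc query`, which sends raw requests to the local daemon and prints the JSON response; a small sketch where the instance name is an example:

```bash
# GET /1.0/instances - list instance URLs
lxc query /1.0/instances

# GET /1.0/instances/<name> - configuration and current state of one instance
lxc query /1.0/instances/my-instance
```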
-### `/1.0/containers` +### `/1.0/instances` #### GET - * Description: List of containers + * Description: List of instances * Authentication: trusted * Operation: sync - * Return: list of URLs for containers this server publishes + * Return: list of URLs for instances this server hosts Return value: ```json [ - "/1.0/containers/blah", - "/1.0/containers/blah1" + "/1.0/instances/blah", + "/1.0/instances/blah1" ] ``` #### POST (optional `?target=`) - * Description: Create a new container + * Description: Create a new instance * Authentication: trusted * Operation: async * Return: background operation or standard error -Input (container based on a local image with the "ubuntu/devel" alias): +Input (instance based on a local image with the "ubuntu/devel" alias): ```js { - "name": "my-new-container", // 64 chars max, ASCII, no slash, no colon and no comma + "name": "my-new-instance", // 64 chars max, ASCII, no slash, no colon and no comma "architecture": "x86_64", "profiles": ["default"], // List of profiles - "ephemeral": true, // Whether to destroy the container on shutdown + "ephemeral": true, // Whether to destroy the instance on shutdown "config": {"limits.cpu": "2"}, // Config override. - "devices": { // optional list of devices the container should have + "devices": { // Optional list of devices the instance should have "kvm": { "path": "/dev/kvm", "type": "unix-char" @@ -485,16 +485,16 @@ Input (container based on a local image with the "ubuntu/devel" alias): } ``` -Input (container based on a local image identified by its fingerprint): +Input (instance based on a local image identified by its fingerprint): ```js { - "name": "my-new-container", // 64 chars max, ASCII, no slash, no colon and no comma + "name": "my-new-instance", // 64 chars max, ASCII, no slash, no colon and no comma "architecture": "x86_64", "profiles": ["default"], // List of profiles - "ephemeral": true, // Whether to destroy the container on shutdown + "ephemeral": true, // Whether to destroy the instance on shutdown "config": {"limits.cpu": "2"}, // Config override. - "devices": { // optional list of devices the container should have + "devices": { // Optional list of devices the instance should have "kvm": { "path": "/dev/kvm", "type": "unix-char" @@ -505,16 +505,16 @@ Input (container based on a local image identified by its fingerprint): } ``` -Input (container based on most recent match based on image properties): +Input (instance based on most recent match based on image properties): ```js { - "name": "my-new-container", // 64 chars max, ASCII, no slash, no colon and no comma + "name": "my-new-instance", // 64 chars max, ASCII, no slash, no colon and no comma "architecture": "x86_64", "profiles": ["default"], // List of profiles - "ephemeral": true, // Whether to destroy the container on shutdown + "ephemeral": true, // Whether to destroy the instance on shutdown "config": {"limits.cpu": "2"}, // Config override. 
- "devices": { // optional list of devices the container should have + "devices": { // Optional list of devices the instance should have "kvm": { "path": "/dev/kvm", "type": "unix-char" @@ -529,16 +529,16 @@ Input (container based on most recent match based on image properties): } ``` -Input (container without a pre-populated rootfs, useful when attaching to an existing one): +Input (instance without a pre-populated rootfs, useful when attaching to an existing one): ```js { - "name": "my-new-container", // 64 chars max, ASCII, no slash, no colon and no comma + "name": "my-new-instance", // 64 chars max, ASCII, no slash, no colon and no comma "architecture": "x86_64", "profiles": ["default"], // List of profiles - "ephemeral": true, // Whether to destroy the container on shutdown + "ephemeral": true, // Whether to destroy the instance on shutdown "config": {"limits.cpu": "2"}, // Config override. - "devices": { // optional list of devices the container should have + "devices": { // Optional list of devices the instance should have "kvm": { "path": "/dev/kvm", "type": "unix-char" @@ -552,12 +552,12 @@ Input (using a public remote image): ```js { - "name": "my-new-container", // 64 chars max, ASCII, no slash, no colon and no comma + "name": "my-new-instance", // 64 chars max, ASCII, no slash, no colon and no comma "architecture": "x86_64", "profiles": ["default"], // List of profiles - "ephemeral": true, // Whether to destroy the container on shutdown + "ephemeral": true, // Whether to destroy the instance on shutdown "config": {"limits.cpu": "2"}, // Config override. - "devices": { // optional list of devices the container should have + "devices": { // Optional list of devices the instance should have "kvm": { "path": "/dev/kvm", "type": "unix-char" @@ -576,12 +576,12 @@ Input (using a private remote image after having obtained a secret for that imag ```js { - "name": "my-new-container", // 64 chars max, ASCII, no slash, no colon and no comma + "name": "my-new-instance", // 64 chars max, ASCII, no slash, no colon and no comma "architecture": "x86_64", "profiles": ["default"], // List of profiles - "ephemeral": true, // Whether to destroy the container on shutdown + "ephemeral": true, // Whether to destroy the instance on shutdown "config": {"limits.cpu": "2"}, // Config override. - "devices": { // optional list of devices the container should have + "devices": { // Optional list of devices the instance should have "kvm": { "path": "/dev/kvm", "type": "unix-char" @@ -596,16 +596,16 @@ Input (using a private remote image after having obtained a secret for that imag } ``` -Input (using a remote container, sent over the migration websocket): +Input (using a remote instance, sent over the migration websocket): ```js { - "name": "my-new-container", // 64 chars max, ASCII, no slash, no colon and no comma + "name": "my-new-instance", // 64 chars max, ASCII, no slash, no colon and no comma "architecture": "x86_64", "profiles": ["default"], // List of profiles - "ephemeral": true, // Whether to destroy the container on shutdown + "ephemeral": true, // Whether to destroy the instance on shutdown "config": {"limits.cpu": "2"}, // Config override. 
- "devices": { // optional list of devices the container should have + "devices": { // Optional list of devices the instance should have "kvm": { "path": "/dev/kvm", "type": "unix-char" @@ -615,8 +615,8 @@ Input (using a remote container, sent over the migration websocket): "mode": "pull", // "pull" and "push" is supported for now "operation": "https://10.0.2.3:8443/1.0/operations/", // Full URL to the remote operation (pull mode only) "certificate": "PEM certificate", // Optional PEM certificate. If not mentioned, system CA is used. - "base-image": "", // Optional, the base image the container was created from - "container_only": true, // Whether to migrate only the container without snapshots. Can be "true" or "false". + "base-image": "", // Optional, the base image the instance was created from + "instance_only": true, // Whether to migrate only the instance without snapshots. Can be "true" or "false". "secrets": {"control": "my-secret-string", // Secrets to use when talking to the migration source "criu": "my-other-secret", "fs": "my third secret"} @@ -624,36 +624,36 @@ Input (using a remote container, sent over the migration websocket): } ``` -Input (using a local container): +Input (using a local instance): ```js { - "name": "my-new-container", // 64 chars max, ASCII, no slash, no colon and no comma + "name": "my-new-instance", // 64 chars max, ASCII, no slash, no colon and no comma "profiles": ["default"], // List of profiles - "ephemeral": true, // Whether to destroy the container on shutdown + "ephemeral": true, // Whether to destroy the instance on shutdown "config": {"limits.cpu": "2"}, // Config override. - "devices": { // optional list of devices the container should have + "devices": { // Optional list of devices the instance should have "kvm": { "path": "/dev/kvm", "type": "unix-char" }, }, "source": {"type": "copy", // Can be: "image", "migration", "copy" or "none" - "container_only": true, // Whether to copy only the container without snapshots. Can be "true" or "false". - "source": "my-old-container"} // Name of the source container + "instance_only": true, // Whether to copy only the instance without snapshots. Can be "true" or "false". + "source": "my-old-instance"} // Name of the source instance } ``` -Input (using a remote container, in push mode sent over the migration websocket via client proxying): +Input (using a remote instance, in push mode sent over the migration websocket via client proxying): ```js { - "name": "my-new-container", // 64 chars max, ASCII, no slash, no colon and no comma + "name": "my-new-instance", // 64 chars max, ASCII, no slash, no colon and no comma "architecture": "x86_64", "profiles": ["default"], // List of profiles - "ephemeral": true, // Whether to destroy the container on shutdown + "ephemeral": true, // Whether to destroy the instance on shutdown "config": {"limits.cpu": "2"}, // Config override. 
- "devices": { // optional list of devices the container should have + "devices": { // Optional list of devices the instance should have "kvm": { "path": "/dev/kvm", "type": "unix-char" @@ -661,9 +661,9 @@ Input (using a remote container, in push mode sent over the migration websocket }, "source": {"type": "migration", // Can be: "image", "migration", "copy" or "none" "mode": "push", // "pull" and "push" are supported - "base-image": "", // Optional, the base image the container was created from + "base-image": "", // Optional, the base image the instance was created from "live": true, // Whether migration is performed live - "container_only": true} // Whether to migrate only the container without snapshots. Can be "true" or "false". + "instance_only": true} // Whether to migrate only the instance without snapshots. Can be "true" or "false". } ``` @@ -671,12 +671,12 @@ Input (using a backup): Raw compressed tarball as provided by a backup download. -### `/1.0/containers/` +### `/1.0/instances/` #### GET - * Description: Container information + * Description: Instance information * Authentication: trusted * Operation: sync - * Return: dict of the container configuration and current state. + * Return: dict of the instance configuration and current state. Output: @@ -696,12 +696,12 @@ Output: } }, "ephemeral": false, - "expanded_config": { // the result of expanding profiles and adding the container's local config + "expanded_config": { // the result of expanding profiles and adding the instance's local config "limits.cpu": "3", "volatile.base_image": "97d97a3d1d053840ca19c86cdd0596cf1be060c5157d31407f2a4f9f350c78cc", "volatile.eth0.hwaddr": "00:16:3e:1c:94:38" }, - "expanded_devices": { // the result of expanding profiles and adding the container's local devices + "expanded_devices": { // the result of expanding profiles and adding the instance's local devices "eth0": { "name": "eth0", "nictype": "bridged", @@ -714,23 +714,23 @@ Output: } }, "last_used_at": "2016-02-16T01:05:05Z", - "name": "my-container", + "name": "my-instance", "profiles": [ "default" ], - "stateful": false, // If true, indicates that the container has some stored state that can be restored on startup + "stateful": false, // If true, indicates that the instance has some stored state that can be restored on startup "status": "Running", "status_code": 103 } ``` #### PUT (ETag supported) - * Description: replaces container configuration or restore snapshot + * Description: replaces instance configuration or restore snapshot * Authentication: trusted * Operation: async * Return: background operation or standard error -Input (update container configuration): +Input (update instance configuration): ```json { @@ -766,7 +766,7 @@ Input (restore snapshot): ``` #### PATCH (ETag supported) - * Description: update container configuration + * Description: update instance configuration * Introduced: with API extension `patch` * Authentication: trusted * Operation: sync @@ -789,7 +789,7 @@ Input: ``` #### POST (optional `?target=`) - * Description: used to rename/migrate the container + * Description: used to rename/migrate the instance * Authentication: trusted * Operation: async * Return: background operation or standard error @@ -832,7 +832,7 @@ Output in metadata section (for migration): These are the secrets that should be passed to the create call. 
#### DELETE - * Description: remove the container + * Description: remove the instance * Authentication: trusted * Operation: async * Return: background operation or standard error @@ -846,15 +846,15 @@ Input (none at present): HTTP code for this should be 202 (Accepted). -### `/1.0/containers//console` +### `/1.0/instances//console` #### GET - * Description: returns the contents of the container's console log + * Description: returns the contents of the instance's console log * Authentication: trusted * Operation: N/A * Return: the contents of the console log #### POST - * Description: attach to a container's console devices + * Description: attach to an instance's console devices * Authentication: trusted * Operation: async * Return: standard error @@ -884,12 +884,12 @@ Control (window size change): ``` #### DELETE - * Description: empty the container's console log + * Description: empty the instance's console log * Authentication: trusted * Operation: Sync * Return: empty response or standard error -### `/1.0/containers//exec` +### `/1.0/instances//exec` #### POST * Description: run a remote command * Authentication: trusted @@ -985,8 +985,8 @@ Return (with interactive=false and record-output=true): ```json { "output": { - "1": "/1.0/containers/example/logs/exec_b0f737b4-2c8a-4edf-a7c1-4cc7e4e9e155.stdout", - "2": "/1.0/containers/example/logs/exec_b0f737b4-2c8a-4edf-a7c1-4cc7e4e9e155.stderr" + "1": "/1.0/instances/example/logs/exec_b0f737b4-2c8a-4edf-a7c1-4cc7e4e9e155.stdout", + "2": "/1.0/instances/example/logs/exec_b0f737b4-2c8a-4edf-a7c1-4cc7e4e9e155.stderr" }, "return": 0 } @@ -1001,9 +1001,9 @@ operation's metadata: } ``` -### `/1.0/containers//files` -#### GET (`?path=/path/inside/the/container`) - * Description: download a file or directory listing from the container +### `/1.0/instances//files` +#### GET (`?path=/path/inside/the/instance`) + * Description: download a file or directory listing from the instance * Authentication: trusted * Operation: sync * Return: if the type of the file is a directory, the return is a sync @@ -1020,8 +1020,8 @@ The following headers will be set (on top of standard size and mimetype headers) This is designed to be easily usable from the command line or even a web browser. -#### POST (`?path=/path/inside/the/container`) - * Description: upload a file to the container +#### POST (`?path=/path/inside/the/instance`) + * Description: upload a file to the instance * Authentication: trusted * Operation: sync * Return: standard return value or standard error @@ -1040,8 +1040,8 @@ The following headers may be set by the client: This is designed to be easily usable from the command line or even a web browser. 
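In day-to-day use these file endpoints are typically driven through `lxc file`, which issues the GET/POST requests above; a short sketch with example paths:

```bash
# Upload a file into the instance (POST ?path=/etc/myapp.conf)
lxc file push ./myapp.conf my-instance/etc/myapp.conf

# Download a file from the instance (GET ?path=/etc/hostname)
lxc file pull my-instance/etc/hostname ./hostname
```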
-#### DELETE (`?path=/path/inside/the/container`) - * Description: delete a file in the container +#### DELETE (`?path=/path/inside/the/instance`) + * Description: delete a file in the instance * Introduced: with API extension `file_delete` * Authentication: trusted * Operation: sync @@ -1054,18 +1054,18 @@ Input (none at present): } ``` -### `/1.0/containers//snapshots` +### `/1.0/instances//snapshots` #### GET * Description: List of snapshots * Authentication: trusted * Operation: sync - * Return: list of URLs for snapshots for this container + * Return: list of URLs for snapshots for this instance Return value: ```json [ - "/1.0/containers/blah/snapshots/snap0" + "/1.0/instances/blah/snapshots/snap0" ] ``` @@ -1084,7 +1084,7 @@ Input: } ``` -### `/1.0/containers//snapshots/` +### `/1.0/instances//snapshots/` #### GET * Description: Snapshot information * Authentication: trusted @@ -1206,7 +1206,7 @@ Input: HTTP code for this should be 202 (Accepted). -### `/1.0/containers//state` +### `/1.0/instances//state` #### GET * Description: current state * Authentication: trusted @@ -1360,7 +1360,7 @@ Output: ``` #### PUT - * Description: change the container state + * Description: change the instance state * Authentication: trusted * Operation: async * Return: background operation or standard error @@ -1371,15 +1371,15 @@ Input: { "action": "stop", // State change action (stop, start, restart, freeze or unfreeze) "timeout": 30, // A timeout after which the state change is considered as failed - "force": true, // Force the state change (currently only valid for stop and restart where it means killing the container) + "force": true, // Force the state change (currently only valid for stop and restart where it means killing the instance) "stateful": true // Whether to store or restore runtime state before stopping or startiong (only valid for stop and start, defaults to false) } ``` -### `/1.0/containers//logs` +### `/1.0/instances//logs` #### GET - * Description: Returns a list of the log files available for this container. - Note that this works on containers that have been deleted (or were never + * Description: Returns a list of the log files available for this instance. + Note that this works on instances that have been deleted (or were never created) to enable people to get logs for failed creations. * Authentication: trusted * Operation: Sync @@ -1389,13 +1389,13 @@ Return: ```json [ - "/1.0/containers/blah/logs/forkstart.log", - "/1.0/containers/blah/logs/lxc.conf", - "/1.0/containers/blah/logs/lxc.log" + "/1.0/instances/blah/logs/forkstart.log", + "/1.0/instances/blah/logs/lxc.conf", + "/1.0/instances/blah/logs/lxc.log" ] ``` -### `/1.0/containers//logs/` +### `/1.0/instances//logs/` #### GET * Description: returns the contents of a particular log file. 
* Authentication: trusted @@ -1408,13 +1408,13 @@ Return: * Operation: Sync * Return: empty response or standard error -### `/1.0/containers//metadata` +### `/1.0/instances//metadata` #### GET - * Description: Container metadata + * Description: Instance metadata * Introduced: with API extension `container_edit_metadata` * Authentication: trusted * Operation: Sync - * Return: dict representing container metadata + * Return: dict representing instance metadata Return: @@ -1443,7 +1443,7 @@ Return: ``` #### PUT (ETag supported) - * Description: Replaces container metadata + * Description: Replaces instance metadata * Introduced: with API extension `container_edit_metadata` * Authentication: trusted * Operation: sync @@ -1475,13 +1475,13 @@ Input: } ``` -### `/1.0/containers//metadata/templates` +### `/1.0/instances//metadata/templates` #### GET - * Description: List container templates + * Description: List instance templates * Introduced: with API extension `container_edit_metadata` * Authentication: trusted * Operation: Sync - * Return: a list with container template names + * Return: a list with instance template names Return: @@ -1493,7 +1493,7 @@ Return: ``` #### GET (`?path=