
doc: s/container/instance/
Signed-off-by: Stéphane Graber <stgraber@ubuntu.com>
stgraber committed Jan 14, 2020
1 parent 75bffa2 commit 505048b
Showing 18 changed files with 302 additions and 299 deletions.
24 changes: 12 additions & 12 deletions doc/backup.md
@@ -3,14 +3,14 @@
When planning to back up a LXD server, consider all the different objects
that are stored/managed by LXD:

- Instances (database records and filesystems)
- Images (database records, image files and filesystems)
- Networks (database records and state files)
- Profiles (database records)
- Storage volumes (database records and filesystems)

Only backing up the database or only backing up the instances will not
get you a fully functional backup.

In some disaster recovery scenarios, that may be reasonable, but if your
goal is to get back online quickly, consider all the different pieces of
@@ -30,15 +30,15 @@ directory, restoring the backup and any external dependency it requires.
Then start LXD again and check that everything works fine.
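A hedged sketch of that backup and restore cycle, assuming a non-snap installation with its data under `/var/lib/lxd` and systemd units named `lxd` and `lxd.socket` (snap installs use different paths and unit names):

```bash
# Stop the daemon so the data directory is quiescent
systemctl stop lxd lxd.socket

# Archive the daemon directory; external dependencies such as ZFS pools
# or LVM volumes need their own backups
tar -czf /backups/lxd-$(date +%F).tar.gz /var/lib/lxd

# To restore: wipe the directory, unpack the backup, then restart
rm -rf /var/lib/lxd
tar -xzf /backups/lxd-2020-01-14.tar.gz -C /
systemctl start lxd lxd.socket
```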

## Secondary backup LXD server
LXD supports copying and moving instances and storage volumes between two hosts.

So with a spare server, you can copy your instances and storage volumes
to that secondary server every so often, allowing it to act as either an
offline spare or just as a storage server that you can copy your
instances back from if needed.
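As a minimal sketch, assuming a spare server reachable at 192.168.1.50 and exposed on the network, an instance named `c1` and a custom volume `vol1` on pool `default` (all names and the address are illustrative):

```bash
# Register the spare server as a remote
lxc remote add backup 192.168.1.50

# Copy an instance and a custom storage volume to the spare server
lxc copy c1 backup:c1
lxc storage volume copy default/vol1 backup:default/vol1

# Copy the instance back from the spare server if needed
lxc copy backup:c1 c1
```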

## Instance backups
The `lxc export` command can be used to export instances to a backup tarball.
Those tarballs will include all snapshots by default and an "optimized"
tarball can be obtained if you know that you'll be restoring on a LXD
server using the same storage pool backend.
@@ -47,14 +47,14 @@ Those tarballs can be saved any way you want on any filesystem you want
and can be imported back into LXD using the `lxc import` command.
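For example (the instance name and paths are illustrative):

```bash
# Export an instance, snapshots included by default
lxc export c1 /backups/c1.tar.gz

# Produce an "optimized" tarball when the target server uses the same
# storage pool backend
lxc export c1 /backups/c1-optimized.tar.gz --optimized-storage

# Restore the backup on this or another LXD server
lxc import /backups/c1.tar.gz
```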

## Disaster recovery
Additionally, LXD maintains a `backup.yaml` file in each instance's storage
volume. This file contains all necessary information to recover a given
instance, such as instance configuration, attached devices and storage.

This file can be processed by the `lxd import` command, not to
be confused with `lxc import`.

To use the disaster recovery mechanism, you must mount the instance's
storage to its expected location, usually under
`storage-pools/NAME-OF-POOL/containers/NAME-OF-CONTAINER`.

@@ -64,5 +64,5 @@ any snapshot you want to restore (needed for `dir` and `btrfs`).
Once everything is mounted where it should be, you can now run `lxd import NAME-OF-CONTAINER`.

If any matching database entry for resources declared in `backup.yaml` is found
during import, the command will refuse to restore the instance. This can be
overridden by passing `--force`.
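A rough sketch of the sequence, assuming a ZFS-backed pool named `default`, an instance named `c1` and a backing dataset called `tank/lxd/containers/c1` (all of these are assumptions; adapt them to your pool and storage backend):

```bash
# Make the instance's volume visible where LXD expects it
zfs set mountpoint=/var/lib/lxd/storage-pools/default/containers/c1 tank/lxd/containers/c1
zfs mount tank/lxd/containers/c1

# With backup.yaml and the rootfs in place, re-create the database records
lxd import c1

# If conflicting database entries already exist, override them
lxd import c1 --force
```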
14 changes: 7 additions & 7 deletions doc/cloud-init.md
@@ -1,9 +1,9 @@
# Custom network configuration with cloud-init

[cloud-init](https://launchpad.net/cloud-init) may be used for custom network configuration of instances.

Before trying to use it, however, first determine which image source you are
about to use, as not all images have the cloud-init package installed.
At the time of writing, images provided at images.linuxcontainers.org do not
have the cloud-init package installed; therefore, any of the configuration
options mentioned in this guide will not work. On the contrary, images
@@ -17,7 +17,7 @@ and also have a templates directory in their archive populated with

and others not related to cloud-init.

Templates provided with images at cloud-images.ubuntu.com have
the following in their `metadata.yaml`:

@@ -28,14 +28,14 @@

```yaml
    template: cloud-init-network.tpl
```

Therefore, when you either create or copy an instance, it gets a newly rendered
network configuration from a pre-defined template.

cloud-init uses the network-config file to render the relevant network
configuration on the system using either ifupdown or netplan depending
on the Ubuntu release.

The default behavior is to use a DHCP client on an instance's eth0 interface.

In order to change this, you need to define your own network configuration
using the `user.network-config` key in the config dictionary, which will override
@@ -62,7 +62,7 @@ config:
```yaml
        address: 10.10.10.254
```
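A possible end-to-end sketch, assuming an Ubuntu cloud image with cloud-init, an instance named `c1` and addresses on a 10.10.10.0/24 network (all of these are assumptions):

```bash
# Write a cloud-init network-config (version 1 format) with a static address
cat > network-config.yaml <<'EOF'
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        address: 10.10.10.20/24
        gateway: 10.10.10.254
EOF

# Launch the instance with the override in place
lxc launch ubuntu:16.04 c1 --config=user.network-config="$(cat network-config.yaml)"
```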

An instance's rootfs will contain the following files as a result:

* `/var/lib/cloud/seed/nocloud-net/network-config`
* `/etc/network/interfaces.d/50-cloud-init.cfg` (if using ifupdown)
@@ -102,7 +102,7 @@

The template syntax is the one used in the pongo2 template engine. A custom
`config_get` function is defined to retrieve values from an instance
configuration.

Options available with such a template structure:
27 changes: 13 additions & 14 deletions doc/clustering.md
@@ -1,6 +1,6 @@
# Clustering

LXD can be run in clustering mode, where any number of LXD servers
share the same distributed database and can be managed uniformly using
the lxc client or the REST API.

@@ -10,7 +10,7 @@ Note that this feature was introduced as part of the API extension
## Forming a cluster

First you need to choose a bootstrap LXD node. It can be an existing
LXD server or a brand new one. Then you need to initialize the
bootstrap node and join further nodes to the cluster. This can be done
interactively or with a preseed file.
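A sketch of the preseed approach for the bootstrap node (the address and trust password are placeholders, and the key names follow LXD's clustering preseed format, so check them against your LXD version):

```bash
cat <<EOF | lxd init --preseed
config:
  core.https_address: 10.0.0.1:8443
  core.trust_password: sekret
cluster:
  server_name: node1
  enabled: true
EOF
```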

@@ -39,7 +39,7 @@ network bridge. At this point your first cluster node should be up and
available on your network.

You can now join further nodes to the cluster. Note however that these
nodes should be brand new LXD servers, or alternatively you should
clear their contents before joining, since any existing data on them
will be lost.

@@ -166,7 +166,7 @@ if there are still nodes in the cluster that have not been upgraded
and that are running an older version. When a node is in the
Blocked state it will not serve any LXD API requests (in particular,
lxc commands on that node will not work, although any running
instance will continue to run).

You can see if some nodes are blocked by running `lxc cluster list` on
a node which is not blocked.
@@ -207,8 +207,8 @@ online.

Note that no information has been deleted from the database; in particular, all
information about the cluster members that you have lost is still there,
including the metadata about their instances. This can help you with further
recovery steps in case you need to re-create the lost instances.

In order to permanently delete the cluster members that you have lost, you can
run the command:
@@ -220,9 +220,9 @@ lxc cluster remove <name> --force
Note that this time you have to use the regular `lxc` command line tool, not
`lxd`.

## Instances

You can launch an instance on any node in the cluster from any node in
the cluster. For example, from node1:

```bash
lxc launch --target node2 ubuntu:16.04 xenial
```

will launch an Ubuntu 16.04 container on node2.

When you launch an instance without defining a target, the instance will be
launched on the server which has the lowest number of instances.
If all the servers have the same number of instances, it will choose one at random.

You can list all instances in the cluster with:

```bash
lxc list
```

The NODE column will indicate on which node they are running.

After an instance is launched, you can operate it from any node. For
example, from node1:

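(A sketch; the exact commands are assumptions, any `lxc` command can be used this way.)

```bash
# Operate on the instance from node1 even though it runs on node2
lxc exec xenial -- hostname
lxc info xenial
```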
2 changes: 1 addition & 1 deletion doc/configuration.md
@@ -2,7 +2,7 @@
LXD currently stores configurations for a few components:

- [Server](server.md)
- [Instances](instances.md)
- [Network](networks.md)
- [Profiles](profiles.md)
- [Storage](storage.md)
20 changes: 10 additions & 10 deletions doc/daemon-behavior.md
@@ -10,27 +10,27 @@ On every start, LXD checks that its directory structure exists. If it
doesn't, it'll create the required directories, generate a keypair and
initialize the database.

Once the daemon is ready for work, LXD will scan the instances table
for any instance for which the stored power state differs from the
current one. If an instance's power state was recorded as running and the
instance isn't running, LXD will start it.

## Signal handling
### SIGINT, SIGQUIT, SIGTERM
For those signals, LXD assumes that it's being temporarily stopped and
will be restarted at a later time to continue handling the instances.

The instances will keep running and LXD will close all connections and
exit cleanly.

### SIGPWR
Indicates to LXD that the host is going down.

LXD will attempt a clean shutdown of all the instances. After 30s, it
will kill any remaining instance.

The instance `power_state` in the instances table is kept as it was, so
that LXD can restore the instances as they were after the host is done rebooting.

### SIGUSR1
12 changes: 6 additions & 6 deletions doc/database.md
@@ -3,19 +3,19 @@
## Introduction
So first of all, why a database?

Rather than keeping the configuration and state within each instance's
directory as is traditionally done by LXC, LXD has an internal database
which stores all of that information. This allows very quick queries
against all instances' configuration.


An example is the rather obvious question "what instances are using br0?".
To answer that question without a database, LXD would have to iterate
through every single instance, load and parse its configuration and
then look at what network devices are defined in there.

While that may be quick with a few instances, imagine how many
filesystem accesses would be required for 2000 instances. Instead, with a
database, it's only a matter of accessing the already cached database
with a pretty simple query.

3 changes: 1 addition & 2 deletions doc/debugging.md
@@ -1,6 +1,5 @@
# Debugging

For information on debugging instance issues, see [Frequently Asked Questions](faq.md)

## Debugging `lxc` and `lxd`
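Two commonly used starting points (a sketch using standard `lxc` client options):

```bash
# Run a client command with full debug output of the API interactions
lxc list --debug

# Stream log messages from the local LXD daemon
lxc monitor --pretty --type=logging
```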

24 changes: 12 additions & 12 deletions doc/dev-lxd.md
@@ -1,31 +1,31 @@
# Communication between instance and host
## Introduction
Communication between the hosted workload (instance) and its host, while
not strictly needed, is a pretty useful feature.

In LXD, this feature is implemented through a `/dev/lxd/sock` node which is
created and set up for all LXD instances.

This file is a Unix socket which processes inside the instance can
connect to. It's multi-threaded so multiple clients can be connected at the
same time.

## Implementation details
LXD on the host binds `/var/lib/lxd/devlxd/sock` and starts listening for new
connections on it.

This socket is then exposed into every single instance started by
LXD at `/dev/lxd/sock`.

The single socket is required so we can exceed 4096 instances; otherwise,
LXD would have to bind a different socket for every instance, quickly
reaching the FD limit.

## Authentication
Queries on `/dev/lxd/sock` will only return information related to the
requesting instance. To figure out where a request comes from, LXD will
extract the initial socket ucred and compare that to the list of
instances it manages.

## Protocol
The protocol on `/dev/lxd/sock` is plain-text HTTP with JSON messaging, so very
@@ -74,12 +74,12 @@ Return value:
* Description: List of configuration keys
* Return: list of configuration keys URL

Note that the configuration key names match those in the instance
config; however, not all configuration namespaces will be exported to
`/dev/lxd/sock`.
Currently only the `user.*` keys are accessible to the instance.

At this time, there also aren't any instance-writable namespaces.
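For instance, from a shell inside an instance (this assumes `curl` is installed there):

```bash
# Query the guest-facing API over the Unix socket
curl --unix-socket /dev/lxd/sock http://lxd/1.0

# List the configuration keys exposed to this instance (user.* only)
curl --unix-socket /dev/lxd/sock http://lxd/1.0/config
```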

Return value:

2 changes: 1 addition & 1 deletion doc/faq.md
@@ -129,7 +129,7 @@ safe to do.
### Beware of 'port security'

Many switches do *not* allow MAC address changes, and will either drop traffic
with an incorrect MAC, or disable the port totally. If you can ping a LXD instance
from the host, but are not able to ping it from a _different_ host, this could be
the cause. The way to diagnose this is to run a tcpdump on the uplink (in this case,
eth1), and you will see either 'ARP Who has xx.xx.xx.xx tell yy.yy.yy.yy', with you
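A sketch of that diagnosis from the host (the interface name is illustrative):

```bash
# Watch ARP traffic on the uplink; -e prints MAC addresses, -nn disables name lookups
tcpdump -i eth1 -nn -e arp
```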
