Cleanup for the automatically generated config options #13229

Merged · 5 commits · Mar 28, 2024
6 changes: 6 additions & 0 deletions doc/.sphinx/_static/custom.css
@@ -356,3 +356,9 @@ p code.literal {
.configoption .fields p code.literal {
font-size: var(--font-size--small--2);
}

+/* Headings and section links in the configuration option index */
+
+.domainindex-table .cap, .domainindex-jumpbox {
+  text-transform: uppercase;
+}
6 changes: 3 additions & 3 deletions doc/.sphinx/_templates/domainindex.html
@@ -18,9 +18,9 @@ <h1>{{ indextitle }}</h1>
<!-- rufu: add note for config option index -->
{% if indextitle == "Configuration options" %}
<p>This page lists the available configuration options for different entities.</p>
<div class="admonition important">
<p class="admonition-title">Important</p>
<p>Currently, this page does not cover all available configuration options, but only the ones that are extracted from the code base.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>The <a href="#cap-sysctl">SYSCTL</a> configuration options are settings specified in the server's <code class="docutils literal notranslate"><span class="pre">/etc/sysctl.conf</span></code> file rather than LXD configuration options.</code> </p>
</div>
{% endif %}
<!-- end note -->
168 changes: 84 additions & 84 deletions doc/api-extensions.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion doc/config_options.txt
@@ -2293,7 +2293,7 @@ Specify the mounts of a given file system that should be redirected to their FUS

```

-```{config:option} security.syscalls.intercept.sched_setcheduler instance-security
+```{config:option} security.syscalls.intercept.sched_setscheduler instance-security
:condition: "container"
:defaultdesc: "`false`"
:liveupdate: "no"
2 changes: 1 addition & 1 deletion doc/dev-lxd.md
@@ -214,7 +214,7 @@ This never returns. Each notification is sent as a separate JSON object:

* Description: Download a public/cached image from the host
* Return: raw image or error
-* Access: Requires `security.devlxd.images` set to `true`
+* Access: Requires {config:option}`instance-security:security.devlxd.images` set to `true`

Return value:

2 changes: 1 addition & 1 deletion doc/explanation/clustering.md
@@ -197,7 +197,7 @@ def instance_placement(request, candidate_members):
return # Return empty to allow instance placement to proceed.
```

-The scriptlet must be applied to LXD by storing it in the `instances.placement.scriptlet` global configuration setting.
+The scriptlet must be applied to LXD by storing it in the {config:option}`server-miscellaneous:instances.placement.scriptlet` global configuration setting.

For example, if the scriptlet is saved inside a file called `instance_placement.star`, then it can be applied to LXD with the following command:

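The example command itself is collapsed above. A minimal sketch, assuming the `key=-` form of `lxc config set` reads the value from standard input:

```
cat instance_placement.star | lxc config set instances.placement.scriptlet=-
```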
4 changes: 2 additions & 2 deletions doc/explanation/security.md
@@ -55,14 +55,14 @@ The root user and all members of the `lxd` group can interact with the local dae
### Access to the remote API

By default, access to the daemon is only possible locally.
-By setting the `core.https_address` configuration option, you can expose the same API over the network on a {abbr}`TLS (Transport Layer Security)` socket.
+By setting the {config:option}`server-core:core.https_address` configuration option, you can expose the same API over the network on a {abbr}`TLS (Transport Layer Security)` socket.
See {ref}`server-expose` for instructions.
Remote clients can then connect to LXD and access any image that is marked for public use.

There are several ways to authenticate remote clients as trusted clients to allow them to access the API.
See {ref}`authentication` for details.

-In a production setup, you should set `core.https_address` to the single address where the server should be available (rather than any address on the host).
+In a production setup, you should set {config:option}`server-core:core.https_address` to the single address where the server should be available (rather than any address on the host).
In addition, you should set firewall rules to allow access to the LXD port only from authorized hosts/subnets.

(container-security)=
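For illustration, binding the API to one address could look like this (the address is a documentation placeholder, not from this PR):

```
lxc config set core.https_address 192.0.2.10:8443
```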
2 changes: 1 addition & 1 deletion doc/howto/cluster_config_networks.md
@@ -2,7 +2,7 @@
# How to configure networks for a cluster

All members of a cluster must have identical networks defined.
-The only configuration keys that may differ between networks on different members are [`bridge.external_interfaces`](network-bridge-options), [`parent`](ref-networks), [`bgp.ipv4.nexthop`](network-bridge-options) and [`bgp.ipv6.nexthop`](network-bridge-options).
+The only configuration keys that may differ between networks on different members are {config:option}`network-bridge-network-conf:bridge.external_interfaces`, {config:option}`network-physical-network-conf:parent`, {config:option}`network-bridge-network-conf:bgp.ipv4.nexthop`, and {config:option}`network-bridge-network-conf:bgp.ipv6.nexthop`.
See {ref}`clustering-member-config` for more information.

Creating additional networks is a two-step process:
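A sketch of that two-step process, with assumed member, network, and interface names: first define the network on each member with its member-specific keys, then run the final creation command to instantiate it cluster-wide:

```
lxc network create my-bridge --target server1 bridge.external_interfaces=eth1
lxc network create my-bridge --target server2 bridge.external_interfaces=eth2
lxc network create my-bridge
```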
2 changes: 1 addition & 1 deletion doc/howto/cluster_manage.md
@@ -75,7 +75,7 @@ This command migrates all instances on the given server, moving them to other cl
The evacuated cluster member is then transitioned to an "evacuated" state, which prevents the creation of any instances on it.

You can control how each instance is moved through the {config:option}`instance-miscellaneous:cluster.evacuate` instance configuration key.
-Instances are shut down cleanly, respecting the `boot.host_shutdown_timeout` configuration key.
+Instances are shut down cleanly, respecting the {config:option}`instance-boot:boot.host_shutdown_timeout` configuration key.

When the evacuated server is available again, use the [`lxc cluster restore`](lxc_cluster_restore.md) command to move the server back into a normal running state.
This command also moves the evacuated instances back from the servers that were temporarily holding them.
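A minimal sketch of that workflow (the member name is a placeholder):

```
lxc cluster evacuate server1   # migrate instances away; block new ones
lxc cluster restore server1    # move the instances back afterwards
```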
2 changes: 1 addition & 1 deletion doc/howto/instances_create.md
@@ -302,7 +302,7 @@ Lastly, attach the custom ISO volume to the VM using the following command:
```
````

-The `boot.priority` configuration key ensures that the VM will boot from the ISO first.
+The {config:option}`device-disk-device-conf:boot.priority` configuration key ensures that the VM will boot from the ISO first.
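For example, a hedged sketch of setting that key on the attached ISO device (instance and device names are assumptions):

```
lxc config device set my-vm iso-volume boot.priority=10
```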
Start the VM and {ref}`connect to the console <instances-console>` as there might be a menu you need to interact with:

````{tabs}
2 changes: 1 addition & 1 deletion doc/howto/instances_troubleshoot.md
@@ -71,7 +71,7 @@ The {doc}`container requirements <../container-environment>` specify that every
If those directories don't exist, LXD cannot mount them, and `systemd` will then try to do so.
As this is an unprivileged container, `systemd` does not have the ability to do this, and it then freezes.

-So you can see the environment before anything is changed, and you can explicitly change the init system in a container using the `raw.lxc` configuration parameter.
+So you can see the environment before anything is changed, and you can explicitly change the init system in a container using the {config:option}`instance-raw:raw.lxc` configuration parameter.
This is equivalent to setting `init=/bin/bash` on the Linux kernel command line.

lxc config set systemd raw.lxc 'lxc.init.cmd = /bin/bash'
4 changes: 2 additions & 2 deletions doc/howto/network_bridge_resolved.md
@@ -11,9 +11,9 @@ If the system that runs LXD uses `systemd-resolved` to perform DNS lookups, you
To do so, add the DNS servers and domains provided by a LXD network bridge to the `resolved` configuration.

```{note}
-The `dns.mode` option (see {ref}`network-bridge-options`) must be set to `managed` or `dynamic` if you want to use this feature.
+The {config:option}`network-bridge-network-conf:dns.mode` option must be set to `managed` or `dynamic` if you want to use this feature.

-Depending on the configured `dns.domain`, you might need to disable DNSSEC in `resolved` to allow for DNS resolution.
+Depending on the configured {config:option}`network-bridge-network-conf:dns.domain`, you might need to disable DNSSEC in `resolved` to allow for DNS resolution.
This can be done through the `DNSSEC` option in `resolved.conf`.
```

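One possible way to feed those values to `systemd-resolved`, sketched with an assumed bridge name, a placeholder DNS address, and the default `dns.domain` of `lxd`:

```
resolvectl dns lxdbr0 192.0.2.1
resolvectl domain lxdbr0 '~lxd'
```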
2 changes: 1 addition & 1 deletion doc/howto/network_zones.md
@@ -63,7 +63,7 @@ For example: `lxc network zone set lxd.example.net peers.whatever.address=192.0.
```{note}
It is not enough for the address to be of the same machine that `dig` is calling from; it needs to
match as a string with what the DNS server in `lxd` thinks is the exact remote address. `dig` binds to
-`0.0.0.0`, therefore the address you need is most likely the same that you provided to `core.dns_address`.
+`0.0.0.0`, therefore the address you need is most likely the same that you provided to {config:option}`server-core:core.dns_address`.
```

For example, running `dig @<DNS_server_IP> -p <DNS_server_PORT> axfr lxd.example.net` might give the following output:
2 changes: 1 addition & 1 deletion doc/howto/projects_work.md
@@ -74,7 +74,7 @@ For example, to move the instance `my-instance` from the `default` project to `m

### Copy a profile to another project

-If you create a project with the default settings, profiles are isolated in the project ([`features.profiles`](project-features) is set to `true`).
+If you create a project with the default settings, profiles are isolated in the project ({config:option}`project-features:features.profiles` is set to `true`).
Therefore, the project does not have access to the default profile (which is part of the `default` project), and you will see an error similar to the following when trying to create an instance:

```{terminal}
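The error output itself is collapsed above. The usual remedy, sketched with an assumed project name, is to copy the default profile into the isolated project:

```
lxc profile copy default default --target-project my-project
```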
2 changes: 1 addition & 1 deletion doc/reference/devices_disk.md
@@ -75,7 +75,7 @@ Initial volume configuration allows setting specific configurations for the root
These settings are prefixed with `initial.` and are only applied when the instance is created.
This method allows creating instances that have unique configurations, independent of the default storage pool settings.

-For example, you can add an initial volume configuration for `zfs.block_mode` to an existing profile, and this
+For example, you can add an initial volume configuration for {config:option}`storage-zfs-volume-conf:zfs.block_mode` to an existing profile, and this
will then take effect for each new instance you create using this profile:

lxc profile device set <profile_name> <device_name> initial.zfs.block_mode=true
2 changes: 1 addition & 1 deletion doc/reference/devices_nic.md
@@ -463,7 +463,7 @@ A bridge also lets you use MAC filtering and I/O limits, which cannot be applied

If you're using MAAS to manage the physical network under your LXD host and want to attach your instances directly to a MAAS-managed network, LXD can be configured to interact with MAAS so that it can track your instances.

-At the daemon level, you must configure `maas.api.url` and `maas.api.key`, and then set the `maas.subnet.ipv4` and/or `maas.subnet.ipv6` keys on the instance or profile's `nic` entry.
+At the daemon level, you must configure {config:option}`server-miscellaneous:maas.api.url` and {config:option}`server-miscellaneous:maas.api.key`, and then set the NIC-specific `maas.subnet.ipv4` and/or `maas.subnet.ipv6` keys on the instance or profile's `nic` entry.

With this configuration, LXD registers all your instances with MAAS, giving them proper DHCP leases and DNS records.

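A rough sketch of that wiring; the URL, key, and subnet below are placeholders:

```
lxc config set maas.api.url http://maas.example.com:5240/MAAS
lxc config set maas.api.key <API_KEY>
lxc profile device set default eth0 maas.subnet.ipv4 192.0.2.0/24
```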
32 changes: 16 additions & 16 deletions doc/reference/instance_options.md
@@ -93,21 +93,21 @@ See {ref}`instance-options-limits-kernel` for more information.

You have different options to limit CPU usage:

-- Set `limits.cpu` to restrict which CPUs the instance can see and use.
+- Set {config:option}`instance-resource-limits:limits.cpu` to restrict which CPUs the instance can see and use.
See {ref}`instance-options-limits-cpu` for how to set this option.
-- Set `limits.cpu.allowance` to restrict the load an instance can put on the available CPUs.
+- Set {config:option}`instance-resource-limits:limits.cpu.allowance` to restrict the load an instance can put on the available CPUs.
This option is available only for containers.
See {ref}`instance-options-limits-cpu-container` for how to set this option.

It is possible to set both options at the same time to restrict both which CPUs are visible to the instance and the allowed usage of those CPUs.
-However, if you use `limits.cpu.allowance` with a time limit, you should avoid using `limits.cpu` in addition, because that puts a lot of constraints on the scheduler and might lead to less efficient allocations.
+However, if you use {config:option}`instance-resource-limits:limits.cpu.allowance` with a time limit, you should avoid using {config:option}`instance-resource-limits:limits.cpu` in addition, because that puts a lot of constraints on the scheduler and might lead to less efficient allocations.

The CPU limits are implemented through a mix of the `cpuset` and `cpu` cgroup controllers.
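For illustration, the two options side by side (instance name and values are placeholders):

```
lxc config set c1 limits.cpu 0-3            # pin the container to CPUs 0 through 3
lxc config set c1 limits.cpu.allowance 50%  # soft percentage limit, applied under load
```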

(instance-options-limits-cpu)=
#### CPU pinning

-`limits.cpu` results in CPU pinning through the `cpuset` controller.
+{config:option}`instance-resource-limits:limits.cpu` results in CPU pinning through the `cpuset` controller.
You can specify either which CPUs or how many CPUs are visible and available to the instance:

- To specify which CPUs to use, set `limits.cpu` to either a set of CPUs (for example, `1,2,3`) or a CPU range (for example, `0-3`).
@@ -119,18 +119,18 @@ You can specify either which CPUs or how many CPUs are visible and available to
##### CPU limits for virtual machines

```{note}
-LXD supports live-updating the `limits.cpu` option.
+LXD supports live-updating the {config:option}`instance-resource-limits:limits.cpu` option.
However, for virtual machines, this only means that the respective CPUs are hotplugged.
Depending on the guest operating system, you might need to either restart the instance or complete some manual actions to bring the new CPUs online.
```

LXD virtual machines default to having just one vCPU allocated, which shows up as matching the host CPU vendor and type, but has a single core and no threads.

-When `limits.cpu` is set to a single integer, LXD allocates multiple vCPUs and exposes them to the guest as full cores.
+When {config:option}`instance-resource-limits:limits.cpu` is set to a single integer, LXD allocates multiple vCPUs and exposes them to the guest as full cores.
Those vCPUs are not pinned to specific physical cores on the host.
The number of vCPUs can be updated while the VM is running.

-When `limits.cpu` is set to a range or comma-separated list of CPU IDs (as provided by [`lxc info --resources`](lxc_info.md)), the vCPUs are pinned to those physical cores.
+When {config:option}`instance-resource-limits:limits.cpu` is set to a range or comma-separated list of CPU IDs (as provided by [`lxc info --resources`](lxc_info.md)), the vCPUs are pinned to those physical cores.
In this scenario, LXD checks whether the CPU configuration lines up with a realistic hardware topology and if it does, it replicates that topology in the guest.
When doing CPU pinning, it is not possible to change the configuration while the VM is running.

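A sketch of the two modes for a VM (the instance name is a placeholder):

```
lxc config set my-vm limits.cpu 4      # four unpinned vCPUs; count can change while running
lxc config set my-vm limits.cpu 0-3    # pinned vCPUs; fixed while the VM is running
```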
Expand All @@ -144,24 +144,24 @@ All this allows for very high performance operations in the guest as the guest s
(instance-options-limits-cpu-container)=
#### Allowance and priority (container only)

-`limits.cpu.allowance` drives either the CFS scheduler quotas when passed a time constraint, or the generic CPU shares mechanism when passed a percentage value:
+{config:option}`instance-resource-limits:limits.cpu.allowance` drives either the CFS scheduler quotas when passed a time constraint, or the generic CPU shares mechanism when passed a percentage value:

- The time constraint (for example, `20ms/50ms`) is a hard limit.
-  For example, if you want to allow the container to use a maximum of one CPU, set `limits.cpu.allowance` to a value like `100ms/100ms`.
+  For example, if you want to allow the container to use a maximum of one CPU, set {config:option}`instance-resource-limits:limits.cpu.allowance` to a value like `100ms/100ms`.
The value is relative to one CPU worth of time, so to restrict to two CPUs worth of time, use something like `100ms/50ms` or `200ms/100ms`.
- When using a percentage value, the limit is a soft limit that is applied only when under load.
It is used to calculate the scheduler priority for the instance, relative to any other instance that is using the same CPU or CPUs.
-  For example, to limit the CPU usage of the container to one CPU when under load, set `limits.cpu.allowance` to `100%`.
+  For example, to limit the CPU usage of the container to one CPU when under load, set {config:option}`instance-resource-limits:limits.cpu.allowance` to `100%`.

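A brief illustration of the two forms (the instance name is a placeholder):

```
lxc config set c1 limits.cpu.allowance 100ms/100ms  # hard cap: one CPU's worth of time
lxc config set c1 limits.cpu.allowance 100%         # soft limit, applied only under load
```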
-`limits.cpu.nodes` can be used to restrict the CPUs that the instance can use to a specific set of NUMA nodes.
-To specify which NUMA nodes to use, set `limits.cpu.nodes` to either a set of NUMA node IDs (for example, `0,1`) or a set of NUMA node ranges (for example, `0-1,2-4`).
+{config:option}`instance-resource-limits:limits.cpu.nodes` can be used to restrict the CPUs that the instance can use to a specific set of NUMA nodes.
+To specify which NUMA nodes to use, set {config:option}`instance-resource-limits:limits.cpu.nodes` to either a set of NUMA node IDs (for example, `0,1`) or a set of NUMA node ranges (for example, `0-1,2-4`).

-`limits.cpu.priority` is another factor that is used to compute the scheduler priority score when a number of instances sharing a set of CPUs have the same percentage of CPU assigned to them.
+{config:option}`instance-resource-limits:limits.cpu.priority` is another factor that is used to compute the scheduler priority score when a number of instances sharing a set of CPUs have the same percentage of CPU assigned to them.

(instance-options-limits-hugepages)=
### Huge page limits

-LXD allows to limit the number of huge pages available to a container through the `limits.hugepage.[size]` key.
+LXD allows you to limit the number of huge pages available to a container through the `limits.hugepages.[size]` key (for example, {config:option}`instance-resource-limits:limits.hugepages.1MB`).

Architectures often expose multiple huge-page sizes.
The available huge-page sizes depend on the architecture.
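For example, capping 1 MB huge pages might look like this (instance name and value are placeholders):

```
lxc config set c1 limits.hugepages.1MB 64MB
```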
@@ -176,7 +176,7 @@ Limiting huge pages is done through the `hugetlb` cgroup controller, which means
(instance-options-limits-kernel)=
### Kernel resource limits

-For container instances, LXD exposes a generic namespaced key `limits.kernel.*` that can be used to set resource limits.
+For container instances, LXD exposes a generic namespaced key {config:option}`instance-resource-limits:limits.kernel.*` that can be used to set resource limits.

It is generic in the sense that LXD does not perform any validation on the resource that is specified following the `limits.kernel.*` prefix.
LXD cannot know about all the possible resources that a given kernel supports.
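A typical use, sketched with the `nofile` resource (maximum number of open files); the instance name is a placeholder:

```
lxc config set c1 limits.kernel.nofile 3000
```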
@@ -266,7 +266,7 @@ For example:
- To add devices that are not supported by LXD before the machine boots.
- To remove devices that conflict with the guest OS.

-To override the configuration, set the `raw.qemu.conf` option.
+To override the configuration, set the {config:option}`instance-raw:raw.qemu.conf` option.
It supports a format similar to `qemu.conf`, with some additions.
Since it is a multi-line configuration option, you can use it to modify multiple sections or keys.

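As a hedged sketch, assuming (as in the upstream documentation) that naming a section without any keys removes it, dropping a generated device could look like this (VM and section names are assumptions):

```
lxc config set my-vm raw.qemu.conf '[device "dev-qemu_rng"]'
```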