Revert "nixos/doc: re-format"
This reverts commit ea6e877. The new
format is not an improvement.

(cherry picked from commit b0ccd6d)

(Also synced rl-19.09.xml with master.)
edolstra committed Oct 8, 2019
1 parent 1475797 commit 5d1649a
Showing 123 changed files with 5,718 additions and 2,381 deletions.
32 changes: 26 additions & 6 deletions nixos/doc/manual/administration/boot-problems.xml
@@ -6,15 +6,22 @@
<title>Boot Problems</title>

<para>
If NixOS fails to boot, there are a number of kernel command line parameters that may help you to identify or fix the issue. You can add these parameters in the GRUB boot menu by pressing “e” to modify the selected boot entry and editing the line starting with <literal>linux</literal>. The following are some useful kernel command line parameters that are recognised by the NixOS boot scripts or by systemd:
If NixOS fails to boot, there are a number of kernel command line parameters
that may help you to identify or fix the issue. You can add these parameters
in the GRUB boot menu by pressing “e” to modify the selected boot entry
and editing the line starting with <literal>linux</literal>. The following
are some useful kernel command line parameters that are recognised by the
NixOS boot scripts or by systemd:
<variablelist>
<varlistentry>
<term>
<literal>boot.shell_on_fail</literal>
</term>
<listitem>
<para>
Start a root shell if something goes wrong in stage 1 of the boot process (the initial ramdisk). This is disabled by default because there is no authentication for the root shell.
Start a root shell if something goes wrong in stage 1 of the boot process
(the initial ramdisk). This is disabled by default because there is no
authentication for the root shell.
</para>
</listitem>
</varlistentry>
@@ -24,7 +31,10 @@
</term>
<listitem>
<para>
Start an interactive shell in stage 1 before anything useful has been done. That is, no modules have been loaded and no file systems have been mounted, except for <filename>/proc</filename> and <filename>/sys</filename>.
Start an interactive shell in stage 1 before anything useful has been
done. That is, no modules have been loaded and no file systems have been
mounted, except for <filename>/proc</filename> and
<filename>/sys</filename>.
</para>
</listitem>
</varlistentry>
@@ -44,7 +54,11 @@
</term>
<listitem>
<para>
Boot into rescue mode (a.k.a. single user mode). This will cause systemd to start nothing but the unit <literal>rescue.target</literal>, which runs <command>sulogin</command> to prompt for the root password and start a root login shell. Exiting the shell causes the system to continue with the normal boot process.
Boot into rescue mode (a.k.a. single user mode). This will cause systemd
to start nothing but the unit <literal>rescue.target</literal>, which
runs <command>sulogin</command> to prompt for the root password and start
a root login shell. Exiting the shell causes the system to continue with
the normal boot process.
</para>
</listitem>
</varlistentry>
@@ -54,7 +68,8 @@
</term>
<listitem>
<para>
Make systemd very verbose and send log messages to the console instead of the journal.
Make systemd very verbose and send log messages to the console instead of
the journal.
</para>
</listitem>
</varlistentry>
@@ -65,6 +80,11 @@
</para>

<para>
If no login prompts or X11 login screens appear (e.g. due to hanging dependencies), you can press Alt+ArrowUp. If you’re lucky, this will start rescue mode (described above). (Also note that since most units have a 90-second timeout before systemd gives up on them, the <command>agetty</command> login prompts should appear eventually unless something is very wrong.)
If no login prompts or X11 login screens appear (e.g. due to hanging
dependencies), you can press Alt+ArrowUp. If you’re lucky, this will start
rescue mode (described above). (Also note that since most units have a
90-second timeout before systemd gives up on them, the
<command>agetty</command> login prompts should appear eventually unless
something is very wrong.)
</para>
</section>
32 changes: 24 additions & 8 deletions nixos/doc/manual/administration/cleaning-store.xml
@@ -5,43 +5,59 @@
xml:id="sec-nix-gc">
<title>Cleaning the Nix Store</title>
<para>
Nix has a purely functional model, meaning that packages are never upgraded in place. Instead new versions of packages end up in a different location in the Nix store (<filename>/nix/store</filename>). You should periodically run Nix’s <emphasis>garbage collector</emphasis> to remove old, unreferenced packages. This is easy:
Nix has a purely functional model, meaning that packages are never upgraded
in place. Instead new versions of packages end up in a different location in
the Nix store (<filename>/nix/store</filename>). You should periodically run
Nix’s <emphasis>garbage collector</emphasis> to remove old, unreferenced
packages. This is easy:
<screen>
<prompt>$ </prompt>nix-collect-garbage
</screen>
Alternatively, you can use a systemd unit that does the same in the background:
Alternatively, you can use a systemd unit that does the same in the
background:
<screen>
<prompt># </prompt>systemctl start nix-gc.service
</screen>
You can tell NixOS in <filename>configuration.nix</filename> to run this unit automatically at certain points in time, for instance, every night at 03:15:
You can tell NixOS in <filename>configuration.nix</filename> to run this unit
automatically at certain points in time, for instance, every night at 03:15:
<programlisting>
<xref linkend="opt-nix.gc.automatic"/> = true;
<xref linkend="opt-nix.gc.dates"/> = "03:15";
</programlisting>
</para>
<para>
The commands above do not remove garbage collector roots, such as old system configurations. Thus they do not remove the ability to roll back to previous configurations. The following command deletes old roots, removing the ability to roll back to them:
The commands above do not remove garbage collector roots, such as old system
configurations. Thus they do not remove the ability to roll back to previous
configurations. The following command deletes old roots, removing the ability
to roll back to them:
<screen>
<prompt>$ </prompt>nix-collect-garbage -d
</screen>
You can also do this for specific profiles, e.g.
<screen>
<prompt>$ </prompt>nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old
</screen>
Note that NixOS system configurations are stored in the profile <filename>/nix/var/nix/profiles/system</filename>.
Note that NixOS system configurations are stored in the profile
<filename>/nix/var/nix/profiles/system</filename>.
</para>
<para>
Another way to reclaim disk space (often as much as 40% of the size of the Nix store) is to run Nix’s store optimiser, which seeks out identical files in the store and replaces them with hard links to a single copy.
Another way to reclaim disk space (often as much as 40% of the size of the
Nix store) is to run Nix’s store optimiser, which seeks out identical files
in the store and replaces them with hard links to a single copy.
<screen>
<prompt>$ </prompt>nix-store --optimise
</screen>
Since this command needs to read the entire Nix store, it can take quite a while to finish.
Since this command needs to read the entire Nix store, it can take quite a
while to finish.
</para>
<section xml:id="sect-nixos-gc-boot-entries">
<title>NixOS Boot Entries</title>

<para>
If your <filename>/boot</filename> partition runs out of space, after clearing old profiles you must rebuild your system with <literal>nixos-rebuild</literal> to update the <filename>/boot</filename> partition and clear space.
If your <filename>/boot</filename> partition runs out of space, after
clearing old profiles you must rebuild your system with
<literal>nixos-rebuild</literal> to update the <filename>/boot</filename>
partition and clear space.
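A minimal sketch of that sequence (assuming you want the freshly built
configuration to become the default boot entry; another
<literal>nixos-rebuild</literal> subcommand such as
<literal>switch</literal> may be what you want instead):
<screen>
<prompt># </prompt>nix-collect-garbage -d
<prompt># </prompt>nixos-rebuild boot
</screen>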
</para>
</section>
</chapter>
26 changes: 21 additions & 5 deletions nixos/doc/manual/administration/container-networking.xml
@@ -6,7 +6,10 @@
<title>Container Networking</title>

<para>
When you create a container using <literal>nixos-container create</literal>, it gets its own private IPv4 address in the range <literal>10.233.0.0/16</literal>. You can get the container’s IPv4 address as follows:
When you create a container using <literal>nixos-container create</literal>,
it gets its own private IPv4 address in the range
<literal>10.233.0.0/16</literal>. You can get the container’s IPv4 address
as follows:
<screen>
<prompt># </prompt>nixos-container show-ip foo
10.233.4.2
@@ -17,21 +20,34 @@
</para>

<para>
Networking is implemented using a pair of virtual Ethernet devices. The network interface in the container is called <literal>eth0</literal>, while the matching interface in the host is called <literal>ve-<replaceable>container-name</replaceable></literal> (e.g., <literal>ve-foo</literal>). The container has its own network namespace and the <literal>CAP_NET_ADMIN</literal> capability, so it can perform arbitrary network configuration such as setting up firewall rules, without affecting or having access to the host’s network.
Networking is implemented using a pair of virtual Ethernet devices. The
network interface in the container is called <literal>eth0</literal>, while
the matching interface in the host is called
<literal>ve-<replaceable>container-name</replaceable></literal> (e.g.,
<literal>ve-foo</literal>). The container has its own network namespace and
the <literal>CAP_NET_ADMIN</literal> capability, so it can perform arbitrary
network configuration such as setting up firewall rules, without affecting or
having access to the host’s network.
</para>

<para>
By default, containers cannot talk to the outside network. If you want that, you should set up Network Address Translation (NAT) rules on the host to rewrite container traffic to use your external IP address. This can be accomplished using the following configuration on the host:
By default, containers cannot talk to the outside network. If you want that,
you should set up Network Address Translation (NAT) rules on the host to
rewrite container traffic to use your external IP address. This can be
accomplished using the following configuration on the host:
<programlisting>
<xref linkend="opt-networking.nat.enable"/> = true;
<xref linkend="opt-networking.nat.internalInterfaces"/> = ["ve-+"];
<xref linkend="opt-networking.nat.externalInterface"/> = "eth0";
</programlisting>
where <literal>eth0</literal> should be replaced with the desired external interface. Note that <literal>ve-+</literal> is a wildcard that matches all container interfaces.
where <literal>eth0</literal> should be replaced with the desired external
interface. Note that <literal>ve-+</literal> is a wildcard that matches all
container interfaces.
</para>

<para>
If you are using Network Manager, you need to explicitly prevent it from managing container interfaces:
If you are using Network Manager, you need to explicitly prevent it from
managing container interfaces:
<programlisting>
networking.networkmanager.unmanaged = [ "interface-name:ve-*" ];
</programlisting>
19 changes: 16 additions & 3 deletions nixos/doc/manual/administration/containers.xml
@@ -5,15 +5,28 @@
xml:id="ch-containers">
<title>Container Management</title>
<para>
NixOS allows you to easily run other NixOS instances as <emphasis>containers</emphasis>. Containers are a light-weight approach to virtualisation that runs software in the container at the same speed as in the host system. NixOS containers share the Nix store of the host, making container creation very efficient.
NixOS allows you to easily run other NixOS instances as
<emphasis>containers</emphasis>. Containers are a light-weight approach to
virtualisation that runs software in the container at the same speed as in
the host system. NixOS containers share the Nix store of the host, making
container creation very efficient.
</para>
<warning>
<para>
Currently, NixOS containers are not perfectly isolated from the host system. This means that a user with root access to the container can do things that affect the host. So you should not give container root access to untrusted users.
Currently, NixOS containers are not perfectly isolated from the host system.
This means that a user with root access to the container can do things that
affect the host. So you should not give container root access to untrusted
users.
</para>
</warning>
<para>
NixOS containers can be created in two ways: imperatively, using the command <command>nixos-container</command>, and declaratively, by specifying them in your <filename>configuration.nix</filename>. The declarative approach implies that containers get upgraded along with your host system when you run <command>nixos-rebuild</command>, which is often not what you want. By contrast, in the imperative approach, containers are configured and updated independently from the host system.
NixOS containers can be created in two ways: imperatively, using the command
<command>nixos-container</command>, and declaratively, by specifying them in
your <filename>configuration.nix</filename>. The declarative approach implies
that containers get upgraded along with your host system when you run
<command>nixos-rebuild</command>, which is often not what you want. By
contrast, in the imperative approach, containers are configured and updated
independently from the host system.
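For instance, a minimal sketch of both approaches (the container name
<literal>foo</literal> and the option set inside it are purely
illustrative):
<screen>
<prompt># </prompt>nixos-container create foo
<prompt># </prompt>nixos-container start foo
</screen>
or, declaratively, in the host’s <filename>configuration.nix</filename>:
<programlisting>
containers.foo = {
  config = { config, pkgs, ... }: {
    services.openssh.enable = true;
  };
};
</programlisting>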
</para>
<xi:include href="imperative-containers.xml" />
<xi:include href="declarative-containers.xml" />
33 changes: 27 additions & 6 deletions nixos/doc/manual/administration/control-groups.xml
@@ -5,10 +5,16 @@
xml:id="sec-cgroups">
<title>Control Groups</title>
<para>
To keep track of the processes in a running system, systemd uses <emphasis>control groups</emphasis> (cgroups). A control group is a set of processes used to allocate resources such as CPU, memory or I/O bandwidth. There can be multiple control group hierarchies, allowing each kind of resource to be managed independently.
To keep track of the processes in a running system, systemd uses
<emphasis>control groups</emphasis> (cgroups). A control group is a set of
processes used to allocate resources such as CPU, memory or I/O bandwidth.
There can be multiple control group hierarchies, allowing each kind of
resource to be managed independently.
</para>
<para>
The command <command>systemd-cgls</command> lists all control groups in the <literal>systemd</literal> hierarchy, which is what systemd uses to keep track of the processes belonging to each service or user session:
The command <command>systemd-cgls</command> lists all control groups in the
<literal>systemd</literal> hierarchy, which is what systemd uses to keep
track of the processes belonging to each service or user session:
<screen>
<prompt>$ </prompt>systemd-cgls
├─user
@@ -26,19 +32,34 @@
│ └─2376 dhcpcd --config /nix/store/f8dif8dsi2yaa70n03xir8r653776ka6-dhcpcd.conf
└─ <replaceable>...</replaceable>
</screen>
Similarly, <command>systemd-cgls cpu</command> shows the cgroups in the CPU hierarchy, which allows per-cgroup CPU scheduling priorities. By default, every systemd service gets its own CPU cgroup, while all user sessions are in the top-level CPU cgroup. This ensures, for instance, that a thousand run-away processes in the <literal>httpd.service</literal> cgroup cannot starve the CPU for one process in the <literal>postgresql.service</literal> cgroup. (By contrast, if they were in the same cgroup, then the PostgreSQL process would get 1/1001 of the cgroup’s CPU time.) You can limit a service’s CPU share in <filename>configuration.nix</filename>:
Similarly, <command>systemd-cgls cpu</command> shows the cgroups in the CPU
hierarchy, which allows per-cgroup CPU scheduling priorities. By default,
every systemd service gets its own CPU cgroup, while all user sessions are in
the top-level CPU cgroup. This ensures, for instance, that a thousand
run-away processes in the <literal>httpd.service</literal> cgroup cannot
starve the CPU for one process in the <literal>postgresql.service</literal>
cgroup. (By contrast, if they were in the same cgroup, then the PostgreSQL
process would get 1/1001 of the cgroup’s CPU time.) You can limit a
service’s CPU share in <filename>configuration.nix</filename>:
<programlisting>
<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.CPUShares = 512;
</programlisting>
By default, every cgroup has 1024 CPU shares, so this will halve the CPU allocation of the <literal>httpd.service</literal> cgroup.
By default, every cgroup has 1024 CPU shares, so this will halve the CPU
allocation of the <literal>httpd.service</literal> cgroup.
</para>
<para>
There also is a <literal>memory</literal> hierarchy that controls memory allocation limits; by default, all processes are in the top-level cgroup, so any service or session can exhaust all available memory. Per-cgroup memory limits can be specified in <filename>configuration.nix</filename>; for instance, to limit <literal>httpd.service</literal> to 512 MiB of RAM (excluding swap):
There also is a <literal>memory</literal> hierarchy that controls memory
allocation limits; by default, all processes are in the top-level cgroup, so
any service or session can exhaust all available memory. Per-cgroup memory
limits can be specified in <filename>configuration.nix</filename>; for
instance, to limit <literal>httpd.service</literal> to 512 MiB of RAM
(excluding swap):
<programlisting>
<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.MemoryLimit = "512M";
</programlisting>
</para>
<para>
The command <command>systemd-cgtop</command> shows a continuously updated list of all cgroups with their CPU and memory usage.
The command <command>systemd-cgtop</command> shows a continuously updated
list of all cgroups with their CPU and memory usage.
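For example (output omitted; the display updates continuously, much like
<command>top</command>):
<screen>
<prompt>$ </prompt>systemd-cgtop
</screen>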
</para>
</chapter>
