NOTICE: This is a work in progress. This document is intentionally terse, capturing intent as of the time it was written. Things may change.
The platform image is based on Debian, using ZFS on Linux.

- Install Debian 10 64-bit on a machine (e.g. in VMware).
- Install ZFS.
- Create a zpool (warning: this will destroy the disk's existing partitioning), e.g.:

  ```
  zpool create data /dev/sdb
  sudo touch /data/.system_pool
  ```

- Install Git:

  ```
  apt install -y git
  ```

- Build the image: clone this repo and run the debian-live image builder:

  ```
  git clone https://github.com/TritonDataCenter/linux-live
  cd linux-live
  ./tools/debian-live
  ```

- Copy the resulting image (ISO or USB) out of the Debian machine and use it to boot the compute node (e.g. in a different VMware virtual machine, or on real hardware).
The root file system of the image will be a squashfs image that is mounted with a tmpfs overlay, allowing the root file system to be read-write. Since the overlay is backed by tmpfs, it is not persisted across reboots.
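For orientation, here is a minimal sketch of the kind of overlay live-boot assembles. The paths are illustrative assumptions, not the exact ones this image uses:

```
# Read-only squashfs as the lower layer, tmpfs as the volatile upper layer.
mkdir -p /run/rootfs /run/overlay /mnt/merged
mount -t squashfs -o ro,loop filesystem.squashfs /run/rootfs
mount -t tmpfs tmpfs /run/overlay
mkdir -p /run/overlay/upper /run/overlay/work
# The merged mount becomes / after the pivot; writes land in the tmpfs
# upper layer and therefore vanish on reboot.
mount -t overlay overlay \
    -o lowerdir=/run/rootfs,upperdir=/run/overlay/upper,workdir=/run/overlay/work \
    /mnt/merged
```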
There are a small number of Triton-specific files that need to be delivered into the image. In general they fall into the following categories:
- Things needed to bootstrap Triton. For instance, `ur-agent` and its dependencies (e.g. `/usr/node`) need to be present.
- Things needed for sane operation. Currently this includes:
  - Setting the locale to quiet various warnings.
  - Fixing a very poor default setting for `mouse` in `vim` so that copy and paste will work.
  - Adding Triton paths to `$PATH`.
  - Use of DHCP when booted as not part of Triton.
  - SSH host key generation and preservation.
  - Altering service dependencies so that service configuration stored in ZFS is respected.
  - Generating the same hostid every time to keep ZFS happy (see the sketch after this list).
- Utilities that SmartOS admins expect to have. This includes `json` and `bunyan`, among other things.
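On the hostid point, `zgenhostid` (shipped with zfsutils) can pin a stable value in `/etc/hostid`. Deriving it from the machine's DMI UUID, as below, is an assumption for illustration, not necessarily what this image does:

```
# Run as root. Write a deterministic /etc/hostid so zpool import does
# not see a "different host" after every stateless boot; -f overwrites
# any existing file.
zgenhostid -f "$(tr -d '-' < /sys/class/dmi/id/product_uuid | tr 'A-Z' 'a-z' | cut -c1-8)"
```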
As much as possible, Triton components that are part of the image are installed under `/usr/triton`. When needed, symbolic links should be installed for compatibility. For instance, much of Triton software may assume that `/usr/node/bin/node` is the platform's node installation, so `/usr/node` is a symbolic link to the appropriate directory under `/usr/triton`.
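For example (the exact target directory under `/usr/triton` is an assumption here):

```
# Compatibility link so software expecting /usr/node/bin/node keeps working.
ln -s /usr/triton/node /usr/node
```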
Files used for configuring systemd (service files, drop-ins, etc.) are installed under `/usr/lib/systemd`.
Special effort is made to keep some directories empty so that ZFS file systems can be mounted over them. For instance, persistent network configuration is stored as drop-in files in `/etc/systemd/network`, which is mounted from `<system pool>/platform/etc/systemd/network`.
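A hedged sketch of creating such a dataset, assuming a system pool named `data` as in the earlier zpool example:

```
# Create the intermediate datasets, then mount the leaf over the
# (deliberately empty) /etc/systemd/network directory in the image.
zfs create -p data/platform/etc/systemd
zfs create -o mountpoint=/etc/systemd/network data/platform/etc/systemd/network
```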
Several conventions from SmartOS have been carried forward to Linux. This includes, but is not limited to:
- The `noimport=true` boot parameter will skip importing pools (see the example after this list).
- The `destroy_zpools=true` boot parameter will destroy the system pool at boot.
- Marking `zones/var` for factory reset will destroy the system pool at boot.
- `sdc-factoryreset` will mark `zones/var` for reset.
- Custom files and services may be delivered via `/opt/custom`. This includes `/opt/custom/systemd` for custom unit files.
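For instance, a hedged illustration of passing the first convention on the kernel command line, in the `boot.ipxe` style shown later in this document:

```
# Skip pool import for this boot, e.g. for rescue work.
kernel /os/20210731T223008Z/platform/x86_64/vmlinuz boot=live console=tty0 noimport=true
```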
Initially, images will be available via Manta. At a later time, these will be distributed via `updates.tritondatacenter.com`.
The layout of a Linux `platform.tgz` file will be:

```
platform/
platform/etc/
platform/etc/version/
platform/etc/version/platform
platform/etc/version/os-release
platform/etc/version/gitstatus
platform/x86_64/vmlinuz
platform/x86_64/initrd
platform/x86_64/initrd.hash
platform/x86_64/filesystem.squashfs
platform/x86_64/filesystem.squashfs.hash
platform/x86_64/filesystem.packages
platform/x86_64/filesystem.manifest
platform/x86_64/build.tgz
platform/x86_64/build.tgz.hash
platform/x86_64/build.tgz.packages
platform/x86_64/build.tgz.manifest
```
The existing `sdcadm platform` command is used to install Linux platform images.

```
sdcadm platform install /path/to/image.tar.gz
```
When a Linux platform is assigned to a CN, the following files will be configured in the dhcpd zone under `/tftpboot`:

- `menu.lst.01<MAC>`: grub configuration (legacy configuration)
- `boot.ipxe.01<MAC>`: iPXE configuration
- `bootfs/<MAC>/networking.json`: same as SmartOS
Note: Triton compute nodes will boot Loader from the USB first, then load iPXE and chain load the boot.ipxe file. Chain loading grub is legacy, but still supported. In most cases it is not needed and should not be used.
`boot.ipxe.01<MAC>` will resemble:

```
#!ipxe
kernel /os/20210731T223008Z/platform/x86_64/vmlinuz boot=live console=tty0 console=ttyS1,115200n8 <EXTRA OPTS> fetch=http://10.33.166.12/os/20210731T223008Z/platform/x86_64/filesystem.squashfs
initrd http://<booter>/os/20210731T223008Z/platform/x86_64/initrd
module http://<booter>/extra/joysetup/node.config /etc/node.config
module http://${next-server}/bootfs/<MAC>/networking.json /etc/triton-networking.json
module http://${next-server}/bootfs/<MAC>/networking.json.hash /etc/triton-networking.json.hash
boot
```
This example assumes that booter is configured to serve files via HTTP rather than TFTP. The current Triton default is HTTP boot, which can significantly speed up boot times. If you have an older Triton installation and would like to switch to HTTP boot, use the following commands:

```
dhcpd_svc=$(sdc-sapi /services?name=dhcpd | json -Ha uuid)
sapiadm update "$dhcpd_svc" metadata.http_pxe_boot=true
```
The boot of a Linux CN via iPXE uses the following general procedure:
- iPXE downloads `/os/<platform-timestamp>/platform/x86_64/vmlinuz` as the kernel.
- iPXE downloads `/os/<platform-timestamp>/platform/x86_64/initrd.img` as the initial ramdisk.
- iPXE downloads `/bootfs/<MAC>/networking.json` as `/networking.json`.
- The kernel starts, loads `initrd.img` into an initramfs, and mounts it at `/`. `/networking.json` and `/packages.tar` are visible.
- The live-boot scripts download `filesystem.squashfs` and create the required overlay mounts to make it writable.
- `/networking.json` is moved to a location that will be accessible in the new root. This is at a path where a systemd generator will find it to generate the appropriate networking configuration under `/run/systemd` (a hypothetical sketch of such a generator follows this list).
- live-boot pivots root and transfers control to systemd.
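For orientation, here is a minimal, hypothetical sketch of such a generator. It is not the one shipped in this repo; the config path and the DHCP fallback behavior are assumptions. systemd runs executables from `/usr/lib/systemd/system-generators/` very early in boot:

```sh
#!/bin/sh
# Hypothetical generator sketch. systemd invokes generators with three
# output directories for generated units; unused here since networkd
# config goes under /run/systemd/network instead.
set -e
normal_dir="$1"    # typically /run/systemd/generator (unused below)

conf=/run/triton/networking.json    # assumed staging path

if [ -f "$conf" ]; then
    # A real generator would translate the JSON into per-interface
    # .network files; this sketch just emits a catch-all DHCP fallback.
    mkdir -p /run/systemd/network
    cat > /run/systemd/network/99-fallback.network <<'EOF'
[Match]
Name=en*

[Network]
DHCP=yes
EOF
fi
exit 0
```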
A new CN will boot the default image, which may be a SmartOS or a Linux image.
The server may need to be explicitly assigned a Linux platform image via `sdcadm platform assign` and rebooted into the desired operating system before proceeding with `sdc-server setup`.
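A hedged sketch of that workflow, where `<server_uuid>` and the platform timestamp are placeholders:

```
# Assign the Linux platform image to the CN, reboot it by your usual
# means, then proceed with setup.
sdcadm platform assign 20210731T223008Z <server_uuid>
# ... reboot the CN ...
sdc-server setup <server_uuid>
```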
As stated earlier in this document, the image creation process is roughly:

```
git clone https://github.com/TritonDataCenter/linux-live
cd linux-live
sudo tools/debian-live
```

If all goes well, the final few lines will tell you the names of the generated `.iso` and `.usb.gz` files.
The debian-live image can be created on an existing debian-live system. The `tools/debian-live` script can install the necessary dependencies.
The build process is self-hosting. The first image needs to be created elsewhere; a Debian 10 (64-bit) box with the right packages will do. Once your organization has the first image, it is probably easiest to take the self-hosting route.

If your organization has not yet built its own image, you will need to use a generic Debian 10 system to build the first image. The procedure is as follows.
Debian uses `dash` as the Bourne shell. It is less featureful than `bash`, which causes problems for some Triton Makefiles. To work around this, each problematic `Makefile` should, over time, be changed to include:

```
SHELL = /bin/bash
```
Note that `/bin/bash` works across various Linux distros and SmartOS; `/usr/bin/bash` does not exist on at least some Linux distros.
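To see the kind of bashism that trips `dash` up:

```
$ dash -c '[[ -d /tmp ]] && echo ok'
dash: 1: [[: not found
$ bash -c '[[ -d /tmp ]] && echo ok'
ok
```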
See https://wiki.debian.org/ZFS#Installation or some other "Getting Started" link found at https://zfsonlinux.org/. For licensing reasons, the installation will need to build ZFS, which will take several minutes.
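As a hedged example, one common path on Debian 10 per the Debian wiki (package names and the backports suite may differ on other releases):

```
# Enable backports and contrib, then build ZFS via DKMS (takes a while).
echo 'deb http://deb.debian.org/debian buster-backports main contrib' \
    | sudo tee /etc/apt/sources.list.d/buster-backports.list
sudo apt update
sudo apt install -y linux-headers-amd64
sudo apt install -y -t buster-backports zfs-dkms zfsutils-linux
```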
Once you have a Debian instance with ZFS, perform the following steps.
For now, the image creation script assumes that there is a pool named `triton` where it can create images.

```
sudo zpool create data /dev/vdb
```
If you will be developing and testing agents on this box, you probably want to tell them that your pool is the system pool.

```
sudo touch /data/.system_pool
```
Note: This step assumes you have a non-root user with a home directory on a persistent file system (ZFS, ext4, etc.) so that it survives reboots. If running on debian-live, see 4-zfs.md and look for `sysusers`.
Run the preflight check to see what else you need. If it tells you to install other packages, do so.
```
$ git clone https://github.com/TritonDataCenter/linux-live
$ cd linux-live
$ ./tools/debian-live preflight_check
$ echo $?
0
```
If you are developing on debian-live, the additional software that you install to build the image or other software will not survive reboot. You will need to follow the advice given by `debian-live preflight_check` after each reboot.
This section talks a bit more about the structure of this repository and the use of the `debian-live` script.
The repository has the following directories that largely conform to what you would expect from a Triton repository.
- `doc`: the directory containing the file you are reading now.
- `proto`: extra files that are copied directly into the image. In addition, each systemd service that is included in `proto/usr/lib/systemd/system` is enabled in the image. That is, the files are delivered into the image, then the appropriate systemd symbolic links are created (see the illustration after this list).
- `tools`: home to the `debian-live` script and perhaps other tools.
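As a hypothetical illustration (the unit name is made up), a service delivered via the proto tree:

```
# A unit file placed here in the repo...
proto/usr/lib/systemd/system/triton-example.service
# ...is delivered into the image at:
/usr/lib/systemd/system/triton-example.service
# ...and enabled with the equivalent of `systemctl enable`, i.e. a link like:
/etc/systemd/system/multi-user.target.wants/triton-example.service
```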
As stated above, the typical way that an image is built is with:

```
sudo tools/debian-live
```
That's great when things go smoothly from start to finish. However, some of the early steps take several minutes to complete. When tweaking files in the `proto` directory, it is quite annoying to have to wait for a ZFS build just to fix a typo in a file. For this reason, the build can be rolled back to the beginning of any stage so that that stage (and presumably those after it) can be executed. To list the available stages, use `tools/debian-live -h`:
```
$ ./tools/debian-live -h
./tools/debian-live: illegal option -- h
Usage:
Create a new image from scratch
    debian-live

Run just the specified steps on an existing image build
    debian-live -r release step [...]

Valid steps are:
    preflight_check
    create_bootstrap
    install_live
    build_zfs
    reduce_zfs
    install_usr_triton
    install_proto
    postinstall
    prepare_archive_bits
    create_iso
    create_usb
    ...
```

A literal `...` may be used to specify all remaining steps.
If you only want to roll back to before `install_proto` and re-run that step:
```
$ zfs list -Ho name | grep debian-live | tail -1
triton/debian-live-20200105T222739Z
$ sudo tools/debian-live -r 20200105T222739Z install_proto
```
To inspect the outcome of that, you can `chroot /data/debian-live/<platform_stamp>/chroot` and poke around. More commonly you will want to run `install_proto` and all steps after it. You can use `...` for that.

```
sudo tools/debian-live -r 20200105T222739Z install_proto ...
```