Build system for µCernVM images

This build system collects the various bits and pieces needed to compile the µCernVM ISO and hard disk images. The build system is purely make-based. All components as well as the final images are versioned; the component version numbers are tracked in the dependencies file.

Other requirements include syslinux, mkisofs, and parted.
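Before running make, it can be handy to verify that these tools are available. The following pre-flight check is illustrative and not part of the build system; the helper name check_tools is made up for this sketch:

```shell
# Illustrative pre-flight check (not part of the build system): verify
# that the required external tools are available in $PATH.
check_tools() {
  missing=0
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool" >&2; missing=1; }
  done
  return $missing
}

# Example: check_tools syslinux mkisofs parted
```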


The components comprising a µCernVM image are

  • A Linux kernel, as compiled from cernvm-kernel config
  • An init ramdisk containing busybox, CernVM-FS, a few extras, and the init bash script
  • The bootloader, which, depending on the image type, is isolinux, syslinux, or the grub config file for EC2 PV-GRUB kernels


The build system produces the init ramdisk, a tar file containing the init ramdisk and kernel (suitable for updating existing µCernVM images), and images in ISO (VMware, VirtualBox), raw hard disk (OpenStack), and FAT file system (EC2) formats.

The build system produces separate images for every CernVM-FS repository that should be used, although the repository can be changed by contextualization.

Directory Structure

/boot : config files and binaries for the bootloaders. Binaries are taken from syslinux 4.06.

/include : bash helper functions for the init script

/kernel : downloaded µCernVM kernel images

/packages : extra utilities not covered by busybox

/scripts : scriptlets that comprise the init script

/ : the top-level Makefile, which creates the init ramdisk

/release : the release number

How does it work?

The final images are between 10M and 20M and are considered read-only. On first boot, the scripts in the init ramdisk look for a free partition or hard disk to use as ephemeral storage. If none is available but the hard disk containing the root partition has free space, a second partition is created on that disk and used instead.
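The selection logic can be sketched as a small decision helper. The function choose_ephemeral and its arguments are hypothetical; the real init scripts probe the block devices directly:

```shell
# Hypothetical decision helper mirroring the first-boot logic described
# above: prefer an unused partition or disk; otherwise, if the disk that
# holds the root partition has free space, carve out a second partition.
choose_ephemeral() {
  free_device="$1"        # path to an unused partition/disk, or ""
  root_disk_free_mb="$2"  # free space on the disk holding the root partition
  if [ -n "$free_device" ]; then
    echo "use:$free_device"
  elif [ "$root_disk_free_mb" -gt 0 ]; then
    echo "create-second-partition"
  else
    echo "no-ephemeral-storage"
  fi
}
```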

The ephemeral storage is then used to host the CernVM-FS cache as well as the read-write layer of the union file system. As a result, the CernVM-FS operating system repository becomes locally writable.

In addition, the bootloader keeps auxiliary files on the ephemeral storage:

  • Spacer files to recover from a full hard disk
  • The CernVM-FS snapshot to use (fixed after first boot)
  • The user-data used to contextualize the image
  • A few extra logs and files from the init ramdisk exposed to user space

Connection between µCernVM and the CernVM-FS OS Repository

The µCernVM bootloader and the CernVM-FS repository are mostly independent, meaning that most versions of the bootloader can load most repository versions. There are a few connecting points, however.

Kernel modules

Kernel modules are posted into the OS tree so that they can be loaded at a later stage.

Pinned files

Files listed in /.ucernvm_pinfiles are pinned by the CernVM-FS module in the bootloader. The idea is to always keep in the cache the files necessary to recover from a broken network (e.g. 'sudo /sbin/service network restart').

OS provided boot script

Just before chrooting into the OS repository, the bootloader executes the script /.cernvm_bootstrap. As a parameter, it receives the root directory as seen from the bootloader's context.

PIDs of the Bootloader CernVM-FS

The file .cvmfs_pids contains the PIDs of the bootloader processes that user space must not kill. The system's halt script needs to be changed so that it does not kill these processes on shutdown and unwinds the AUFS file system stack instead of simply unmounting all file systems.
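A modified halt script could consult the PID file with a guard along these lines. The helper protected_pid is hypothetical; only the file location is taken from the text above:

```shell
# Hypothetical guard for a modified halt script: returns success if the
# given PID is listed in .cvmfs_pids and therefore must not be killed.
protected_pid() {
  pid="$1"
  pidfile="${2:-/.cvmfs_pids}"
  grep -qw "$pid" "$pidfile" 2>/dev/null
}
```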

Contextualization of µCernVM

The bootloader can process EC2, OpenStack, and vSphere user data. Within the user data, everything is ignored except a block of the form
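The block is delimited by begin/end markers. The marker names below are recalled from CernVM contextualization documentation and should be verified against the bootloader version in use; the keys shown are placeholders:

```
[ucernvm-begin]
key1=value1
key2=value2
[ucernvm-end]
```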


The following key-value pairs are recognized:

Key               Value                             Comments
resize_rootfs     on/off                            use the entire hard disk instead of only the first 20G
cvmfs_http_proxy  HTTP proxy in CernVM-FS notation
cvmfs_pac_urls    WPAD proxy auto-config URLs       ';'-separated URLs, should return PAC files
cvmfs_server      list of Stratum 1 servers
cvmfs_branch      the repository name               the repository URL is currently fixed
cvmfs_tag         the snapshot name                 for long-term data preservation
cernvm_inject     Base64-encoded .tar.gz ball       extracted into the root tree (without leading /)
useglideinWMS     on/off (default: on)              set to off to disable glideinWMS user-data auto-detection
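As an example, a cernvm_inject value can be produced from a directory of files roughly like this. The helper make_inject_ball is illustrative and not a tool shipped with the build system:

```shell
# Illustrative recipe for a cernvm_inject value: pack a directory into a
# gzipped tarball with relative paths (no leading /) and base64-encode it
# for embedding in the user data.
make_inject_ball() {
  srcdir="$1"
  tar -C "$srcdir" -czf - . | base64 | tr -d '\n'
}
```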

Extra User Data

In addition to the usual ways of delivering user data to a VM, CernVM supports "extra user data", which is injected into the image. µCernVM looks through all partitions for a file /cernvm/extra-user-data; if found, its content is appended to the extra user data. The complete extra user data is parsed by the µCernVM bootloader and written to /cernvm/extra-user-data in the mounted OS repository.

The complete extra user data should be valid user data on its own but also work when appended to the normal user data.
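The parsing step can be sketched as follows. The sed-based extractor and the delimiter names are assumptions for illustration; the real bootloader parser may differ:

```shell
# Illustrative extractor: print only the lines between the [ucernvm-begin]
# and [ucernvm-end] markers, dropping the markers themselves; everything
# else in the (possibly concatenated) user data is ignored.
extract_ucernvm_block() {
  sed -n '/^\[ucernvm-begin\]$/,/^\[ucernvm-end\]$/p' "$1" | sed '1d;$d'
}
```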