Repo: rumprun

Pradeep Gowda edited this page May 3, 2016 · 39 revisions

This page describes the Rumprun repository. If you are completely unfamiliar with the rump kernel ecosystem, we suggest following the Getting Started Tutorial before attempting to read the rest of this page.

The Rumprun repository provides the Rumprun unikernel for various platforms. Rumprun uses the drivers offered by rump kernels, adds a libc and an application environment on top, and provides a toolchain with which to build existing POSIX-y applications as Rumprun unikernels.

The strong point of the Rumprun unikernel is that, thanks to its foundation of rump kernels, it supports a great deal of existing application-level software without the need to port it to Rumprun or rewrite functionality. The benefits of unikernels are still retained: the memory footprint and service bootstrap time are a fraction of those of a full OS, yet the application performance that Rumprun provides can exceed that of a full OS.

Limitations include applications which do not fit into a single-process, no-virtual-memory model, such as applications using fork() or execve(). Another limitation is that the build system used by the application must support cross-compilation. If necessary, these limitations can typically be overcome with a small amount of porting work.

Platforms currently supported by Rumprun are hw/x86+x64 and Xen/x86+x64, with more being worked on. Platform support is modular, with the maximal amount of code shared between platforms; generally speaking, only bootstrap code and glue to the I/O devices and the clock are required for supporting a new platform.


There are three stages to building a runnable Rumprun unikernel. We will first give an overview of the steps, and detail the procedures in subsequent sections.

First, you must build the component libraries for constructing the unikernels. This is analogous to building the libraries your application requires, except that you will also be building the "kernel" libraries. This part of the build process consists of executing a single command, the syntax of which is detailed below.

Second, you must build the application of your choosing and produce the runnable Rumprun unikernel image. Notably, the build process never runs on a rump kernel, and therefore the Rumprun unikernel is always cross-compiled. For software with proper cross-compilation support, this second phase can be as simple as setting $CC and compiling the software as usual.

Finally, you must "bake" the application(s) into the final runnable image. The baking stage links in the component libraries for the target configuration that you want. For example, consider compiling your application for x86: if you bake in PCI device drivers, you can boot your unikernel on a regular PC; if you bake in virtio drivers, you can use your unikernel on the cloud.
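As a sketch, the three stages condense to something like the following. This assumes the build script is invoked as ./build-rr.sh, that you target x86_64, and that the hw_virtio bake configuration exists in your build; the details of each command are covered in the sections below.

```shell
# Stage 1: build the component libraries and toolchain wrappers (one-time)
./build-rr.sh hw

# Stage 2: cross-compile the application with the wrapper compiler
x86_64-rumprun-netbsd-gcc -o test test.c

# Stage 3: bake the binary into a runnable unikernel image
rumprun-bake hw_virtio test.bin test
```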

Building and installing the component libraries and tools

Short version:

git submodule init
git submodule update
CC=target-cc ./build-rr.sh platform

For example, if you want to build your unikernels for Xen and are happy with using "cc" to do so, simply running ./build-rr.sh xen will work.

Rumprun does not build a toolchain; instead, we create wrappers around a toolchain you supply. Specifying CC becomes necessary when you do not want to use cc, e.g. when you want to build for a non-native machine architecture. For example, assuming you are building on x86 and want to build for some ARM variant, you would supply something like CC=arm-crosscompiler. Notably, the CC you supply at this stage will also be used for building the application layers of the Rumprun unikernel, assuming you follow the method described below.
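Concretely, a cross-build for ARM might look like the following; the compiler name arm-linux-gnueabihf-gcc is illustrative (substitute whatever cross-toolchain you have installed), and ./build-rr.sh is assumed to be the build script of your checkout:

```shell
# use a pre-installed ARM cross-compiler for the kernel components;
# the same CC will later be used for the application layers, too
CC=arm-linux-gnueabihf-gcc ./build-rr.sh hw
```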

You can also specify additional flags to the toolchain in the following way:

./build-rr.sh platform -- -F ACLFLAGS=flags

For example, assuming your toolchain targets x86_64 by default (a valid assumption if you are doing development on a 64-bit x86 host) and you want to build 32-bit binaries, you would use the following:

./build-rr.sh platform -- -F ACLFLAGS=-m32

After the build, the components necessary for creating unikernels are installed into a destination directory hierarchy. The root of the hierarchy is by default ./rumprun, but you can change it with -d. Currently, we do not recommend installing to a directory which you cannot remove with a simple rm -rf. Putting destdir/bin into your shell's $PATH is recommended.
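With the default destination directory, putting the wrappers on your PATH looks like this (adjust the path if you installed elsewhere with -d):

```shell
# make the toolchain wrappers and tools (x86_64-rumprun-netbsd-gcc,
# rumprun-bake, rumprun, ...) available without typing full paths
export PATH="$(pwd)/rumprun/bin:$PATH"
```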

Building application binaries

After the component libraries and toolchain wrappers for the platform/machine of your choice are built, you can build runnable unikernels. The process will vary from software package to software package due to differing build methodologies, so we cannot give general instructions; cross-compile-ready software should more or less build as-is.

For example, to compile a program consisting of a single C module for x86_64, use:

x86_64-rumprun-netbsd-gcc -o test test.c

Assuming you have a more complex project and a cross-compile respecting Makefile for it, you can use:

CC=x86_64-rumprun-netbsd-gcc make

Or for GNU autotools:

./configure --host=x86_64-rumprun-netbsd

Simple, eh?


Baking

Baking is done using the rumprun-bake tool. It takes the binary produced by the compiler and produces a runnable image. The mandatory parameters are the target name, the output image, and the input binary.

For example:

rumprun-bake hw_virtio test.bin test

Use rumprun-bake list to list the available targets.
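For example, the same compiled binary can be baked into images for different deployment targets. The target names hw_virtio and hw_generic shown here are illustrative; consult the output of rumprun-bake list for the ones your build actually provides:

```shell
# same input binary "test", two differently equipped images
rumprun-bake hw_virtio  test-cloud.bin test   # virtio drivers, for cloud hypervisors
rumprun-bake hw_generic test-pc.bin    test   # generic hw drivers, e.g. for a physical PC
```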


Running

The details of running a unikernel vary from platform to platform. For example, running a Xen guest requires creating a domain config file and running xl, while running the hw platform in qemu (with or without kvm) requires a different set of options. We supply the rumprun tool to hide these details and provide a uniform experience across platforms.

For example, to rumprun a program with an interactive console on Xen, use:

rumprun xen -i prog

If "prog" is built for bare metal (the hw platform), you can rumprun it by using qemu as the platform instead of xen.


Hardware ("hw")

The hw platform provides support for running on raw hardware and, by extension, on most hypervisors on the cloud. The main difference between running on hardware and on a hypervisor is the approach to I/O: on physical hardware the real hardware drivers are used, while on a cloud hypervisor approaches such as virtio come into play.


Xen

The Xen platform is optimized for running on top of the Xen hypervisor.


Debugging

Generally speaking, use rumprun -D port and target remote :port in gdb.
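A session might look roughly like the following; the port number, the qemu platform choice, and the binary name are all illustrative:

```shell
# terminal 1: launch the unikernel with a gdb stub listening on port 1234
rumprun qemu -D 1234 -i test.bin

# terminal 2: attach with the gdb of your toolchain
gdb -ex 'target remote :1234' test.bin
```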

More details can be found in the debugging tutorial.


Logging

Remote syslog is the easiest approach if the service supports syslog. TODO: other options?
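As a quick sanity check that log messages are arriving at the host, you can listen for UDP syslog traffic with a tool such as netcat; a proper syslog daemon is the better long-term option:

```shell
# listen for syslog messages on the standard UDP port 514
# (ports below 1024 require root; nc flags vary between netcat variants,
# e.g. traditional netcat wants "nc -u -l -p 514")
nc -u -l 514
```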



Experimental nature

Many pieces of the Rumprun tools are marked as "experimental". This means that the interfaces are experimental and may change in incompatible ways in the future. It specifically does not mean that things are not expected to work, just that they might not keep working the same way until we figure out acceptably easy usage.

Below is a summary of the (non-)experimental natures of various bits:

toolchain (arch-rumprun-netbsd-foo)



rumprun-bake

Mostly stable; the remaining open question is the structure of the target config names (e.g. hw_virtio).

A slightly related problem (though not exclusive to rumprun-bake) comes with binary packages: rumprun-bake is a global name, but is always required to do a build. So, assuming we want to provide binary Rumprun packages for both i486-rumprun-netbsdelf and x86_64-rumprun-netbsd, there will be conflicts if we want to install all the runnable scripts into dest/bin. It is unclear whether fixing the binary package issue will require rethinking rumprun-bake (and the other offending bits), whether it is purely a packaging problem, or a bit of both.

rumprun (launch tool)

The whole approach might change completely. Most likely, the rumprun tool will go away and the JSON file that is currently used internally will become the user-facing config file.