OSv was originally designed and implemented by Cloudius Systems (now ScyllaDB); currently it is maintained and enhanced by a small community of volunteers. If you are into systems programming, or want to learn and help us improve OSv, please contact us on the OSv Google Group forum, or feel free to pick up any of the good issues for newcomers. For details on how to format and send patches, please read this wiki (we do NOT accept pull requests).
OSv is an open-source, versatile, modular unikernel designed to run a single unmodified Linux application securely as a microVM on top of a hypervisor, in contrast to traditional operating systems, which were designed for a vast range of physical machines. It is built from the ground up for effortless deployment and management of microservices and serverless apps, with superior performance.
OSv has been designed to run unmodified x86-64 and AArch64 Linux binaries as is, which effectively makes it a Linux-binary-compatible unikernel (for more details about Linux ABI compatibility, please read this doc). In particular, OSv can run many managed language runtimes, including the JVM, Python, Node.JS, Ruby and Erlang, as well as applications built on top of those runtimes. It can also run applications written in languages that compile directly to native machine code, such as C, C++, Golang and Rust, as well as native images produced by GraalVM and WebAssembly/Wasmer.
OSv can boot as fast as ~5 ms on Firecracker using as little as 15 MB of memory. OSv can run on many hypervisors, including QEMU/KVM, Firecracker, Cloud Hypervisor, Xen, VMware, VirtualBox and HyperKit, as well as on open clouds like AWS EC2, GCE and OpenStack.
## Building and Running Apps on OSv
In order to run an application on OSv, one needs to build an image by fusing the OSv kernel and the application files together. At a high level, this can be achieved in one of two ways:
- by using the shell script located at `./scripts/build`, which builds the kernel from sources and fuses it with application files, or
- by using the capstan tool, which combines a pre-built kernel with application files to produce a final image.
If your intention is to try to run your app on OSv with the least effort possible, you should pursue the capstan route. For an introduction, please read this; for more details about capstan, please read this more detailed documentation. Pre-built OSv kernel files (`osv-loader.qemu`) can be downloaded automatically by capstan from the OSv regular releases page, or manually from the nightly releases repo.
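As a rough sketch of the capstan route (assuming capstan is installed; the package name `my-app` and the metadata below are hypothetical placeholders for your own application):

```shell
# Initialize package metadata in the application directory (names are hypothetical)
capstan package init --name my-app --title "My App" --author "Me"
# Compose an OSv image from the pre-built kernel plus the package contents
capstan package compose my-app
# Boot the composed image on QEMU/KVM
capstan run my-app
```

Please consult the capstan documentation linked above for the exact sub-commands and flags supported by your capstan version.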
If you are comfortable with the make and GCC toolchain and want to try the latest OSv code, then you should read this part of the readme, which will guide you through setting up your development environment and building OSv kernel and application images.
We aim to release OSv 2-3 times a year. You can find the latest release on GitHub, along with a number of published artifacts, including the kernel and some modules.
In addition, we have set up a Travis-based CI/CD pipeline where each commit to the master and ipv6 branches triggers a full build of the latest kernel and publishes some artifacts to the nightly releases repo. Each commit also triggers publishing of new Docker "build toolchain" images to Docker Hub.
In addition, you can find a lot of good information about the design of specific OSv components on the main wiki page, as well as at http://osv.io/ and http://blog.osv.io/. Unfortunately, some of that information may be outdated (especially on http://osv.io/), so it is always best to ask on the mailing list if in doubt.
## Metrics and Performance
There are no official up-to-date performance metrics comparing OSv to other unikernels or Linux. In general, OSv lags behind Linux in disk-I/O-intensive workloads, partially due to coarse-grained locking in the VFS around read/write operations, as described in this issue. In network-I/O-intensive workloads, OSv should fare better (or at least used to, as Linux has advanced a lot since), as shown by performance tests of Redis and Memcached. You can find some old numbers on the main wiki, at http://osv.io/benchmarks, and in some of the papers listed at the bottom of this readme.
OSv is therefore probably not best suited to run MySQL or Elasticsearch, but it should deliver pretty solid performance for general stateless applications like microservices or serverless functions (at least, as some papers show).
At this moment (as of July 2021), the size of the uncompressed OSv kernel (the `kernel.elf` artifact) is around 6.7 MB (compressed, ~2.7 MB). This is not that small compared to the Linux kernel, and quite large compared to other unikernels. However, bear in mind that the OSv kernel (being a unikernel) provides a subset of the functionality of a number of standard Linux libraries (compare with their approximate sizes on a Linux host). The equivalent static version of `libstdc++.so.6` is actually linked with `--whole-archive`, so that any C++ app can run without having to add `libstdc++.so.6` to the image (whether it needs it or not).
Finally, the OSv kernel comes with a ZFS implementation, which in theory could later be extracted as a separate library. The point of all this is to illustrate that comparing the OSv kernel size to the Linux kernel size does not quite make sense.
OSv, with Read-Only FS and networking off, can boot as fast as ~5 ms on Firecracker, and even faster, around ~3 ms, on QEMU with the microvm machine. In general, however, the boot time will depend on many factors, such as the hypervisor (including settings of individual para-virtual devices), the filesystem (ZFS, ROFS, RAMFS or Virtio-FS) and some boot parameters. Please note that by default, OSv images get built with the ZFS filesystem.
For example, the boot time of a ZFS image is ~40 ms on Firecracker and ~200 ms on regular QEMU these days. Also, newer versions of QEMU (>= 4.0) are typically faster to boot. Booting on QEMU in PVH/HVM mode (aka direct kernel boot, enabled with the `-k` option of `run.py`) should always be faster, as OSv is directly invoked in 64-bit long mode. Please see this Wiki for a brief review of the boot methods OSv supports.
Finally, some boot parameters passed to the kernel may affect the boot time:
- `--console serial` disables the VGA console, which is slow to initialize, and can shave off 60-70 ms on QEMU
- `--nopci` disables enumeration of PCI devices, which is useful if we know none are present (QEMU with microvm, or Firecracker), and can shave off 10-20 ms
- `--redirect=/tmp/out` redirects standard output and error to a file; writing to the console can impact performance quite severely (30-40%) if an application logs a lot, so this might speed things up considerably
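These options can be combined on the kernel command line. A sketch of a low-overhead invocation, assuming an image containing a hypothetical `/hello` app has already been built:

```shell
# Boot options go before the application command; run.py's -e flag sets the command line
./scripts/run.py -e '--console serial --redirect=/tmp/out /hello'
```

(`--nopci` would additionally make sense on Firecracker or QEMU's microvm machine, where no PCI devices are present.)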
You can always see the boot time breakdown by adding the `--bootchart` option to the command line:

```
./scripts/run.py -e '--bootchart /hello'
OSv v0.54.0-197-g1f0df4e4
eth0: 192.168.122.15
disk read (real mode): 25.85ms, (+25.85ms)
uncompress lzloader.elf: 45.11ms, (+19.26ms)
TLS initialization: 45.72ms, (+0.61ms)
.init functions: 47.61ms, (+1.89ms)
SMP launched: 48.08ms, (+0.47ms)
VFS initialized: 50.99ms, (+2.91ms)
Network initialized: 51.12ms, (+0.14ms)
pvpanic done: 51.25ms, (+0.13ms)
pci enumerated: 61.55ms, (+10.29ms)
drivers probe: 61.55ms, (+0.00ms)
drivers loaded: 135.91ms, (+74.36ms)
ROFS mounted: 136.98ms, (+1.07ms)
Total time: 138.16ms, (+1.18ms)
Cmdline: /hello
Hello from C code
```
OSv needs at least 15 MB of memory to run a hello-world app. Even though that is half of what it was two years ago, it is still quite a lot compared to other unikernels. We are planning to lower this number further by reducing the size of the kernel, adding self-tuning logic to the L1/L2 memory pools and making application threads use lazily allocated stacks.
OSv comes with around 140 unit tests that get executed upon every commit and run on ScyllaDB servers. There are also a number of extra tests located under the `tests/` sub-tree that are not automated at this point.

You can run the unit tests in a number of ways:

```
./scripts/build check                # Create ZFS test image and run all tests on QEMU

./scripts/build check fs=rofs        # Create ROFS test image and run all tests on QEMU

./scripts/build image=tests && \     # Create ZFS test image and run all tests on Firecracker
./scripts/test.py -p firecracker

./scripts/build image=tests && \     # Create ZFS test image and run all tests on QEMU
./scripts/test.py -p qemu_microvm    # with microvm machine
```
In addition, there is an Automated Testing Framework that can be used to run around 30 real apps, some of them under stress using the `wrk` tool. The intention is to catch any regressions that might be missed by the unit tests. Finally, one can use the Docker files to test OSv on different Linux distributions.
## Setting up Development Environment
OSv can only be built on a 64-bit x86 or ARM Linux distribution. Please note that this means the "x86_64"/"amd64" version for 64-bit x86, or the "aarch64"/"arm64" version for ARM, respectively.
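A quick way to check which of the two supported architectures your build machine reports, using `uname`:

```shell
# Print whether the host CPU architecture is one OSv can be built on natively
arch=$(uname -m)
case "$arch" in
  x86_64|aarch64) echo "supported build architecture: $arch" ;;
  *)              echo "unsupported build architecture: $arch" ;;
esac
```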
In order to build the OSv kernel, you need a physical or virtual machine with a Linux distribution on it, the GCC toolchain, and all the packages and libraries the OSv build process depends on. The fastest way to set this up is to use the Docker files that OSv comes with. You can use them to build your own Docker image, and then start it in order to build the OSv kernel, or run an app on OSv, inside of it. Please note that the main Docker file depends on pre-built base Docker images for Ubuntu and Fedora that get published to Docker Hub upon every commit. This should speed up building the final images, as all necessary packages are installed as part of the base images.
Alternatively, you can manually clone the OSv repo and use `setup.py` to install all required packages and libraries, as long as it supports your Linux distribution and you have both git and Python 3 installed on your machine:

```
git clone https://github.com/cloudius-systems/osv.git
cd osv && git submodule update --init --recursive
./scripts/setup.py
```
`setup.py` recognizes and installs packages for a number of Linux distributions, including Fedora, Ubuntu, Debian, Linux Mint and the Red Hat ones (Scientific Linux, NauLinux, CentOS Linux, Red Hat Enterprise Linux, Oracle Linux). Please note that we actively maintain and test only Ubuntu and Fedora, so your mileage with other distributions may vary. Support for CentOS 7 has also been added and tested recently, so it should work as well. The same script is actually used by the Docker files internally to achieve the same result.
If you like working in IDEs, we recommend either Eclipse CDT, which can be set up as described in this wiki page, or CLion from JetBrains, which can be set up to work with the OSv makefile using a so-called compilation DB, as described in this guide.
## Building OSv Kernel and Creating Images
Building OSv is as easy as using the shell script `./scripts/build`, which orchestrates the build process by delegating to the main makefile to build the kernel, and by using a number of Python scripts to build the application and fuse it together with the kernel into a final image, placed (in general) at `./build/$(arch)/usr.img`.
Please note that building an application does not necessarily mean building it from sources; in many cases the application binaries are located on, and copied from, the Linux build machine using the shell script `./scripts/manifest_from_host.sh` (see this Wiki page for details).
The shell script `build` can be used as the examples below illustrate:

```
# Create default image that comes with command line and REST API server
./scripts/build

# Create image with native-example app
./scripts/build -j4 fs=rofs image=native-example

# Create image with spring boot app with Java 10 JRE
./scripts/build JAVA_VERSION=10 image=openjdk-zulu-9-and-above,spring-boot-example

# Create image with 'ls' executable taken from the host
./scripts/manifest_from_host.sh -w ls && ./scripts/build --append-manifest

# Create test image and run all tests in it
./scripts/build check

# Clean the build tree
./scripts/build clean
```
The `nproc` command will calculate the number of jobs/threads for make and `scripts/build` automatically (e.g. `./scripts/build -j$(nproc)`). Alternatively, the environment variable `MAKEFLAGS` can be exported (e.g. `export MAKEFLAGS=-j$(nproc)`); in that case, make and `scripts/build` do not need the `-j` parameter.
For details on how to use the build script, please run `./scripts/build --help`.

`./scripts/build` creates the image `build/last/usr.img` in qcow2 format. To convert this image to other formats, use the `scripts/convert` tool, which can convert an image to the vmdk, vdi or raw formats.
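As an alternative sketch, the same conversion can be done directly with the standard `qemu-img` tool (assuming it is installed and an image exists at `build/last/usr.img`):

```shell
# Convert the qcow2 image to VMDK (for VMware) and to a raw disk image
qemu-img convert -f qcow2 -O vmdk build/last/usr.img build/last/usr.vmdk
qemu-img convert -f qcow2 -O raw  build/last/usr.img build/last/usr.raw
```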
By default, the OSv kernel gets built for the native host architecture (x86_64 or aarch64), but it is also possible to cross-compile the kernel and modules on an Intel machine for ARM by adding the `arch` parameter, like so: `./scripts/build arch=aarch64`.
At this point, cross-compiling the aarch64 version of OSv is only supported on Fedora, Ubuntu and CentOS 7, and the relevant aarch64 GCC and library binaries can be downloaded using the `./scripts/download_aarch64_packages.py` script. OSv can also be built natively on Ubuntu on ARM hardware like the Raspberry Pi 4, Odroid N2+ or RockPro64.
Please note that as of the latest 0.56.0 release, the ARM part of OSv has been greatly improved and tested, and is quite close in functionality to the x86_64 port. In addition, most unit tests and many more advanced apps like nginx, python and iperf3 can successfully run on QEMU and Firecracker on the Raspberry Pi 4 and Odroid N2+ with KVM acceleration enabled.
For more information about the aarch64 port please read this Wiki page.
At the end of the boot process, the OSv dynamic linker loads the application ELF and any related libraries from the filesystem on a disk that is part of the image. By default, the images built by `./scripts/build` contain a disk formatted as ZFS, which you can read more about here.
ZFS is a great read-write filesystem, and it may be a perfect fit if you want to run MySQL on OSv. However, it may be overkill if you want to run stateless apps, in which case you may consider Read-Only FS. Finally, you can also have OSv read the application binary from RAMFS, in which case the filesystem gets embedded into the kernel ELF. You can specify which filesystem to build the image disk as by setting the `fs` parameter of `./scripts/build` to one of the three values: `zfs`, `rofs` or `ramfs`.
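For example, mirroring the build examples earlier in this readme, the bundled native-example app could be packaged with each filesystem in turn (a sketch):

```shell
./scripts/build image=native-example fs=zfs    # default read-write ZFS disk
./scripts/build image=native-example fs=rofs   # smaller, faster-booting read-only disk
./scripts/build image=native-example fs=ramfs  # filesystem embedded in the kernel ELF
```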
In addition, one can mount an NFS filesystem (which has recently been transformed into a shared library pluggable as a module) and the newly implemented and improved Virtio-FS filesystem. Virtio-FS mounts can be set up by adding a proper entry to `/etc/fstab`, or by passing a boot parameter, as explained in this Wiki. Very recently, OSv has also been enhanced to be able to boot from a Virtio-FS filesystem directly.
Running an OSv image, built by `scripts/build`, is as easy as: `./scripts/run.py`.

By default, `run.py` runs OSv under KVM, with 4 vCPUs and 2 GB of memory. You can control these and tens of other options by passing the relevant parameters to `run.py`; for details on how to use the script, please run `./scripts/run.py --help`. The `run.py` script can run an OSv image on QEMU/KVM, Xen and VMware. If running under KVM, you can terminate OSv by hitting Ctrl+A X.
Alternatively, you can use `./scripts/firecracker.py` to run OSv on Firecracker. This script automatically downloads the firecracker binary if it is missing, and accepts a number of parameters, such as the number of vCPUs and memory, named exactly like `run.py` does. You can learn more about running OSv on Firecracker from this wiki.
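For example, since `firecracker.py` mirrors `run.py`'s parameters, a sketch of booting with 2 vCPUs and 512 MB of memory might look like the following (the exact memory-size syntax is an assumption; check `./scripts/firecracker.py --help`):

```shell
# -c sets the vCPU count and -m the guest memory, mirroring run.py's options
./scripts/firecracker.py -c 2 -m 512M
```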
Please note that in order to run OSv with the best performance on Linux under QEMU or Firecracker, you need KVM enabled (this is only possible on physical Linux machines, EC2 "bare metal" (i3) instances, or VMs that support nested virtualization with KVM on). The easiest way to verify that KVM is enabled is to check that `/dev/kvm` is present and that your user account can read from and write to it. It may be necessary to add your user to the kvm group, like so: `usermod -aG kvm <user name>`.
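The check described above can be scripted as follows (a minimal sketch; it only inspects `/dev/kvm` permissions and does not modify anything):

```shell
# Report whether KVM acceleration is usable by the current user
if [ -r /dev/kvm ] && [ -w /dev/kvm ]; then
  echo "KVM: available"
else
  echo "KVM: not available (missing /dev/kvm or insufficient permissions)"
  # If the device exists, joining the kvm group may fix this:
  #   sudo usermod -aG kvm "$USER"   # then log out and back in
fi
```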
For more information about building and running JVM, Node.JS, Python and other managed runtimes as well as Rust, Golang or C/C++ apps on OSv, please read this wiki page. For more information about various example apps you can build and run on OSv, please read the osv-apps repo README.
By default, `run.py` starts OSv with user networking/SLIRP on. To start OSv with more performant external networking, you need to enable the `-n` and `-v` options, like so:

```
sudo ./scripts/run.py -nv
```

The `-v` option is for KVM's vhost, which provides better performance; its setup requires a tap device, and thus we use sudo.
By default, OSv spawns a dhcpd-like thread that automatically configures virtual NICs. A static configuration can be done within OSv by configuring networking like so:

```
ifconfig virtio-net0 192.168.122.100 netmask 255.255.255.0 up
route add default gw 192.168.122.1
```
To enable networking on Firecracker, you have to enable it explicitly.
Finally, please note that the master branch of OSv implements only the IPv4 subset of the networking stack. If you need IPv6, please build from the ipv6 branch, or use the IPv6 kernel published to the nightly releases repo.
## Debugging, Monitoring, Profiling OSv
- OSv can be debugged with gdb; for more details, please read this wiki
- The OSv kernel and applications can be traced and profiled; for more details, please read this wiki
- OSv comes with an admin/monitoring REST API server; for more details, please read this and that wiki page. There is also a lighter monitoring REST API module that is effectively a read-only subset of the former.
## FAQ and Contact
## Papers and Articles about OSv
A list of somewhat newer articles about OSv found on the Web:
- Unikernels vs Containers: An In-Depth Benchmarking Study in the context of Microservice Applications
- Towards a Practical Ecosystem of Specialized OS Kernels
- A Performance Evaluation of Unikernels
- Security Perspective on Unikernels
- Performance Evaluation of OSv for Server Applications
- Time provisioning Evaluation of KVM, Docker and Unikernels in a Cloud Platform
- Unikernels - Beyond Containers to the Next Generation of the Cloud