CRI-O - OCI-based implementation of Kubernetes Container Runtime Interface
Compatibility matrix: CRI-O <-> Kubernetes clusters
CRI-O and Kubernetes follow the same release cycle and deprecation policy. For more information visit the Kubernetes versioning documentation.
| Version - Branch | Kubernetes branch/version | Maintenance status |
| --- | --- | --- |
| CRI-O 1.10.x - release-1.10 | Kubernetes 1.10 branch, v1.10.x | = |
| CRI-O 1.11.x - release-1.11 | Kubernetes 1.11 branch, v1.11.x | = |
| CRI-O 1.12.x - release-1.12 | Kubernetes 1.12 branch, v1.12.x | = |
| CRI-O 1.13.x - release-1.13 | Kubernetes 1.13 branch, v1.13.x | = |
| CRI-O HEAD - master | Kubernetes master branch | ✓ |
- ✓ Changes in the main Kubernetes repo about CRI are actively implemented in CRI-O
- = Maintenance is manual; only bugs will be patched.
What is the scope of this project?
CRI-O is meant to provide an integration path between OCI conformant runtimes and the kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. The scope of CRI-O is tied to the scope of the CRI.
At a high level, we expect the scope of CRI-O to be restricted to the following functionalities:
- Support multiple image formats including the existing Docker image format
- Support for multiple means to download images including trust & image verification
- Container image management (managing image layers, overlay filesystems, etc)
- Container process lifecycle management
- Monitoring and logging required to satisfy the CRI
- Resource isolation as required by the CRI
What is not in scope for this project?
- Building, signing and pushing images to various image storages
- A CLI utility for interacting with CRI-O. Any CLIs built as part of this project are only meant for testing this project, and there are no guarantees of backward compatibility with them.
This is an implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers.
The plan is to use OCI projects and best of breed libraries for different aspects:
- Runtime: runc (or any OCI runtime-spec implementation) and oci runtime tools
- Images: Image management using containers/image
- Storage: Storage and management of image layers using containers/storage
- Networking: Networking support through use of CNI
| Command | Description |
| --- | --- |
| crio(8) | OCI Kubernetes Container Runtime daemon |
Note that kpod and its container management and debugging commands have moved to a separate repository, located here.
| File | Description |
| --- | --- |
| crio.conf(5) | CRI-O Configuration file |
| policy.json(5) | Signature Verification Policy File(s) |
| registries.conf(5) | Registries Configuration file |
| storage.conf(5) | Storage Configuration file |
OCI Hooks Support
CRI-O Usage Transfer
For async communication and long-running discussions please use issues and pull requests on the GitHub repo. This will be the best place to discuss design and implementation.
For sync communication we have an IRC channel, #CRI-O, on chat.freenode.net, where everyone is welcome to join and chat about development.
- runc, Clear Containers runtime, or any other OCI compatible runtime

The latest version of runc is expected to be installed on the system; it is picked up as the default runtime by CRI-O.
Build and Run Dependencies
Fedora, CentOS, RHEL, and related distributions:
    yum install -y \
      btrfs-progs-devel \
      device-mapper-devel \
      git \
      glib2-devel \
      glibc-devel \
      glibc-static \
      go \
      golang-github-cpuguy83-go-md2man \
      gpgme-devel \
      libassuan-devel \
      libgpg-error-devel \
      libseccomp-devel \
      libselinux-devel \
      ostree-devel \
      pkgconfig \
      runc \
      skopeo-containers
Debian, Ubuntu, and related distributions:
    apt-get install -y \
      btrfs-tools \
      git \
      golang-go \
      libassuan-dev \
      libdevmapper-dev \
      libglib2.0-dev \
      libc6-dev \
      libgpgme11-dev \
      libgpg-error-dev \
      libseccomp-dev \
      libselinux1-dev \
      pkg-config \
      go-md2man \
      runc \
      skopeo-containers
Debian, Ubuntu, and related distributions will also need a copy of the development libraries for ostree, either in the form of the libostree-dev package from the flatpak PPA, or built from source (more on that here).
If using an older release or a long-term support release, be careful to double-check that the version of runc is new enough (running runc --version should produce spec: 1.0.0), or else build your own.
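That check can be sketched in a few lines of shell. The sample output below is illustrative (not taken from a real runc build); on a real system you would pipe the output of runc --version in instead:

```shell
# Sketch: pull the "spec:" line out of `runc --version`-style output.
# On a real system: runc --version | awk '/^spec:/ {print $2}'
runc_spec_of() {
  printf '%s\n' "$1" | awk '/^spec:/ {print $2}'
}

# Illustrative sample output, not from an actual runc binary.
sample='runc version 1.0.0-rc4
spec: 1.0.0'

spec=$(runc_spec_of "$sample")
echo "runc spec version: $spec"
if [ "$spec" != "1.0.0" ]; then
  echo "runc is too old; build a newer one from source" >&2
fi
```

The helper name runc_spec_of is ours, not part of CRI-O; it simply isolates the version-parsing step so it can be reused against live output.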
Be careful to double-check that the version of golang is new enough; version 1.8.x or higher is required. If needed, golang kits are available at https://golang.org/dl/.
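A quick way to compare a Go version string against that 1.8 minimum (the helper name and the sample version strings are illustrative, not part of the build system):

```shell
# Sketch: succeed when a "go1.X[.Y]" version string meets the 1.8 minimum.
go_version_ok() {
  minor=$(printf '%s' "$1" | sed -n 's/^go1\.\([0-9][0-9]*\).*/\1/p')
  [ -n "$minor" ] && [ "$minor" -ge 8 ]
}

# On a real system the installed toolchain's version is the third field of
# `go version`, e.g.: go_version_ok "$(go version | awk '{print $3}')"
go_version_ok go1.9.2 && echo "Go version OK"
go_version_ok go1.7.6 || echo "Go too old; fetch a newer kit from https://golang.org/dl/"
```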
Get Source Code
Clone the source code using:
    git clone https://github.com/kubernetes-sigs/cri-o # or your fork
    cd cri-o
    make install.tools
    make
    sudo make install
Otherwise, if you do not want to build CRI-O with seccomp support you can add BUILDTAGS="" when running make:

    make install.tools
    make BUILDTAGS=""
    sudo make install
CRI-O supports optional build tags for compiling support of various features. To add build tags to the make option, the BUILDTAGS variable must be set:

    make BUILDTAGS='seccomp apparmor'
| Build tag | Feature | Dependency |
| --- | --- | --- |
| selinux | selinux process and mount labeling | libselinux |
| apparmor | apparmor profile support | |
Running pods and containers
Follow this tutorial to get started with CRI-O.
Setup CNI networking
A proper description of setting up CNI networking is given in the contrib/cni README. But the gist is that you need to have some basic network configurations enabled and CNI plugins installed on your system.
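As a sketch of what such a basic configuration looks like, the snippet below writes a minimal bridge CNI configuration of the kind that would live under /etc/cni/net.d/ (written to a scratch directory here; the network name, bridge name, and subnet are illustrative assumptions — see the contrib/cni README for the configurations this project actually ships):

```shell
# Sketch: write a minimal bridge CNI config to a scratch directory.
# On a real host this file would go in /etc/cni/net.d/.
conf_dir=$(mktemp -d)
cat > "$conf_dir/10-mynet.conf" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
EOF
echo "wrote $conf_dir/10-mynet.conf"
```

The bridge and host-local IPAM plugins referenced here are standard CNI plugins; they must be installed (typically under /opt/cni/bin) for the configuration to work.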
Running with kubernetes
You can run a local version of kubernetes with CRI-O using local-up-cluster.sh:
- Clone the kubernetes repository
- Start the CRI-O daemon (crio)
- From the kubernetes project directory, run:
    CGROUP_DRIVER=systemd \
    CONTAINER_RUNTIME=remote \
    CONTAINER_RUNTIME_ENDPOINT='unix:///var/run/crio/crio.sock --runtime-request-timeout=15m' \
    ./hack/local-up-cluster.sh
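Before running local-up-cluster.sh, it can help to confirm that the CRI-O socket referenced in CONTAINER_RUNTIME_ENDPOINT actually exists. A small preflight sketch (the helper name is ours, not part of CRI-O):

```shell
# Sketch: check that the CRI-O daemon socket from the command above exists.
sock_present() {
  [ -S "$1" ]
}

sock=/var/run/crio/crio.sock
if sock_present "$sock"; then
  echo "CRI-O socket found at $sock"
else
  echo "CRI-O does not appear to be running (no socket at $sock); start crio first" >&2
fi
```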
To run a full cluster, see the instructions.
- Basic pod/container lifecycle, basic image pull (done)
- Support for tty handling and state management (done)
- Basic integration with kubelet once client side changes are ready (done)
- Support for log management, networking integration using CNI, pluggable image/storage management (done)
- Support for exec/attach (done)
- Target fully automated kubernetes testing without failures (e2e status)
- Track upstream k8s releases