The bosh-runc-cpi-release houses a set of tooling for the BOSH ecosystem that orchestrates the creation of containers instead of VMs. It was inspired by, and can most directly be compared to, the warden-cpi-release. The primary use case is virtual machine environments where provisioning Garden proves prohibitive. It is what powers blt and cfdev.
Much like the warden-cpi interacts with a Garden server, the runc-cpi interacts with a runc-cpi-daemon counterpart. The CPI itself is a standalone binary that is invoked during the BOSH lifecycle with arguments via stdin. Consequently, the normal workflow is to first start the daemon listening on a particular tcp or unix address, and then configure the CPI with that address.
With this repository cloned and all submodules initialized, simply compile with the Go tooling. Cross-compiling is fully supported:
$ cd bosh-runc-cpi-release
$ export GOPATH=$PWD
$ export GOARCH=amd64
$ export GOOS=linux
$ go build \
github.com/aemengo/bosh-runc-cpi/cmd/cpid
Running the daemon server is straightforward. Simply execute the binary built in the previous step with a configuration YAML file, and a long-running process will be started. tar and runc must be on the $PATH. Only Linux environments are supported, and sudo privileges are required.
# Make sure that tar is on the $PATH
# Make sure that runc is on the $PATH
$ sudo ./cpid ./config.yml
Initializing bosh-runc-cpid...
Note: In order to support a wide array of use cases, containers created by the bosh-runc-daemon are privileged. Running it on your host machine is strongly discouraged and at your own risk!
As of right now, the ./config.yml file supports only the following properties. Here's an example configuration file:
$ cat config.yml
---
work_dir: "/var/lib/cpid"
network_type: "tcp"
address: "0.0.0.0:9999"
cidr: "10.0.0.0/16"
work_dir: The location where all BOSH state (stemcells, disks, containers) will be kept. If the directory does not exist, it will be created. A directory with ample disk space that can sustain heavy writes is preferred.

network_type: Either unix or tcp. The protocol the daemon will listen on. Since this server must be reachable by both the director and your bootstrap environment, tcp is more flexible and thus preferred. unix support might be removed in the future.

address: The listen address for the server. With tcp it takes the form <IP>:<PORT>, and with unix it is the location of the unix socket. Parent directories will be created if they do not exist. unix support might be removed in the future.

cidr: The CIDR block to be used for container IPs. A network bridge and gateway will automatically be created from this value. The gateway IP will automatically be assigned at x.x.x.1/xx.
The master branch of this repository keeps an up-to-date reference to a pre-built CPI ready for use with bosh-deployment. To use it in the usual BOSH fashion, simply pass the ops-file at ./operations/runc-cpi.yml with the -o option and supply the necessary variables. For example:
$ cd bosh-runc-cpi-release
$ bosh create-env ../bosh-deployment/bosh.yml \
-o ./operations/runc-cpi.yml \
--state ./state.json \
--vars-store ./creds.yml \
-v director_name=director \
-v external_cpid_ip=127.0.0.1 \
-v internal_cpid_ip=192.168.65.3 \
-v internal_nameserver=192.168.65.1 \
-v internal_ip=10.0.0.4 \
-v internal_gw=10.0.0.1 \
-v internal_cidr=10.0.0.0/16
director_name: Name for your BOSH director.

external_cpid_ip: Network address of the runc-daemon server used at bootstrap. A port of 9999 and network_type of tcp are assumed, but these can be changed in the ops-file.

internal_cpid_ip: Network address of the runc-daemon server used by the director container. A port of 9999 and network_type of tcp are assumed, but these can be changed in the ops-file.

internal_nameserver: The nameserver address that will be configured for every container. When running inside a VM, you may opt to use the nameserver of the VM itself; otherwise 8.8.8.8 is perfectly valid.

internal_ip: The IP that the BOSH director container will be assigned. The value must fall within the cidr that was configured on the runc-daemon.

internal_gw: The gateway that will be used by containers on the bridge network. You must specify the x.x.x.1 value of the cidr that was configured on the runc-daemon, since the runc-daemon server has pre-configured a gateway at that IP.

internal_cidr: The IP range that containers will be assigned. You must specify the value of the cidr that was configured on your runc-daemon.
You may use the cloud-config.yml found in this repository at ./operations/cloud-config.yml as a reference, adjusting the network IPs for your specific configuration.
Note: As of right now, only bosh-deployment v1.1.0 is supported.
With this repository cloned and all submodules initialized, simply compile with the Go tooling. Cross-compiling is fully supported:
$ cd bosh-runc-cpi-release
$ export GOPATH=$PWD
$ export GOARCH=amd64
$ GOOS=linux go build \
-o ./runc-cpi-linux \
github.com/aemengo/bosh-runc-cpi/cmd/cpi
$ GOOS=darwin go build \
-o ./runc-cpi-darwin \
github.com/aemengo/bosh-runc-cpi/cmd/cpi
To go further and create a BOSH release:
$ bosh add-blob ./runc-cpi-darwin runc-cpi-darwin
$ bosh add-blob ./runc-cpi-linux runc-cpi-linux
$ bosh create-release --name=bosh-runc-cpi --version=<version> --tarball=<tarball-path>.tgz
Then, to test your development bosh-release, you may edit the ops-file at ./operations/runc-cpi.yml. You must replace all references of /releases/-
and release:
to point to the location and name of your new release, respectively.
Finally, you can follow the instructions here to see it in action.
Copyright (c) 2018 Anthony Emengo