Containers Workshop

Jan Poctavek edited this page Aug 16, 2017 · 43 revisions

Danube Cloud Containers Workshop

  0. Orientation
  1. Container Images
  2. Container Management
  3. Inside Containers
  4. ZFS and Snapshots
  5. Goodies

Part 0: Orientation

  • You must be connected to the WiFi: Danube Cloud
  • WiFi password: esdc3000

You will have to connect to a SmartOS server (compute node) via SSH. The workshop instructor will dedicate one compute node to you or your team. All tasks will be performed on one of the following SmartOS compute nodes:

node-01          login: root/esdc123
node-02          login: root/esdc123


  • node = compute node = physical server with SmartOS installed
  • container = zone = virtualized environment created using vmadm
  • VM = virtual machine (either a zone/container or a full KVM virtual machine); VM types:
    • OS type = container/zone running SmartOS (SunOS) inside
    • LX type = container/zone running Linux
    • KVM = standard virtual machine (full HW virtualization)
  • image = preinstalled OS
  • pool = storage = zpool = a group of disks (a single disk or a RAID set) managed by the ZFS filesystem
  • etherstub = internal virtual switch without access to physical network that can interconnect containers
  • VM manifest = VM configuration in JSON format

Part 1: Container Images

We will start by importing some base container images. You can use the following commands to work with images on a SmartOS system:

  • list all local OS images (empty at first)
    • imgadm list
  • list remote OS images
    • imgadm avail
    • imgadm avail type=zone-dataset - We will use type=zone-dataset images
  • import remote image
    • imgadm import <image_uuid>
      • e.g. imgadm import eb82f2e2-62cb-4719-a87a-a74d08d6363c
  • get info about local image
    • imgadm get <image_uuid>

Images are imported from an image repository. Your SmartOS machine has one remote image repository configured for the purposes of this workshop.

  • list/manage image repositories
    • imgadm sources

Part 2: Container Management

All virtual machines (including containers) are managed via the vmadm command.

Creating containers

In order to create a VM (container), we first need a JSON-formatted file (the VM manifest) with the VM's configuration.

  1. create json manifest
    • Basic example: wget
    • Extended example: wget
  2. edit json manifest using your favorite editor (vim, joe, nano):
    • vim create_zone.json
      • At least change the IP address in the nics property.
  3. run vmadm
    • vmadm create -f create_zone.json

All VM settings are documented in man vmadm.
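The basic manifest fetched in step 1 might look roughly like the sketch below. The image_uuid matches the base image imported in Part 1; the alias, memory, quota and network values are illustrative assumptions you should adjust (at least the IP, as noted above):

```json
{
  "alias": "workshop-zone-01",
  "brand": "joyent",
  "image_uuid": "eb82f2e2-62cb-4719-a87a-a74d08d6363c",
  "max_physical_memory": 512,
  "quota": 10,
  "resolvers": ["8.8.8.8"],
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "192.168.1.50",
      "netmask": "255.255.255.0",
      "gateway": "192.168.1.1"
    }
  ]
}
```

max_physical_memory is in MB and quota in GB; see man vmadm for the full property list.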

Listing and controlling containers

  • list all containers (with vm uuid)
    • vmadm list
  • start/stop/reboot container
    • vmadm start <vm_uuid>
    • vmadm stop <vm_uuid>
    • vmadm stop -f <vm_uuid>
    • vmadm reboot <vm_uuid>
  • get container properties (json manifest)
    • vmadm get <vm_uuid>
  • get container network setting
    • vmadm get <vm_uuid> | json nics
    • vmadm get <vm_uuid> | json nics.0.ip

Changing containers

  • delete a container
    • vmadm delete <vm_uuid>
  • update container properties
    • vmadm update <vm_uuid> -f <update.json>
    • Example update.json: wget
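An update payload contains only the properties to change. A minimal sketch of update.json that grows the container's memory and disk quota (values are assumptions):

```json
{
  "max_physical_memory": 1024,
  "quota": 20
}
```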

Part 3: Inside Containers

  • log into the container, one of:
    • zlogin <vm_uuid>
    • ssh root@$(vmadm get <vm_uuid> | json nics.0.ip)

Package management

Packages on a SmartOS zone are managed using the pkgin command.

  • list installed packages

    • pkgin list
  • find package name

    • pkgin search <pattern> - search for a package
    • pkgin avail - list all available packages
  • install package

    • pkgin install <package>
      • e.g.: pkgin install nginx
  • remove package

    • pkgin remove <package>
  • update package DB and packages

    • pkgin update - update the pkgin package database
    • pkgin upgrade - upgrade installed packages
  • NOTE: Packages are installed into the /opt/local prefix. For all packages installed via pkgin, the configuration folder is located in /opt/local/etc.

Service management

  • list system services
    • svcs - list online services
    • svcs -a - show all services
    • svcs -x - display services in problem state
  • start/stop services
    • svcadm enable <service_name>
    • svcadm disable <service_name>
    • svcadm restart <service_name>

Part 4: ZFS and Snapshots

Snapshots of containers

  • create container snapshot
    • vmadm create-snapshot <vm_uuid> <snapshot_name>
  • rollback to snapshot
    • vmadm rollback-snapshot <vm_uuid> <snapshot_name>
  • delete a container snapshot
    • vmadm delete-snapshot <vm_uuid> <snapshot_name>

General ZFS listing

  • list all zfs filesystems
    • zfs list
  • list all snapshots in system
    • zfs list -t snapshot
  • list filesystems and snapshots of a container
    • zfs list -t all | grep <vm_uuid>
  • access to files of specific container snapshot
    • from inside container: ls -l /checkpoints/<snapshot_name>
    • from compute node: ls -l /zones/<vm_uuid>/.zfs/snapshot/vmsnap-<snapshot_name>

Part 5: Goodies

Linux containers

  • choose and import an LX image
    • imgadm avail type=lx-dataset
    • imgadm import <image_uuid>
      • e.g. imgadm import 7b5981c4-1889-11e7-b4c5-3f3bdfc9b88b # ubuntu-16.04
  • create another VM manifest with the following properties updated:
    • "brand": "lx",
    • "image_uuid": "<lx-dataset image>",
    • "kernel_version" :"4.2",
    • IP address
  • create VM using vmadm create -f ...
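Putting the properties above together, an LX manifest might look like this sketch. The image_uuid is the ubuntu-16.04 image imported above; alias, memory and network values are illustrative assumptions:

```json
{
  "alias": "lx-ubuntu-01",
  "brand": "lx",
  "image_uuid": "7b5981c4-1889-11e7-b4c5-3f3bdfc9b88b",
  "kernel_version": "4.2",
  "max_physical_memory": 512,
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "192.168.1.51",
      "netmask": "255.255.255.0",
      "gateway": "192.168.1.1"
    }
  ]
}
```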

Image creation

  • create image from container
    • imgadm create <vm_uuid> name=<new_image_name> version=<version>
    • imgadm create b7c38d98-c15c-4416-b42b-5dd74ea6907f name=mycontainer version=1.0.0
  • create image from container with clean script before snapshotting
    • imgadm create -s <local_clean_script> <vm_uuid> name=<new_image_name> version=<version>
    • imgadm create -s <local_clean_script> b7c38d98-c15c-4416-b42b-5dd74ea6907f name=mycontainer version=1.0.0
    • Example wget
  • install a previously created image into the local imgadm store (manifest and file)
    • imgadm install -m mycontainer-1.0.0.imgmanifest -f mycontainer-1.0.0.zfs

Internal virtual network switch

  • create a virtual switch (=etherstub)
    • dladm create-etherstub myswitch0
  • attach the etherstub to a VM
    • vmadm update <vm_uuid> -f add_nic.json
    • Example add_nic.json: wget
    • vmadm reboot <vm_uuid>
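A sketch of what add_nic.json could contain: an add_nics list whose nic_tag is the etherstub created above (the IP/netmask values are assumptions; pick any private range, since the etherstub has no access to the physical network):

```json
{
  "add_nics": [
    {
      "nic_tag": "myswitch0",
      "ip": "10.0.0.2",
      "netmask": "255.255.255.0"
    }
  ]
}
```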

VM metadata

VM metadata is a simple yet powerful functionality used to pass information from the compute node to the VM (container/KVM) and vice versa.

  • VM metadata is configured via the VM manifest property customer_metadata on the compute node:

    • VM manifest:
       "customer_metadata": {
          "install_packages": "nginx,mariadb,zsh",
          "info": "blabla"
       }
    • update metadata on an already existing VM:
      • echo '{ "set_customer_metadata": {"foo": "bar"} }' | vmadm update <vm_uuid>
  • inside a VM you can use the mdata-client tools to list, read, set and delete VM metadata:

    • mdata-list
    • mdata-get <key>
    • mdata-delete <key>
    • mdata-put <key> <value>
      • The mdata-put and mdata-delete operations will update the VM manifest on the node (vmadm get <vm_uuid>)
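As an illustration of why metadata is useful, a boot-time script inside the zone could consume the install_packages key from the example above. This is a sketch: the literal value stands in for a real mdata-get call so the snippet runs anywhere, and the install command is only echoed:

```shell
# Sketch: consume the install_packages metadata key inside a zone.
# In a real zone you would use:  pkgs=$(mdata-get install_packages)
# Here a literal stand-in value is used instead.
pkgs="nginx,mariadb,zsh"

# Split the comma-separated value and install each package.
for p in $(echo "$pkgs" | tr ',' ' '); do
    echo "pkgin -y install $p"    # drop the echo to actually install
done
```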

Danube Cloud :)


