
Kata container startup time #1025

Closed
sahilsuneja1 opened this issue Dec 14, 2018 · 5 comments
Comments

@sahilsuneja1

Description of problem

Time to run a "hello world" container with Kata comes out to roughly 2.2s using the default config (qemu-lite-system-x86_64) and 2.8s with NEMU. Are such absolute and relative times expected?
I also see some error logs in the kata-collect-data.sh output. Could those be contributing, and how do I resolve them?

kata-runtime -v

kata-runtime  : 1.4.0
   commit   : 21f0059
   OCI specs: 1.0.1-dev

kata-runtime kata-env | awk -v RS= '/\[Hypervisor\]/'

[Hypervisor]
  MachineType = "virt"
  Version = "NEMU (like QEMU) version 3.0.0 (-dirty)\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers"
  Path = "/root/build-x86_64_virt/x86_64_virt-softmmu/qemu-system-x86_64_virt"
  BlockDeviceDriver = "virtio-scsi"
  EntropySource = "/dev/urandom"
  Msize9p = 8192
  MemorySlots = 10
  Debug = false
  UseVSock = false

cd ~/rootfs/node:11.2.0
time kata-runtime run -b . 851428da-ffd3-11e8-9cf1-0aecc8e7ab67

hello

real	0m2.890s
user	0m0.036s
sys	0m0.083s

Expected result

2.2s seems too high. Also, NEMU should be faster, if I am not mistaken.


kata-collect-data.sh

Meta details

Running kata-collect-data.sh version 1.4.0 (commit 21f0059) at 2018-12-14.14:08:01.509881815-0500.


Runtime is /usr/bin/kata-runtime.

kata-env

Output of "/usr/bin/kata-runtime kata-env":

[Meta]
  Version = "1.0.19"

[Runtime]
  Debug = false
  DisableNewNetNs = false
  Path = "/usr/bin/kata-runtime"
  [Runtime.Version]
    Semver = "1.4.0"
    Commit = "21f0059"
    OCI = "1.0.1-dev"
  [Runtime.Config]
    Path = "/usr/share/defaults/kata-containers/configuration.toml"

[Hypervisor]
  MachineType = "virt"
  Version = "NEMU (like QEMU) version 3.0.0 (-dirty)\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers"
  Path = "/root/build-x86_64_virt/x86_64_virt-softmmu/qemu-system-x86_64_virt"
  BlockDeviceDriver = "virtio-scsi"
  EntropySource = "/dev/urandom"
  Msize9p = 8192
  MemorySlots = 10
  Debug = false
  UseVSock = false

[Image]
  Path = "/usr/share/kata-containers/kata-containers-image_clearlinux_1.4.0_agent_0ff30063f7e.img"

[Kernel]
  Path = "/usr/share/kata-containers/vmlinuz-4.14.67.17-11.container"
  Parameters = ""

[Initrd]
  Path = ""

[Proxy]
  Type = "kataProxy"
  Version = "kata-proxy version 1.4.0-e1856c2"
  Path = "/usr/libexec/kata-containers/kata-proxy"
  Debug = false

[Shim]
  Type = "kataShim"
  Version = "kata-shim version 1.4.0-b02868b"
  Path = "/usr/libexec/kata-containers/kata-shim"
  Debug = false

[Agent]
  Type = "kata"

[Host]
  Kernel = "4.15.0-39-generic"
  Architecture = "amd64"
  VMContainerCapable = true
  SupportVSocks = false
  [Host.Distro]
    Name = "Ubuntu"
    Version = "18.04"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz"

[Netmon]
  Version = "kata-netmon version 1.4.0"
  Path = "/usr/libexec/kata-containers/kata-netmon"
  Debug = false
  Enable = false

Runtime config files

Runtime default config files

/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml

Runtime config file contents

Config file /etc/kata-containers/configuration.toml not found
Output of "cat "/usr/share/defaults/kata-containers/configuration.toml"":

# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
#path = "/usr/bin/qemu-lite-system-x86_64"
#for nemu
path = "/root/build-x86_64_virt/x86_64_virt-softmmu/qemu-system-x86_64_virt" 
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
#machine_type = "pc"
# for nemu
machine_type = "virt"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""
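#
# A sketch (assuming runtime logs land in the system journal under the
# "kata-runtime" identifier): with enable_debug = true set further below,
# something like
#   sudo journalctl -t kata-runtime | grep default-kernel-parameters
# should surface those default-kernel-parameters entries.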

# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty.
#firmware = ""
#for nemu
firmware = "/usr/share/nemu/OVMF.fd"

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example: with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
default_maxvcpus = 0
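#
# Worked example (hypothetical 8-core host): with default_vcpus = 1 and
# default_maxvcpus = 0, each VM boots with 1 vCPU and up to 8 (the physical
# core count) can be hotplugged; default_maxvcpus = 4 would instead cap the
# VM at 4 vCPUs.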

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only PCI bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to 10.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor;
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is either virtio-scsi or 
# virtio-blk.
block_device_driver = "virtio-scsi"

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
#enable_hugepages = true

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes 
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

# VFIO devices are hotplugged on a bridge by default. 
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on 
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics.
# Default false
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG).
# /dev/urandom and /dev/random are the two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, VM boot time will increase, potentially leading to
# startup timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"
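#
# A quick host-side sanity check (a sketch, not a config option) when
# considering /dev/random:
#   cat /proc/sys/kernel/random/entropy_avail
# A persistently low value suggests reads from /dev/random would block.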

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it read-only. It helps speed up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Default false
#enable_template = true
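#
# Sketch (assuming this kata-runtime build ships the `factory` subcommand):
# after setting enable_template = true, create the template VM once with
#   sudo kata-runtime factory init
# so later containers clone it rather than booting from scratch.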

[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[agent.kata]
# There is no field for this section. The goal is only to be able to
# specify which type of agent the user wants to use.

[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows the detection of additional networks
# added to the existing network namespace after the sandbox has been
# created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/usr/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used for a customized network. Only creates a tap device. No veth pair.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by plugin to a tap interface connected to the VM.
#
internetworking_model="macvtap"

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true

KSM throttler

version

Output of "/usr/libexec/kata-ksm-throttler/kata-ksm-throttler --version":

kata-ksm-throttler version 1.4.0-1212de2

systemd service

Image details

---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "unknown"
rootfs-creation-time: "2018-11-23T07:21:30.662044613+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "Clear"
  version: "26450"
  packages:
    default:
      - "iptables-bin"
      - "libudev0-shim"
      - "systemd"
    extra:

agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.4.0-0ff30063f7e71eb0f48d60c21156cd18b8a58024"
  agent-is-init-daemon: "no"

Initrd details

No initrd


Logfiles

Runtime logs

Recent runtime problems found in system journal:

time="2018-12-14T14:07:44.665743333-05:00" level=warning msg="fetch sandbox device failed" arch=amd64 command=run container=851428da-ffd3-11e8-9cf1-0aecc8e7ab67 error="open /run/vc/sbs/851428da-ffd3-11e8-9cf1-0aecc8e7ab67/devices.json: no such file or directory" name=kata-runtime pid=27602 sandbox=851428da-ffd3-11e8-9cf1-0aecc8e7ab67 sandboxid=851428da-ffd3-11e8-9cf1-0aecc8e7ab67 source=virtcontainers subsystem=sandbox

Proxy logs

Recent proxy problems found in system journal:

time="2018-12-14T14:07:47.152061975-05:00" level=fatal msg="channel error" error="accept unix /run/vc/sbs/851428da-ffd3-11e8-9cf1-0aecc8e7ab67/proxy.sock: use of closed network connection" name=kata-proxy pid=27621 sandbox=851428da-ffd3-11e8-9cf1-0aecc8e7ab67 source=proxy

Shim logs

No recent shim problems found in system journal.

Throttler logs

No recent throttler problems found in system journal.


Container manager details

Have docker

Docker

Output of "docker version":

Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:49:01 2018
 OS/Arch:           linux/amd64
 Experimental:      false
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Output of "docker info":

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Output of "systemctl show docker":

Type=notify
Restart=always
NotifyAccess=main
RestartUSec=2s
TimeoutStartUSec=infinity
TimeoutStopUSec=infinity
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Fri 2018-12-14 14:05:40 EST
ExecMainStartTimestampMonotonic=2051535268458
ExecMainExitTimestamp=Fri 2018-12-14 14:06:03 EST
ExecMainExitTimestampMonotonic=2051558954672
ExecMainPID=26954
ExecMainCode=1
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H unix:// ; ignore_errors=no ; start_time=[Fri 2018-12-14 14:05:40 EST] ; stop_time=[Fri 2018-12-14 14:06:03 EST] ; pid=26954 ; code=exited ; status=0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=[not set]
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=infinity
LimitNOFILESoft=infinity
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=63728
LimitSIGPENDINGSoft=63728
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=sysinit.target system.slice
Wants=network-online.target
BindsTo=containerd.service
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=multi-user.target shutdown.target
After=network-online.target systemd-journald.socket system.slice firewalld.service sysinit.target basic.target
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=inactive
SubState=dead
FragmentPath=/lib/systemd/system/docker.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Fri 2018-12-14 14:06:03 EST
StateChangeTimestampMonotonic=2051558954675
InactiveExitTimestamp=Fri 2018-12-14 14:05:40 EST
InactiveExitTimestampMonotonic=2051535268476
ActiveEnterTimestamp=Fri 2018-12-14 14:05:47 EST
ActiveEnterTimestampMonotonic=2051542980992
ActiveExitTimestamp=Fri 2018-12-14 14:06:03 EST
ActiveExitTimestampMonotonic=2051558950156
InactiveEnterTimestamp=Fri 2018-12-14 14:06:03 EST
InactiveEnterTimestampMonotonic=2051558954675
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Fri 2018-12-14 14:05:40 EST
ConditionTimestampMonotonic=2051535267952
AssertTimestamp=Fri 2018-12-14 14:05:40 EST
AssertTimestampMonotonic=2051535267953
Transient=no
Perpetual=no
StartLimitIntervalUSec=1min
StartLimitBurst=3
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=b2222272b3954506b24dd81be2428d26
CollectMode=inactive

No kubectl


Packages

Have dpkg
Output of "dpkg -l|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"":

ii  kata-containers-image                 1.4.0-10                          amd64        Kata containers image
ii  kata-ksm-throttler                    1.4.0.git+1212de2-12              amd64        
ii  kata-linux-container                  4.14.67.17-11                     amd64        linux kernel optimised for container-like workloads.
ii  kata-proxy                            1.4.0+git.e1856c2-11              amd64        
ii  kata-runtime                          1.4.0+git.21f0059-15              amd64        
ii  kata-shim                             1.4.0+git.b02868b-9               amd64        
ii  qemu-lite                             2.11.0+git.f886228056-13          amd64        linux kernel optimised for container-like workloads.
ii  qemu-vanilla                          2.11.2+git.0982a56a55-13          amd64        linux kernel optimised for container-like workloads.

No rpm


@caoruidong
Member

caoruidong commented Dec 17, 2018

I see a performance drop on Ubuntu too. What is the graph driver? Could you run docker info as root? We cannot see its output at the moment. And how does container boot time compare with runc as the runtime?

@grahamwhaley
Contributor

Hi @sahilsuneja1 @caoruidong - thanks for the report and question.
Interesting to see you measuring the runtime directly (and not through docker, for instance :-) ).
A couple of points about your measurement, then:

  • taking a single boot measurement is a little 'risky' - quite often we see noise or variance on a system, so it is better to take, say, 20 samples and use the average (see the sketch after this list). That is what we do over in the report generator for boot time, for instance.
  • also, in the report generator test (using https://github.com/kata-containers/tests/blob/master/metrics/time/launch_times.sh, which uses docker to do the run), we grab timestamps so we can measure how long each phase of the 'boot' takes - this might give some insight into where the time difference between the hypervisors lies. In particular, note that you are currently also measuring teardown time in your test, which you may or may not be interested in.
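For a rough mean over many runs, a minimal sketch (assuming GNU time is installed and the same bundle directory as in your report; the bench-$i names are arbitrary):

  for i in $(seq 1 20); do
    # stderr (the %e elapsed seconds) goes to the pipe; container stdout is discarded
    /usr/bin/time -f "%e" kata-runtime run -b . "bench-$i" 2>&1 >/dev/null
  done | awk '{ total += $1 } END { print "mean:", total/NR, "s" }'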

My best guess for NEMU is that maybe it is using a different type of BIOS or firmware boot. I don't know what the current status of that is, but I believe @rbradford has been looking into the options and their time and size impacts.

/cc @sameo @sboeuf as well.

@devimc

devimc commented Dec 17, 2018

@sahilsuneja1 try using kata-agent as init, see #768 (comment)
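For reference, a minimal sketch of that route using the osbuilder scripts (the AGENT_INIT knob and script paths follow the osbuilder README of that era; the distro and paths are illustrative):

  git clone https://github.com/kata-containers/osbuilder
  cd osbuilder/rootfs-builder
  # build a guest rootfs where kata-agent is the init process (no systemd)
  sudo AGENT_INIT=yes ./rootfs.sh -r "$(pwd)/rootfs" clearlinux
  cd ../initrd-builder
  # pack that rootfs into an initrd image
  sudo AGENT_INIT=yes ./initrd_builder.sh ../rootfs-builder/rootfs
  # finally, point initrd = in configuration.toml at the resulting image
  # and comment out the image = line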

@rbradford
Contributor

qemu-lite will always be faster than anything else because it loads an uncompressed kernel binary. We are currently investigating boot time improvements; you can see the details in this mailing list post: https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg03096.html

In the medium term we are looking into switching Kata-NEMU from OVMF to SeaBIOS.
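To quantify the firmware cost directly, one option (a sketch, assuming this kata-runtime honors the KATA_CONF_FILE environment variable; the two config paths are hypothetical) is to time the same bundle under each hypervisor:

  # one config pointing at qemu-lite/pc, one at NEMU/virt + OVMF
  KATA_CONF_FILE=/etc/kata/qemu-lite.toml time kata-runtime run -b . test-qemu
  KATA_CONF_FILE=/etc/kata/nemu.toml time kata-runtime run -b . test-nemu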

@sahilsuneja1
Author

Thanks!
@caoruidong: I'm not using docker to run my container; I'm using kata-runtime directly.
@grahamwhaley: The time I mentioned is the average over several runs (the sample I pasted was one of them). Thanks for the pointers to the report generator and launch_times; I will try running those.
@rbradford: Thanks for the NEMU vs QEMU pointer and clarification!
@devimc: Thanks for the pointer; I will try the kata-agent-as-init route.
