---
layout: docs
page_title: 'Drivers: Exec'
description: The Exec task driver is used to run binaries using OS isolation primitives.
---

Isolated Fork/Exec Driver

Name: exec

The exec driver executes a particular command for a task. Unlike raw_exec, however, it uses the operating system's underlying isolation primitives to limit the task's access to resources. While simple, the exec driver can invoke any command, so it can be used to call scripts or other wrappers that provide higher-level features.

Task Configuration

task "webservice" {
  driver = "exec"

  config {
    command = "my-binary"
    args    = ["-flag", "1"]
  }
}

The exec driver supports the following configuration in the job spec:

  • command - The command to execute. Must be provided. If executing a binary that exists on the host, the path must be absolute and within the task's chroot or in a host volume mounted with a volume_mount block. The driver will make the binary executable and will search, in order:

    • The local directory within the task directory.
    • The task directory.
    • Any mounts, in the order listed in the job specification.
    • The usr/local/bin, usr/bin and bin directories inside the task directory.

    If executing a binary that is downloaded from an artifact, the path can be relative to the allocation's root directory.

  • args - (Optional) A list of arguments to the command. References to environment variables or any interpretable Nomad variables will be interpreted before launching the task.

  • pid_mode - (Optional) Set to "private" to enable PID namespace isolation for this task, or "host" to disable isolation. If left unset, the behavior is determined from the default_pid_mode in plugin configuration. See the combined example after this list.

!> Warning: If set to "host", other processes running as the same user will be able to access sensitive process information like environment variables.

  • ipc_mode - (Optional) Set to "private" to enable IPC namespace isolation for this task, or "host" to disable isolation. If left unset, the behavior is determined from the default_ipc_mode in plugin configuration.

!> Warning: If set to "host", other processes running as the same user will be able to make use of IPC features, like sending unexpected POSIX signals.

  • cap_add - (Optional) A list of Linux capabilities to enable for the task. Effective capabilities (computed from cap_add and cap_drop) must be a subset of the allowed capabilities configured with allow_caps. Note that "all" is not permitted here if the allow_caps field in the driver configuration doesn't also allow all capabilities.

    config {
      cap_add = ["net_raw", "sys_time"]
    }

  • cap_drop - (Optional) A list of Linux capabilities to disable for the task. Effective capabilities (computed from cap_add and cap_drop) must be a subset of the allowed capabilities configured with allow_caps.

    config {
      cap_drop = ["all"]
      cap_add  = ["chown", "sys_chroot", "mknod"]
    }
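The snippet below is a sketch of how these options can be combined in one task. The binary path and arguments are placeholders, and the capabilities shown must still be permitted by the plugin's allow_caps setting:

task "isolated-service" {
  driver = "exec"

  config {
    command  = "/usr/local/bin/my-service"        # hypothetical host binary (absolute path)
    args     = ["-data-dir", "${NOMAD_TASK_DIR}"] # Nomad variables are interpreted before launch
    pid_mode = "private"                          # isolate the task's PID namespace
    ipc_mode = "private"                          # isolate the task's IPC namespace

    cap_drop = ["all"]
    cap_add  = ["net_bind_service"]               # part of the default allow_caps list
  }
}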

Examples

To run a binary present on the Node:

task "example" {
  driver = "exec"

  config {
    # When running a binary that exists on the host, the path must be absolute.
    command = "/bin/sleep"
    args    = ["1"]
  }
}

To execute a binary downloaded from an artifact:

task "example" {
  driver = "exec"

  config {
    command = "name-of-my-binary"
  }

  artifact {
    source = "https://internal.file.server/name-of-my-binary"
    options {
      checksum = "sha256:abd123445ds4555555555"
    }
  }
}

Capabilities

The exec driver implements the following capabilities.

Feature              | Implementation
-------------------- | --------------
nomad alloc signal   | true
nomad alloc exec     | true
filesystem isolation | chroot
network isolation    | host, group
volume mounting      | all

Client Requirements

The exec driver can only be run on Linux, with Nomad running as root. exec is limited to this configuration because resource isolation is currently only guaranteed on Linux. Further, the host must have cgroups mounted properly in order for the driver to work.

If you are receiving the error:

* Constraint "missing drivers" filtered <> nodes

and you are using the exec driver, check that you are running Nomad as root. This also applies when running Nomad in -dev mode.

Plugin Options

  • default_pid_mode (string: optional) - Defaults to "private". Set to "private" to enable PID namespace isolation for tasks by default, or "host" to disable isolation.

!> Warning: If set to "host", other processes running as the same user will be able to access sensitive process information like environment variables.

  • default_ipc_mode (string: optional) - Defaults to "private". Set to "private" to enable IPC namespace isolation for tasks by default, or "host" to disable isolation.

!> Warning: If set to "host", other processes running as the same user will be able to make use of IPC features, like sending unexpected POSIX signals.

  • no_pivot_root (bool: optional) - Defaults to false. When true, the driver uses chroot for file system isolation without pivot_root. This is useful for systems where the root is on a ramdisk.

  • allow_caps - A list of allowed Linux capabilities. Defaults to

["audit_write", "chown", "dac_override", "fowner", "fsetid", "kill", "mknod",
 "net_bind_service", "setfcap", "setgid", "setpcap", "setuid", "sys_chroot"]

which is modeled after the capabilities allowed by Docker by default (without NET_RAW). This lets the operator control which capabilities tasks can obtain through the cap_add and cap_drop options. Supports the value "all" as a shortcut for allow-listing every capability supported by the operating system. An example plugin block appears at the end of this section.

!> Warning: Allowing more capabilities beyond the default may lead to undesirable consequences, including untrusted tasks being able to compromise the host system.
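As a sketch, the plugin options above might be set in the Nomad agent configuration like this (the values are illustrative, not recommended defaults):

plugin "exec" {
  config {
    default_pid_mode = "private"   # PID namespace isolation on by default
    default_ipc_mode = "private"   # IPC namespace isolation on by default
    no_pivot_root    = false
    allow_caps       = ["chown", "kill", "net_bind_service", "setgid", "setuid"]
  }
}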

Client Attributes

The exec driver will set the following client attributes:

  • driver.exec - This will be set to "1", indicating the driver is available.

Resource Isolation

The resource isolation provided varies by the operating system of the client and the configuration.

On Linux, Nomad uses cgroups and a chroot to isolate the resources of a process, and as such the Nomad agent must be run as root. Some Linux distributions do not boot with all required cgroups enabled by default. You can see which cgroups are enabled by reading /proc/cgroups and verifying that all of the following cgroups are enabled:

$ awk '{print $1 " " $4}' /proc/cgroups
#subsys_name enabled
cpuset 1
cpu 1
cpuacct 1
blkio 1
memory 1
devices 1
freezer 1
net_cls 1
perf_event 1
net_prio 1
hugetlb 1
pids 1

Nomad can only use cgroups to control resources if all the required controllers are available. If one or more required cgroups are unavailable, Nomad will disable resource controls that require cgroups entirely. See the documentation on cgroup controller requirements for more details.

Chroot

The chroot is populated with data in the following directories from the host machine:

[
  "/bin",
  "/etc",
  "/lib",
  "/lib32",
  "/lib64",
  "/run/resolvconf",
  "/sbin",
  "/usr",
]

The task's chroot is populated by linking or copying the data from the host into the chroot. Note that this can consume considerable disk space. Since Nomad v0.5.3, the client manages garbage collection locally, which mitigates any issue this may create.

This list is configurable through the agent client configuration file.
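For example, a client configuration might override the chroot sources with a chroot_env block similar to the sketch below; the exact set of host paths to expose depends on what your binaries need:

client {
  # Each entry maps a host path (key) to a destination path inside the task's chroot (value).
  chroot_env {
    "/bin"             = "/bin"
    "/etc/ld.so.cache" = "/etc/ld.so.cache"
    "/etc/passwd"      = "/etc/passwd"
    "/lib"             = "/lib"
    "/lib64"           = "/lib64"
    "/usr"             = "/usr"
  }
}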

CPU

Nomad limits exec tasks' CPU based on CPU shares. CPU shares allow tasks to burst past their CPU limits. CPU limits are only imposed when there is contention for resources. When the host is under load, your process may be throttled to stabilize QoS, depending on how many shares it has. You can see how many CPU shares are available to your process by reading NOMAD_CPU_LIMIT. 1000 shares are approximately equal to 1 GHz.

Please keep the implications of CPU shares in mind when you load test workloads on Nomad.

If resources.cores is set, the task is given an isolated, reserved set of CPU cores to make use of. The total set of cores the task may run on is this private set combined with the variable set of unreserved cores. The private set of CPU cores is made available to your process by reading NOMAD_CPU_CORES.
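As a rough sketch, the two resources blocks below contrast the shares-based and cores-based approaches (a task uses one or the other):

task "shares-example" {
  driver = "exec"

  config {
    command = "/bin/sleep"
    args    = ["3600"]
  }

  resources {
    cpu = 1000   # ~1 GHz of CPU shares; exposed to the task as NOMAD_CPU_LIMIT
  }
}

task "cores-example" {
  driver = "exec"

  config {
    command = "/bin/sleep"
    args    = ["3600"]
  }

  resources {
    cores = 2    # reserve two whole cores; exposed to the task as NOMAD_CPU_CORES
  }
}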