cgroups not getting applied on containers launched using nomad-driver-containerd #8

Closed
shishir-a412ed opened this issue Jun 24, 2020 · 0 comments · Fixed by #44
Labels: bug (Something isn't working)

@shishir-a412ed (Collaborator)

When I launch a container using nomad-driver-containerd and it exceeds its resource limits, the cgroup limits are not applied and the container doesn't get OOM killed. To compare the docker driver with nomad-driver-containerd:

stress.nomad

job "stress" {
  datacenters = ["dc1"]

  group "stress-group" {
    task "stress-task" {
      driver = "docker"

      config {
        image = "docker.io/shm32/stress:1.0"
      }

      restart {
        attempts = 5
        delay    = "30s"
      }

      resources {
        cpu    = 500
        memory = 256
        network {
          mbits = 10
        }
      }
    }
  }
}
$ nomad job run stress.nomad

When stress.nomad exceeds 500 MHz of CPU or 256 MB of memory, the container is OOM killed.

However, when I launch the same job (stress.nomad) using nomad-driver-containerd, it keeps running and doesn't get OOM killed.
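
For reference, the containerd version of the job differs only in the task's driver and config stanza. A minimal sketch of that stanza, assuming the plugin registers itself under the driver name "containerd-driver" as shown in its README:

task "stress-task" {
  driver = "containerd-driver"

  config {
    image = "docker.io/shm32/stress:1.0"
  }

  # restart and resources stanzas are unchanged from the docker version above
}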

In the case of the docker driver, IIUC Docker itself manages the cgroups for the container.
The question, then, is how Nomad manages resource constraints (cgroups) for workloads launched by other drivers, e.g. QEMU, Java, exec, etc.
Does Nomad apply/manage cgroups at the orchestration level?
