This repository has been archived by the owner on Dec 11, 2020. It is now read-only.

support to import some docker_container's attributes #234

Merged
16 commits merged on Feb 1, 2020

Conversation

@suzuki-shunsuke (Contributor) commented Jan 7, 2020

#219

  • Import some of docker_container's attributes (see docker/resource_docker_container_funcs.go).
  • Set Computed: true on the following attributes:
    • hostname
    • command
    • entrypoint
    • env
    • labels
    • shm_size
    • ipc_mode
  • Set DiffSuppressFunc on the following attributes:
    • network_mode
    • restart
  • Format the code with gofmt.
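The DiffSuppressFunc idea for attributes such as restart can be sketched as follows. This is a simplified, stdlib-only Go sketch: the real function in helper/schema has the signature func(k, old, new string, d *schema.ResourceData) bool, and the normalization below is a hypothetical illustration, not the provider's exact logic.

```go
package main

import "fmt"

// Sketch of the DiffSuppressFunc idea used for attributes such as
// "restart": treat Docker's default restart policy ("no") and an
// unset value ("") as equivalent, so importing a container does not
// produce a spurious diff. Hypothetical normalization; see the schema
// changes in this PR for the provider's actual logic.
func suppressRestartDiff(old, new string) bool {
	normalize := func(v string) string {
		if v == "" {
			return "no" // Docker's default restart policy
		}
		return v
	}
	return normalize(old) == normalize(new)
}

func main() {
	fmt.Println(suppressRestartDiff("", "no"))      // prints true: both mean the default policy
	fmt.Println(suppressRestartDiff("always", "no")) // prints false: a real difference
}
```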

Verification code

Below are the configuration and commands I used for verification.

main.tf

provider "docker" {
}

resource "docker_image" "alpine" {
  name = "alpine:3.10.3"
}

resource "docker_container" "foo" {
  image       = docker_image.alpine.latest
  name        = "foo"
  rm          = true
  entrypoint  = ["tail"]
  start       = true
  command     = ["-f", "/dev/null"]
  read_only   = true
  shm_size    = 64 # 64MB
  privileged  = true
  user        = "root"
  working_dir = "/workspace"
  domainname  = "foo.com"
  hostname    = "foo"
  dns         = ["8.8.8.8"]
  dns_search  = ["example.com"]
  env = [
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "RACK_ENV=development"
  ]
  userns_mode = "host"
  tmpfs = {
    "/run" : ""
    "/tmp" : ""
  }
  log_driver = "json-file"
  log_opts = {
    max-size : "200k"
    max-file : "10"
  }

  # these labels are set by docker-compose automatically
  labels {
    label = "com.docker.compose.config-hash"
    value = "40541480c3ba148eefff245755e35d6186d2bca9c9033340866be9a27a603fe9" # you have to fix the value
  }
  labels {
    label = "com.docker.compose.container-number"
    value = "1"
  }
  labels {
    label = "com.docker.compose.oneoff"
    value = "False"
  }
  labels {
    label = "com.docker.compose.project"
    value = "terraform-provider-docker"
  }
  labels {
    label = "com.docker.compose.service"
    value = "foo"
  }
  labels {
    label = "com.docker.compose.version"
    value = "1.24.1"
  }
  network_mode = "bridge"
  pid_mode     = "host"
  healthcheck {
    test     = ["CMD", "true"]
    interval = "1m30s"
    timeout  = "10s"
    retries  = 3
  }
  sysctls = {
    "net.core.somaxconn" : 1024
    "net.ipv4.tcp_syncookies" : 0
  }
  ipc_mode    = "shareable"
  memory      = 50
  memory_swap = 100
  capabilities {
    add  = ["ALL"]
    drop = ["SYS_ADMIN"]
  }
  devices {
    container_path = "/dev/null"
    host_path      = "/dev/null"
    permissions    = "rwm"
  }
}

resource "docker_container" "zoo" {
  image      = docker_image.alpine.latest
  name       = "zoo"
  command    = ["tail", "-f", "/dev/null"]
  cpu_shares = 2
  cpu_set    = "1"
  start      = true
  env = [
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
  ]
  # links = ["/foo:/zoo/foo"]
}

Using Docker Compose and the docker run command

docker-compose.yml

---
version: "3"
services:
  foo:
    image: alpine:3.10.3
    container_name: foo
    entrypoint: ["tail"]
    command: ["-f", "/dev/null"]
    read_only: true
    shm_size: 64M
    privileged: true
    user: root
    working_dir: /workspace
    domainname: foo.com
    hostname: foo
    dns: 8.8.8.8
    dns_search: example.com
    restart: "no"
    environment:
      RACK_ENV: development
    userns_mode: "host"
    tmpfs:
    - /run
    - /tmp
    devices:
    - "/dev/null:/dev/null"
    logging:
      driver: json-file
      options:
        max-size: "200k"
        max-file: "10"
    network_mode: "bridge"
    pid: "host"
    healthcheck:
      test: ["CMD", "true"]
      interval: 1m30s
      timeout: 10s
      retries: 3
    sysctls:
      net.core.somaxconn: 1024
      net.ipv4.tcp_syncookies: 0
    ipc: host
    cap_add:
    - ALL
    cap_drop:
    - SYS_ADMIN
$ docker-compose up -d
$ docker run --cpu-shares=2 --cpuset-cpus="1" -d --name zoo alpine:3.10.3 tail -f /dev/null

terraform apply and terraform state rm

Instead of Docker Compose and docker run, we can also run terraform apply and then terraform state rm.

$ terraform apply
$ terraform state rm docker_container.foo docker_container.zoo

terraform import

$ terraform import docker_container.foo $(docker inspect foo -f "{{.ID}}")
$ terraform import docker_container.zoo $(docker inspect zoo -f "{{.ID}}")

@suzuki-shunsuke (Contributor, Author)

Verification procedure

main.tf

provider "docker" {
}

resource "docker_image" "alpine" {
  name = "alpine:3.10.3"
}

resource "docker_container" "foo" {
  image       = docker_image.alpine.latest
  name        = "foo"
  rm          = true
  entrypoint  = ["tail"]
  start       = true
  command     = ["-f", "/dev/null"]
  read_only   = true
  shm_size    = 64 # 64MB
  privileged  = true
  user        = "root"
  working_dir = "/workspace"
  domainname  = "foo.com"
  hostname    = "foo"
  dns         = ["8.8.8.8"]
  dns_search  = ["example.com"]
  env = [
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "RACK_ENV=development"
  ]
  userns_mode = "host"
  tmpfs = {
    "/run" : ""
    "/tmp" : ""
  }
  log_driver = "json-file"
  log_opts = {
    max-size : "200k"
    max-file : "10"
  }

  # these labels are set by docker-compose automatically
  labels {
    label = "com.docker.compose.config-hash"
    value = "40541480c3ba148eefff245755e35d6186d2bca9c9033340866be9a27a603fe9" # you have to fix the value
  }
  labels {
    label = "com.docker.compose.container-number"
    value = "1"
  }
  labels {
    label = "com.docker.compose.oneoff"
    value = "False"
  }
  labels {
    label = "com.docker.compose.project"
    value = "terraform-provider-docker"
  }
  labels {
    label = "com.docker.compose.service"
    value = "foo"
  }
  labels {
    label = "com.docker.compose.version"
    value = "1.24.1"
  }
  network_mode = "bridge"
  pid_mode     = "host"
  healthcheck {
    test     = ["CMD", "true"]
    interval = "1m30s"
    timeout  = "10s"
    retries  = 3
  }
  sysctls = {
    "net.core.somaxconn" : 1024
    "net.ipv4.tcp_syncookies" : 0
  }
  ipc_mode    = "shareable"
  memory      = 50
  memory_swap = 100
  capabilities {
    add  = ["ALL"]
    drop = ["SYS_ADMIN"]
  }
  devices {
    container_path = "/dev/null"
    host_path      = "/dev/null"
    permissions    = "rwm"
  }
}

resource "docker_container" "zoo" {
  image      = docker_image.alpine.latest
  name       = "zoo"
  command    = ["tail", "-f", "/dev/null"]
  cpu_shares = 2
  cpu_set    = "1"
  start      = true
  env = [
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
  ]
  # links = ["/foo:/zoo/foo"]
}

commands

$ terraform version
$ terraform apply
$ terraform plan
$ terraform state rm docker_container.foo docker_container.zoo
$ terraform import docker_container.foo $(docker inspect foo -f "{{.ID}}")
$ terraform import docker_container.zoo $(docker inspect zoo -f "{{.ID}}")
$ terraform plan

results

$ terraform version
Terraform v0.12.18
+ provider.docker (unversioned)

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # docker_container.foo will be created
  + resource "docker_container" "foo" {
      + attach           = false
      + bridge           = (known after apply)
      + command          = [
          + "-f",
          + "/dev/null",
        ]
      + container_logs   = (known after apply)
      + dns              = [
          + "8.8.8.8",
        ]
      + dns_search       = [
          + "example.com",
        ]
      + domainname       = "foo.com"
      + entrypoint       = [
          + "tail",
        ]
      + env              = [
          + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
          + "RACK_ENV=development",
        ]
      + exit_code        = (known after apply)
      + gateway          = (known after apply)
      + hostname         = "foo"
      + id               = (known after apply)
      + image            = (known after apply)
      + ip_address       = (known after apply)
      + ip_prefix_length = (known after apply)
      + ipc_mode         = "shareable"
      + log_driver       = "json-file"
      + log_opts         = {
          + "max-file" = "10"
          + "max-size" = "200k"
        }
      + logs             = false
      + memory           = 50
      + memory_swap      = 100
      + name             = "foo"
      + network_data     = (known after apply)
      + network_mode     = "bridge"
      + pid_mode         = "host"
      + privileged       = true
      + read_only        = true
      + restart          = "no"
      + rm               = true
      + shm_size         = 64
      + sysctls          = {
          + "net.core.somaxconn"      = "1024"
          + "net.ipv4.tcp_syncookies" = "0"
        }
      + tmpfs            = {
          + "/run" = ""
          + "/tmp" = ""
        }
      + user             = "root"
      + userns_mode      = "host"
      + working_dir      = "/workspace"

      + capabilities {
          + add  = [
              + "ALL",
            ]
          + drop = [
              + "SYS_ADMIN",
            ]
        }

      + devices {
          + container_path = "/dev/null"
          + host_path      = "/dev/null"
          + permissions    = "rwm"
        }

      + healthcheck {
          + interval     = "1m30s"
          + retries      = 3
          + start_period = "0s"
          + test         = [
              + "CMD",
              + "true",
            ]
          + timeout      = "10s"
        }

      + labels {
          + label = "com.docker.compose.config-hash"
          + value = "40541480c3ba148eefff245755e35d6186d2bca9c9033340866be9a27a603fe9"
        }
      + labels {
          + label = "com.docker.compose.container-number"
          + value = "1"
        }
      + labels {
          + label = "com.docker.compose.oneoff"
          + value = "False"
        }
      + labels {
          + label = "com.docker.compose.project"
          + value = "terraform-provider-docker"
        }
      + labels {
          + label = "com.docker.compose.service"
          + value = "foo"
        }
      + labels {
          + label = "com.docker.compose.version"
          + value = "1.24.1"
        }
    }

  # docker_container.zoo will be created
  + resource "docker_container" "zoo" {
      + attach           = false
      + bridge           = (known after apply)
      + command          = [
          + "tail",
          + "-f",
          + "/dev/null",
        ]
      + container_logs   = (known after apply)
      + cpu_set          = "1"
      + cpu_shares       = 2
      + env              = [
          + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        ]
      + exit_code        = (known after apply)
      + gateway          = (known after apply)
      + hostname         = (known after apply)
      + id               = (known after apply)
      + image            = (known after apply)
      + ip_address       = (known after apply)
      + ip_prefix_length = (known after apply)
      + ipc_mode         = (known after apply)
      + log_driver       = "json-file"
      + logs             = false
      + name             = "zoo"
      + network_data     = (known after apply)
      + read_only        = false
      + restart          = "no"
      + rm               = false
      + shm_size         = (known after apply)
    }

  # docker_image.alpine will be created
  + resource "docker_image" "alpine" {
      + id     = (known after apply)
      + latest = (known after apply)
      + name   = "alpine:3.10.3"
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

docker_image.alpine: Creating...
docker_image.alpine: Creation complete after 0s [id=sha256:965ea09ff2ebd2b9eeec88cd822ce156f6674c7e99be082c7efac3c62f3ff652alpine:3.10.3]
docker_container.zoo: Creating...
docker_container.foo: Creating...
docker_container.zoo: Creation complete after 0s [id=08e9f5799f00d5a6c968666cd91e683ab5a626e1aae84c761b3a05e2d9e7b9a5]
docker_container.foo: Creation complete after 0s [id=ca7f581b69edba2a71fac1146c1a3fa95a03e3e4b4fccce9dbec46cbbe9cab04]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

docker_image.alpine: Refreshing state... [id=sha256:965ea09ff2ebd2b9eeec88cd822ce156f6674c7e99be082c7efac3c62f3ff652alpine:3.10.3]
docker_container.zoo: Refreshing state... [id=08e9f5799f00d5a6c968666cd91e683ab5a626e1aae84c761b3a05e2d9e7b9a5]
docker_container.foo: Refreshing state... [id=ca7f581b69edba2a71fac1146c1a3fa95a03e3e4b4fccce9dbec46cbbe9cab04]

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

$ terraform state rm docker_container.foo docker_container.zoo
Removed docker_container.foo
Removed docker_container.zoo
Successfully removed 2 resource instance(s).

$ terraform import docker_container.foo $(docker inspect foo -f "{{.ID}}")
docker_container.foo: Importing from ID "ca7f581b69edba2a71fac1146c1a3fa95a03e3e4b4fccce9dbec46cbbe9cab04"...
docker_container.foo: Import prepared!
  Prepared docker_container for import
docker_container.foo: Refreshing state... [id=ca7f581b69edba2a71fac1146c1a3fa95a03e3e4b4fccce9dbec46cbbe9cab04]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

$ terraform import docker_container.zoo $(docker inspect zoo -f "{{.ID}}")
docker_container.zoo: Importing from ID "08e9f5799f00d5a6c968666cd91e683ab5a626e1aae84c761b3a05e2d9e7b9a5"...
docker_container.zoo: Import prepared!
  Prepared docker_container for import
docker_container.zoo: Refreshing state... [id=08e9f5799f00d5a6c968666cd91e683ab5a626e1aae84c761b3a05e2d9e7b9a5]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

docker_image.alpine: Refreshing state... [id=sha256:965ea09ff2ebd2b9eeec88cd822ce156f6674c7e99be082c7efac3c62f3ff652alpine:3.10.3]
docker_container.zoo: Refreshing state... [id=08e9f5799f00d5a6c968666cd91e683ab5a626e1aae84c761b3a05e2d9e7b9a5]
docker_container.foo: Refreshing state... [id=ca7f581b69edba2a71fac1146c1a3fa95a03e3e4b4fccce9dbec46cbbe9cab04]

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

@mavogel (Contributor) commented Jan 8, 2020

Hey @suzuki-shunsuke, thank you for your effort in helping to finish import support for all resources :)

Could you take a look at the failing tests? Furthermore, we should consider adding the verify step to the test as well, as is already done in the test for docker_service:

{
  ResourceName:      "docker_service.foo",
  ImportState:       true,
  ImportStateVerify: true,
},

@suzuki-shunsuke (Contributor, Author)

@mavogel
Thank you for your comment.
I'll check this weekend.

@suzuki-shunsuke (Contributor, Author)

I found that d.Get("start").(bool) is false even if start = true when a DiffSuppressFunc is set on "start". 🤔

https://github.com/suzuki-shunsuke/terraform-provider-docker/blob/dbf977499272d5bde494c823e8bff777e2fa0f3b/docker/resource_docker_container_funcs.go#L436
https://github.com/terraform-providers/terraform-provider-docker/pull/234/files#diff-39adaa97903875e2527ec754a4bda9a9R58-R61

When I comment out the DiffSuppressFunc, d.Get("start").(bool) is true.

@suzuki-shunsuke (Contributor, Author)

We can avoid the problem above by removing the DiffSuppressFunc, but ideally it should be set.

start - (Optional, bool) If true, then the Docker container will be started after creation. If false, then the container is only created.

https://www.terraform.io/docs/providers/docker/r/container.html#start

I think the start attribute is only meaningful when the container is created, so I set a DiffSuppressFunc to ignore the diff. We can't read "start" back from an existing container; it isn't part of the container's state.
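Under that reasoning, the DiffSuppressFunc for start simply suppresses every diff. Here is a simplified, stdlib-only Go sketch of the idea; the actual helper/schema signature also receives a *schema.ResourceData parameter.

```go
package main

import "fmt"

// "start" only controls whether the container is started at creation
// time and cannot be read back from an existing container, so any diff
// on it can safely be ignored. Simplified sketch of the DiffSuppressFunc
// idea; the real function in helper/schema also takes a *schema.ResourceData.
func suppressStartDiff(k, old, new string) bool {
	_, _, _ = k, old, new // the decision does not depend on the values
	return true           // "start" is not part of the container's state
}

func main() {
	// After an import, "start" is unset in state; no diff is reported.
	fmt.Println(suppressStartDiff("start", "", "true")) // prints true
}
```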

@suzuki-shunsuke (Contributor, Author) commented Jan 10, 2020

Note that a Docker image can have its own labels and environment variables, so when we define labels or environment variables on a docker_container resource, we also have to include the image's labels and environment variables in the labels and env attributes.

@suzuki-shunsuke (Contributor, Author) commented Jan 10, 2020

#237

CI fails, but the same failure occurs on the master branch, so it has nothing to do with this pull request.

This is a test of docker_service.

https://github.com/terraform-providers/terraform-provider-docker/blob/a7f6cc93009c4052f836c279e68ecd4a96ab238d/docker/resource_docker_service_test.go#L475

=== RUN   TestAccDockerService_fullSpec
--- FAIL: TestAccDockerService_fullSpec (6.43s)
    testing.go:569: Step 0 error: Check failed: Check 71/76 error: docker_service.foo: Attribute 'endpoint_spec.0.mode' expected "vip", got ""

I don't know why this test fails.

@suzuki-shunsuke (Contributor, Author)

@mavogel Please review 🙏

@mavogel (Contributor) commented Jan 11, 2020

@suzuki-shunsuke thank you for the awesome work. It looks like we have a flaky test here, and yes, it has nothing to do with your PR. I'll investigate in a separate issue, and we can deactivate this test temporarily (as we already did for another flaky test that fails only on Travis).

I'll review in the next couple of days

@mavogel mavogel self-requested a review January 12, 2020 16:39
@suzuki-shunsuke (Contributor, Author) commented Jan 25, 2020

Could you review this?

@mavogel mavogel added this to the v2.7.0 milestone Feb 1, 2020
@mavogel (Contributor) left a review

LGTM :) big thanks for this PR.
