terraform apply sometimes fails due to arbitrary ordering of volume_mounts #93

Closed
Legogris opened this issue Feb 6, 2020 · 4 comments · Fixed by #94 or #97

Legogris commented Feb 6, 2020

Terraform Version

Terraform v0.12.20

  • provider.nomad v1.4.2
  • provider.null v2.1.2
  • provider.random v2.2.1
  • provider.template v2.1.2

Nomad Version

Build 0.10.2
Protocol 2

Provider Configuration

provider "nomad" {
  version = "~> 1.4"
  address = "http://nomad-servers.service.dc2.consul:4646"
}

Affected Resource(s)

  • nomad_job

Terraform Configuration Files

job "jellyfin" {
  datacenters = ["${dc}"]
  group "jellyfin" {
    count = 1
    task "jellyfin" {
      driver = "docker"
      volume_mount {
        volume = "media"
        destination = "/media"
        read_only = true
      }
      volume_mount {
        volume = "jellyfin-config"
        destination = "/config"
        read_only = false
      }
      config {
        image = "jellyfin/jellyfin"
        labels {
          group = "popcorn"
        }
        devices = [
          {
            host_path = "/dev/dri",
            container_path = "/dev/dri"
          }
        ]
        volumes = [
          "/opt/jellyfin/cache:/cache",
        ]
        port_map {
          http = 8096
          autodiscovery = 1900
        }
      }
      resources {
        network {
          mbits = 800
          port "http" {
          }
          port "autodiscovery" {
          }
        }
      }
      service {
        name = "jellyfin"
        port = "http"
        
        tags = [
          "traefik.enable=true",
          "traefik.http.routers.jellyfin.rule=${traefik_router_rule}",
          "traefik.http.routers.jellyfin.entrypoints=http",
        ]

        check {
          type     = "tcp"
          port     = "http"
          interval = "30s"
          timeout  = "10s"
        }
      }
    }
    volume "media" {
      type = "host"
      source = "media"
      read_only = true
    }
    volume "jellyfin-config" {
      type = "host"
      source = "jellyfin-config"
      read_only = false
    }
  }
}

Expected Behavior

terraform apply succeeds consistently.

Actual Behavior

Most of the time, apply succeeds. Sometimes, after being presented with the proposed actions, it fails with the output below. The volume_mount blocks appear to get arbitrarily reordered, so the first is confused with the second and vice versa.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes


Error: Provider produced inconsistent final plan

When expanding the plan for module.popcorn.nomad_job.jellyfin to include new
values learned so far during apply, provider "registry.terraform.io/-/nomad"
produced an invalid new value for .task_groups[0].volumes[0].name: was
cty.StringVal("jellyfin-config"), but now cty.StringVal("media").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.


Error: Provider produced inconsistent final plan

When expanding the plan for module.popcorn.nomad_job.jellyfin to include new
values learned so far during apply, provider "registry.terraform.io/-/nomad"
produced an invalid new value for .task_groups[0].volumes[0].read_only: was
cty.False, but now cty.True.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.


Error: Provider produced inconsistent final plan

When expanding the plan for module.popcorn.nomad_job.jellyfin to include new
values learned so far during apply, provider "registry.terraform.io/-/nomad"
produced an invalid new value for .task_groups[0].volumes[0].source: was
cty.StringVal("jellyfin-config"), but now cty.StringVal("media").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.


Error: Provider produced inconsistent final plan

When expanding the plan for module.popcorn.nomad_job.jellyfin to include new
values learned so far during apply, provider "registry.terraform.io/-/nomad"
produced an invalid new value for .task_groups[0].volumes[1].name: was
cty.StringVal("media"), but now cty.StringVal("jellyfin-config").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.


Error: Provider produced inconsistent final plan

When expanding the plan for module.popcorn.nomad_job.jellyfin to include new
values learned so far during apply, provider "registry.terraform.io/-/nomad"
produced an invalid new value for .task_groups[0].volumes[1].read_only: was
cty.True, but now cty.False.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.


Error: Provider produced inconsistent final plan

When expanding the plan for module.popcorn.nomad_job.jellyfin to include new
values learned so far during apply, provider "registry.terraform.io/-/nomad"
produced an invalid new value for .task_groups[0].volumes[1].source: was
cty.StringVal("media"), but now cty.StringVal("jellyfin-config").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Steps to Reproduce

  1. terraform apply (the failure is intermittent, so it may take several runs; a stripped-down sketch of the triggering configuration follows)
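
For reference, a stripped-down sketch of the configuration shape that seems sufficient to trigger the bug, assuming (per the errors above) that the relevant ingredient is a group with two volume blocks and their corresponding volume_mount blocks. The names and image here are placeholders, and this is not a confirmed minimal reproduction:

job "repro" {
  datacenters = ["dc1"]
  group "repro" {
    task "repro" {
      driver = "docker"
      config {
        image = "busybox"
      }
      # Two mounts give the provider more than one element to (re)order.
      volume_mount {
        volume      = "vol-a"
        destination = "/a"
        read_only   = true
      }
      volume_mount {
        volume      = "vol-b"
        destination = "/b"
        read_only   = false
      }
    }
    # Two host volumes, mirroring "media" and "jellyfin-config" above.
    volume "vol-a" {
      type      = "host"
      source    = "vol-a"
      read_only = true
    }
    volume "vol-b" {
      type      = "host"
      source    = "vol-b"
      read_only = false
    }
  }
}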

cgbaker (Contributor) commented Feb 6, 2020

Thanks for the report, @Legogris. I'm looking into a solution.

cgbaker self-assigned this Feb 6, 2020

cgbaker (Contributor) commented Feb 6, 2020

I reproduced this using your example, and I am working on a patch.

cgbaker (Contributor) commented Feb 6, 2020

After this PR goes through, we'll go ahead and publish this in 1.4.3.
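
Once 1.4.3 is published, bumping the version constraint in the provider block from the issue should pull in the fix; a sketch, reusing the reporter's configuration:

provider "nomad" {
  # "~> 1.4.3" assumes the fix lands in the 1.4.3 release, per the comment above
  version = "~> 1.4.3"
  address = "http://nomad-servers.service.dc2.consul:4646"
}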

Legogris (Author) commented

Thanks @cgbaker <3
