Error creating volume - Cannot spawn additional jobs. Please wait for the ongoing jobs to finish and try again #73

Open
cadolphus opened this issue Aug 30, 2022 · 1 comment


@cadolphus

When I try to create multiple volumes in a single Terraform configuration, I get the following error:

│ Error: code: 409, message: Error creating volume - Cannot spawn additional jobs. Please wait for the ongoing jobs to finish and try again
│
│   with netapp-gcp_volume.cvs_volume_hw_premium,
│   on main.tf line 150, in resource "netapp-gcp_volume" "cvs_volume_hw_premium":
│  150: resource "netapp-gcp_volume" "cvs_volume_hw_premium" {
│

Here are my Terraform and provider configurations:

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~>4.32.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~>4.32.0"
    }
    netapp-gcp = {
      source  = "NetApp/netapp-gcp"
      version = "~>22.8.1"
    }
  }
}

provider "google" {
  project     = var.gcp_project_id
  region      = var.gcp_region
  zone        = var.gcp_zone
}

provider "google-beta" {
  project     = var.gcp_project_id
  region      = var.gcp_region
  zone        = var.gcp_zone
}

provider "netapp-gcp" {
  project     = var.gcp_project_id
  service_account = "xxxxxxxxxxxxxxxxxxxxxxxx"
}

Here is the first resource, which is created successfully:

resource "netapp-gcp_volume" "cvs_volume_hw_standard" {
  name               = "tf${local.prefix}-cvs-hw-standard1"
  region             = var.gcp_region
  protocol_types     = ["NFSv3","NFSv4"]
  network            = local.vpc_name
  size               = var.cvs_volume_hw_standard.size_gb
  storage_class      = "hardware" # "hardware" for CVS-Performance, "software" for CVS-Software
  service_level      = "standard" # "standard", "premium", or "extreme"
  volume_path        = var.cvs_volume_hw_standard.volume_path
  snapshot_directory = true
  snapshot_policy {
    enabled = true
    hourly_schedule {
      snapshots_to_keep = 24
      minute            = 0
    }
    daily_schedule {
      snapshots_to_keep = 7
      hour              = 0
      minute            = 0
    }
  }
  export_policy {
    rule {
      allowed_clients = "0.0.0.0/0"
      access          = "ReadWrite"
      has_root_access = "true"
      nfsv3 {
        checked = true
      }
      nfsv4 {
        checked = true
      }
    }
  }
}

Here is the second resource in the same configuration, which fails with the error above:

resource "netapp-gcp_volume" "cvs_volume_hw_premium" {
  name               = "tf${local.prefix}-cvs-hw-premium1"
  region             = var.gcp_region
  protocol_types     = ["NFSv3","NFSv4"]
  network            = local.vpc_name
  size               = var.cvs_volume_hw_premium.size_gb
  storage_class      = "hardware" # "hardware" for CVS-Performance, "software" for CVS-Software
  service_level      = "premium" # "standard", "premium", or "extreme"
  volume_path        = var.cvs_volume_hw_premium.volume_path
  snapshot_directory = true
  snapshot_policy {
    enabled = true
    hourly_schedule {
      snapshots_to_keep = 24
      minute            = 0
    }
    daily_schedule {
      snapshots_to_keep = 7
      hour              = 0
      minute            = 0
    }
  }
  export_policy {
    rule {
      allowed_clients = "0.0.0.0/0"
      access          = "ReadWrite"
      has_root_access = "true"
      nfsv3 {
        checked = true
      }
      nfsv4 {
        checked = true
      }
    }
  }
}

If I add a "depends_on" to this second resource, then it works! It therefore seems that your API doesn't support concurrent volume creation jobs. If that is the case, the provider code should handle this limitation rather than expecting the user to work around the API.
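For reference, a sketch of the workaround (arguments abbreviated; the explicit `depends_on` forces Terraform to create the volumes serially):

```hcl
resource "netapp-gcp_volume" "cvs_volume_hw_premium" {
  # Wait for the first volume's creation job to finish before starting this one.
  depends_on = [netapp-gcp_volume.cvs_volume_hw_standard]

  name   = "tf${local.prefix}-cvs-hw-premium1"
  region = var.gcp_region
  # ... remaining arguments unchanged from the resource shown above ...
}
```

Running `terraform apply -parallelism=1` also serializes operations, but it slows the entire run rather than just these two resources.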

@okrause
Contributor

okrause commented Aug 31, 2022

This seems to be a duplicate of #60.
