
Error creating volume - Cannot spawn additional jobs. Please wait for the ongoing jobs to finish and try again #73

Open

Description

@cadolphus

When I try to create multiple volumes in a single Terraform manifest, I get an error saying:

│ Error: code: 409, message: Error creating volume - Cannot spawn additional jobs. Please wait for the ongoing jobs to finish and try again
│
│   with netapp-gcp_volume.cvs_volume_hw_premium,
│   on main.tf line 150, in resource "netapp-gcp_volume" "cvs_volume_hw_premium":
│  150: resource "netapp-gcp_volume" "cvs_volume_hw_premium" {
│

Here are my Terraform and provider configs:

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~>4.32.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~>4.32.0"
    }
    netapp-gcp = {
      source  = "NetApp/netapp-gcp"
      version = "~>22.8.1"
    }
  }
}

provider "google" {
  project     = var.gcp_project_id
  region      = var.gcp_region
  zone        = var.gcp_zone
}

provider "google-beta" {
  project     = var.gcp_project_id
  region      = var.gcp_region
  zone        = var.gcp_zone
}

provider "netapp-gcp" {
  project     = var.gcp_project_id
  service_account = "xxxxxxxxxxxxxxxxxxxxxxxx"
}

Here is the first resource, which is created successfully:

resource "netapp-gcp_volume" "cvs_volume_hw_standard" {
  name               = "tf${local.prefix}-cvs-hw-standard1"
  region             = var.gcp_region
  protocol_types     = ["NFSv3", "NFSv4"]
  network            = local.vpc_name
  size               = var.cvs_volume_hw_standard.size_gb
  storage_class      = "hardware" # "hardware" for CVS-Performance, "software" for CVS-Software
  service_level      = "standard" # "standard", "premium", or "extreme"
  volume_path        = var.cvs_volume_hw_standard.volume_path
  snapshot_directory = true
  snapshot_policy {
    enabled = true
    hourly_schedule {
      snapshots_to_keep = 24
      minute            = 0
    }
    daily_schedule {
      snapshots_to_keep = 7
      hour              = 0
      minute            = 0
    }
  }
  export_policy {
    rule {
      allowed_clients = "0.0.0.0/0"
      access          = "ReadWrite"
      has_root_access = "true"
      nfsv3 {
        checked = true
      }
      nfsv4 {
        checked = true
      }
    }
  }
}

And here is the second resource in the same manifest, which fails with the error shown above:

resource "netapp-gcp_volume" "cvs_volume_hw_premium" {
  name               = "tf${local.prefix}-cvs-hw-premium1"
  region             = var.gcp_region
  protocol_types     = ["NFSv3", "NFSv4"]
  network            = local.vpc_name
  size               = var.cvs_volume_hw_premium.size_gb
  storage_class      = "hardware" # "hardware" for CVS-Performance, "software" for CVS-Software
  service_level      = "premium" # "standard", "premium", or "extreme"
  volume_path        = var.cvs_volume_hw_premium.volume_path
  snapshot_directory = true
  snapshot_policy {
    enabled = true
    hourly_schedule {
      snapshots_to_keep = 24
      minute            = 0
    }
    daily_schedule {
      snapshots_to_keep = 7
      hour              = 0
      minute            = 0
    }
  }
  export_policy {
    rule {
      allowed_clients = "0.0.0.0/0"
      access          = "ReadWrite"
      has_root_access = "true"
      nfsv3 {
        checked = true
      }
      nfsv4 {
        checked = true
      }
    }
  }
}

If I add a "depends_on" into this second resource, then it works! Therefore, it seems that your API doesn't support concurrent volume creation jobs. If this is the case, then the code for your Provider needs to handle this rather than expecting the user to code around the API's limitations.
