
Create multiple VMs with managed azure disks #1331

Closed
ArseniiPetrovich opened this issue Jun 1, 2018 · 6 comments

Comments

@ArseniiPetrovich

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

v0.11.7
azurerm 1.6.0

Affected Resource(s)

azurerm_virtual_machine.storage_os_disk

Terraform Configuration Files

# Create virtual machine
resource "azurerm_virtual_machine" "node" {
  count                 = "${var.node_count}"
  name                  = "${var.prefix}${var.role}-vm-${var.network_name}-${count.index}"
  location              = "${var.region}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = ["${element(azurerm_network_interface.node.*.id, count.index)}"]

  # 1 vCPU, 3.5 Gb of RAM
  vm_size = "${var.machine_type}"

  storage_os_disk {
    name              = "${var.prefix}${var.role}-disk-os-${count.index}"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  storage_image_reference {
    publisher = "${lookup(var.image_publisher, var.platform)}"
    offer     = "${lookup(var.image_offer, var.platform)}"
    sku       = "${lookup(var.image_version, var.platform)}"
    version   = "latest"
  }

  # delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  os_profile {
    computer_name  = "${var.role}"
    admin_username = "poa"
  }

  os_profile_linux_config {
    disable_password_authentication = true

    ssh_keys = [
      {
        path     = "/home/poa/.ssh/authorized_keys"
        key_data = "${file(var.ssh_public_key)}"
      },
    ]
  }

  tags {
    environment = "${var.environment_name}"
    role        = "${var.role}"
    countable_role = "${var.role}-${count.index}"
  }
}

Debug Output

I'll be happy to provide it if needed.

Expected Behavior

Azurerm creates each node with its own disk.

Actual Behavior

Azurerm creates one disk and then tries to assign it to each node (it also tries to rename the existing disk, so I get the following errors):

* module.validator.azurerm_virtual_machine.node[2]: 1 error(s) occurred:

* azurerm_virtual_machine.node.2: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=409 -- Original Error: failed request: autorest/azure: Service returned an error. Status=<nil> Code="PropertyChangeNotAllowed" Message="Changing property 'osDisk.name' is not allowed." Target="osDisk.name"
* module.bootnode.azurerm_virtual_machine.node[1]: 1 error(s) occurred:

* azurerm_virtual_machine.node.1: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=409 -- Original Error: failed request: autorest/azure: Service returned an error. Status=<nil> Code="PropertyChangeNotAllowed" Message="Changing property 'osDisk.name' is not allowed." Target="osDisk.name"
* module.bootnode.azurerm_virtual_machine.node[0]: 1 error(s) occurred:

Steps to Reproduce


  1. terraform plan
  2. terraform apply


@tombuildsstuff
Member

hey @ArseniiPetrovich

Thanks for opening this issue :)

Taking a look at the configuration you've posted above, I notice two things:

name = "${var.prefix}${var.role}-vm-${var.network_name}-${count.index}"
storage_os_disk.0.name = "${var.prefix}${var.role}-disk-os-${count.index}"

The name you're using for the OS Disk appears to be different from the one used for the VM, so I'm wondering if this disk already exists, as I believe that would lead to the error message you're seeing above. Would you be able to take a look and confirm this for me? :)

Thanks!
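(A quick way to check for this is the Azure CLI. This is a minimal sketch; the resource group and name below are only examples taken from values that appear elsewhere in this issue, so substitute your own:)

# Check for an existing managed disk with the conflicting name (exits non-zero if it does not exist)
az disk show --resource-group tf-test-full-setup --name tf-bootnode-vm-POA-2 --output table

# Check for an existing VM with the same name
az vm show --resource-group tf-test-full-setup --name tf-bootnode-vm-POA-2 --output table

If either command finds a resource, Terraform's create would collide with it, which would explain the 409 PropertyChangeNotAllowed error above.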

@ArseniiPetrovich
Author

ArseniiPetrovich commented Jun 1, 2018

Hi @tombuildsstuff! Happy to see you here :)

If I understand you properly, both parameters you specified should be the same. So I've changed my config file to look as follows:

# Create virtual machine
resource "azurerm_virtual_machine" "node" {
  count                 = "${var.node_count}"
  name                  = "${var.prefix}${var.role}-vm-${var.network_name}-${count.index}"
  location              = "${var.region}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = ["${element(azurerm_network_interface.node.*.id, count.index)}"]

  # 1 vCPU, 3.5 Gb of RAM
  vm_size = "${var.machine_type}"

  storage_os_disk {
    name              = "${var.prefix}${var.role}-vm-${var.network_name}-${count.index}"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  storage_image_reference {
    publisher = "${lookup(var.image_publisher, var.platform)}"
    offer     = "${lookup(var.image_offer, var.platform)}"
    sku       = "${lookup(var.image_version, var.platform)}"
    version   = "latest"
  }

  # delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  os_profile {
    computer_name  = "${var.role}"
    admin_username = "poa"
  }

  os_profile_linux_config {
    disable_password_authentication = true

    ssh_keys = [
      {
        path     = "/home/poa/.ssh/authorized_keys"
        key_data = "${file(var.ssh_public_key)}"
      },
    ]
  }

  tags {
    environment = "${var.environment_name}"
    role        = "${var.role}"
    countable_role = "${var.role}-${count.index}"
  }
}

However, it led me to the same error as before:

module.bootnode.azurerm_virtual_machine.node[2]: Creating...
  availability_set_id:                                              "" => "<computed>"
  delete_data_disks_on_termination:                                 "" => "false"
  delete_os_disk_on_termination:                                    "" => "true"
  identity.#:                                                       "" => "<computed>"
  location:                                                         "" => "eastus"
  name:                                                             "" => "tf-bootnode-vm-POA-2"
  network_interface_ids.#:                                          "" => "1"
  network_interface_ids.0:                                          "" => "/providers/Microsoft.Network/networkInterfaces/tf-bootnode-network-card-count-2"
  os_profile.#:                                                     "" => "1"
  os_profile.90238491.admin_password:                               "<sensitive>" => "<sensitive>"
  os_profile.90238491.admin_username:                               "" => "poa"
  os_profile.90238491.computer_name:                                "" => "bootnode"
  os_profile.90238491.custom_data:                                  "" => "<computed>"
  os_profile_linux_config.#:                                        "" => "1"
  os_profile_linux_config.69840937.disable_password_authentication: "" => "true"
  os_profile_linux_config.69840937.ssh_keys.#:                      "" => "1"
  os_profile_linux_config.69840937.ssh_keys.0.key_data:             "" => "" 
  os_profile_linux_config.69840937.ssh_keys.0.path:                 "" => "/home/poa/.ssh/authorized_keys"
  resource_group_name:                                              "" => "tf-test-full-setup"
  storage_image_reference.#:                                        "" => "1"
  storage_image_reference.363552096.id:                             "" => ""
  storage_image_reference.363552096.offer:                          "" => "UbuntuServer"
  storage_image_reference.363552096.publisher:                      "" => "Canonical"
  storage_image_reference.363552096.sku:                            "" => "16.04.0-LTS"
  storage_image_reference.363552096.version:                        "" => "latest"
  storage_os_disk.#:                                                "" => "1"
  storage_os_disk.0.caching:                                        "" => "ReadWrite"
  storage_os_disk.0.create_option:                                  "" => "FromImage"
  storage_os_disk.0.disk_size_gb:                                   "" => "<computed>"
  storage_os_disk.0.managed_disk_id:                                "" => "<computed>"
  storage_os_disk.0.managed_disk_type:                              "" => "Standard_LRS"
  storage_os_disk.0.name:                                           "" => "tf-bootnode-vm-POA-2"
  tags.%:                                                           "" => "3"
  tags.countable_role:                                              "" => "bootnode-2"
  tags.environment:                                                 "" => "Terraform Demo"
  tags.role:                                                        "" => "bootnode"
  vm_size:                                                          "" => "Standard_DS1_v2"

Error: Error applying plan:

11 error(s) occurred:

* module.validator.azurerm_virtual_machine.node[1]: 1 error(s) occurred:

* azurerm_virtual_machine.node.1: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=409 -- Original Error: failed request: autorest/azure: Service returned an error. Status=<nil> Code="PropertyChangeNotAllowed" Message="Changing property 'osDisk.name' is not allowed." Target="osDisk.name"
* module.validator.azurerm_virtual_machine.node[3]: 1 error(s) occurred:

* azurerm_virtual_machine.node.3: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=409 -- Original Error: failed request: autorest/azure: Service returned an error. Status=<nil> Code="PropertyChangeNotAllowed" Message="Changing property 'osDisk.name' is not allowed." Target="osDisk.name"
...

I've partially modified the output for security (cleared os_profile_linux_config.69840937.ssh_keys.0.key_data and partially cleared network_interface_ids.0).

@tombuildsstuff
Member

@ArseniiPetrovich no problem, thanks for getting back to us here :)

If I understand you properly, both parameters you specified should be the same. So I've changed my config file to look as follows:

Kind of: I'm trying to confirm that both the Virtual Machine and the OS Disk (Managed Disk) don't already exist, as that would explain the error being returned from the Azure API here. Would you be able to confirm that neither the VM nor the OS Disk exists prior to provisioning the VM with Terraform (e.g. in the Portal)? Taking a quick look at the configuration above, it looks fine, which is why I believe one (or both) of these already exists (but I may be wrong 😄).

Thanks!

@ArseniiPetrovich
Author

I finally got it!
The problem was, in fact, that some of the previous Terraform builds were not properly destroyed. I'd been looking through the disk list for hours, and there were no disks created, because (I think) I made a mistake in one of the previous deployments. As a result I was very confused when Terraform told me it was trying to change something on a completely new resource!
However, after I saw your message, I logged into the Azure console and found out that there were old VMs that had not been properly deleted by Terraform after one of the bad deployments. I removed them manually, restarted the Terraform provisioning, and it's finally working! Thank you so much, @tombuildsstuff! Sorry for taking up your time with such an easy case, but it really drove me up the wall :D
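(For anyone cleaning up after a similar half-destroyed deployment, this is roughly what the manual cleanup looks like with the Azure CLI. It is a sketch, assuming the tf-test-full-setup resource group from the plan output above; the VM/disk name is only an example:)

# List what is actually left in the resource group
az vm list --resource-group tf-test-full-setup --output table
az disk list --resource-group tf-test-full-setup --output table

# Remove a stale VM, then its leftover managed OS disk (a separate resource)
az vm delete --resource-group tf-test-full-setup --name tf-bootnode-vm-POA-2 --yes
az disk delete --resource-group tf-test-full-setup --name tf-bootnode-vm-POA-2 --yes

Once the stale resources are gone (or imported into state), terraform apply can create the new VMs cleanly.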

@tombuildsstuff
Member

@ArseniiPetrovich

The problem was, in fact, that some of the previous Terraform builds were not properly destroyed. I'd been looking through the disk list for hours, and there were no disks created, because (I think) I made a mistake in one of the previous deployments. As a result I was very confused when Terraform told me it was trying to change something on a completely new resource!

This is actually a bug in the Azure Provider, where it works differently from the other Providers (e.g. AWS/Google): the other Providers will complain that a resource already exists and require you to import the existing resource in order to be able to modify it, whereas the Azure Provider currently just upserts it, due to the nature of the Azure APIs (which are all upserts).

It's something we're aware of and will be fixing in the near future (we're trying to decide how best to roll this out, whether in one go or gradually).
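(For comparison, the import-based workflow the other providers enforce would look roughly like this with Terraform 0.11; this is only a sketch, and the subscription ID and module address are placeholders rather than values from this issue:)

# Bring the pre-existing VM under Terraform's management instead of overwriting it
terraform import 'module.bootnode.azurerm_virtual_machine.node[0]' \
  /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/tf-test-full-setup/providers/Microsoft.Compute/virtualMachines/tf-bootnode-vm-POA-0

With the existing VM in state, a subsequent plan/apply would show the drift instead of silently upserting over the resource.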

Sorry for taking up your time with such an easy case, but it really drove me up the wall :D

Not at all - I'm glad to hear this is now working for you :)

@ghost

ghost commented Mar 31, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 31, 2020