
Can't migrate azurerm_virtual_machine with custom_data to azurerm_linux_virtual_machine #7234

Open
brownoxford opened this issue Jun 5, 2020 · 4 comments
Labels
question service/virtual-machine upstream/terraform This issue is blocked on an upstream issue within Terraform (Terraform Core/CLI, The Plugin SDK etc) v/2.x (legacy)

Comments

@brownoxford

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.17

  • provider.aws v2.49.0
  • provider.azuread v0.7.0
  • provider.azurerm v2.13.0
  • provider.random v2.2.1
  • provider.template v2.1.2

Affected Resource(s)

  • azurerm_virtual_machine
  • azurerm_linux_virtual_machine

Terraform Configuration Files

Old Config

resource "azurerm_virtual_machine" "this" {

  name                          = "${local.prefix}-vm"
  location                      = azurerm_resource_group.this.location
  resource_group_name           = azurerm_resource_group.this.name
  network_interface_ids         = ["${azurerm_network_interface.this.id}"]
  vm_size                       = "Standard_B2ms"
  delete_os_disk_on_termination = true

  boot_diagnostics {
    enabled     = true
    storage_uri = azurerm_storage_account.this.primary_blob_endpoint
  }

  storage_os_disk {
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
    name              = "${local.prefix}-vm-boot"
    os_type           = "Linux"
  }

  storage_image_reference {
    offer     = "UbuntuServer"
    publisher = "Canonical"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_profile {
    admin_username = "**REDACTED**"
    computer_name  = "${local.prefix}-vm"
    custom_data    = file("${path.module}/cloud-init.yaml")
  }

  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path     = "/home/**REDACTED**/.ssh/authorized_keys"
      key_data = "**REDACTED**"
    }
  }
}

New Config

resource "azurerm_linux_virtual_machine" "this" {
  admin_username        = "**REDACTED**"
  custom_data           = filebase64("${path.module}/cloud-init.yaml")
  location              = azurerm_resource_group.this.location
  name                  = "${local.prefix}-vm"
  network_interface_ids = ["${azurerm_network_interface.this.id}"]
  resource_group_name   = azurerm_resource_group.this.name
  size                  = "Standard_B2ms"

  admin_ssh_key {
    public_key = "**REDACTED**"
    username   = "**REDACTED**"
  }

  boot_diagnostics {
    storage_account_uri = azurerm_storage_account.this.primary_blob_endpoint
  }

  os_disk {
    caching              = "ReadWrite"
    name                 = "${local.prefix}-vm-boot"
    storage_account_type = "Premium_LRS"
  }

  source_image_reference {
    offer     = "UbuntuServer"
    publisher = "Canonical"
    sku       = "18.04-LTS"
    version   = "latest"
  }
}

Debug Output

Panic Output

Expected Behavior

I expected terraform import to import the existing configuration fully, so that I could migrate from the legacy azurerm_virtual_machine to azurerm_linux_virtual_machine without having to destroy and re-create VM instances.

Actual Behavior

terraform import does not seem to recognize the existing os_profile.custom_data, so a subsequent plan or apply treats custom_data as new and triggers a destroy/recreate.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.mgo-beta.azurerm_linux_virtual_machine.this must be replaced
-/+ resource "azurerm_linux_virtual_machine" "this" {
        admin_username                  = "**REDACTED**"
        allow_extension_operations      = true
      ~ computer_name                   = "mgo-beta-vm" -> (known after apply)
      + custom_data                     = (sensitive value)
      ...
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Steps to Reproduce

  1. Have an existing azurerm_virtual_machine with custom data specified in os_profile.custom_data.
  2. Create a new azurerm_linux_virtual_machine configuration using values translated from the existing azurerm_virtual_machine.
  3. Remove the old item from state with terraform state rm <your old azurerm_virtual_machine>.
  4. Import the existing virtual machine as an azurerm_linux_virtual_machine with terraform import ... (see the example commands after this list).
  5. Run terraform plan.
  6. Observe that the plan wants to destroy/recreate the VM.
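
For steps 3 and 4, the commands look roughly like the following. The resource addresses and the VM ID are hypothetical placeholders; adjust them to match your configuration and subscription.

terraform state rm azurerm_virtual_machine.this
terraform import azurerm_linux_virtual_machine.this \
  /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>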

Important Factoids

N/A

References

N/A

@ArcturusZhang
Contributor

Hi @brownoxford

Thanks for opening this issue!

In Azure, the custom_data of a VM only takes effect when the VM is created; it is never re-applied afterwards, and as a consequence Azure does not return the custom_data of a VM once provisioning has succeeded. Another consideration is that custom_data may contain sensitive data, which is a further reason Azure does not return it.

Since Azure does not return the custom_data, Terraform cannot do anything but leave it empty when importing a VM into state. custom_data is also a ForceNew attribute, which is why you end up in the situation you describe in this issue.

Given the nature of custom_data and Azure's behaviour here, there is not much we can do on the provider side. To work around it, you could either add a

lifecycle {
  ignore_changes = [
    custom_data,
  ]
}

to let Terraform ignore changes to custom_data, or you could manually modify the state file to add the custom_data back in. A similar situation also occurs with some password attributes: if Azure does not return them, you will run into the same problem.
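
For context, a minimal sketch of where that block sits, assuming the azurerm_linux_virtual_machine resource from the "New Config" above (the other arguments are unchanged and elided here):

resource "azurerm_linux_virtual_machine" "this" {
  # ... arguments as in the "New Config" above ...

  # custom_data stays in the configuration; ignore_changes only suppresses
  # the diff against the (empty) value that import was able to record.
  custom_data = filebase64("${path.module}/cloud-init.yaml")

  lifecycle {
    ignore_changes = [
      custom_data,
    ]
  }
}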

@kev-in-shu

Hi,

If custom_data is a ForceNew attribute, I would have expected it to be marked in the plan with "# forces replacement".

This is especially confusing because, in the previous "azurerm_virtual_machine" resource, changes to custom_data did not have the same effect. It took me quite a while to understand that custom_data was the reason Terraform wanted to recreate my virtual machine.

@ArcturusZhang
Contributor

ArcturusZhang commented Jul 2, 2020

Hi @kev-in-shu, the reason Terraform does not tell you directly that custom_data is what forces the VM to be recreated is that custom_data is not only a ForceNew attribute but also a sensitive one, and the force-replacement annotation gets overwritten by the (sensitive value) notation. That is a Terraform Core issue rather than a provider issue.

@ArcturusZhang added the upstream/terraform label on Jul 2, 2020
@lovelinuxalot
Contributor

Hi,

I still face the same issue. I already have an azurerm_linux_virtual_machine resource, so the steps I followed were:

  • Created a module for the linux_vm
  • Removed the state for the resource in terraform
  • Imported the existing resource into the Terraform state using the linux_vm module
  • Copied the custom_data section from the previous state into the new state file and pushed the updated state to the remote backend
  • Added a lifecycle block for custom_data in the module

When I run terraform plan, it still shows changes only for custom_data.
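
One thing worth checking for the module case, assuming the module wraps its own azurerm_linux_virtual_machine resource: lifecycle is a meta-argument and cannot be passed in from the calling module, so the ignore_changes entry has to be declared on the resource block inside the module itself, roughly:

# inside the module's own configuration, e.g. modules/linux_vm/main.tf (hypothetical path)
resource "azurerm_linux_virtual_machine" "this" {
  # ... existing arguments ...

  lifecycle {
    ignore_changes = [
      custom_data,
    ]
  }
}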
