
Support for specifying an existing disk to attach an azurerm_disk_access resource to #15156

Open
tspearconquest opened this issue Jan 28, 2022 · 12 comments

Comments

@tspearconquest
Contributor

tspearconquest commented Jan 28, 2022

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

We would like to request support in azurerm_disk_access for specifying an existing azurerm_managed_disk resource ID.

This would allow for attaching a newly created disk access resource to an existing Managed Disk which may have been created by an azurerm_linux_virtual_machine or azurerm_windows_virtual_machine resource.

The azurerm_linux_virtual_machine resource would need to export the disk IDs in Terraform for this to work, but then we could simply create a disk access resource and pass it the disk IDs exported from the VM resource as an input.

It might also be nice to support creating a disk access inside either of the above-mentioned virtual machine blocks, so that one can be created during the VM's creation and deleted when the VM is deleted.

New or Affected Resource(s)

  • azurerm_disk_access

Potential Terraform Configuration

In the example azurerm_linux_virtual_machine configuration below, a disk access would be created as part of the VM creation and connected to all disks created for and attached to that VM.

resource "azurerm_linux_virtual_machine" "example" {
  ...
}

resource "azurerm_disk_access" "example" {
  ... required and optional arguments
  managed_disk_ids = toset(flatten([azurerm_linux_virtual_machine.example.disk_ids]))
}

References

  • #0000
@myc2h6o
Contributor

myc2h6o commented Jan 28, 2022

Hi @tspearconquest The Disk Access feature also requires updating the Managed Disk's network_access_policy to AllowPrivate. When destroying an azurerm_disk_access resource that has managed_disk_ids, Terraform would not be able to decide whether to update network_access_policy to AllowAll or DenyAll.
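
For context, on a standalone disk this association is made through arguments on azurerm_managed_disk itself. A minimal sketch (names and values are illustrative):

resource "azurerm_disk_access" "example" {
  name                = "example-disk-access"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
}

resource "azurerm_managed_disk" "example" {
  name                 = "example-data-disk"
  resource_group_name  = azurerm_resource_group.example.name
  location             = azurerm_resource_group.example.location
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 32

  # Disk Access only takes effect together with AllowPrivate; this pairing is
  # why removing the association leaves no obvious policy to fall back to.
  network_access_policy = "AllowPrivate"
  disk_access_id        = azurerm_disk_access.example.id
}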

I can think of a workaround; maybe you can check whether it helps in your case. (It requires the latest az CLI with the --disk-access fix.)

provider "null" {
}

resource "azurerm_linux_virtual_machine" "example" {
    ...
}

resource "azurerm_disk_access" "example" {
  name                = "yicma-disk-access-0"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
}

# Use a null_resource to call az command to set disk access on the disk, only trigger when vm id is changed
resource "null_resource" "update_disk_access" {
  triggers = {
    id = azurerm_linux_virtual_machine.example.id
  }

  provisioner "local-exec" {
    command = "az disk update --resource-group ${azurerm_linux_virtual_machine.example.resource_group_name} --name ${azurerm_linux_virtual_machine.example.os_disk[0].name} --network-access-policy AllowPrivate --disk-access ${azurerm_disk_access.example.id}"
  }
}

@tspearconquest
Contributor Author

tspearconquest commented Jan 28, 2022

Thank you for the workaround. I actually don't have a VM I can test this against at the moment, but it seems like this would work fine.

I'm not sure I understand why it can't be done by Terraform, though. When a disk access is removed, the network policy should no longer apply, so it should revert to the default AllowAll, right? If you set it to DenyAll and have no Disk Access, is it still possible to mount the disk in the VM?

What about creating a separate network access policy resource for disks, similar to what azurerm_storage_account_network_rules offers for storage accounts?

Something like azurerm_managed_disk_network_rules which can take an argument to decide how to set the network rule on destroy. Just spitballing ideas.
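
For reference, the storage account pattern being compared to looks roughly like this (a sketch based on the existing azurerm_storage_account_network_rules resource; values are illustrative). A hypothetical azurerm_managed_disk_network_rules could follow the same shape, taking a managed_disk_id plus an argument controlling which policy to apply on destroy:

resource "azurerm_storage_account_network_rules" "example" {
  storage_account_id         = azurerm_storage_account.example.id
  default_action             = "Deny"
  bypass                     = ["AzureServices"]
  virtual_network_subnet_ids = [azurerm_subnet.example.id]
}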

@myc2h6o
Contributor

myc2h6o commented Jan 29, 2022

Hi @tspearconquest as far as I know, Disk Access only controls import/export and does not affect attaching the disk to a VM; please correct me if I'm wrong.

However, after looking at azurerm_storage_account_network_rules, I think it would be helpful to have a similar resource for managed disks, since there is no way to configure the Disk Access for the OS disk of azurerm_linux_virtual_machine or azurerm_windows_virtual_machine. Below is the potential configuration I've written based on that idea.
@tombuildsstuff do you think this can fit into our current design of the virtual machine/disk resources?

resource "azurerm_linux_virtual_machine" "example" {
    ...
}

resource "azurerm_disk_access" "example" {
  ...
}

// New resource type to configure the disk access for os disk of azurerm_linux_virtual_machine
resource "azurerm_managed_disk_network_access_policy" "example" {
  managed_disk_id       = azurerm_linux_virtual_machine.example.os_disk[0].id  // Requires adding os disk id to vm attribute
  network_access_policy = "AllowPrivate"
  disk_access_id        = azurerm_disk_access.example.id
}

@tspearconquest
Contributor Author

tspearconquest commented Jan 29, 2022 via email

@tombuildsstuff
Contributor

@myc2h6o this doesn't make sense as a separate resource - it'd need to be supported either by the azurerm_virtual_machine_data_disk_attachment resource or as part of #6117

Presumably this can only be configured for Data Disks and not the OS Disk, since sharing an OS Disk likely would cause other issues?

@myc2h6o
Contributor

myc2h6o commented Feb 7, 2022

@tombuildsstuff Data disks should be fine in this case, as network_access_policy can be configured on the separate disk resource. However, for the OS disk, since we don't support creating a VM from an existing disk id (only FromImage is supported), the OS disk of the VM ends up with the default network access policy of AllowAll, which allows sharing the OS disk.

From the code comments below, it seems that creating from an existing OS disk is no longer recommended. However, to let a user restrict the network access policy for the OS disk at creation time, do you think we can add the above resource? Or do you think it is worth a new feature request on the API side to make this configurable in compute.OSDisk?

// these have to be hard-coded so there's no point exposing them
// for CreateOption, whilst it's possible for this to be "Attach" for an OS Disk
// from what we can tell this approach has been superseded by provisioning from
// an image of the machine (e.g. an Image/Shared Image Gallery)
CreateOption: compute.DiskCreateOptionTypesFromImage,
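
To make the data-disk point concrete: a disk defined via azurerm_managed_disk (carrying network_access_policy and disk_access_id, as in the sketch further up) can already be attached to the VM separately, so it is only the FromImage OS disk that has nowhere to set the policy. A minimal attachment sketch, names illustrative:

resource "azurerm_virtual_machine_data_disk_attachment" "example" {
  # azurerm_managed_disk.data is assumed to be defined with
  # network_access_policy = "AllowPrivate" and disk_access_id set.
  managed_disk_id    = azurerm_managed_disk.data.id
  virtual_machine_id = azurerm_linux_virtual_machine.example.id
  lun                = 0
  caching            = "ReadWrite"
}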

@myc2h6o
Contributor

myc2h6o commented Mar 25, 2022

Found another issue #8195 about using an existing OS disk via managed_disk_id. That should make it possible to specify the disk access on an existing disk and then use that disk to create a VM.

@HSoulat

HSoulat commented Jul 14, 2022

Hi,

I'm facing exactly the same issue as everyone. From my perspective, adding a specific resource like "azurerm_managed_disk_network_access_policy" or adding options to the current "azurerm_windows_virtual_machine" os_disk settings seems the easiest way to write the configuration.


@segraef

segraef commented Nov 14, 2023

Did we find a simple solution to set the network_access_policy to either AllowAll, AllowPrivate or DenyAll for the OS disk of a VM?

@davepattie

Would it be possible to get an update on when this can be added, please? I just got pinged on an audit for having the OS disk default to AllowAll. Thanks.

@tspearconquest
Contributor Author

@segraef @davepattie another commenter posted a workaround in Azure/azure-rest-api-specs#21325 (comment) using the AzApi provider instead of the AzureRM provider. Give the code in that comment a try.
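
For anyone who cannot follow the link, the AzApi approach amounts to patching the OS disk resource directly. A rough sketch (not the linked commenter's exact code; the API version and the use of a data source to look up the OS disk id are assumptions):

data "azurerm_managed_disk" "os" {
  name                = azurerm_linux_virtual_machine.example.os_disk[0].name
  resource_group_name = azurerm_linux_virtual_machine.example.resource_group_name
}

resource "azapi_update_resource" "os_disk_access" {
  type        = "Microsoft.Compute/disks@2023-04-02"
  resource_id = data.azurerm_managed_disk.os.id

  # Older azapi releases expect a JSON string for body; newer ones also accept an HCL object.
  body = jsonencode({
    properties = {
      networkAccessPolicy = "AllowPrivate"
      diskAccessId        = azurerm_disk_access.example.id
    }
  })
}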

@segraef

segraef commented Mar 11, 2024


Thanks @tspearconquest, we use AzApi (since there is no other way), but it would be good to understand whether this is being worked on by the provider team.
