Support for specifying an existing disk to attach an azurerm_disk_access resource to #15156
Comments
Hi @tspearconquest, the Disk Access feature requires updating the Managed Disk. I can think of a workaround; maybe you can check whether it would help in your case. (It requires the latest az package with the --disk-access fix.)

```hcl
provider "null" {
}

resource "azurerm_linux_virtual_machine" "example" {
  ...
}

resource "azurerm_disk_access" "example" {
  name                = "yicma-disk-access-0"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
}

# Use a null_resource to call the az CLI to set disk access on the disk;
# only triggered when the VM id changes.
resource "null_resource" "update_disk_access" {
  triggers = {
    id = azurerm_linux_virtual_machine.example.id
  }

  provisioner "local-exec" {
    command = "az disk update --resource-group ${azurerm_linux_virtual_machine.example.resource_group_name} --name ${azurerm_linux_virtual_machine.example.os_disk[0].name} --network-access-policy AllowPrivate --disk-access ${azurerm_disk_access.example.id}"
  }
}
```

---
Thank you for the workaround. I don't actually have a VM I can test this against at the moment, but it seems like it would work fine. I'm not sure I understand why it can't be done by Terraform, though. When a disk access is removed, the network policy should no longer apply, so it should revert to the default AllowAll, right? If you set it to DenyAll and have no Disk Access, is it still possible to mount the disk in the VM? What about creating a separate network access policy resource for disks?

---
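For context, standalone `azurerm_managed_disk` resources already support this via the `network_access_policy` and `disk_access_id` arguments; the gap discussed in this issue concerns disks created implicitly by the VM resources. A minimal sketch of the already-supported data-disk case (resource names are illustrative):

```hcl
resource "azurerm_disk_access" "example" {
  name                = "example-disk-access"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
}

# This works today for explicitly managed data disks, but not for the
# OS disk created implicitly by azurerm_linux_virtual_machine.
resource "azurerm_managed_disk" "data" {
  name                  = "example-data-disk"
  resource_group_name   = azurerm_resource_group.example.name
  location              = azurerm_resource_group.example.location
  storage_account_type  = "Standard_LRS"
  create_option         = "Empty"
  disk_size_gb          = 32
  network_access_policy = "AllowPrivate"
  disk_access_id        = azurerm_disk_access.example.id
}
```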
Hi @tspearconquest, as far as I know, disk access controls only import/export and does not affect attaching to a VM; please correct me if I'm wrong. However, a new resource type along these lines could configure the disk access for the OS disk:

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  ...
}

resource "azurerm_disk_access" "example" {
  ...
}

// New resource type to configure the disk access for the os disk of azurerm_linux_virtual_machine
resource "azurerm_managed_disk_network_access_policy" "example" {
  managed_disk_id       = azurerm_linux_virtual_machine.example.os_disk[0].id // Requires adding the os disk id as a vm attribute
  network_access_policy = "AllowPrivate"
  disk_access_id        = azurerm_disk_access.example.id
}
```

---
@myc2h6o this doesn't make sense as a separate resource - it'd need to be supported by the existing disk or virtual machine resources instead. Presumably this can only be configured for Data Disks and not the OS Disk, since sharing an OS Disk would likely cause other issues?

---
@tombuildsstuff Data disks should be fine in this case. From the comments below, it seems that creating a VM from an existing OS disk is not recommended any more. However, to allow a user to restrict the network access policy for the OS disk at creation time, do you think we could add the above resource? Or do you think it is worth a new feature request on the API side to make this configurable in compute.OSDisk? (See terraform-provider-azurerm/internal/services/compute/virtual_machine.go, lines 248 to 252 at d37dacd.)

---
Found another issue #8195 about using an existing OS disk.

---
Hi, I'm facing exactly the same issue as everyone else. From my perspective, adding a specific resource like azurerm_managed_disk_network_access_policy, or adding options to the current azurerm_windows_virtual_machine os_disk settings, seems the easiest way to write the configuration.

---
Did we find a simple solution for setting this?

---
Would it be possible to get an update on when this can be added, please? I just got pinged on an audit for having the OS disk default to AllowAll. Thanks!

---
@segraef @davepattie another commenter posted a workaround in Azure/azure-rest-api-specs#21325 (comment) using the AzApi provider instead of the AzureRM provider. Give the code in that comment a try.

---
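The AzApi-based workaround referenced above is roughly along these lines. This is a sketch only, not the exact code from the linked comment; the API version string is an assumption, and whether `body` takes a JSON string or an HCL object depends on your AzApi provider version:

```hcl
# Look up the implicitly created OS disk by name, since the VM
# resource does not export the OS disk's resource ID directly.
data "azurerm_managed_disk" "os" {
  name                = azurerm_linux_virtual_machine.example.os_disk[0].name
  resource_group_name = azurerm_linux_virtual_machine.example.resource_group_name
}

# Patch the disk's network access properties through the raw ARM API,
# which azurerm cannot do for a VM-created OS disk.
resource "azapi_update_resource" "os_disk_access" {
  type        = "Microsoft.Compute/disks@2022-07-02"
  resource_id = data.azurerm_managed_disk.os.id

  body = jsonencode({
    properties = {
      networkAccessPolicy = "AllowPrivate"
      diskAccessId        = azurerm_disk_access.example.id
    }
  })
}
```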
Thanks @tspearconquest, we use AzApi (since there is no other way), but it would be good to understand whether this is being worked on by the provider team.

---
Description
We would like to request support in `azurerm_disk_access` for specifying an existing `azurerm_managed_disk` resource ID. This would allow attaching a newly created disk access resource to an existing Managed Disk which may have been created by an `azurerm_linux_virtual_machine` or `azurerm_windows_virtual_machine` resource.

The `azurerm_linux_virtual_machine` resource would need to export the disk IDs in Terraform for this to work, but then we could simply create a disk access resource and provide it the disk IDs exported from the VM resource.

It might also be nice to have support for creating a disk access in Terraform inside either of the above-mentioned virtual machine blocks as well, so that one can be created during the VM's creation and deleted when the VM is deleted.
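One hypothetical shape for this request is sketched below; none of the disk-access arguments inside `os_disk` exist in the provider today, and all names are purely illustrative:

```hcl
resource "azurerm_disk_access" "example" {
  name                = "example-disk-access"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
}

resource "azurerm_linux_virtual_machine" "example" {
  ...

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"

    # Hypothetical arguments requested by this issue; they do not
    # exist in the azurerm provider today.
    network_access_policy = "AllowPrivate"
    disk_access_id        = azurerm_disk_access.example.id
  }
}
```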
New or Affected Resource(s)

- azurerm_disk_access
- azurerm_linux_virtual_machine
- azurerm_windows_virtual_machine
Potential Terraform Configuration
The below example `azurerm_linux_virtual_machine` resource would create a disk access as part of the VM creation, and connect it to all disks being created and connected to that VM.

References