Use disk API to load managed disk info when VM is stopped. #1100

Merged

merged 6 commits into master from fix_stoppedvm_storageinfo on Apr 23, 2018

Conversation

@JunyiYi commented Apr 10, 2018

When a VM is stopped, vmClient doesn't return disk information.
In this pull request I took the following steps to resolve issues similar to #555:

  • Check the status of the VM using InstanceView
  • When it is stopped, use diskClient to query the managed disk information
  • Use that information to update VM schema fields such as "disk_size_gb" and "managed_disk_type"
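The steps above can be sketched as follows. This is a minimal, self-contained illustration: `InstanceViewStatus` and `vmIsStopped` are hypothetical stand-ins for the real Azure SDK types and the provider's logic, not the actual implementation.

```go
package main

import "fmt"

// InstanceViewStatus mimics the shape of the SDK's instance-view status
// entries (a stand-in for compute.InstanceViewStatus).
type InstanceViewStatus struct {
	Code *string
}

// vmIsStopped scans the instance view statuses for a deallocated power
// state; when true, the disk API would be queried instead of relying on
// the (empty) disk data from the VM response.
func vmIsStopped(statuses []InstanceViewStatus) bool {
	for _, s := range statuses {
		if s.Code != nil && *s.Code == "PowerState/deallocated" {
			return true
		}
	}
	return false
}

func main() {
	code := "PowerState/deallocated"
	fmt.Println(vmIsStopped([]InstanceViewStatus{{Code: &code}})) // true
	fmt.Println(vmIsStopped(nil))                                 // false
}
```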

```go
stopped := false
if instance.Statuses != nil {
	for _, status := range *instance.Statuses {
		if status.Code != nil && *status.Code == "PowerState/deallocated" {
```
@metacpp (Contributor) commented Apr 13, 2018

Please experiment or check with the service team: should the check be "!= running" or "== deallocated"?

@JunyiYi (Author)

I tested "PowerState/deallocating", "PowerState/deallocated" and "PowerState/starting". Other statuses didn't appear, but I would still handle them for safety.

```go
	return &resp, nil
}

func flattenAzureRmVirtualMachineDataDisk(disks *[]compute.DataDisk, stopped bool, meta interface{}) (interface{}, error) {
```
Contributor

This function is taking on too much responsibility; we could make it simpler so that it only does the flattening work.

```diff
@@ -1064,7 +1120,7 @@ func flattenAzureRmVirtualMachineOsProfileLinuxConfiguration(config *compute.Lin
 	return []interface{}{result}
 }

-func flattenAzureRmVirtualMachineOsDisk(disk *compute.OSDisk) []interface{} {
+func flattenAzureRmVirtualMachineOsDisk(disk *compute.OSDisk, stopped bool, meta interface{}) ([]interface{}, error) {
```
Contributor

Same suggestion as above.

@tombuildsstuff tombuildsstuff modified the milestones: 1.4.0, Soon Apr 16, 2018
@tombuildsstuff (Member) left a comment

hi @JunyiYi

Thanks for this PR :)

I've taken a look through and left some comments in-line, but there are two things which need resolving to proceed with this PR:

  1. We need to add an acceptance test which verifies this works as expected. We can do this by applying the configuration, then shutting down the machine and then verifying that the appropriate fields are set on the os_disk and data_disk objects.
  2. We should pull the data out into the objects returned from the Azure API prior to flattening, rather than flattening twice

Thanks!

```go
stopped := false
if instance.Statuses != nil {
	for _, status := range *instance.Statuses {
		if status.Code != nil && *status.Code == "PowerState/deallocated" {
```
Member

Azure's APIs have a tendency to return things in different casings (particularly for older resources which haven't been modified recently), so can we check this in a case-insensitive manner?
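A case-insensitive comparison in Go is typically done with `strings.EqualFold`; a minimal sketch (the helper name `isDeallocated` is hypothetical, not from the PR):

```go
package main

import (
	"fmt"
	"strings"
)

// isDeallocated reports whether a power-state code means the VM is
// deallocated, comparing case-insensitively because Azure APIs may
// return the code in varying casings.
func isDeallocated(code string) bool {
	return strings.EqualFold(code, "PowerState/deallocated")
}

func main() {
	fmt.Println(isDeallocated("PowerState/Deallocated")) // true
	fmt.Println(isDeallocated("PowerState/running"))     // false
}
```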

```diff
@@ -976,9 +1018,23 @@ func flattenAzureRmVirtualMachineDataDisk(disks *[]compute.DataDisk) interface{}
 	}
 	l["lun"] = *disk.Lun

+	if stopped && disk.ManagedDisk != nil && disk.ManagedDisk.ID != nil {
+		diskInfo, err := getStoppedVMManagedDiskInfo(*disk.ManagedDisk.ID, meta)
```
Member

Thinking about it, would it be worth pulling this regardless of the VM state whenever managed disks are used?

```diff
@@ -1083,7 +1139,22 @@ func flattenAzureRmVirtualMachineOsDisk(disk *compute.OSDisk) []interface{} {
 	}
 	result["os_type"] = string(disk.OsType)

-	return []interface{}{result}
+	if stopped && disk.ManagedDisk != nil && disk.ManagedDisk.ID != nil {
+		diskInfo, err := getStoppedVMManagedDiskInfo(*disk.ManagedDisk.ID, meta)
```
Member

(As above) I think we could probably retrieve this from the API all the time, rather than only when the VM is stopped?

```diff
@@ -1598,3 +1597,91 @@ func resourceArmVirtualMachineStorageImageReferenceHash(v interface{}) int {
 	}
 	return hashcode.String(buf.String())
 }

+func resourceArmVirtualMachineReviseStorageInfo(d *schema.ResourceData, meta interface{}) error {
```
Member

Can we merge this back into the standard Read method?

```go
	}
}

if stopped {
```
Member

If this API is the source of truth for this data, we may as well just call it all the time; otherwise we're going to get into some fun edge cases here. (Whilst this will cause a couple of extra API calls, it's not that big a deal IMO.)

```go
	}
}
if dataDisks, ok := d.GetOk("storage_data_disk"); ok {
	if err := resourceArmVirtualMachineUpdateManagedDisk(dataDisks.([]interface{}), meta); err != nil {
```
Member

Updating the state twice isn't ideal here; we should assign this to the objects used prior to the flatten methods, e.g. https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/azurerm/resource_arm_virtual_machine.go#L960
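The reviewer's point (enrich the API objects once, before flattening, rather than flattening and then patching the state) can be illustrated with simplified stand-in types. `DataDisk`, `enrichFromDiskAPI`, and `flatten` below are hypothetical sketches, not the provider's real structs or functions, and the hard-coded size stands in for a disks-client lookup:

```go
package main

import "fmt"

// DataDisk is a simplified stand-in for compute.DataDisk.
type DataDisk struct {
	Name       string
	DiskSizeGB *int32
}

// enrichFromDiskAPI fills in fields the VM API omitted; in the real
// provider this data would come from the disks client.
func enrichFromDiskAPI(d *DataDisk) {
	if d.DiskSizeGB == nil {
		size := int32(30)
		d.DiskSizeGB = &size
	}
}

// flatten converts the disk into the map Terraform stores in state.
func flatten(d DataDisk) map[string]interface{} {
	m := map[string]interface{}{"name": d.Name}
	if d.DiskSizeGB != nil {
		m["disk_size_gb"] = int(*d.DiskSizeGB)
	}
	return m
}

func main() {
	disk := DataDisk{Name: "datadisk0"}
	enrichFromDiskAPI(&disk) // enrich once, before flattening
	fmt.Println(flatten(disk)["disk_size_gb"]) // 30
}
```

This way the state is written exactly once, from a single fully-populated object.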

@JunyiYi (Author) commented Apr 16, 2018

@tombuildsstuff , I agree with adding test cases. But are there any clean ways to shut down the virtual machine (without raw Azure REST API calls) in a multi-step test case?

@tombuildsstuff (Member)

@JunyiYi we do this in many tests where we call the Azure API from within a test case - see the TestAccAzureRMResourceGroup_disappears test, for instance, where we delete the Resource Group and ensure that Terraform detects it; we should be able to achieve the same thing by calling the Stop and Start APIs in turn :)

@JunyiYi (Author) commented Apr 20, 2018

@tombuildsstuff , sure, I have added the test cases.

@tombuildsstuff (Member) left a comment

A few minor comments, but otherwise this LGTM; I'll kick off the tests now.

```go
{
	Config:  config,
	Destroy: true,
},
```
Member

Minor: we actually don't need this last step; the test framework takes care of that for us :)

```go
    disable_password_authentication = false
  }
}
`, rInt, location)
```
Member

Can we switch this to using %d rather than the index form (e.g. %[1]d)? There's an issue in golint where incorrect types aren't caught using the index method, so we instead split the variables out.

```go
}
diskResp, err := client.Get(ctx, diskID.ResourceGroup, diskID.Path["disks"])
if err != nil {
	return nil, err
```
Member

Minor: for consistency, and in order to return a more useful error message, can we update this to:

```go
id, err := parseAzureResourceID(*disk.ID)
if err != nil {
	return nil, fmt.Errorf("Error parsing Disk ID %q: %+v", *disk.ID, err)
}

resourceGroup := id.ResourceGroup
name := id.Path["disks"]
diskResp, err := client.Get(ctx, resourceGroup, name)
if err != nil {
	return nil, fmt.Errorf("Error retrieving Disk %q (Resource Group %q): %+v", name, resourceGroup, err)
}
```

```
$ acctests azurerm TestAccAzureRMVirtualMachine_hasDiskInfoWhenStopped
=== RUN   TestAccAzureRMVirtualMachine_hasDiskInfoWhenStopped
--- PASS: TestAccAzureRMVirtualMachine_hasDiskInfoWhenStopped (729.95s)
PASS
ok  	github.com/terraform-providers/terraform-provider-azurerm/azurerm	729.986s
```
@tombuildsstuff (Member) left a comment

@JunyiYi I've pushed a commit to resolve the issues I raised above, and this now LGTM 👍

@tombuildsstuff (Member)

VM Tests pass:

(screenshot: VM test results, 2018-04-23 13:17)

@tombuildsstuff tombuildsstuff merged commit 4cc1381 into master Apr 23, 2018
@tombuildsstuff tombuildsstuff deleted the fix_stoppedvm_storageinfo branch April 23, 2018 11:18
@tombuildsstuff tombuildsstuff modified the milestones: Soon, 1.4.0 Apr 23, 2018
tombuildsstuff added a commit that referenced this pull request Apr 23, 2018
ghost commented Mar 31, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@hashicorp hashicorp locked and limited conversation to collaborators Mar 31, 2020