
feat: Add support for scalable shares #1634

Merged
merged 3 commits into main from feat/scalable-shares
Apr 29, 2022

Conversation

@tenthirtyam (Collaborator) commented Mar 22, 2022

Description

  • Added support for scalable shares on r/vsphere_compute_cluster and r/vsphere_resource_pool for vSphere 7.0 and higher.
  • Updated the r/vsphere_compute_cluster and r/vsphere_resource_pool docs for the enhancement, including additional cleanup on the r/vsphere_compute_cluster docs.
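The new attributes can be summarized with a minimal sketch (not taken verbatim from the PR; resource and data source names are hypothetical, and the value set is assumed to be `disabled` and `scaleCpuAndMemoryShares`, matching the vSphere API enumeration):

```hcl
resource "vsphere_compute_cluster" "example" {
  name          = "example-cluster"             # hypothetical name
  datacenter_id = data.vsphere_datacenter.dc.id # hypothetical data source
  drs_enabled   = true                          # DRS must be enabled for scalable shares

  # New in this PR: scale the shares of descendant resource pools.
  drs_scale_descendants_shares = "scaleCpuAndMemoryShares"
}

resource "vsphere_resource_pool" "example" {
  name                    = "example-pool"
  parent_resource_pool_id = vsphere_compute_cluster.example.resource_pool_id

  # New in this PR: per-pool equivalent of the cluster-level setting.
  scale_descendants_shares = "scaleCpuAndMemoryShares"
}
```

Requires vSphere 7.0 or higher; omitting the attribute leaves shares scaling disabled, the default set later in this PR.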

Release Note

resource/compute_cluster: Adds support for scalable shares. (GH-1634)
resource/resource_pool: Adds support for scalable shares. (GH-1634)

References

Closes #1622

Testing

main.tf for Cluster

provider "vsphere" {
  vsphere_server       = var.vsphere_server
  user                 = var.vsphere_username
  password             = var.vsphere_password
  allow_unverified_ssl = var.vsphere_insecure
}

data "vsphere_datacenter" "datacenter" {
  name = var.vsphere_datacenter
}

resource "vsphere_compute_cluster" "this" {
  name                         = "hello"
  datacenter_id                = data.vsphere_datacenter.datacenter.id
  drs_enabled                  = true
  drs_scale_descendants_shares = "scaleCpuAndMemoryShares"
}

resource "vsphere_resource_pool" "this" {
  name                    = "world"
  parent_resource_pool_id = vsphere_compute_cluster.this.resource_pool_id
  cpu_expandable          = false
  cpu_limit               = "1"
  memory_expandable       = false
  memory_limit            = "1"
}

resource "vsphere_resource_pool" "that" {
  name                    = "itsme"
  parent_resource_pool_id = vsphere_resource_pool.this.id
  cpu_expandable          = true
  cpu_limit               = "1"
  memory_expandable       = true
  memory_limit            = "1"
}

Apply and State

terraform apply -auto-approve

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # vsphere_compute_cluster.this will be created
  + resource "vsphere_compute_cluster" "this" {
      + datacenter_id                                         = "datacenter-3"
      + dpm_automation_level                                  = "manual"
      + dpm_enabled                                           = false
      + dpm_threshold                                         = 3
      + drs_automation_level                                  = "manual"
      + drs_enable_vm_overrides                               = true
      + drs_enabled                                           = true
      + drs_migration_threshold                               = 3
      + drs_scale_descendants_shares                          = "scaleCpuAndMemoryShares"
      + ha_admission_control_host_failure_tolerance           = 1
      + ha_admission_control_performance_tolerance            = 100
      + ha_admission_control_policy                           = "resourcePercentage"
      + ha_admission_control_resource_percentage_auto_compute = true
      + ha_admission_control_resource_percentage_cpu          = 100
      + ha_admission_control_resource_percentage_memory       = 100
      + ha_admission_control_slot_policy_explicit_cpu         = 32
      + ha_admission_control_slot_policy_explicit_memory      = 100
      + ha_datastore_apd_recovery_action                      = "none"
      + ha_datastore_apd_response                             = "disabled"
      + ha_datastore_apd_response_delay                       = 180
      + ha_datastore_pdl_response                             = "disabled"
      + ha_enabled                                            = false
      + ha_heartbeat_datastore_policy                         = "allFeasibleDsWithUserPreference"
      + ha_host_isolation_response                            = "none"
      + ha_host_monitoring                                    = "enabled"
      + ha_vm_component_protection                            = "enabled"
      + ha_vm_dependency_restart_condition                    = "none"
      + ha_vm_failure_interval                                = 30
      + ha_vm_maximum_failure_window                          = -1
      + ha_vm_maximum_resets                                  = 3
      + ha_vm_minimum_uptime                                  = 120
      + ha_vm_monitoring                                      = "vmMonitoringDisabled"
      + ha_vm_restart_priority                                = "medium"
      + ha_vm_restart_timeout                                 = 600
      + host_cluster_exit_timeout                             = 3600
      + id                                                    = (known after apply)
      + name                                                  = "hello"
      + proactive_ha_automation_level                         = "Manual"
      + proactive_ha_moderate_remediation                     = "QuarantineMode"
      + proactive_ha_severe_remediation                       = "QuarantineMode"
      + resource_pool_id                                      = (known after apply)
      + vsan_enabled                                          = (known after apply)

      + vsan_disk_group {
          + cache   = (known after apply)
          + storage = (known after apply)
        }
    }

  # vsphere_resource_pool.that will be created
  + resource "vsphere_resource_pool" "that" {
      + cpu_expandable          = true
      + cpu_limit               = 1
      + cpu_reservation         = 0
      + cpu_share_level         = "normal"
      + cpu_shares              = (known after apply)
      + id                      = (known after apply)
      + memory_expandable       = true
      + memory_limit            = 1
      + memory_reservation      = 0
      + memory_share_level      = "normal"
      + memory_shares           = (known after apply)
      + name                    = "itsme"
      + parent_resource_pool_id = (known after apply)
    }

  # vsphere_resource_pool.this will be created
  + resource "vsphere_resource_pool" "this" {
      + cpu_expandable          = false
      + cpu_limit               = 1
      + cpu_reservation         = 0
      + cpu_share_level         = "normal"
      + cpu_shares              = (known after apply)
      + id                      = (known after apply)
      + memory_expandable       = false
      + memory_limit            = 1
      + memory_reservation      = 0
      + memory_share_level      = "normal"
      + memory_shares           = (known after apply)
      + name                    = "world"
      + parent_resource_pool_id = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.
vsphere_compute_cluster.this: Creating...
vsphere_compute_cluster.this: Creation complete after 0s [id=domain-c64064]
vsphere_resource_pool.this: Creating...
vsphere_resource_pool.this: Creation complete after 0s [id=resgroup-64066]
vsphere_resource_pool.that: Creating...
vsphere_resource_pool.that: Creation complete after 0s [id=resgroup-64067]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

terraform state show vsphere_compute_cluster.this
# vsphere_compute_cluster.this:
resource "vsphere_compute_cluster" "this" {
    datacenter_id                                         = "datacenter-3"
    dpm_automation_level                                  = "manual"
    dpm_enabled                                           = false
    dpm_threshold                                         = 3
    drs_automation_level                                  = "manual"
    drs_enable_predictive_drs                             = false
    drs_enable_vm_overrides                               = true
    drs_enabled                                           = true
    drs_migration_threshold                               = 3
    drs_scale_descendants_shares                          = "scaleCpuAndMemoryShares"
    ha_admission_control_host_failure_tolerance           = 1
    ha_admission_control_performance_tolerance            = 100
    ha_admission_control_policy                           = "resourcePercentage"
    ha_admission_control_resource_percentage_auto_compute = true
    ha_admission_control_resource_percentage_cpu          = 100
    ha_admission_control_resource_percentage_memory       = 100
    ha_admission_control_slot_policy_explicit_cpu         = 32
    ha_admission_control_slot_policy_explicit_memory      = 100
    ha_datastore_apd_recovery_action                      = "none"
    ha_datastore_apd_response                             = "disabled"
    ha_datastore_apd_response_delay                       = 180
    ha_datastore_pdl_response                             = "disabled"
    ha_enabled                                            = false
    ha_heartbeat_datastore_policy                         = "allFeasibleDsWithUserPreference"
    ha_host_isolation_response                            = "none"
    ha_host_monitoring                                    = "enabled"
    ha_vm_component_protection                            = "enabled"
    ha_vm_dependency_restart_condition                    = "none"
    ha_vm_failure_interval                                = 30
    ha_vm_maximum_failure_window                          = -1
    ha_vm_maximum_resets                                  = 3
    ha_vm_minimum_uptime                                  = 120
    ha_vm_monitoring                                      = "vmMonitoringDisabled"
    ha_vm_restart_additional_delay                        = 0
    ha_vm_restart_priority                                = "medium"
    ha_vm_restart_timeout                                 = 600
    host_cluster_exit_timeout                             = 3600
    id                                                    = "domain-c64064"
    name                                                  = "hello"
    proactive_ha_automation_level                         = "Manual"
    proactive_ha_enabled                                  = false
    proactive_ha_moderate_remediation                     = "QuarantineMode"
    proactive_ha_severe_remediation                       = "QuarantineMode"
    resource_pool_id                                      = "resgroup-64065"
    vsan_enabled                                          = false
}

main.tf for Resource Pools

provider "vsphere" {
  vsphere_server       = var.vsphere_server
  user                 = var.vsphere_username
  password             = var.vsphere_password
  allow_unverified_ssl = var.vsphere_insecure
}

data "vsphere_datacenter" "datacenter" {
  name = var.vsphere_datacenter
}

resource "vsphere_compute_cluster" "this" {
  name                         = "hello"
  datacenter_id                = data.vsphere_datacenter.datacenter.id
  drs_enabled                  = true
}

resource "vsphere_resource_pool" "this" {
  name                    = "world"
  parent_resource_pool_id = vsphere_compute_cluster.this.resource_pool_id
  cpu_expandable          = false
  cpu_limit               = "1"
  memory_expandable       = false
  memory_limit            = "1"
}

resource "vsphere_resource_pool" "that" {
  name                     = "itsme"
  parent_resource_pool_id  = vsphere_resource_pool.this.id
  cpu_expandable           = true
  cpu_limit                = "1"
  memory_expandable        = true
  memory_limit             = "1"
  scale_descendants_shares = "scaleCpuAndMemoryShares"
}

Apply and State

terraform state show vsphere_resource_pool.this
# vsphere_resource_pool.this:
resource "vsphere_resource_pool" "this" {
    cpu_expandable          = false
    cpu_limit               = 1
    cpu_reservation         = 0
    cpu_share_level         = "normal"
    cpu_shares              = 4000
    id                      = "resgroup-64070"
    memory_expandable       = false
    memory_limit            = 1
    memory_reservation      = 0
    memory_share_level      = "normal"
    memory_shares           = 163840
    name                    = "world"
    parent_resource_pool_id = "resgroup-64069"
}

terraform state show vsphere_resource_pool.that
# vsphere_resource_pool.that:
resource "vsphere_resource_pool" "that" {
    cpu_expandable           = true
    cpu_limit                = 1
    cpu_reservation          = 0
    cpu_share_level          = "normal"
    cpu_shares               = 4000
    id                       = "resgroup-64071"
    memory_expandable        = true
    memory_limit             = 1
    memory_reservation       = 0
    memory_share_level       = "normal"
    memory_shares            = 163840
    name                     = "itsme"
    parent_resource_pool_id  = "resgroup-64070"
    scale_descendants_shares = "scaleCpuAndMemoryShares"
}


Signed-off-by: Ryan Johnson <johnsonryan@vmware.com>
@tenthirtyam tenthirtyam added enhancement Type: Enhancement needs-review Status: Pull Request Needs Review area/clustering Area: Clustering area/availability Area: Availability labels Mar 22, 2022
@tenthirtyam tenthirtyam added this to the v2.2.0 milestone Mar 22, 2022
@tenthirtyam tenthirtyam self-assigned this Mar 22, 2022
@github-actions github-actions bot added documentation Type: Documentation provider Type: Provider size/l Relative Sizing: Large labels Mar 22, 2022
@appilon (Contributor) left a comment


Looks good overall, one request around defaults though. I'll also add test cases for the PR.

Review comments (resolved):
  • vsphere/resource_vsphere_compute_cluster.go
  • vsphere/resource_vsphere_compute_cluster.go
  • vsphere/resource_vsphere_resource_pool.go
@tenthirtyam (Collaborator, Author) commented
I'll get these changes in tomorrow, @appilon. 😃

tenthirtyam and others added 2 commits April 27, 2022 15:10
Sets the default for scalable shares to disabled.

Signed-off-by: Ryan Johnson <johnsonryan@vmware.com>
Add check to acceptance test for resource pool. Unfortunately, I don't have an environment that can run the cluster resource tests.
@appilon (Contributor) left a comment


Had to make some small changes in the read/flatteners

@appilon appilon merged commit a55c00c into main Apr 29, 2022
@appilon appilon deleted the feat/scalable-shares branch April 29, 2022 19:12
@tenthirtyam (Collaborator, Author) commented

Thanks for digging through this, Alex!

@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 30, 2022
@tenthirtyam tenthirtyam removed the needs-review Status: Pull Request Needs Review label Jun 15, 2022