With managed disks, you no longer need to create or configure storage accounts before deployment. The Disk Resource Provider places all disks automatically to provide the best performance.

In earlier versions of CPI (up to v20), you had to create storage accounts manually and then configure them in the CF manifest file, due to the limit on the number of disks per storage account: for best performance, a standard storage account can host only up to 40 disks, and a premium storage account only up to 35 disks.

For good performance in a large-scale deployment, you therefore had to create multiple storage accounts before deployment and manually configure them in every resource pool in the manifest, which was painful. Managed Disks hide these complexities and free users from having to be aware of the limitations associated with storage accounts. We recommend that you use Managed Disks in your CF deployment by default.
When you decide to enable managed disks, you must update the Global Configuration. Updating VM Types/VM Extensions and Disk Types is optional.
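As a sketch, enabling managed disks in the global configuration comes down to a single property; the other properties shown here are illustrative placeholders for your existing settings:

```yaml
azure:
  environment: AzureCloud        # illustrative; keep your existing values
  resource_group_name: my-rg     # hypothetical resource group name
  use_managed_disks: true        # the property set by the use-managed-disks.yml ops file
```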
Below are behavior changes with a new deployment:

- Before the deployment, you no longer need to create or add a storage account. So it is not required to specify `storage_account_name` in `bosh.yml` for a new deployment.
- Deploying the BOSH director:
  - (REQUIRED) You need to enable managed disks in the Global Configuration using the ops file use-managed-disks.yml.
  - (Optional) You can specify the `storage_account_type` in Disk Types. For example, if you need an SSD persistent disk for the BOSH director, you can use `Premium_LRS`.
- Deploying Cloud Foundry:
  - (Optional) If availability sets are used to host VMs with managed disks and you want to have 3 fault domains, you need to set `platform_fault_domain_count` to `3` explicitly in VM Types/VM Extensions. The reason: when `use_managed_disks` is `true`, the default value of `platform_fault_domain_count` is `2`, because the maximum number of fault domains is 2 in some regions.
  - (Optional) You can specify the `storage_account_type` in Disk Types. For example, if you need an SSD persistent disk for a Cloud Foundry VM, you can use `Premium_LRS`.
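For example, a Disk Types entry requesting a premium SSD persistent disk might look like the following sketch (the type name and disk size are illustrative):

```yaml
disk_types:
- name: premium-disk            # hypothetical name referenced by your instance groups
  disk_size: 10240              # size in MiB; illustrative
  cloud_properties:
    storage_account_type: Premium_LRS
```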
You should NOT migrate the deployment if any of the following conditions is true. In this case, leave `use_managed_disks` as `false` in the manifest file.

- The region does not support managed disks. See Azure Products by Region for the availability of the Managed Disks feature.
You need to review the following checklist to prevent predictable migration failures.

- The default storage account is used to store stemcells uploaded by CPI. In CPI v20 or older, it is specified by `azure.storage_account_name` in the global configurations; in newer versions, this property is optional. However, in the migration scenario, please make sure the default storage account is specified by `azure.storage_account_name` in the global configurations. Otherwise, CPI won't find your default storage account, which means none of the uploaded stemcells can be re-used.
- The maximum number of fault domains of managed availability sets varies by region: either two or three managed disk fault domains per region. The table shows the number per region. If your existing deployment is using 3 fault domains, you need to check whether the region supports 3 managed disk fault domains. Please see details here.
- Unmanaged snapshots cannot be migrated to the managed version, which may cause the migration to fail. So, if you enabled snapshots in the existing deployment, you need to delete all snapshots and disable snapshots in `bosh.yml` before the migration. You can re-enable snapshots after the full migration if you want.
  - Disable snapshots in `bosh.yml`: `director: enable_snapshots: false`
  - Re-deploy the BOSH director: `bosh create-env ~/bosh.yml`
  - Delete all existing snapshots.
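The snapshot switch from the checklist above sits under the `director` block in `bosh.yml`; a minimal fragment looks like:

```yaml
director:
  enable_snapshots: false   # disable before migrating; re-enable afterwards if desired
```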
This is the recommended approach for an existing deployment. You can migrate the entire deployment to managed disks with the following steps:
- Update the manifest for deploying the BOSH director:
  - (REQUIRED) Upgrade Azure CPI to the new version.
  - (REQUIRED) You need to enable managed disks in the Global Configuration using the ops file use-managed-disks.yml.
  - (REQUIRED) You need to remove `storage_account_name` and `storage_account_max_disk_number` if they exist in VM Types/VM Extensions.
  - (Optional) You can specify the `storage_account_type` in VM Types/VM Extensions. For example, if you need an SSD root disk for the BOSH director, you can use `Premium_LRS`.
  - (Optional) You can specify the `storage_account_type` in Disk Types. For example, if you need an SSD persistent disk for the BOSH director, you can use `Premium_LRS`.
  - (Optional) You can specify the `iops` and `mbps` properties in Disk Types if `storage_account_type` is either `PremiumV2_LRS` or `UltraSSD_LRS`. For more information, read Premium SSD v2 performance or Ultra disk performance.

  NOTE: Since an existing CF deployment has a default storage account which contains uploaded stemcells, you need to keep `azure.storage_account_name` in the global configurations in `bosh.yml` while migrating. CPI will then re-use the uploaded stemcells. After the migration, you can remove the default storage account from `bosh.yml`.

- Re-deploy the BOSH director: `bosh create-env ~/bosh.yml`
- Update the manifest for deploying Cloud Foundry:
  - (REQUIRED) You need to remove `storage_account_name` and `storage_account_max_disk_number` if they exist in VM Types/VM Extensions.
  - (Optional) If availability sets are used to host VMs with managed disks and you want to have 3 fault domains, you need to set `platform_fault_domain_count` to `3` explicitly in VM Types/VM Extensions. The reason: when `use_managed_disks` is `true`, the default value of `platform_fault_domain_count` is `2`, because the maximum number of fault domains is 2 in some regions.
  - (Optional) You can specify the `storage_account_type` in VM Types/VM Extensions. For example, if you need an SSD root disk for a Cloud Foundry VM, you can use `Premium_LRS`.
  - (Optional) You can specify the `storage_account_type` in Disk Types. For example, if you need an SSD persistent disk for a Cloud Foundry VM, you can use `Premium_LRS`.
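Putting the optional Cloud Foundry settings above together, the relevant manifest fragments might look like this sketch (type names, sizes, and performance values are illustrative):

```yaml
vm_types:
- name: large                            # hypothetical name
  cloud_properties:
    instance_type: Standard_D2_v3        # illustrative VM size
    storage_account_type: Premium_LRS    # SSD root disk
    platform_fault_domain_count: 3       # only if the region supports 3 fault domains

disk_types:
- name: fast-disk                        # hypothetical name
  disk_size: 65536                       # MiB; illustrative
  cloud_properties:
    storage_account_type: PremiumV2_LRS
    iops: 3000                           # illustrative performance targets; see
    mbps: 125                            # Premium SSD v2 performance docs for limits
```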
Use `bosh recreate --force` to update your current CF deployment. In this step, all VMs will be re-created with managed disks, and all disks will be migrated to managed disks.

If the migration is successful and your applications work as expected, you should clean up resources manually.
Delete all blobs whose names start with `bosh-data` in the container `bosh`, in all storage accounts in the resource group, if they carry the tags below:

{
  `user_agent` => `bosh`,
  `migrated` => `true`
}
Delete all storage accounts in the resource group that do NOT have the tags below. Please do not delete storage accounts which may be used by others (e.g. a storage account used as a blobstore via fog).

{
  `user-agent` => `bosh`,
  `type` => `stemcell`
}
Only a managed availability set can host VMs with managed disks. However, the maximum number of fault domains of managed availability sets varies by region: either two or three managed disk fault domains per region.

By default, CPI migrates old unmanaged availability sets into managed availability sets automatically, and during the migration the fault domain number can't be changed. However, due to the rollout schedule, the new managed availability set may not support the same maximum number of fault domains as the old unmanaged availability set; in this case, the migration might be blocked. You need to wait until the region supports the same maximum number of fault domains.

Let's assume that the fault domain number is set to 3 in your existing deployment.
- For the regions which support 3 FDs, the migration will succeed.
- For the regions which only support 2 FDs:
  - If the existing deployment doesn't use a load balancer, the migration will succeed. You need to specify a new availability set name in `resource_pools`. Then CPI will create new VMs with managed disks in the new (managed) availability sets one by one. After migration, you can delete the old unmanaged availability sets manually.
  - If the existing deployment is using a load balancer, the migration will fail, because the VMs behind a load balancer have to be in the same availability set; this prevents CPI from creating VMs in the new availability set one by one. You should not use the managed disks feature until the region supports 3 FDs.
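For the no-load-balancer case, specifying a new availability set name is a small change in `resource_pools`; a sketch, where the pool and set names are hypothetical:

```yaml
resource_pools:
- name: resource_z1                      # hypothetical pool name
  cloud_properties:
    availability_set: cf-avset-managed   # a new name, so CPI creates a fresh managed availability set
```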
Before the migration, you need to:

- Create a new storage account in the resource group location, and create two containers `bosh` and `stemcell`, and one table `stemcells`.
- Copy all uploaded stemcells from the container `stemcell` of the old storage account to the new one.
- Copy all the data in the table `stemcells` of the old storage account to the new one.
Note: If you use fog with the old storage account, the blobs will still be stored in the old storage account.