
FiberChannel Multipath for KVM + Pure Flash Array and HPE-Primera Support #7889

Merged
38 commits merged into apache:main on Dec 9, 2023

Conversation

@rg9975 commented Aug 21, 2023

FiberChannel Multipath SCSI for KVM, Pure Flash Array and HPE-Primera Support

Description

This PR provides a new primary storage volume type, "FiberChannel", that allows access to volumes connected to hosts over Fibre Channel; it relies on Linux multipath (multipathd) for path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated away from the connector that communicates with the primary storage provider, using a ProviderAdapter interface; this keeps the code that talks to the storage provider's APIs simple, with no direct dependencies on CloudStack code. Lastly, the PR provides ProviderAdapter implementations for the HPE Primera line of storage arrays and the Pure Storage FlashArray line.
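
For illustration, the adapter contract can be pictured as a small, CloudStack-agnostic interface along the lines of the sketch below. The method names are invented for this example and are not the PR's actual ProviderAdapter API; the point is that the Primera and FlashArray adapters implement array-specific REST calls behind a surface this narrow, while the adaptive provider handles all CloudStack-facing orchestration.

    // Illustrative sketch only, not the interface shipped in this PR.
    public interface ExampleProviderAdapter {
        // create a volume on the array and return an identifier (e.g. its WWN)
        String createVolume(String name, long sizeInBytes);
        // grow an existing volume to the requested size
        void resizeVolume(String volumeId, long newSizeInBytes);
        // export the volume to a host set so multipathd on the KVM host can discover its paths
        void attachVolume(String volumeId, String hostSet);
        // remove the export from the host set
        void detachVolume(String volumeId, String hostSet);
        // delete the volume on the array
        void deleteVolume(String volumeId);
        // report capacity and usage so CloudStack can plan allocations
        long getCapacityInBytes();
        long getUsedInBytes();
    }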

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

How Has This Been Tested?

Testing involves the following setup:

  1. An HPE 3PAR A670 deployment with 4 nodes.
  2. A Pure FlashArray FA-X70R2 deployment with 2 nodes.
  3. Two physical servers deployed with Rocky Linux 8.7.
  4. Fibre Channel switching infrastructure providing 4 paths between the physical servers and the storage appliances (see the check commands after this list).
  5. A CloudStack zone configured with the KVM hypervisor and both physical servers connected.
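
Path discovery on the KVM hosts can be sanity-checked with the standard device-mapper multipath tooling before running the scenarios below (commands only, output omitted; mpathconf is the RHEL/Rocky helper):

    mpathconf --enable            # write a default /etc/multipath.conf and enable the service
    systemctl status multipathd   # confirm the daemon is running
    multipath -ll                 # list multipath maps and the FC paths behind each LUN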

The following testing scenarios are used:

  1. Create New provider Storage Pool for Zone
  2. Create New provider Storage Pool for Cluster
  3. Update provider Storage Pool for Zone
  4. Update provider Storage Pool for Cluster
  5. Create VM with Root Disk using provider pool
  6. Create VM with Root and Data Disk using provider pool
  7. Create VM with Root Disk using NFS and Data Disk on provider pool
  8. Create VM with Root Disk on provider Pool and Data Disk on NFS
  9. Snapshot root disk with VM using provider Pool for root disk
  10. Snapshot data disk with VM using provider Pool for data disk
  11. Snapshot VM (non-memory) with root and data disk using provider pool
  12. Snapshot VM (non-memory) with root disk using Primera pool and data disk using NFS
  13. Snapshot VM (non-memory) with root disk using NFS pool and data disk using provider pool
  14. Create new template from previous snapshot root disk on provider pool
  15. Create new volume from previous snapshot root disk on provider pool
  16. Create new volume from previous snapshot data disk on provider pool
  17. Create new VM using template created from provider root snapshot and using Primera as root volume pool
  18. Create new VM using template created from provider root snapshot and using NFS as root volume pool
  19. Delete previously created snapshot
  20. Detach a Primera volume from a non-running VM
  21. Attach a Primera volume to a running VM
  22. Attach a Primera volume to a non-running VM
  23. Primera-only: Create a 'thin' Disk Offering tagged for the Primera pool, then provision and attach a data volume to a VM using this offering (ttpv=true, reduce=false); see the cmk sketch after this list
  24. Primera-only: Create a 'sparse' Disk Offering tagged for the Primera pool, then provision and attach a data volume to a VM using this offering (ttpv=false, reduce=true)
  25. Primera-only: Create a 'fat' Disk Offering tagged for the Primera pool, then provision and attach a data volume to a VM using this offering (should fail, as 'fat' is not supported)
  26. Perform volume migration of root volume from provider pool to NFS pool on stopped VM
  27. Perform volume migration of root volume from NFS pool to provider pool on stopped VM
  28. Perform volume migration of data volume from provider pool to NFS pool on stopped VM
  29. Perform volume migration of data volume from NFS pool to provider pool on stopped VM
  30. Perform VM data migration for a VM with 1 or more data volumes from all volumes on provider pool to all volumes on NFS pool
  31. Perform VM data migration for a VM with 1 or more data volumes from all volumes on NFS pool to all volumes on provider pool
  32. Perform live migration of a VM with a provider root disk
  33. Perform live migration of a VM with a provider data disk and NFS root disk
  34. Perform live migration of a VM with a provider root disk and NFS data disk
  35. Perform volume migration between 2 provider pools on the same backend provider IP address
  36. Perform volume migration between 2 provider pools on different provider IP address
  37. Perform volume migration from 1 provider to another provider and start/confirm with VM.
  38. Perform volume migration back from 2nd provider to 1st provider and start/confirm with VM.
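
For scenarios 23-25, a tagged Disk Offering of each provisioning type can be created with CloudMonkey (cmk) roughly as follows. The offering names and the 'primera' tag value are examples; provisioningtype is the standard createDiskOffering parameter:

    create diskoffering name=primera-thin displaytext="Primera thin" disksize=100 storagetype=shared provisioningtype=thin tags=primera
    create diskoffering name=primera-sparse displaytext="Primera sparse" disksize=100 storagetype=shared provisioningtype=sparse tags=primera
    create diskoffering name=primera-fat displaytext="Primera fat" disksize=100 storagetype=shared provisioningtype=fat tags=primera

Per scenario 25, provisioning a volume from the 'fat' offering is expected to fail, since full provisioning is not supported on Primera.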

@boring-cyborg bot commented Aug 21, 2023

Congratulations on your first Pull Request and welcome to the Apache CloudStack community! If you have any issues or are unsure about anything, please check our Contribution Guide (https://github.com/apache/cloudstack/blob/main/CONTRIBUTING.md).

@weizhouapache
Member

great work @rg9975

@DaanHoogland
Contributor

huge job with great description. looking forward to reviewing the code @rg9975 !

@DaanHoogland requested a review from slavkap, August 22, 2023 08:28
@DaanHoogland
Contributor

@blueorangutan package

@blueorangutan

@DaanHoogland a [SF] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 6847

@DaanHoogland
Contributor

@blueorangutan test

@DaanHoogland added this to the 4.19.0.0 milestone, Aug 22, 2023
@blueorangutan

@DaanHoogland a [SF] Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-7515)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 44143 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr7889-t7515-kvm-centos7.zip
Smoke tests completed. 112 look OK, 1 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_08_upgrade_kubernetes_ha_cluster Failure 713.10 test_kubernetes_clusters.py

@rohityadavcloud
Member

@rg9975 can you check the build failure in the GitHub actions job?

@DaanHoogland
Contributor

@blueorangutan package

@blueorangutan

@DaanHoogland a [SF] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@codecov bot commented Sep 18, 2023

Codecov Report

Attention: 3579 lines in your changes are missing coverage. Please review.

Comparison is base (d3cad42) 29.15% compared to head (b58a5db) 28.47%.
Report is 6 commits behind head on main.

Files Patch % Lines
...atastore/adapter/flasharray/FlashArrayAdapter.java 0.00% 535 Missing ⚠️
...rage/datastore/adapter/primera/PrimeraAdapter.java 0.00% 484 Missing ⚠️
.../datastore/driver/AdaptiveDataStoreDriverImpl.java 1.82% 430 Missing ⚠️
...pervisor/kvm/storage/MultipathSCSIAdapterBase.java 0.00% 369 Missing ⚠️
...orage/datastore/adapter/primera/PrimeraVolume.java 0.00% 203 Missing ⚠️
...tore/lifecycle/AdaptiveDataStoreLifeCycleImpl.java 2.35% 166 Missing ⚠️
...torage/motion/StorageSystemDataMotionStrategy.java 0.00% 113 Missing ⚠️
.../storage/datastore/adapter/primera/PrimeraCpg.java 0.00% 76 Missing ⚠️
...datastore/adapter/ProviderAdapterDiskOffering.java 0.00% 70 Missing ⚠️
...datastore/adapter/flasharray/FlashArrayVolume.java 0.00% 70 Missing ⚠️
... and 72 more
Additional details and impacted files
@@             Coverage Diff              @@
##               main    #7889      +/-   ##
============================================
- Coverage     29.15%   28.47%   -0.69%     
+ Complexity    31278    30732     -546     
============================================
  Files          5251     5319      +68     
  Lines        368763   372351    +3588     
  Branches      53759    54168     +409     
============================================
- Hits         107529   106018    -1511     
- Misses       246530   251802    +5272     
+ Partials      14704    14531     -173     
Flag Coverage Δ
simulator-marvin-tests 24.24% <1.84%> (-0.80%) ⬇️
uitests 4.44% <0.00%> (-0.01%) ⬇️
unit-tests 14.75% <0.03%> (-0.16%) ⬇️

Flags with carried forward coverage won't be shown.


@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 7041

@rohityadavcloud
Member

@blueorangutan package

@blueorangutan

@rohityadavcloud a [SF] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@DaanHoogland reopened this Oct 4, 2023
@shwstppr
Contributor

shwstppr commented Dec 6, 2023

@rg9975 can you please address the logging-related comments? If it is too much I can push those changes, if you don't mind :-)
We would definitely like to get this in 4.19

@DaanHoogland
Contributor

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✖️ debian ✔️ suse15. SL-JID 7951

@shwstppr
Contributor

shwstppr commented Dec 6, 2023

@blueorangutan test

@blueorangutan

@shwstppr a [SL] Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-8495)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 34947 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr7889-t8495-kvm-centos7.zip
Smoke tests completed. 110 look OK, 11 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
ContextSuite context=TestDeployVM>:setup Error 0.00 test_vm_life_cycle.py
test_01_secure_vm_migration Error 0.02 test_vm_life_cycle.py
test_02_unsecure_vm_migration Error 0.01 test_vm_life_cycle.py
test_03_secured_to_nonsecured_vm_migration Error 0.01 test_vm_life_cycle.py
test_04_nonsecured_to_secured_vm_migration Error 0.01 test_vm_life_cycle.py
ContextSuite context=TestVMLifeCycle>:setup Error 3.15 test_vm_life_cycle.py
ContextSuite context=TestVMSchedule>:setup Error 0.00 test_vm_schedule.py
ContextSuite context=TestVmSnapshot>:setup Error 3.10 test_vm_snapshots.py
test_04_deploy_vnf_appliance Error 93.36 test_vnf_templates.py
test_04_deploy_vnf_appliance Error 93.36 test_vnf_templates.py
test_05_delete_vnf_template Error 0.07 test_vnf_templates.py
ContextSuite context=TestVnfTemplates>:teardown Error 1.15 test_vnf_templates.py
ContextSuite context=TestCreateVolume>:setup Error 0.00 test_volumes.py
test_01_root_volume_encryption Error 0.01 test_volumes.py
test_02_data_volume_encryption Error 0.01 test_volumes.py
test_03_root_and_data_volume_encryption Error 0.01 test_volumes.py
ContextSuite context=TestVolumes>:setup Error 4.30 test_volumes.py
test_01_verify_ipv6_vpc Error 3.31 test_vpc_ipv6.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL Error 5.23 test_vpc_redundant.py
test_02_redundant_VPC_default_routes Error 5.16 test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers Error 5.12 test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics Error 5.08 test_vpc_redundant.py
test_05_rvpc_multi_tiers Error 5.06 test_vpc_redundant.py
test_01_VPC_nics_after_destroy Error 4.04 test_vpc_router_nics.py
test_02_VPC_default_routes Error 4.07 test_vpc_router_nics.py
test_01_redundant_vpc_site2site_vpn Failure 4.38 test_vpc_vpn.py
test_01_vpc_site2site_vpn_multiple_options Failure 3.32 test_vpc_vpn.py
test_01_vpc_remote_access_vpn Failure 1.14 test_vpc_vpn.py
test_01_vpc_site2site_vpn Failure 3.31 test_vpc_vpn.py
test_01_cancel_host_maintenace_with_no_migration_jobs Error 0.05 test_host_maintenance.py
test_02_cancel_host_maintenace_with_migration_jobs Error 0.04 test_host_maintenance.py
test_03_cancel_host_maintenace_with_migration_jobs_failure Error 0.04 test_host_maintenance.py
test_01_cancel_host_maintenance_ssh_enabled_agent_connected Error 0.01 test_host_maintenance.py
test_03_cancel_host_maintenance_ssh_disabled_agent_connected Error 0.01 test_host_maintenance.py
test_04_cancel_host_maintenance_ssh_disabled_agent_disconnected Error 0.01 test_host_maintenance.py
test_hostha_enable_ha_when_host_in_maintenance Error 334.38 test_hostha_kvm.py

@rohityadavcloud
Member

@rohityadavcloud left a comment

LGTM - let's merge this as/when it passes regression tests; we can't test HPE Primera ourselves, so we rely on the author's own testing.

@rohityadavcloud
Member

@blueorangutan package

@blueorangutan

@rohityadavcloud a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 7961

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-8498)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 38932 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr7889-t8498-kvm-centos7.zip
Smoke tests completed. 109 look OK, 12 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_11_destroy_ssvm Failure 912.88 test_ssvm.py
test_list_system_vms_metrics_history Failure 0.18 test_metrics_api.py
ContextSuite context=TestDeployVM>:setup Error 0.00 test_vm_life_cycle.py
test_01_secure_vm_migration Error 0.02 test_vm_life_cycle.py
test_02_unsecure_vm_migration Error 0.01 test_vm_life_cycle.py
test_03_secured_to_nonsecured_vm_migration Error 0.01 test_vm_life_cycle.py
test_04_nonsecured_to_secured_vm_migration Error 0.01 test_vm_life_cycle.py
ContextSuite context=TestVMLifeCycle>:setup Error 3.18 test_vm_life_cycle.py
ContextSuite context=TestVMSchedule>:setup Error 0.00 test_vm_schedule.py
ContextSuite context=TestVmSnapshot>:setup Error 3.24 test_vm_snapshots.py
test_04_deploy_vnf_appliance Error 93.53 test_vnf_templates.py
test_04_deploy_vnf_appliance Error 93.53 test_vnf_templates.py
test_05_delete_vnf_template Error 0.05 test_vnf_templates.py
ContextSuite context=TestVnfTemplates>:teardown Error 0.13 test_vnf_templates.py
ContextSuite context=TestCreateVolume>:setup Error 0.00 test_volumes.py
test_01_root_volume_encryption Error 0.02 test_volumes.py
test_02_data_volume_encryption Error 0.01 test_volumes.py
test_03_root_and_data_volume_encryption Error 0.01 test_volumes.py
ContextSuite context=TestVolumes>:setup Error 4.36 test_volumes.py
test_01_verify_ipv6_vpc Error 3.33 test_vpc_ipv6.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL Error 4.24 test_vpc_redundant.py
test_02_redundant_VPC_default_routes Error 4.21 test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers Error 4.13 test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics Error 5.22 test_vpc_redundant.py
test_05_rvpc_multi_tiers Error 4.28 test_vpc_redundant.py
test_01_VPC_nics_after_destroy Error 3.17 test_vpc_router_nics.py
test_02_VPC_default_routes Error 4.14 test_vpc_router_nics.py
test_01_redundant_vpc_site2site_vpn Failure 3.38 test_vpc_vpn.py
test_01_vpc_site2site_vpn_multiple_options Failure 3.38 test_vpc_vpn.py
test_01_vpc_remote_access_vpn Failure 1.14 test_vpc_vpn.py
test_01_vpc_site2site_vpn Failure 3.35 test_vpc_vpn.py
test_01_cancel_host_maintenace_with_no_migration_jobs Error 0.05 test_host_maintenance.py
test_02_cancel_host_maintenace_with_migration_jobs Error 0.04 test_host_maintenance.py
test_03_cancel_host_maintenace_with_migration_jobs_failure Error 0.04 test_host_maintenance.py
test_01_cancel_host_maintenance_ssh_enabled_agent_connected Error 0.01 test_host_maintenance.py
test_03_cancel_host_maintenance_ssh_disabled_agent_connected Error 0.01 test_host_maintenance.py
test_04_cancel_host_maintenance_ssh_disabled_agent_disconnected Error 0.01 test_host_maintenance.py

@DaanHoogland
Contributor

I'm insane
@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-8524)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 51775 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr7889-t8524-kvm-centos7.zip
Smoke tests completed. 119 look OK, 2 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_08_migrate_vm Error 44.91 test_vm_life_cycle.py
test_01_redundant_vpc_site2site_vpn Failure 386.79 test_vpc_vpn.py

@shwstppr merged commit 1031c31 into apache:main Dec 9, 2023
dhslove pushed a commit to ablecloud-team/ablestack-cloud that referenced this pull request Dec 11, 2023
…port (apache#7889)
dhslove added a commit to ablecloud-team/ablestack-cloud that referenced this pull request Dec 11, 2023
dhslove pushed a commit to ablecloud-team/ablestack-cloud that referenced this pull request Jan 17, 2024
…port (apache#7889)
@rg9975 deleted the primera-pure-storage-provider-feature branch March 31, 2024 01:37
@meletisf

Hello, are there any plans to support Alletra 5/6K and MP? This is probably compatible with 9K, since it is based on Primera, but with the general direction being the MP platform (officially GreenLake for Block), it would be interesting to see support for the new generations.

@rg9975
Author

rg9975 commented May 22, 2024

> Hello, are there any plans to support Alletra 5/6K and MP? This is probably compatible with 9K, since it is based on Primera, but with the general direction being the MP platform (officially GreenLake for Block), it would be interesting to see support for the new generations.

At this time we do not have plans to test against these newer devices, but the Alletra line appears to support HPE Web Services API 1.10, which is what the driver is coded to. I suspect it may work as-is or with minor tweaks.

@meletisf

meletisf commented May 22, 2024

> Hello, are there any plans to support Alletra 5/6K and MP? This is probably compatible with 9K, since it is based on Primera, but with the general direction being the MP platform (officially GreenLake for Block), it would be interesting to see support for the new generations.
>
> At this time we do not have plans to test against these newer devices, but the Alletra line appears to support HPE Web Services API 1.10, which is what the driver is coded to. I suspect it may work as-is or with minor tweaks.

Good point about the Web Services API. I will raise it with the Alletra engineering team to verify. Most likely I will test it on A6K and MP arrays and let you know!

@knowanand

Hello,

We’re using a 3PAR 8000 series array with API version 1.6.

Although the documentation mentions support for 3PAR/Primera, we cannot integrate our existing 3PAR storage setup with CloudStack.

Is there a workaround or method for integrating our current 3PAR infrastructure without making major changes to the storage layer?
It just stays on the error shown below.

We have tried both with

http://admin:pass@172.xx.xx.xx:8008/api/v1?cpg=cloudstack&hostset=kvm-cs&capacitybytes=4000000000000

and

http://admin:pass@172.xx.xx.xx:8008/api/v1?cpg=cloudstack&hostset=kvm-cs

Irrespective of whether the capacity bytes are in the API URL, or in the capacity bytes section of the UI, it always comes back with

Request failed. (530)
Failed to add data store: Capacity bytes not available from the storage provider, user provided capacity bytes must be specified

On the 3PAR CLI, showwsapi shows that the service is enabled and active:

fs9000 cli% showwsapi
-Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version- ------------API_URL-------------
Enabled Active Enabled 8008 Enabled 8080 1.6.5 https://172.xx.xx.xx:8080/api/v1
fs9000 cli%

@rg9975
Author

rg9975 commented May 6, 2025

I'm unsure why you would be seeing that. The driver reads capacityBytes from the provided inputs and, if it's set, validates that it does not exceed the array's capacityInBytes. When it is not set, the driver attempts to get capacityInBytes from the array (if available) and use that, or throws the exception you are seeing. Perhaps you can try using a cmk command to see if something is happening between the UI and the API when the inputs are provided? You may also be able to set capacityInBytes on the array (assuming the version of the Web Services API you're using supports that). We have only tested against HPE Web Services API v1.10+.

    ProviderVolumeStorageStats stats = api.getManagedStorageStats();
    // user supplied a capacityBytes value: validate it against what the array reports, then use it
    if (capacityBytes != null && capacityBytes != 0 && stats != null) {
        if (stats.getCapacityInBytes() > 0) {
            if (stats.getCapacityInBytes() < capacityBytes) {
                throw new InvalidParameterValueException("Capacity bytes provided exceeds the capacity of the storage endpoint: provided by user: " + capacityBytes + ", storage capacity from storage provider: " + stats.getCapacityInBytes());
            }
        }
        parameters.setCapacityBytes(capacityBytes);
    }
    // if we have no user-provided capacity bytes, use the ones provided by storage
    // (note: if stats came back null from the array, a user-supplied capacityBytes
    // also lands here, which would produce the error quoted above)
    else {
        if (stats == null || stats.getCapacityInBytes() <= 0) {
            throw new InvalidParameterValueException("Capacity bytes not available from the storage provider, user provided capacity bytes must be specified");
        }
        parameters.setCapacityBytes(stats.getCapacityInBytes());
    }

@knowanand

I'm working with a 3PAR system where the API version is 1.6.5. I understand some methods might differ with this version, but since I’m entering capacity manually through the UI, I assume that part should still work as expected.

As for CMK, I installed the EXE and connected using credentials, but I’m a bit stuck on how to add the primary 3PAR storage to a zone from the CLI. I haven’t found clear guidance on this yet, so any help or pointers would be really appreciated!

Thanks in advance!
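
For reference, a zone-wide primary storage pool is added through the createStoragePool API, which cmk exposes as sketched below. The provider name 'Primera' is an assumption here (verify it with listStorageProviders on your installation), and the URL simply reuses the format quoted above:

    list storageproviders type=primary
    create storagepool scope=zone zoneid=<zone-uuid> hypervisor=KVM provider=Primera name=primera-pool capacitybytes=4000000000000 url="http://admin:pass@172.xx.xx.xx:8008/api/v1?cpg=cloudstack&hostset=kvm-cs"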
