fix: fetch few counters from ONTAP instead of UM api #1793

Merged Mar 10, 2023 (15 commits)

Conversation

@Hardikl (Contributor) commented Mar 2, 2023

Tested the Lun, Qtree, and Volume objects; Aggr will be next.

  1. LUN object: Full parity

UM API response:
/usr/software/test/bin/ontapi -s -t dfm -p 443 10.234.172.232 umadmin netapp1! lun-iter

<lun-info>
                        <alignment>indeterminate</alignment>
                        <igroups>
                                <igroup-reference>
                                        <igroup-mapping-id>7</igroup-mapping-id>
                                        <igroup-name>Manjunath</igroup-name>
                                        <igroup-resource-key>8601116e-1b56-11e9-bdec-00a098d87b4b:type=igroup,uuid=e65a313d-9748-11ed-9d0d-00a098d7b0b9</igroup-resource-key>
                                </igroup-reference>
                        </igroups>
                        <is-lun-space-reserved>false</is-lun-space-reserved>
                        <is-space-alloc-enabled>false</is-space-alloc-enabled>
                        <lun-class>regular</lun-class>
                        <lun-path>browfiled_lun_vol/browfiled_lun</lun-path>
                        <lun-size>1299594240</lun-size>
                        <lun-status>Normal</lun-status>
                        <lun-used-space>11177984</lun-used-space>
                        <mapped>true</mapped>
                        <multiprotocol-type>windows_2008</multiprotocol-type>
                        <resource-key>8601116e-1b56-11e9-bdec-00a098d87b4b:type=lun,uuid=c4451e44-9e5c-4fb1-b0ba-1e0449f7df9f</resource-key>
                        <serial-number>80F6x]N6SJxU</serial-number>
                        <volume-name>browfiled_lun_vol</volume-name>
                        <volume-resource-key>8601116e-1b56-11e9-bdec-00a098d87b4b:type=volume,uuid=de2929a8-f1f3-4d4a-aa56-00cc71cdc873</volume-resource-key>
                        <vserver-name>pukale_mixed_vserver</vserver-name>
                        <vserver-resource-key>8601116e-1b56-11e9-bdec-00a098d87b4b:type=vserver,uuid=ff25b9f1-2539-11e9-bdec-00a098d87b4b</vserver-resource-key>
</lun-info>

Harvest 2.0 response:

curl -s 'http://localhost:15001/metrics' | grep "lun_size" | grep "browfiled_lun_vol"
lun_size_used_percent{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",node="AFFA200-206-76-78-01",qtree="",lun="browfiled_lun",volume="browfiled_lun_vol",svm="pukale_mixed_vserver"} 0.8601133843129376
lun_size{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",node="AFFA200-206-76-78-01",qtree="",lun="browfiled_lun",volume="browfiled_lun_vol",svm="pukale_mixed_vserver"} 1299594240
lun_size_used{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",node="AFFA200-206-76-78-01",qtree="",lun="browfiled_lun",volume="browfiled_lun_vol",svm="pukale_mixed_vserver"} 11177984
  • lun-size -> collected in template DONE
  • lun-used-space -> collected in template DONE
  • lun-used-space-percent -> calculated in template DONE (see the PromQL sketch below)
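
The percent counter can be cross-checked in PromQL from the two collected metrics (a sketch; metric names as exported by Harvest 2.0 above):

# lun-used-space-percent equivalent: used space as a percent of LUN size
(lun_size_used / lun_size) * 100
# e.g. 11177984 / 1299594240 * 100 ≈ 0.8601, matching lun_size_used_percent above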
  2. QTREE object: Full parity

UM API response:
/usr/software/test/bin/ontapi -s -t dfm -p 443 10.234.172.232 umadmin netapp1! qtree-iter

<qtree-info>
                        <qtree-name>NFS_IO_VS1:/NFS_IO_VS1_root/&quot;my_~@#$%^&amp;*()&quot;</qtree-name>
                        <qtree-resource-key>fe866c40-cab7-11e7-bcd5-00a09865cd13:type=qtree,vserver_uuid=f67023b0-119c-11e8-8781-00a09865cd13,volume_uuid=c5780794-61d1-455e-82a5-09f5cd13a322,qtree_id=1</qtree-resource-key>
                        <qtree-size>
                                <qtree-size-info>
                                        <files-limit>0</files-limit>
                                        <files-used>11</files-used>
                                        <hard-limit>5120</hard-limit>
                                        <health-status>Critical</health-status>
                                        <soft-limit>5120</soft-limit>
                                        <space-used>8268</space-used>
                                        <status>normal</status>
                                </qtree-size-info>
                        </qtree-size>
                        <volume-name>NFS_IO_VS1_root</volume-name>
                        <volume-resource-key>fe866c40-cab7-11e7-bcd5-00a09865cd13:type=volume,uuid=c5780794-61d1-455e-82a5-09f5cd13a322</volume-resource-key>
                        <vserver-name>NFS_IO_VS1</vserver-name>
                        <vserver-resource-key>fe866c40-cab7-11e7-bcd5-00a09865cd13:type=vserver,uuid=f67023b0-119c-11e8-8781-00a09865cd13</vserver-resource-key>
</qtree-info>

Harvest 2.0 response:

curl -s 'http://localhost:15003/metrics' | grep "qtree_" | grep "&quot"                        
qtree_soft_disk_limit{cluster="ocum-mobility-01-02",datacenter="DC-bigtopZ",svm="NFS_IO_VS1",index="ocum-mobility-01-02_1",unit="Kbyte",type="tree",qtree="&quot;my_~@#$%^&amp;*()&quot;",volume="NFS_IO_VS1_root"} 5120
qtree_files_used{cluster="ocum-mobility-01-02",datacenter="DC-bigtopZ",volume="NFS_IO_VS1_root",svm="NFS_IO_VS1",index="ocum-mobility-01-02_1",type="tree",qtree="&quot;my_~@#$%^&amp;*()&quot;"} 11
qtree_file_limit{cluster="ocum-mobility-01-02",datacenter="DC-bigtopZ",volume="NFS_IO_VS1_root",svm="NFS_IO_VS1",index="ocum-mobility-01-02_1",type="tree",qtree="&quot;my_~@#$%^&amp;*()&quot;"} -1
qtree_disk_limit{cluster="ocum-mobility-01-02",datacenter="DC-bigtopZ",qtree="&quot;my_~@#$%^&amp;*()&quot;",volume="NFS_IO_VS1_root",svm="NFS_IO_VS1",index="ocum-mobility-01-02_1",unit="Kbyte",type="tree"} 5120
qtree_disk_used{cluster="ocum-mobility-01-02",datacenter="DC-bigtopZ",unit="Kbyte",type="tree",qtree="&quot;my_~@#$%^&amp;*()&quot;",volume="NFS_IO_VS1_root",svm="NFS_IO_VS1",index="ocum-mobility-01-02_1"} 8268
qtree_disk_used_pct_disk_limit{cluster="ocum-mobility-01-02",datacenter="DC-bigtopZ",type="tree",qtree="&quot;my_~@#$%^&amp;*()&quot;",volume="NFS_IO_VS1_root",svm="NFS_IO_VS1",index="ocum-mobility-01-02_1"} 161
qtree_labels{export_policy="default",oplocks="enabled",security_style="unix",status="normal",datacenter="DC-bigtopZ",cluster="ocum-mobility-01-02",qtree="&quot;my_~@#$%^&amp;*()&quot;",volume="NFS_IO_VS1_root",svm="NFS_IO_VS1"} 1.0
...

  • files-limit -> collected in template DONE (exported as -1 above where UM shows 0, i.e. no limit)
  • soft-limit -> collected in template DONE (as qtree_soft_disk_limit)
  • hard-limit -> collected in template DONE (as qtree_disk_limit)
  • files-used -> collected in template DONE
  • space-used -> collected in template DONE
  • files-used-percent -> collected in template DONE
  • space-used-percent -> collected in template DONE (see the PromQL sketch below)
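
The two percent counters can be reproduced in PromQL from the exported qtree metrics (a sketch; assumes the template computes the same ratios):

# space-used-percent equivalent: disk used as a percent of the hard disk limit
(qtree_disk_used / qtree_disk_limit) * 100
# e.g. 8268 / 5120 * 100 ≈ 161, matching qtree_disk_used_pct_disk_limit above

# files-used-percent equivalent; the > 0 filter skips qtrees with no file limit (-1)
(qtree_files_used / (qtree_file_limit > 0)) * 100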
  3. VOLUME object: Partial parity

UM API response:
/usr/software/test/bin/ontapi -s -t dfm -p 443 10.234.172.232 umadmin netapp1! volume-iter

Used one volume for the space-related metrics and another for the efficiency-related metrics.
For space: kavya_svm_lun_ss_del:/vol_with_qtree

...
<volume-name>kavya_svm_lun_ss_del:/vol_with_qtree</volume-name>
                        <volume-resource-key>e4f33f90-f75f-11e8-9ed9-00a098e3215f:type=volume,uuid=f4d66b7a-ce6d-4271-ae9e-95a4dd048b52</volume-resource-key>
                        <volume-security-info>
                                <group-id>0</group-id>
                                <permissions>755</permissions>
                                <user-id>0</user-id>
                        </volume-security-info>
                        <volume-size>
                                <actual-volume-size>2147483648</actual-volume-size>
                                <afs-avail>731639808</afs-avail>
                                <afs-daily-growth-rate>0</afs-daily-growth-rate>
                                <afs-total>2040111104</afs-total>
                                <afs-used>1308471296</afs-used>
                                <afs-used-per-day>-15437</afs-used-per-day>
                                <is-snapshot-enabled>true</is-snapshot-enabled>
                                <overwrite-reserve-avail>12288</overwrite-reserve-avail>
                                <overwrite-reserve-total>12288</overwrite-reserve-total>
                                <overwrite-reserve-used>0</overwrite-reserve-used>
                                <quota-committed-space>0</quota-committed-space>
                                <snapshot-reserve-avail>106934272</snapshot-reserve-avail>
                                <snapshot-reserve-days-until-full>1251</snapshot-reserve-days-until-full>
                                <snapshot-reserve-total>107372544</snapshot-reserve-total>
                                <snapshot-reserve-used>438272</snapshot-reserve-used>
                                <snapshot-reserve-used-per-day>85426</snapshot-reserve-used-per-day>
                                <total>2147483648</total>
                </volume-size>
...

Harvest 2.0 response:

curl -s 'http://localhost:15002/metrics' | grep "volume_" | grep "vol_with_qtree"
volume_size_total{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 2040111104
volume_size_available{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 731553792
volume_size{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 2147483648
volume_size_used{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 1308557312
volume_size_used_percent{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 64
volume_overwrite_reserve_avail{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 12288
volume_overwrite_reserve_used{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 0
volume_overwrite_reserve_total{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 12288
volume_snapshot_reserve_available{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 106938368
volume_snapshot_reserve_size{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 107372544
volume_snapshot_reserve_used_percent{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 0
volume_snapshot_reserve_used{datacenter="DC-BigtopZ",cluster="umeng-aff220-01-02",volume="vol_with_qtree",node="umeng-aff220-01",svm="kavya_svm_lun_ss_del",aggr="NSLM12_002",style="flexvol"} 434176
...

For efficiency: pukale_mixed_vserver:/browfiled_lun_vol

...
<volume-efficiency-info>
                                <compression-space-savings>2293760</compression-space-savings>
                                <compression-space-savings-percentage>20</compression-space-savings-percentage>
                                <dedupe-progress>Idle for 35236:12:46</dedupe-progress>
                                <dedupe-schedule>-</dedupe-schedule>
                                <dedupe-space-savings>7319552</dedupe-space-savings>
                                <dedupe-space-savings-percentage>62</dedupe-space-savings-percentage>
                                <dedupe-status>idle</dedupe-status>
                                <dedupe-type>regular</dedupe-type>
                                <efficiency-policy>auto</efficiency-policy>
                                <is-compression-enabled>true</is-compression-enabled>
                                <is-dedupe-enabled>true</is-dedupe-enabled>
                                <last-dedupe-begin-timestamp>1550902088</last-dedupe-begin-timestamp>
                                <last-dedupe-end-timestamp>1550902088</last-dedupe-end-timestamp>
                                <last-dedupe-scanned-size>0</last-dedupe-scanned-size>
                                <last-dedupe-status>Success</last-dedupe-status>
</volume-efficiency-info>
<volume-name>pukale_mixed_vserver:/browfiled_lun_vol</volume-name>
...

Harvest 2.0 response:

curl -s 'http://localhost:15001/metrics' | grep "volume_" | grep "browfiled_lun_vol"
volume_sis_dedup_saved_percent{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",volume="browfiled_lun_vol",node="AFFA200-206-76-78-01",svm="pukale_mixed_vserver",aggr="test",style="flexvol"} 62
volume_sis_compress_saved{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",volume="browfiled_lun_vol",node="AFFA200-206-76-78-01",svm="pukale_mixed_vserver",aggr="test",style="flexvol"} 2293760
volume_sis_dedup_saved{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",volume="browfiled_lun_vol",node="AFFA200-206-76-78-01",svm="pukale_mixed_vserver",aggr="test",style="flexvol"} 7319552
volume_sis_total_saved{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",volume="browfiled_lun_vol",node="AFFA200-206-76-78-01",svm="pukale_mixed_vserver",aggr="test",style="flexvol"} 9613312
volume_sis_total_saved_percent{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",volume="browfiled_lun_vol",node="AFFA200-206-76-78-01",svm="pukale_mixed_vserver",aggr="test",style="flexvol"} 82
volume_sis_compress_saved_percent{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",volume="browfiled_lun_vol",node="AFFA200-206-76-78-01",svm="pukale_mixed_vserver",aggr="test",style="flexvol"} 20
...
  • afs-avail -> collected in template DONE
  • afs-total -> collected in template DONE
  • afs-used -> collected in template DONE
  • snapshot-reserve-avail -> collected in template DONE
  • snapshot-reserve-total -> collected in template DONE
  • total -> collected in template DONE
  • compression-space-savings -> collected in template DONE
  • compression-space-savings-percentage -> collected in template DONE
  • dedupe-space-savings -> collected in template DONE
  • dedupe-space-savings-percentage -> collected in template DONE
  • afs-used-percent -> collected in template DONE
  • total-space-savings -> collected in template DONE
  • total-space-savings-percentage -> collected in template DONE
  • actual-volume-size -> collected in template DONE
  • snapshot-reserve-used -> collected in template DONE
  • snapshot-used-percent -> calculated in template DONE (see the PromQL sketch after this list)
  • overwrite-reserve-avail -> now collected in template DONE
  • overwrite-reserve-total -> now collected in template DONE
  • overwrite-reserve-used -> now collected in template DONE
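
The percent counters can be cross-checked in PromQL against the volume metrics exported above (a sketch, assuming the Harvest 2.0 metric names shown earlier):

# afs-used-percent equivalent
(volume_size_used / volume_size_total) * 100
# e.g. 1308557312 / 2040111104 * 100 ≈ 64, matching volume_size_used_percent above

# snapshot-used-percent equivalent
(volume_snapshot_reserve_used / volume_snapshot_reserve_size) * 100
# e.g. 434176 / 107372544 * 100 ≈ 0.4, consistent with the exported value of 0 after rounding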

Not collected and not possible to handle in template/plugin for now:

  • afs-daily-growth-rate -> not collected, not available in zapi
  • afs-used-per-day -> not collected, not available in zapi
  • snapshot-reserve-used-per-day -> not collected, not available in zapi
  • quota-committed-space -> not collected; in UM it is fetched from quota/qtree and accumulated in the AU (see the PromQL stand-in after this list)
  • last-dedupe-scanned-size -> not collected --> available via sis-get-iter with policyId
  • dedupe-status -> not collected --> available via sis-get-iter with policyId
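
For quota-committed-space in particular, since UM builds it by accumulating quota/qtree data, a rough stand-in can be assembled in PromQL from the qtree metrics Harvest already exports (an approximation, not the UM counter itself; qtree_disk_limit is in KB per its unit label):

# committed quota space per volume, summed from qtree hard disk limits (KB)
sum by (datacenter, cluster, svm, volume) (qtree_disk_limit)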

@cla-bot added the cla-signed label Mar 2, 2023
@Hardikl linked an issue Mar 2, 2023 that may be closed by this pull request
@rahulguptajss (Contributor) left a comment

REST changes?

@Hardikl (Contributor, Author) commented Mar 6, 2023

  4. AGGREGATE object: Partial parity

UM API response:
/usr/software/test/bin/ontapi -s -t dfm -p 443 10.234.172.232 umadmin netapp1! aggregate-iter

...
<aggregate-info>
                        <aggregate-name>AFFA200-206-76-78-01:puk_aggr1_on_154</aggregate-name>
                        <aggregate-resource-key>8601116e-1b56-11e9-bdec-00a098d87b4b:type=aggregate,uuid=1c735997-5c8f-4038-9cf4-86ba077a58a5</aggregate-resource-key>
                        <aggregate-size>
                                <daily-growth-rate>0</daily-growth-rate>
                                <size-available>806867300352</size-available>
                                <size-total>1217684824064</size-total>
                                <size-used>410817523712</size-used>
                                <size-used-per-day>-1432950</size-used-per-day>
                                <snapshot-reserve-avail>0</snapshot-reserve-avail>
                                <snapshot-reserve-total>0</snapshot-reserve-total>
                                <snapshot-reserve-used>0</snapshot-reserve-used>
                                <space-total-committed>2100510408704</space-total-committed>
                        </aggregate-size>
                        <aggregate-snaplock-type>non_snaplock</aggregate-snaplock-type>
                        <aggregate-state>online</aggregate-state>
                        <aggregate-status>Error</aggregate-status>
                        <block-type>64_bit</block-type>
                        <cluster-name>AFFA200-206-76-78</cluster-name>
                        <cluster-node-name>AFFA200-206-76-78-01</cluster-node-name>
                        <cluster-node-resource-key>8601116e-1b56-11e9-bdec-00a098d87b4b:type=cluster_node,uuid=4e7bc527-1b54-11e9-bdec-00a098d87b4b</cluster-node-resource-key>
                        <cluster-resource-key>8601116e-1b56-11e9-bdec-00a098d87b4b:type=cluster,uuid=8601116e-1b56-11e9-bdec-00a098d87b4b</cluster-resource-key>
                        <compression-space-savings>28045312</compression-space-savings>
                        <dedupe-space-savings>24505651200</dedupe-space-savings>
                        <has-local-root>false</has-local-root>
                        <has-partner-root>false</has-partner-root>
                        <hybrid-cache-size-total>0</hybrid-cache-size-total>
                        <is-cft-precommit>false</is-cft-precommit>
                        <is-hybrid>false</is-hybrid>
                        <is-hybrid-enabled>false</is-hybrid-enabled>
                        <raid-type>raid_dp</raid-type>
</aggregate-info>
...

Harvest 2.0 response:

curl -s 'http://localhost:15001/metrics' | grep "aggr_" | grep "puk_aggr1_on_154"
aggr_labels{type="ssd",state="online",datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",aggr="puk_aggr1_on_154",node="AFFA200-206-76-78-01"} 1.0
aggr_space_used_percent{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",aggr="puk_aggr1_on_154",node="AFFA200-206-76-78-01"} 34
aggr_space_used{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",aggr="puk_aggr1_on_154",node="AFFA200-206-76-78-01"} 410838249472
aggr_space_total{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",aggr="puk_aggr1_on_154",node="AFFA200-206-76-78-01"} 1217684824064
aggr_space_available{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",aggr="puk_aggr1_on_154",node="AFFA200-206-76-78-01"} 806846574592
aggr_compression_space_savings{datacenter="DC-BigtopZ",cluster="AFFA200-206-76-78",aggr="puk_aggr1_on_154",node="AFFA200-206-76-78-01"} 28045312
...
  • size-total -> collected in template DONE
  • size-available -> collected in template DONE
  • size-used -> collected in template DONE
  • size-used-percent -> collected in template DONE
  • compression-space-savings -> now collected in template DONE
  • dedupe-space-savings -> not collected, not available in zapi
  • space-total-committed -> not collected, not available in zapi

The above 3 are fetched at the volume level and accumulated in the AU in UM.
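
Because UM rolls these up from the volume level, a similar roll-up can be approximated in PromQL from the volume efficiency metrics Harvest already collects (an approximation of UM's accumulation, not an exact replacement):

# per-aggregate savings, summed from the member volumes' sis metrics
sum by (datacenter, cluster, aggr) (volume_sis_dedup_saved)
sum by (datacenter, cluster, aggr) (volume_sis_compress_saved)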

  • dedupe-space-savings-percent -> not collected, not available in zapi
  • compression-space-savings-percent -> not collected, not available in zapi --> not possible in the template or plugin (two separate templates are involved), but it can be done in PromQL (see the sketch after this list):
    percent = (compression-space-savings / size-total) * 100
  • size-used-per-day -> not collected, not available in zapi
  • daily-growth-rate -> not collected, not available in zapi
    (the data is available for root aggregates, but Harvest excludes those in its template)
  • snapshot-reserve-avail -> not collected, no 1-1 mapping found in zapi
  • snapshot-reserve-total -> not collected, no 1-1 mapping found in zapi
  • snapshot-reserve-used -> not collected, no 1-1 mapping found in zapi
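
The PromQL mentioned above for compression-space-savings-percent, written out against the exported aggregate metrics (a sketch of the formula given in the list):

# compression savings as a percent of total aggregate size
(aggr_compression_space_savings / aggr_space_total) * 100
# e.g. 28045312 / 1217684824064 * 100 ≈ 0.0023 for puk_aggr1_on_154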

@Hardikl (Contributor, Author) commented Mar 6, 2023

For REST changes,
LUN: full mapping
Validated and working with the changes.

QTREE: full mapping
We need to change the filter show_default_records=true; with that change, only a few qtrees would be fetched in Harvest. We need to discuss the importance and usage of this filter.

Working on the Volume and Aggregate REST mappings.

@Hardikl (Contributor, Author) commented Mar 6, 2023

For REST changes,
AGGREGATE: partial mapping

  • size-total -> collected in template
  • size-available -> collected in template
  • size-used -> collected in template
  • size-used-percent -> collected in template
  • compression-space-savings -> not collected, not available in rest
  • dedupe-space-savings -> not collected; a similar field exists in REST, but its data doesn't match the UM API (it always shows 0 for all aggregates).
  • space-total-committed -> not collected, not available in rest
  • dedupe-space-savings-percent -> not collected, not available in rest
  • compression-space-savings-percent -> not collected, not available in rest
  • size-used-per-day -> not collected, not available in rest
  • daily-growth-rate -> not collected, not available in rest
  • snapshot-reserve-avail -> not collected, not available in rest
  • snapshot-reserve-total -> not collected, not available in rest
  • snapshot-reserve-used -> not collected, not available in rest

@rahulguptajss (Contributor) commented:

Maybe we should only add what's available in ZAPI; otherwise it will remain a gap in REST for a long time. Have you checked the private CLI? Also, are any of these metrics used in the 1.6 dashboards?

@Hardikl (Contributor, Author) commented Mar 7, 2023

For REST changes,
VOLUME: partial mapping

  • afs-avail -> collected in template DONE
  • afs-total -> collected in template DONE
  • afs-used -> collected in template DONE
  • snapshot-reserve-avail -> collected in template DONE
  • snapshot-reserve-total -> collected in template DONE
  • total -> collected in template DONE
  • compression-space-savings -> collected in template DONE
  • compression-space-savings-percentage -> collected in template DONE
  • dedupe-space-savings -> collected in template DONE
  • dedupe-space-savings-percentage -> collected in template DONE
  • afs-used-percent -> collected in template DONE
  • total-space-savings -> collected in template DONE
  • total-space-savings-percentage -> collected in template DONE
  • actual-volume-size -> collected in template DONE
  • snapshot-reserve-used -> collected in template DONE
  • snapshot-used-percent -> calculated in template DONE
  • overwrite-reserve-avail -> now collected in template DONE
  • overwrite-reserve-total -> now collected in template DONE
  • overwrite-reserve-used -> now collected in template DONE
  • afs-daily-growth-rate -> not collected, not available in rest
  • afs-used-per-day -> not collected, not available in rest
  • snapshot-reserve-used-per-day -> not collected, not available in rest
  • quota-committed-space -> not collected, not available in rest
  • last-dedupe-scanned-size -> not collected, not available in rest
  • dedupe-status -> not collected, not available in rest

@Hardikl (Contributor, Author) commented Mar 7, 2023

> Maybe we should only add what's available in ZAPI; otherwise it will remain a gap in REST for a long time. Have you checked the private CLI? Also, are any of these metrics used in the 1.6 dashboards?

-> Yes, agreed.

-> Yes, I checked the private CLI for aggregate and volume; the fields were not available there either.

-> Below is the metric usage (from the UM list) in the Harvest 1.6 dashboards:
LUN: 3 metrics from the UM list are consumed in 1.6; we have them in ZAPI and REST, so we're good.
QTREE: 1.6 doesn't have a qtree dashboard, so we're good.
VOLUME: many metrics from the UM list were consumed in 1.6 dashboards; we fetch most of them, but not these:
quota-committed-space, last-dedupe-scanned-size, dedupe-status
AGGREGATE: 1.6 doesn't have an aggregate dashboard, so we're good.

@Hardikl (Contributor, Author) commented Mar 9, 2023

Summary of the above changes:
--> Counters that could be made available via template changes are now available in Harvest 2.0.

--> These are the counters we don't currently fetch in Harvest 2.0:
From Volume:
afs-daily-growth-rate, afs-used-per-day, snapshot-reserve-used-per-day, quota-committed-space, last-dedupe-scanned-size, dedupe-status
From Aggregate:
compression-space-savings, dedupe-space-savings, space-total-committed, dedupe-space-savings-percent, compression-space-savings-percent, size-used-per-day, daily-growth-rate, snapshot-reserve-avail, snapshot-reserve-total, snapshot-reserve-used

--> From a dashboard perspective, these 3 counters were consumed in Harvest 1.6 but are not available in Harvest 2.0:
quota-committed-space, last-dedupe-scanned-size, dedupe-status

@@ -55,6 +55,9 @@ counters:
- snapshot-reserve-size => snapshot_reserve_size
- percentage-snapshot-reserve => snapshot_reserve_percent
- percentage-snapshot-reserve-used => snapshot_reserve_used_percent
- overwrite-reserve => overwrite_reserve_total
- overwrite-reserve-required => overwrite_reserve_avail
@rahulguptajss (Contributor) commented Mar 9, 2023:

Is this the correct mapping? It doesn't seem to match the avail definition. Does the name overwrite_reserve_available sound better?

 *         @element    overwrite-reserve-required
 *         @type       integer, optional
 *         @range      [0..2^63-1]
 *         @desc       The reserved size (in bytes) that is required to ensure
 *                     that the reserved space is sufficient to allow all
 *                     space-reserved files and LUNs to be overwritten when the
 *                     volume is full. This field is valid only when the volume
 *                     is online.

@Hardikl (Contributor, Author) replied:

Confirmed with the UM code: they take total and used directly from ZAPI and populate available from them.

To stay in sync with the REST collector and the UM code, we follow the same approach in ZAPI.
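
In PromQL terms, the relationship the UM code uses (and that this ZAPI mapping now mirrors) would look like this (a sketch, using the volume metric names shown earlier):

# available = total - used, which is how UM populates the overwrite reserve available value
volume_overwrite_reserve_total - volume_overwrite_reserve_used
# e.g. 12288 - 0 = 12288, matching volume_overwrite_reserve_avail above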

@cgrinds merged commit dbdaae3 into main Mar 10, 2023
@cgrinds deleted the parity_with_um branch Mar 10, 2023
Linked issue: Harvest should calculate capacity metrics similar to AIQ.UM