
type confusion in zfs input module #4059

Closed
grimreaper opened this issue Apr 22, 2018 · 16 comments · Fixed by #4510

@grimreaper

Bug report

Relevant telegraf.conf:

# Read metrics of ZFS from arcstats, zfetchstats, vdev_cache_stats, and pools
[[inputs.zfs]]
  ## ZFS kstat path. Ignored on FreeBSD
  ## If not specified, then default is:
  # kstatPath = "/proc/spl/kstat/zfs"

  ## By default, telegraf gathers all zfs stats
  ## If not specified, then default is:
  # kstatMetrics = ["arcstats", "zfetchstats", "vdev_cache_stats"]

  ## By default, don't gather zpool stats
  #poolMetrics = true

System info:

∴uname -rms
FreeBSD 12.0-CURRENT amd64
∴telegraf version
Telegraf v1.5.3 (git: unknown unknown)

Steps to reproduce:

  1. Enable [[inputs.zfs]] with default arguments
  2. Observe /var/log/telegraf.log
2018-04-22T22:14:00Z E! Field type conflict, dropping conflicted points: Response Error: Status Code [400], expected [204], [partial write: field type conflict: input field "arcstats_hash_elements_max" on measurement "zfs" is type float, already exists as type integer dropped=6]
2018-04-22T22:14:30Z E! Field type conflict, dropping conflicted points: Response Error: Status Code [400], expected [204], [partial write: field type conflict: input field "arcstats_hash_elements_max" on measurement "zfs" is type float, already exists as type integer dropped=6]
2018-04-22T22:15:00Z E! Field type conflict, dropping conflicted points: Response Error: Status Code [400], expected [204], [partial write: field type conflict: input field "arcstats_c_min" on measurement "zfs" is type float, already exists as type integer dropped=6]
2018-04-22T22:15:30Z E! Field type conflict, dropping conflicted points: Response Error: Status Code [400], expected [204], [partial write: field type conflict: input field "arcstats_hash_elements_max" on measurement "zfs" is type float, already exists as type integer dropped=6]

Expected behavior:

no error messages in log

Actual behavior:

error messages in log
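
For context, the "field type conflict" means InfluxDB pinned arcstats_hash_elements_max to integer the first time it was written, and later points carried the same field as a float. A minimal Go sketch, not the plugin's actual code, of how an int-then-float parsing fallback can emit two incompatible types for the same field:

package main

import (
	"fmt"
	"strconv"
)

// parseValue mimics a lenient parser: values that pass ParseInt become
// int64 fields, and anything else is retried as float64. Any sample
// that takes the float path flips the field's wire type, and InfluxDB
// then rejects the point as a conflict.
func parseValue(raw string) (interface{}, error) {
	if i, err := strconv.ParseInt(raw, 10, 64); err == nil {
		return i, nil // integer field
	}
	f, err := strconv.ParseFloat(raw, 64)
	if err != nil {
		return nil, err
	}
	return f, nil // float field: conflicts with the integer above
}

func main() {
	a, _ := parseValue("434533")
	b, _ := parseValue("434533.0")
	fmt.Printf("%T %T\n", a, b) // int64 float64
}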

@danielnelson
Contributor

Did this happen after you upgraded Telegraf or does it happen on a fresh install?

@grimreaper
Author

This happened on a fresh install. The version has never changed.

@danielnelson
Contributor

Can you run telegraf --input-filter zfs --test and upload the output?

@grimreaper
Author

With poolMetrics = true

∴telegraf --input-filter zfs --test --config /usr/local/etc/telegraf.conf
E! Unable to append to /var/log/telegraf.log (open /var/log/telegraf.log: permission denied), using stderr
* Plugin: inputs.zfs, Collection 1
2018-04-24T01:16:18Z E! Error parsing capacity: strconv.ParseInt: parsing "16%": invalid syntax

with poolMetrics = false

* Plugin: inputs.zfs, Collection 1
> zfs,pools=bootpool::zroot,host=fasteagle arcstats_l2_hits=0i,arcstats_uncompressed_size=33077061120i,arcstats_arc_meta_max=1701509648i,arcstats_l2_write_in_l2=0i,arcstats_l2_free_on_write=0i,arcstats_l2_abort_lowmem=0i,arcstats_l2_evict_lock_retry=0i,arcstats_l2_writes_sent=0i,arcstats_l2_rw_clash=0i,arcstats_mfu_size=3664407040i,arcstats_sync_wait_for_async=850i,arcstats_arc_meta_used=1385678664i,arcstats_l2_asize=0i,arcstats_prefetch_data_misses=13347i,arcstats_demand_data_misses=895864i,zfetchstats_hits=255076i,arcstats_hash_chains=10392i,arcstats_hash_elements=425761i,arcstats_evict_l2_cached=0i,arcstats_l2_write_buffer_list_null_iter=0i,arcstats_overhead_size=2359285760i,vdev_cache_stats_hits=0i,arcstats_size=10550191944i,arcstats_prefetch_metadata_hits=134911i,arcstats_demand_hit_predictive_prefetch=16568i,arcstats_memory_throttle_count=0i,arcstats_l2_io_error=0i,zfetchstats_misses=7287319i,vdev_cache_stats_delegations=0i,arcstats_l2_evict_reading=0i,arcstats_hash_elements_max=434533i,arcstats_demand_data_hits=5675244i,arcstats_prefetch_data_hits=106932i,arcstats_l2_writes_error=0i,arcstats_mru_evictable_data=4543701504i,arcstats_anon_evictable_data=0i,arcstats_hash_chain_max=3i,arcstats_evict_l2_eligible=0i,arcstats_access_skip=30710517i,arcstats_mutex_miss=0i,arcstats_deleted=0i,arcstats_arc_meta_limit=16435617792i,arcstats_mru_evictable_metadata=51426816i,arcstats_hdr_size=149613752i,arcstats_allocated=7266612i,arcstats_compressed_size=7621527040i,zfetchstats_max_streams=7159767i,arcstats_mfu_ghost_evictable_metadata=0i,arcstats_mfu_ghost_size=0i,arcstats_mru_size=6315671040i,arcstats_hash_collisions=54074i,arcstats_demand_metadata_hits=90583795i,arcstats_mfu_evictable_metadata=12866048i,arcstats_mru_ghost_size=0i,arcstats_c=65742471168i,arcstats_evict_skip=0i,arcstats_mfu_ghost_hits=0i,arcstats_l2_write_passed_headroom=0i,arcstats_mru_ghost_evictable_data=0i,arcstats_anon_size=250368i,arcstats_l2_write_trylock_fail=0i,arcstats_l2_write_bytes=0i,arcstats_c_max=65742471168i,arcstats_evict_l2_skip=0i,arcstats_l2_write_buffer_bytes_scanned=0i,arcstats_l2_write_not_cacheable=0i,arcstats_l2_write_spa_mismatch=0i,arcstats_l2_cksum_bad=0i,arcstats_l2_writes_lock_retry=0i,arcstats_l2_writes_done=0i,arcstats_data_size=9164513280i,arcstats_mru_hits=3083855i,arcstats_l2_write_buffer_iter=0i,arcstats_l2_write_io_in_progress=0i,arcstats_l2_hdr_size=0i,arcstats_demand_metadata_misses=781093i,arcstats_misses=1709988i,vdev_cache_stats_misses=0i,arcstats_l2_feeds=0i,arcstats_mfu_evictable_data=2137289728i,arcstats_hits=96500882i,arcstats_arc_meta_min=4108904448i,arcstats_l2_write_pios=0i,arcstats_l2_read_bytes=0i,arcstats_anon_evictable_metadata=0i,arcstats_other_size=420249744i,arcstats_metadata_size=815815168i,arcstats_c_min=8217808896i,arcstats_p=32871235584i,arcstats_l2_write_buffer_list_iter=0i,arcstats_l2_evict_l1cached=0i,arcstats_mfu_ghost_evictable_data=0i,arcstats_mfu_hits=93183930i,arcstats_mru_ghost_hits=0i,arcstats_prefetch_metadata_misses=19684i,arcstats_l2_write_full=0i,arcstats_l2_size=0i,arcstats_evict_not_enough=0i,arcstats_l2_misses=0i,arcstats_mru_ghost_evictable_metadata=0i,arcstats_evict_l2_ineligible=0i 1524532640000000000
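
The "Error parsing capacity" failure above is mechanical: on this system the CAP column carries a literal percent sign even in parsable mode, and strconv.ParseInt rejects "16%". A sketch of the obvious guard, trimming the suffix before parsing (illustrative, not the shipped fix):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCapacity strips the "%" suffix that this zpool build appends to
// the CAP column, so "16%" parses as 16 instead of erroring out.
func parseCapacity(s string) (int64, error) {
	return strconv.ParseInt(strings.TrimSuffix(s, "%"), 10, 64)
}

func main() {
	fmt.Println(parseCapacity("16%")) // 16 <nil>
}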

@danielnelson
Contributor

Could you also run:

zpool list -Hp

and these:

sysctl -q arcstats
sysctl -q zfetchstats
sysctl -q vdev_cache_stats

@grimreaper
Author

∴zpool list -Hp

bootpool        2130706432      728379392       1402327040      -       -       16%     34      1.00x   ONLINE  -
zroot   919123001344    113568739328    805554262016    -       -       17%     12      1.00x   ONLINE  -
∴sysctl -q arcstats
∴sysctl -q zfetchstats
∴sysctl -q vdev_cache_stats
∴sysctl -q kstat.zfs.misc.arcstats
kstat.zfs.misc.arcstats.demand_hit_predictive_prefetch: 81522
kstat.zfs.misc.arcstats.sync_wait_for_async: 1207
kstat.zfs.misc.arcstats.arc_meta_min: 4108904448
kstat.zfs.misc.arcstats.arc_meta_max: 11024332248
kstat.zfs.misc.arcstats.arc_meta_limit: 16435617792
kstat.zfs.misc.arcstats.arc_meta_used: 7499595816
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 0
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 0
kstat.zfs.misc.arcstats.l2_write_pios: 0
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 0
kstat.zfs.misc.arcstats.l2_write_full: 0
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 0
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 0
kstat.zfs.misc.arcstats.l2_write_in_l2: 0
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 0
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.l2_asize: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_evict_l1cached: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_writes_lock_retry: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_write_bytes: 0
kstat.zfs.misc.arcstats.l2_read_bytes: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.mfu_ghost_evictable_metadata: 0
kstat.zfs.misc.arcstats.mfu_ghost_evictable_data: 0
kstat.zfs.misc.arcstats.mfu_ghost_size: 0
kstat.zfs.misc.arcstats.mfu_evictable_metadata: 586418688
kstat.zfs.misc.arcstats.mfu_evictable_data: 3349309952
kstat.zfs.misc.arcstats.mfu_size: 6125444608
kstat.zfs.misc.arcstats.mru_ghost_evictable_metadata: 0
kstat.zfs.misc.arcstats.mru_ghost_evictable_data: 0
kstat.zfs.misc.arcstats.mru_ghost_size: 0
kstat.zfs.misc.arcstats.mru_evictable_metadata: 208494080
kstat.zfs.misc.arcstats.mru_evictable_data: 16864935424
kstat.zfs.misc.arcstats.mru_size: 19414067200
kstat.zfs.misc.arcstats.anon_evictable_metadata: 0
kstat.zfs.misc.arcstats.anon_evictable_data: 0
kstat.zfs.misc.arcstats.anon_size: 1084416
kstat.zfs.misc.arcstats.other_size: 3589699312
kstat.zfs.misc.arcstats.metadata_size: 3136897024
kstat.zfs.misc.arcstats.data_size: 22403699200
kstat.zfs.misc.arcstats.hdr_size: 772999480
kstat.zfs.misc.arcstats.overhead_size: 3416745984
kstat.zfs.misc.arcstats.uncompressed_size: 68295676416
kstat.zfs.misc.arcstats.compressed_size: 22123851264
kstat.zfs.misc.arcstats.size: 29903295016
kstat.zfs.misc.arcstats.c_max: 65742471168
kstat.zfs.misc.arcstats.c_min: 8217808896
kstat.zfs.misc.arcstats.c: 65742471168
kstat.zfs.misc.arcstats.p: 32871235584
kstat.zfs.misc.arcstats.hash_chain_max: 5
kstat.zfs.misc.arcstats.hash_chains: 247622
kstat.zfs.misc.arcstats.hash_collisions: 525606
kstat.zfs.misc.arcstats.hash_elements_max: 2227531
kstat.zfs.misc.arcstats.hash_elements: 2225539
kstat.zfs.misc.arcstats.evict_l2_skip: 0
kstat.zfs.misc.arcstats.evict_l2_ineligible: 0
kstat.zfs.misc.arcstats.evict_l2_eligible: 0
kstat.zfs.misc.arcstats.evict_l2_cached: 0
kstat.zfs.misc.arcstats.evict_not_enough: 0
kstat.zfs.misc.arcstats.evict_skip: 0
kstat.zfs.misc.arcstats.access_skip: 72583450
kstat.zfs.misc.arcstats.mutex_miss: 0
kstat.zfs.misc.arcstats.deleted: 0
kstat.zfs.misc.arcstats.allocated: 22661292
kstat.zfs.misc.arcstats.mfu_ghost_hits: 0
kstat.zfs.misc.arcstats.mfu_hits: 172731893
kstat.zfs.misc.arcstats.mru_ghost_hits: 0
kstat.zfs.misc.arcstats.mru_hits: 12878609
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 147579
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 662424
kstat.zfs.misc.arcstats.prefetch_data_misses: 46562
kstat.zfs.misc.arcstats.prefetch_data_hits: 128842
kstat.zfs.misc.arcstats.demand_metadata_misses: 3676896
kstat.zfs.misc.arcstats.demand_metadata_hits: 174609381
kstat.zfs.misc.arcstats.demand_data_misses: 1822962
kstat.zfs.misc.arcstats.demand_data_hits: 10830744
kstat.zfs.misc.arcstats.misses: 5693999
kstat.zfs.misc.arcstats.hits: 186231391


∴sysctl -q kstat.zfs.misc.zfetchstats.
kstat.zfs.misc.zfetchstats.max_streams: 38604399
kstat.zfs.misc.zfetchstats.misses: 39457685
kstat.zfs.misc.zfetchstats.hits: 509518

∴sysctl -q kstat.zfs.misc.vdev_cache_stats
kstat.zfs.misc.vdev_cache_stats.misses: 0
kstat.zfs.misc.vdev_cache_stats.hits: 0
kstat.zfs.misc.vdev_cache_stats.delegations: 0

There are also a few others:

∴sysctl -q kstat.zfs.misc | cut -d . -f 1-4|uniq
kstat.zfs.misc.vdev_cache_stats
kstat.zfs.misc.arcstats
kstat.zfs.misc.zcompstats
kstat.zfs.misc.zfetchstats
kstat.zfs.misc.xuio_stats
kstat.zfs.misc.abdstats
kstat.zfs.misc.zio_trim
kstat.zfs.misc.metaslab_trace_stats

@danielnelson
Contributor

It looks like you have an extra column in the zpool list output compared to what we are expecting, and the man page does not seem accurate about the number of columns either:

         -o property[,...]
                 Comma-separated list of properties to display. See the
                 "Properties" section for a list of valid properties. The
                 default list is name, size, used, available, fragmentation,
                 expandsize, capacity, health, altroot.

Could you run zpool list -p so that it shows the headers?

@grimreaper
Author

grimreaper commented Apr 27, 2018

(FWIW, between my last message and this one I ran zpool upgrade to enable some feature flags.)

∴zpool list -p
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
bootpool  2130706432  728342528  1402363904        -         -    17%     34  1.00x  ONLINE  -
zroot     919123001344  115919593472  803203407872        -         -    16%     12  1.00x  ONLINE  -

@danielnelson
Contributor

It seems like the manpage is not accurate. Perhaps we can change the command to:

zpool list -p -o health,size,alloc,free,fragmentation,capacity,dedupratio

We probably need to compare the functionality against FreeBSD 11 as well. We don't have a well-defined list of FreeBSD platforms we support, but based on the release dates I think we should consider at least 11 and 12.
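
A sketch of what consuming that pinned column list could look like. The sample line, the tab separators, the added name column (for the pool tag), and the -H flag to drop headers are all assumptions for illustration, not the shipped fix:

package main

import (
	"fmt"
	"strings"
)

// Hypothetical output line from:
//   zpool list -Hp -o name,health,size,alloc,free,fragmentation,capacity,dedupratio
// Requesting columns explicitly pins both their order and their count,
// so an OS release inserting a new column can no longer shift indices.
const line = "zroot\tONLINE\t919123001344\t113568739328\t805554262016\t16\t12\t1.00"

func main() {
	cols := strings.Fields(line)
	if len(cols) != 8 {
		fmt.Printf("unexpected column count %d, skipping line\n", len(cols))
		return
	}
	fmt.Println("pool:", cols[0], "health:", cols[1], "dedupratio:", cols[7])
}

Checking the field count before indexing also turns a future format change into a skipped line instead of bad data or a crash.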

@grimreaper
Author

It is certainly good to be explicit and not rely on the default ordering; this is particularly true for data ingestion pipelines. That said, the man page ought to be fixed too.

@danielnelson
Contributor

Could you compare against the man zpool on your system to verify, and report the discrepancy upstream for me?

@grimreaper
Author

openzfs/openzfs#632


danielnelson added the "bug (unexpected problem or unintended behavior)" label and removed the "need more info" label on May 5, 2018
@mldailey

mldailey commented Jul 17, 2018

It looks like this bug first appeared in 11.2-RELEASE. I just upgraded a system from 11.1-RELEASE to 11.2-RELEASE and am experiencing this because of the added CKPOINT column.

On 11.1-RELEASE:
$ zpool list -p
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 139452219392 13926928384 125525291008 - 25% 9 1.00x ONLINE -
storage 29961691856896 5629922267136 24331769589760 - 12% 18 1.00x ONLINE -

On 11.2-RELEASE:
$ zpool list -p
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 138512695296 8287207424 130225487872 - - 14% 5 1.00x ONLINE -
storage 2989297238016 633385811968 2355911426048 - - 48% 21 1.00x ONLINE -
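
An alternative to pinning columns with -o is to key fields off the header row, which tolerates inserted columns such as CKPOINT. A Go sketch, illustrative only and not what Telegraf ships:

package main

import (
	"fmt"
	"strings"
)

// Header-driven parsing: build a name-to-index map from the first line,
// then look fields up by name instead of by hard-coded position.
const out = `NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 138512695296 8287207424 130225487872 - - 14% 5 1.00x ONLINE -`

func main() {
	lines := strings.Split(out, "\n")
	idx := make(map[string]int)
	for i, name := range strings.Fields(lines[0]) {
		idx[name] = i
	}
	row := strings.Fields(lines[1])
	fmt.Println("ALLOC =", row[idx["ALLOC"]], "CAP =", row[idx["CAP"]])
}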

@uqs

uqs commented Apr 25, 2021

Sorry to drag up this old issue, but I turned on inputs.zfs on a FreeBSD 12.2 system and that made telegraf croak.

relevant config bits (it works fine if I leave these out):

[[inputs.zfs]]

error message:

2021-04-25T14:55:11Z I! Starting Telegraf 1.17.3
panic: runtime error: index out of range [4] with length 1

goroutine 53 [running]:
github.com/influxdata/telegraf/plugins/inputs/zfs.(*Zfs).Gather(0xc000482180, 0x111d7b8, 0xc000142200, 0xc000065f98, 0x2b33a45)
        /wrkdirs/usr/ports/net-mgmt/telegraf/work/telegraf-1.17.3/plugins/inputs/zfs/zfs_freebsd.go:157 +0x645
github.com/influxdata/telegraf/models.(*RunningInput).Gather(0xc0004874a0, 0x111d7b8, 0xc000142200, 0xcf89a0, 0x0)
        /wrkdirs/usr/ports/net-mgmt/telegraf/work/telegraf-1.17.3/models/running_input.go:117 +0x6d
github.com/influxdata/telegraf/agent.(*Agent).gatherOnce.func1(0xc000380720, 0xc0004874a0, 0x111d7b8, 0xc000142200)
        /wrkdirs/usr/ports/net-mgmt/telegraf/work/telegraf-1.17.3/agent/agent.go:484 +0x3f
created by github.com/influxdata/telegraf/agent.(*Agent).gatherOnce
        /wrkdirs/usr/ports/net-mgmt/telegraf/work/telegraf-1.17.3/agent/agent.go:483 +0xb2
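
The panic reads literally: the code indexed element 4 of a slice that split into a single field, so the line it was handed had none of the expected separators. A hedged sketch of the missing bounds check; the separator and field layout here are assumptions, not the plugin's actual code:

package main

import (
	"fmt"
	"strings"
)

// poolField guards the kind of access that panics above: validate the
// field count before indexing so a short line becomes a handled error.
func poolField(line string) (string, error) {
	fields := strings.Split(line, "\t")
	if len(fields) < 5 {
		return "", fmt.Errorf("unexpected zpool output %q: want at least 5 fields, got %d", line, len(fields))
	}
	return fields[4], nil
}

func main() {
	if _, err := poolField("tank"); err != nil {
		fmt.Println(err) // error instead of "index out of range [4] with length 1"
	}
}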

Some more debug output as requested from the original submitter:

# zpool list -p
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank  3985729650688  3403932774400  581796876288        -         -    31%     85  1.00x  ONLINE  -

Setting poolMetrics = true makes it partially work:

# telegraf --input-filter zfs --test --config /usr/local/etc/telegraf.conf
2021-04-25T15:04:46Z I! Starting Telegraf 1.17.3
> zfs_pool,health=ONLINE,host=coyote.spoerlein.net,pool=tank allocated=3403944206336i,capacity=85i,dedupratio=1,fragmentation=31i,free=581785444352i,size=3985729650688i 1619363087000000000
panic: runtime error: index out of range [4] with length 1

goroutine 26 [running]:
github.com/influxdata/telegraf/plugins/inputs/zfs.(*Zfs).Gather(0xc000538960, 0x111d7b8, 0xc00007bde0, 0x0, 0x0)
        /wrkdirs/usr/ports/net-mgmt/telegraf/work/telegraf-1.17.3/plugins/inputs/zfs/zfs_freebsd.go:157 +0x645
github.com/influxdata/telegraf/agent.(*Agent).testRunInputs.func2(0xc000172750, 0xc000010818, 0xc000169b00, 0xc00007bda0, 0xc0004879f0)
        /wrkdirs/usr/ports/net-mgmt/telegraf/work/telegraf-1.17.3/agent/agent.go:424 +0x222
created by github.com/influxdata/telegraf/agent.(*Agent).testRunInputs
        /wrkdirs/usr/ports/net-mgmt/telegraf/work/telegraf-1.17.3/agent/agent.go:393 +0x109
Exit 2

(Note that the first output line is proper. In fact, I actually want only the pool stats, not all the individual datasets; is that possible?)

Setting the list to kstatMetrics = ["arcstats", "zfetchstats", "vdev_cache_stats"] doesn't magically make it work, though. The sysctl output for those looks like:

# sysctl -a|egrep 'arcstats|zfetchstats|vdev_cache_stats'
kstat.zfs.misc.vdev_cache_stats.misses: 0           
kstat.zfs.misc.vdev_cache_stats.hits: 0                                         
kstat.zfs.misc.vdev_cache_stats.delegations: 0
kstat.zfs.misc.arcstats.demand_hit_prescient_prefetch: 7665203
kstat.zfs.misc.arcstats.demand_hit_predictive_prefetch: 0
kstat.zfs.misc.arcstats.async_upgrade_sync: 909860
kstat.zfs.misc.arcstats.arc_meta_min: 268435456
kstat.zfs.misc.arcstats.arc_meta_max: 1870257696
kstat.zfs.misc.arcstats.arc_dnode_limit: 107374182
kstat.zfs.misc.arcstats.arc_meta_limit: 1073741824
kstat.zfs.misc.arcstats.arc_meta_used: 573223952
kstat.zfs.misc.arcstats.arc_prune: 0
kstat.zfs.misc.arcstats.arc_loaned_bytes: 0
kstat.zfs.misc.arcstats.arc_tempreserve: 0
kstat.zfs.misc.arcstats.arc_no_grow: 0
kstat.zfs.misc.arcstats.memory_available_bytes: 0
kstat.zfs.misc.arcstats.memory_free_bytes: 0
kstat.zfs.misc.arcstats.memory_all_bytes: 0
kstat.zfs.misc.arcstats.memory_indirect_count: 0
kstat.zfs.misc.arcstats.memory_direct_count: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 0
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 0
kstat.zfs.misc.arcstats.l2_write_pios: 0 
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 0
kstat.zfs.misc.arcstats.l2_write_full: 0 
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 65082268
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 0
kstat.zfs.misc.arcstats.l2_write_in_l2: 0
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0
...
kstat.zfs.misc.arcstats.mfu_size: 280178688
kstat.zfs.misc.arcstats.mru_ghost_evictable_metadata: 557731840
kstat.zfs.misc.arcstats.mru_ghost_evictable_data: 49152000
kstat.zfs.misc.arcstats.mru_ghost_size: 606883840
kstat.zfs.misc.arcstats.mru_evictable_metadata: 28672
kstat.zfs.misc.arcstats.mru_evictable_data: 12732928
kstat.zfs.misc.arcstats.mru_size: 208486912
kstat.zfs.misc.arcstats.anon_evictable_metadata: 0
kstat.zfs.misc.arcstats.anon_evictable_data: 0
kstat.zfs.misc.arcstats.anon_size: 17987072
kstat.zfs.misc.arcstats.other_size: 292472016
kstat.zfs.misc.arcstats.bonus_size: 69116480
kstat.zfs.misc.arcstats.dnode_size: 168393856
kstat.zfs.misc.arcstats.dbuf_size: 54961680
kstat.zfs.misc.arcstats.metadata_size: 262006784
kstat.zfs.misc.arcstats.data_size: 244645888
kstat.zfs.misc.arcstats.hdr_size: 18745152
kstat.zfs.misc.arcstats.overhead_size: 326361088
kstat.zfs.misc.arcstats.uncompressed_size: 366532608
kstat.zfs.misc.arcstats.compressed_size: 180291584
kstat.zfs.misc.arcstats.size: 817869840
kstat.zfs.misc.arcstats.c_max: 4294967296
kstat.zfs.misc.arcstats.c_min: 536870912
kstat.zfs.misc.arcstats.c: 834644722
kstat.zfs.misc.arcstats.p: 434164316
kstat.zfs.misc.arcstats.hash_chain_max: 5
kstat.zfs.misc.arcstats.hash_chains: 2162
kstat.zfs.misc.arcstats.hash_collisions: 28941218
kstat.zfs.misc.arcstats.hash_elements_max: 358132
kstat.zfs.misc.arcstats.hash_elements: 69851
kstat.zfs.misc.arcstats.evict_l2_skip: 0
kstat.zfs.misc.arcstats.evict_l2_ineligible: 1578061168640
kstat.zfs.misc.arcstats.evict_l2_eligible: 20247800024576
kstat.zfs.misc.arcstats.evict_l2_cached: 0
kstat.zfs.misc.arcstats.evict_not_enough: 173947244
kstat.zfs.misc.arcstats.evict_skip: 28603701955
kstat.zfs.misc.arcstats.access_skip: 555113111
kstat.zfs.misc.arcstats.mutex_miss: 116742793
kstat.zfs.misc.arcstats.deleted: 265331489
kstat.zfs.misc.arcstats.allocated: 1973056409
kstat.zfs.misc.arcstats.mfu_ghost_hits: 53976088
kstat.zfs.misc.arcstats.mfu_hits: 13517139581
kstat.zfs.misc.arcstats.mru_ghost_hits: 55381108
kstat.zfs.misc.arcstats.mru_hits: 2394089304
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 114971824
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 10377458
kstat.zfs.misc.arcstats.prefetch_data_misses: 3865464
kstat.zfs.misc.arcstats.prefetch_data_hits: 10
kstat.zfs.misc.arcstats.demand_metadata_misses: 967739589
kstat.zfs.misc.arcstats.demand_metadata_hits: 15185401401
kstat.zfs.misc.arcstats.demand_data_misses: 186208613
kstat.zfs.misc.arcstats.demand_data_hits: 718842594
kstat.zfs.misc.arcstats.misses: 1272785490
kstat.zfs.misc.arcstats.hits: 15914621463
kstat.zfs.misc.zfetchstats.max_streams: 0
kstat.zfs.misc.zfetchstats.misses: 0
kstat.zfs.misc.zfetchstats.hits: 0

@uqs

uqs commented Apr 25, 2021

Sorry, I guess I should've read the code first. Looks like it shells out to sysctl like so:

% sysctl -q kstat.zfs.misc.arcstats
% sysctl -q kstat.zfs.misc.arcstats.allocated
kstat.zfs.misc.arcstats.allocated: 1973139534

As you can see, it only works when specifying the full OID; a prefix match doesn't work. But this is only true on two of the servers I tested; my laptop here does the prefix search just fine. Odd.

Funnily, I can hack around this like so, but the output then has the field names doubled :/

[[inputs.zfs]]
  ## If not specified, then default is:
  #kstatMetrics = ["arcstats", "zfetchstats", "vdev_cache_stats"]
  kstatMetrics = ["arcstats.allocated"]
  poolMetrics = true

...
# telegraf --input-filter zfs --test --config /usr/local/etc/telegraf.conf
2021-04-25T15:25:05Z I! Starting Telegraf 1.17.3
> zfs_pool,health=ONLINE,host=xxx.spoerlein.net,pool=tank allocated=3403944992768i,capacity=85i,dedupratio=1,fragmentation=31i,free=581784657920i,size=3985729650688i 1619364306000000000
> zfs,datasets=tank::tank/backup::tank/backu... pools=tank arcstats.allocated_allocated=1973247402i 1619364306000000000

But hey, I can also just set kstatMetrics = [""] and then get only the basic used/free pool stats.

Now why is sysctl messed up on these machines?
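
The doubled name is consistent with the plugin deriving field names as <configured metric>_<last OID component>; that naming scheme is inferred from the output above, not verified against the source. A sketch:

package main

import (
	"fmt"
	"strings"
)

// fieldName models the assumed naming: configured metric, underscore,
// last dot-separated component of the sysctl OID.
func fieldName(metric, oid string) string {
	parts := strings.Split(oid, ".")
	return metric + "_" + parts[len(parts)-1]
}

func main() {
	fmt.Println(fieldName("arcstats", "kstat.zfs.misc.arcstats.allocated"))           // arcstats_allocated
	fmt.Println(fieldName("arcstats.allocated", "kstat.zfs.misc.arcstats.allocated")) // arcstats.allocated_allocated
}

Under this assumption, any configured metric containing a dot will repeat its last component in every field it produces.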
