For introductory information on stats within Couchbase, start with the Couchbase Server documentation.
Stat | Description |
---|---|
uuid | The unique identifier for the bucket |
ep_version | Version number of ep_engine |
ep_startup_time | System-generated engine startup time |
ep_data_age | Seconds since the most recently stored object was modified |
ep_data_age_highwat | ep_data_age high water mark |
ep_data_write_failed | Total compaction and commit failures |
ep_num_workers | Global number of shared worker threads |
ep_bucket_priority | Priority assigned to the bucket |
ep_total_deduplicated | Total number of items de-duplicated when queued to CheckpointManager |
ep_total_deduplicated_flusher | Total number of items de-duplicated when processed at the Flusher |
ep_total_enqueued | Total number of items queued for persistence |
ep_total_new_items | Total number of persisted new items |
ep_total_del_items | Total number of persisted deletions |
ep_total_persisted | Total number of items persisted |
ep_item_flush_failed | Number of times an item failed to flush due to storage errors |
ep_item_commit_failed | Number of times a transaction failed to commit due to storage errors |
ep_item_begin_failed | Number of times a transaction failed to start due to storage errors |
ep_expired_access | Number of times an item was expired on application access |
ep_expired_compactor | Number of times an item was expired by the ep engine compactor |
ep_expired_pager | Number of times an item was expired by the ep engine item pager |
ep_item_flush_expired | Number of times an item was not flushed because it had expired |
ep_queue_size | Number of items queued for storage |
ep_flusher_todo | Number of items currently being written |
ep_flusher_state | Current state of the flusher thread |
ep_commit_num | Total number of write commits |
ep_commit_time | Duration of the most recent commit in milliseconds |
ep_commit_time_total | Cumulative milliseconds spent committing |
ep_vbucket_del | Number of vbucket deletion events |
ep_vbucket_del_fail | Number of failed vbucket deletion events |
ep_vbucket_del_max_walltime | Max wall time (µs) spent deleting a vbucket |
ep_vbucket_del_avg_walltime | Avg wall time (µs) spent deleting a vbucket |
ep_pending_compactions | For persistent buckets, this is the count of compaction tasks |
ep_rollback_count | Number of rollbacks on consumer |
ep_flush_duration_total | Cumulative milliseconds spent flushing |
ep_num_ops_get_meta | Number of getMeta operations |
ep_num_ops_set_meta | Number of setWithMeta operations |
ep_num_ops_del_meta | Number of delWithMeta operations |
ep_num_ops_set_meta_res_failed | Number of setWithMeta ops that failed conflict resolution |
ep_num_ops_del_meta_res_failed | Number of delWithMeta ops that failed conflict resolution |
ep_num_ops_set_ret_meta | Number of setRetMeta operations |
ep_num_ops_del_ret_meta | Number of delRetMeta operations |
ep_num_ops_get_meta_on_set_meta | Number of background getMeta operations spawned by setWithMeta operations |
curr_items | Number of items in active vbuckets (temp + live) |
curr_temp_items | Number of temp items in active vbuckets |
curr_items_tot | Number of current items, including those in non-active vbuckets (replica, dead, and pending states) |
ep_kv_size | Memory used to store item metadata, keys, and values, regardless of the vbucket's state. If an item's value is ejected, this stat is decremented by the size of the item's value. |
ep_blob_num | The number of blob objects in the cache |
ep_blob_overhead | The "unused" memory caused by the allocator returning bigger chunks than requested |
ep_value_size | Memory used to store values for resident keys |
ep_storedval_size | Memory used by storedval objects |
ep_storedval_overhead | The "unused" memory caused by the allocator returning bigger chunks than requested |
ep_storedval_num | The number of storedval objects allocated |
ep_overhead | Extra memory used by transient data such as persistence queues, replication queues, and checkpoints |
ep_item_num | The number of item objects allocated |
ep_mem_low_wat | Low water mark for auto-evictions |
ep_mem_low_wat_percent | Low water mark (as a percentage) |
ep_mem_high_wat | High water mark for auto-evictions |
ep_mem_high_wat_percent | High water mark (as a percentage) |
ep_total_cache_size | The total byte size of all items, regardless of the vbucket's state and of whether an item's value is ejected |
ep_oom_errors | Number of times unrecoverable OOMs happened while processing operations |
ep_tmp_oom_errors | Number of times temporary OOMs happened while processing operations |
ep_mem_tracker_enabled | True if the memory usage tracker is enabled |
ep_bg_fetched | Number of items fetched from disk |
ep_bg_fetch_avg_read_amplification | Average read amplification for all background fetch operations: the ratio of read()s to documents fetched |
ep_bg_meta_fetched | Number of meta items fetched from disk |
ep_bg_remaining_items | Number of remaining bg fetch items |
ep_bg_remaining_jobs | Number of remaining bg fetch jobs |
ep_num_pager_runs | Number of times the pager loops ran to reclaim additional memory |
ep_num_expiry_pager_runs | Number of times the expiry pager loops ran to purge expired items from memory/disk |
ep_num_freq_decayer_runs | Number of times the freq decayer task ran because a frequency counter became saturated |
ep_num_access_scanner_runs | Number of times the access scanner ran to snapshot the working set |
ep_num_access_scanner_skips | Number of times the access scanner task decided not to generate an access log |
ep_access_scanner_num_items | Number of items that the last access scanner task swept to the access log |
ep_access_scanner_task_time | Time of the next access scanner task (GMT); NOT_SCHEDULED if the access scanner has been disabled |
ep_access_scanner_last_runtime | Number of seconds the last access scanner task took to complete |
ep_expiry_pager_task_time | Time of the next expiry pager task (GMT); NOT_SCHEDULED if the expiry pager has been disabled |
ep_items_expelled_from_checkpoints | Number of items expelled from checkpoints. Expelled items have been ejected from memory but are still considered part of the checkpoint. |
ep_items_rm_from_checkpoints | Number of items removed from closed unreferenced checkpoints |
ep_num_value_ejects | Number of times item values were ejected from memory to disk |
ep_num_eject_failures | Number of items that could not be ejected |
ep_num_not_my_vbuckets | Number of times a Not My VBucket exception happened during runtime |
ep_dbname | DB path |
ep_pending_ops | Number of ops awaiting pending vbuckets |
ep_pending_ops_total | Total blocked pending ops since reset |
ep_pending_ops_max | Max ops seen awaiting one pending vbucket |
ep_pending_ops_max_duration | Max time (µs) spent waiting on pending vbuckets |
ep_bg_num_samples | The number of samples included in the average |
ep_bg_min_wait | The shortest time (µs) in the wait queue |
ep_bg_max_wait | The longest time (µs) in the wait queue |
ep_bg_wait_avg | The average wait time (µs) for an item before it is serviced by the dispatcher |
ep_bg_min_load | The shortest load time (µs) |
ep_bg_max_load | The longest load time (µs) |
ep_bg_load_avg | The average time (µs) for an item to be loaded from the persistence layer |
ep_num_non_resident | The number of non-resident items |
ep_bg_wait | The total elapsed time in the wait queue |
ep_bg_load | The total elapsed time for items to be loaded from the persistence layer |
ep_allow_data_loss_during_shutdown | Whether data loss is allowed during server shutdown |
ep_alog_block_size | Access log block size |
ep_alog_path | Path to the access log |
ep_access_scanner_enabled | Status of the access scanner task |
ep_alog_sleep_time | Interval between access scanner runs, in minutes |
ep_alog_task_time | Hour (GMT) at which the access scanner task is scheduled to run |
ep_backend | The backend used for data persistence |
ep_backfill_mem_threshold | The maximum percentage of memory the backfill task can consume before it is made to back off |
ep_bfilter_enabled | Bloom filter use: enabled or disabled |
ep_bfilter_key_count | Minimum key count that the bloom filter will accommodate |
ep_bfilter_fp_prob | The bloom filter's allowed false positive probability |
ep_bfilter_residency_threshold | Resident ratio threshold for the full eviction policy, after which the bloom filter switches from accounting only non-resident items and deletes to accounting all items |
ep_bucket_type | The bucket type |
ep_chk_persistence_remains | Number of vbuckets remaining for checkpoint persistence |
ep_chk_remover_stime | The time interval for purging closed checkpoints from memory |
ep_couch_bucket | The name of this bucket |
ep_couch_host | The hostname the couchdb views server is listening on |
ep_couch_port | The port the couchdb views server is listening on |
ep_couch_reconnect_sleeptime | The amount of time to wait before reconnecting to couchdb |
ep_data_traffic_enabled | Whether or not data traffic is enabled for this bucket |
ep_db_data_size | Total size of valid data in db files |
ep_db_file_size | Total size of the db files |
ep_db_prepare_size | Total size of SyncWrite prepares in db files |
ep_degraded_mode | True if the engine is either warming up or data traffic is disabled |
ep_exp_pager_enabled | True if the expiry pager is enabled |
ep_exp_pager_stime | The time interval for purging expired items from memory |
ep_exp_pager_initial_run_time | The initial start time (GMT) for the expiry pager task |
ep_fsync_after_every_n_bytes_written | If non-zero, perform an fsync after every N bytes written to disk |
ep_getl_default_timeout | The default getl lock duration |
ep_getl_max_timeout | The maximum getl lock duration |
ep_ht_locks | The number of locks per vb hashtable |
ep_ht_size | The initial size of each vb hashtable |
ep_max_checkpoints | The expected maximum number of checkpoints in each VBucket on a balanced system. This is not a hard limit for a single vbucket; it is used (together with checkpoint_memory_ratio) to compute checkpoint_max_size, which triggers checkpoint creation. |
ep_max_item_size | The maximum value size |
ep_max_size | The maximum amount of memory this bucket can use |
ep_max_vbuckets | The maximum number of vbuckets that can exist in this bucket |
ep_mutation_mem_ratio | The ratio of total available memory at which temp OOM or OOM messages start being returned |
ep_seqno_persistence_timeout | Timeout for SeqnoPersistence operations |
ep_uncommitted_items | The number of items that have not been written to disk |
ep_warmup | Shows whether warmup is enabled or disabled |
ep_warmup_batch_size | The size of each batch loaded during warmup |
ep_warmup_dups | Number of duplicate items encountered during warmup |
ep_warmup_min_items_threshold | Percentage of total items warmed up before traffic is enabled |
ep_warmup_min_memory_threshold | Percentage of max memory warmed up before traffic is enabled |
ep_warmup_oom | The number of OOM errors that occurred during warmup |
ep_warmup_thread | The status of the warmup thread |
ep_warmup_time | The amount of time warmup took |
ep_workload_pattern | Workload pattern (mixed, read_heavy, write_heavy) monitored at runtime |
ep_defragmenter_interval | How often the defragmenter task runs (in seconds) |
ep_defragmenter_num_moved | Number of items moved by the defragmenter task |
ep_defragmenter_num_visited | Number of items visited (considered for defragmentation) by the defragmenter task |
ep_defragmenter_sv_num_moved | Number of StoredValues moved by the defragmenter task |
ep_item_compressor_interval | How often the item compressor task runs (in milliseconds) |
ep_item_compressor_num_compressed | Number of items compressed by the item compressor task |
ep_item_compressor_num_visited | Number of items visited (considered for compression) by the item compressor task |
ep_cursor_dropping_lower_threshold | Memory threshold below which the checkpoint remover will discontinue cursor dropping |
ep_cursor_dropping_upper_threshold | Memory threshold above which the checkpoint remover will start cursor dropping |
ep_cursors_dropped | Number of cursors dropped by the checkpoint remover |
ep_mem_freed_by_checkpoint_removal | Amount of memory freed through checkpoint removal |
ep_active_hlc_drift | The total absolute drift for all active vbuckets, at microsecond granularity |
ep_active_hlc_drift_count | The number of updates applied to ep_active_hlc_drift |
ep_replica_hlc_drift | The total absolute drift for all replica vbuckets, at microsecond granularity |
ep_replica_hlc_drift_count | The number of updates applied to ep_replica_hlc_drift |
ep_active_ahead_exceptions | The total number of ahead exceptions for all active vbuckets |
ep_active_behind_exceptions | The total number of behind exceptions for all active vbuckets |
ep_replica_ahead_exceptions | The total number of ahead exceptions for all replica vbuckets |
ep_replica_behind_exceptions | The total number of behind exceptions for all replica vbuckets |
ep_clock_cas_drift_threshold_exceeded | ep_active_ahead_exceptions + ep_replica_ahead_exceptions |
ep_dcp_noop_mandatory_for_v5_features | If True, NOOPs are required to use features such as xattrs and collections |
ep_retain_erroneous_tombstones | If True, the compactor will retain erroneous tombstones |
ep_pitr_enabled | If True, Point in Time Recovery is enabled |
ep_pitr_max_history_age | The age in seconds of the oldest entry to keep as part of compaction |
ep_pitr_granularity | The granularity (in seconds) for point in time recovery |
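Several of the cumulative background-fetch counters above pair with a sample count, so the reported averages can be re-derived from raw values. A minimal Python sketch, assuming the stats have already been fetched into a dict of numbers (the sample values below are invented):

```python
def bg_fetch_averages(stats):
    """Derive average background-fetch wait and load times (µs) from the
    cumulative counters, mirroring ep_bg_wait_avg / ep_bg_load_avg."""
    samples = stats["ep_bg_num_samples"]
    if samples == 0:
        return 0.0, 0.0
    return (stats["ep_bg_wait"] / samples,
            stats["ep_bg_load"] / samples)

# Example with made-up counter values:
stats = {"ep_bg_num_samples": 4, "ep_bg_wait": 2000, "ep_bg_load": 1000}
wait_avg, load_avg = bg_fetch_averages(stats)  # 500.0 µs, 250.0 µs
```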
Stat | Description |
---|---|
ep_data_read_failed | Total number of get failures |
ep_io_total_read_bytes | Total number of bytes read |
ep_io_total_write_bytes | Total number of bytes written |
ep_io_compaction_read_bytes | Total number of bytes read during compaction |
ep_io_compaction_write_bytes | Total number of bytes written during compaction |
io_flusher_write_amplification | Number of bytes written to disk during front-end flushing, divided by the document bytes for each document saved (key + metadata + value). |
io_total_write_amplification | Number of bytes written to disk during front-end flushing and compaction, divided by the document bytes for each document saved (key + metadata + value). |
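Both amplification stats above are ratios of physical disk bytes to logical document bytes (key + metadata + value). A hedged sketch of the underlying arithmetic; the inputs are illustrative numbers, not stats from the table:

```python
def write_amplification(disk_bytes_written, document_bytes):
    """Write amplification = bytes physically written to disk divided by
    the logical document bytes saved (key + metadata + value)."""
    if document_bytes == 0:
        return 0.0
    return disk_bytes_written / document_bytes

# Example: writing a 4 KiB page to persist a 1 KiB document gives 4.0
write_amplification(4096, 1024)
```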
Stat | Description |
---|---|
ep_vb_total | Total vBuckets (count) |
curr_items_tot | Total number of items |
curr_items | Number of active items in memory |
curr_temp_items | Number of temporary items in memory |
vb_dead_num | Number of dead vBuckets |
ep_diskqueue_items | Total items in disk queue |
ep_diskqueue_memory | Total memory used in disk queue |
ep_diskqueue_fill | Total enqueued items on disk queue |
ep_diskqueue_drain | Total drained items on disk queue |
ep_diskqueue_pending | Total bytes of pending writes |
ep_persist_vbstate_total | Total VB persist state to disk |
ep_meta_data_memory | Total memory used by meta data |
ep_meta_data_disk | Total disk used by meta data |
ep_checkpoint_memory | Memory of items in all checkpoints |
ep_checkpoint_memory_queue | Memory of all queued items in all checkpoints |
ep_checkpoint_memory_overhead_allocator | Mem of all checkpoints struct - from allocator |
ep_checkpoint_memory_overhead_allocator_queue | Mem of all checkpoints queues - from allocator |
ep_checkpoint_memory_overhead_allocator_index | Mem of all checkpoints index - from allocator |
ep_checkpoint_memory_overhead | Mem of all checkpoints struct |
ep_checkpoint_memory_overhead_queue | Mem of all queues internal struct |
ep_checkpoint_memory_overhead_index | Mem of all indexes (keys alloc included) |
ep_checkpoint_memory_pending_destruction | Memory of checkpoint structures awaiting destruction by a background task |
ep_checkpoint_memory_quota | Max allocation allowed in all checkpoints |
ep_checkpoint_memory_upper_mark_bytes | Checkpoint mem usage that triggers mem recovery |
ep_checkpoint_memory_lower_mark_bytes | Ckpts recovery target, recovery yields when hit |
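The last three rows describe a quota with hysteresis: memory recovery is triggered when checkpoint memory crosses the upper mark and yields once it drops to the lower mark. A sketch of that trigger logic, under the assumption that the marks are plain byte counts:

```python
def checkpoint_mem_recovery_needed(usage, upper_mark, lower_mark,
                                   recovering=False):
    """Hysteresis check: start memory recovery above the upper mark,
    and once recovering, continue until usage falls to the lower mark."""
    if usage >= upper_mark:
        return True            # trigger (or continue) recovery
    if recovering and usage > lower_mark:
        return True            # still above the recovery target
    return False               # below target; recovery yields

checkpoint_mem_recovery_needed(90, upper_mark=80, lower_mark=50)   # True
checkpoint_mem_recovery_needed(60, 80, 50, recovering=True)        # True
checkpoint_mem_recovery_needed(45, 80, 50, recovering=True)        # False
```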
Stat | Description |
---|---|
vb_active_num | Number of active vBuckets |
vb_active_curr_items | Number of active non-deleted items |
vb_active_num_non_resident | Number of non-resident items |
vb_active_perc_mem_resident | % memory resident |
vb_active_eject | Number of times item values got ejected |
vb_active_expired | Number of times an item was expired |
vb_active_ht_memory | Memory overhead of the hashtable |
vb_active_ht_memory_overhead | Memory overhead of the hashtable |
vb_active_itm_memory | Total memory of all items in active vBuckets (StoredValue + key + value Blob) |
vb_active_meta_data_memory | Metadata memory of all items in active vBuckets (StoredValue + key) |
vb_active_meta_data_disk | Total metadata disk |
vb_active_checkpoint_memory | Memory of active items in all checkpoints |
vb_active_checkpoint_memory_overhead | Memory of all active checkpoints structures |
vb_active_ops_create | Number of create operations |
vb_active_ops_update | Number of update operations |
vb_active_ops_delete | Number of delete operations |
vb_active_ops_reject | Number of rejected operations |
vb_active_queue_size | Active items in disk queue |
vb_active_backfill_queue_size | Items in active vbucket backfill queue |
vb_active_queue_memory | Memory used for disk queue |
vb_active_queue_age | Sum of disk queue item age in milliseconds |
vb_active_queue_pending | Total bytes of pending writes |
vb_active_queue_fill | Total enqueued items |
vb_active_queue_drain | Total drained items |
vb_active_rollback_item_count | Num of items rolled back |
vb_active_sync_write_accepted_count | Number of SyncWrites accepted |
vb_active_sync_write_committed_count | Number of SyncWrites committed |
vb_active_sync_write_aborted_count | Number of SyncWrites aborted |
vb_active_hp_vb_req_size | Num of async high priority requests |
Stat | Description |
---|---|
vb_replica_num | Number of replica vBuckets |
vb_replica_curr_items | Number of replica non-deleted items |
vb_replica_num_non_resident | Number of non-resident items |
vb_replica_perc_mem_resident | % memory resident |
vb_replica_eject | Number of times item values got ejected |
vb_replica_expired | Number of times an item was expired |
vb_replica_ht_memory | Memory overhead of the hashtable |
vb_replica_ht_memory_overhead | Memory overhead of the hashtable |
vb_replica_itm_memory | Total memory of all items in replica vBuckets (StoredValue + key + value Blob) |
vb_replica_meta_data_memory | Metadata memory of all items in replica vBuckets (StoredValue + key) |
vb_replica_meta_data_disk | Total metadata disk |
vb_replica_checkpoint_memory | Memory of replica items in all checkpoints |
vb_replica_checkpoint_memory_overhead | Memory of all replica checkpoints structures |
vb_replica_ops_create | Number of create operations |
vb_replica_ops_update | Number of update operations |
vb_replica_ops_delete | Number of delete operations |
vb_replica_ops_reject | Number of rejected operations |
vb_replica_queue_size | Replica items in disk queue |
vb_replica_backfill_queue_size | Items in replica vbucket backfill queue |
vb_replica_queue_memory | Memory used for disk queue |
vb_replica_queue_age | Sum of disk queue item age in milliseconds |
vb_replica_queue_pending | Total bytes of pending writes |
vb_replica_queue_fill | Total enqueued items |
vb_replica_queue_drain | Total drained items |
vb_replica_rollback_item_count | Num of items rolled back |
vb_replica_sync_write_accepted_count | Number of SyncWrites accepted |
vb_replica_sync_write_committed_count | Number of SyncWrites committed |
vb_replica_sync_write_aborted_count | Number of SyncWrites aborted |
vb_replica_hp_vb_req_size | Num of async high priority requests |
Stat | Description |
---|---|
vb_pending_num | Number of pending vBuckets |
vb_pending_curr_items | Number of pending non-deleted items |
vb_pending_num_non_resident | Number of non-resident items |
vb_pending_perc_mem_resident | % memory resident |
vb_pending_eject | Number of times item values got ejected |
vb_pending_expired | Number of times an item was expired |
vb_pending_ht_memory | Memory overhead of the hashtable |
vb_pending_ht_memory_overhead | Memory overhead of the hashtable |
vb_pending_itm_memory | Total memory of all items in pending vBuckets (StoredValue + key + value Blob) |
vb_pending_meta_data_memory | Metadata memory of all items in pending vBuckets (StoredValue + key) |
vb_pending_meta_data_disk | Total metadata disk |
vb_pending_checkpoint_memory | Memory of pending items in all checkpoints |
vb_pending_checkpoint_memory_overhead | Memory of all pending checkpoints structures |
vb_pending_ops_create | Number of create operations |
vb_pending_ops_update | Number of update operations |
vb_pending_ops_delete | Number of delete operations |
vb_pending_ops_reject | Number of rejected operations |
vb_pending_queue_size | Pending items in disk queue |
vb_pending_backfill_queue_size | Items in pending vbucket backfill queue |
vb_pending_queue_memory | Memory used for disk queue |
vb_pending_queue_age | Sum of disk queue item age in milliseconds |
vb_pending_queue_pending | Total bytes of pending writes |
vb_pending_queue_fill | Total enqueued items |
vb_pending_queue_drain | Total drained items |
vb_pending_rollback_item_count | Num of items rolled back |
vb_pending_hp_vb_req_size | Num of async high priority requests |
The stats below are listed for each vbucket.
Stat | Description |
---|---|
num_items | Number of items in this vbucket |
num_tmp_items | Number of temporary items in memory |
num_non_resident | Number of non-resident items |
vb_pending_perc_mem_resident | % memory resident |
vb_pending_eject | Number of times item values got ejected |
vb_pending_expired | Number of times an item was expired |
ht_memory | Memory overhead of the hashtable |
ht_num_deleted_items | Number of deleted items in the hashtable |
ht_num_in_memory_items | Number of in-memory items in the hashtable |
ht_num_in_memory_non_resident_items | Number of in-memory non-resident items (i.e. items which only have their metadata in memory) |
ht_num_items | Number of items in the hashtable |
ht_num_temp_items | Number of temporary items in the hashtable |
ht_item_memory | Total item memory |
ht_cache_size | Total size of the cache (includes non-resident items) |
num_ejects | Number of times an item was ejected from memory |
ops_create | Number of create operations |
ops_update | Number of update operations |
ops_delete | Number of delete operations |
ops_reject | Number of rejected operations |
queue_size | Pending items in disk queue |
backfill_queue_size | Items in backfill queue |
queue_memory | Memory used for disk queue |
queue_age | Sum of disk queue item age in milliseconds |
queue_fill | Total enqueued items |
queue_drain | Total drained items |
pending_writes | Total bytes of pending writes |
db_data_size | Total size of valid data on disk |
db_file_size | Total size of the db file |
db_prepare_size | Total size of SyncWrite prepares on disk |
high_seqno | The last seqno assigned by this vbucket |
purge_seqno | The last seqno purged by the compactor |
bloom_filter | Status of the vbucket's bloom filter |
bloom_filter_size | Size of the bloom filter bit array |
bloom_filter_key_count | Number of keys inserted into the bloom filter; overlapping items are counted once, so this may not be accurate at times |
uuid | The current vbucket uuid |
rollback_item_count | Num of items rolled back |
hp_vb_req_size | Num of async high priority requests |
max_cas | Maximum CAS of all items in the vbucket. This is a hybrid logical clock value in nanoseconds. |
max_cas_str | max_cas as a timestamp string (seconds since epoch) |
total_abs_drift | The accumulated absolute drift for this vbucket's hybrid logical clock, in microseconds |
total_abs_drift_count | The number of updates applied to total_abs_drift |
drift_ahead_threshold_exceeded | The number of HLC updates that had a value ahead of the local HLC and were over the drift_ahead_threshold |
drift_ahead_threshold | The ahead threshold in ns |
drift_behind_threshold_exceeded | The number of HLC updates that had a value behind the local HLC and were over the drift_behind_threshold |
drift_behind_threshold | The behind threshold in ns |
logical_clock_ticks | How many times this vbucket's HLC has returned logical clock ticks |
might_contain_xattrs | True if the vbucket might contain xattrs. True means that xattrs were stored to the vbucket; note that the flag does not clear itself if all xattrs are removed. |
high_prepared_seqno | Durability: the seqno of the highest prepared mutation the vbucket is tracking |
high_completed_seqno | Durability: the seqno of the highest durable write that has completed; completed includes both committed and aborted writes |
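Since max_cas is described as a hybrid logical clock value in nanoseconds and max_cas_str as seconds since the epoch, the two are related by a simple unit conversion. A sketch (the sample CAS value is invented, and this ignores any logical-clock bits a real CAS may pack into its low bits):

```python
import datetime

def cas_to_timestamp(max_cas_ns):
    """Convert a max_cas HLC value (nanoseconds since epoch, per the table)
    into a UTC timestamp, similar in spirit to max_cas_str."""
    seconds = max_cas_ns / 1_000_000_000
    return datetime.datetime.fromtimestamp(seconds, tz=datetime.timezone.utc)

cas_to_timestamp(1_600_000_000_000_000_000)  # 2020-09-13 12:26:40+00:00
```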
For Ephemeral buckets, the following additional statistics are listed for each vbucket:
Stat | Description |
---|---|
seqlist_count | number of documents in this VBucket’s sequence list. |
seqlist_deleted_count | Count of deleted documents in this VBucket’s sequence list. |
seqlist_high_seqno | High sequence number in sequence list for this VBucket. |
seqlist_highest_deduped_seqno | Highest de-duplicated sequence number in sequence list for this VBucket. |
seqlist_read_range_begin | Starting sequence number for this VBucket’s sequence list read range. Marks the lower bound of possible stale documents in the sequence list. |
seqlist_read_range_end | Ending sequence number for this VBucket’s sequence list read range. Marks the upper bound of possible stale documents in the sequence list. |
seqlist_read_range_count | Count of elements for this VBucket’s sequence list read range (i.e. end - begin). |
seqlist_stale_count | Count of stale documents in this VBucket’s sequence list. |
seqlist_stale_value_bytes | Number of bytes of stale values in this VBucket’s sequence list. |
seqlist_stale_metadata_bytes | Number of bytes of stale metadata (key + fixed metadata) in this VBucket’s sequence list. |
Stat | Description |
---|---|
abs_high_seqno | The last seqno assigned by this vbucket |
high_seqno | The last seqno assigned by this vbucket; in the case of a replica, the last closed checkpoint's end seqno |
last_persisted_seqno | The last persisted seqno for the vbucket |
purge_seqno | The last seqno purged by the compactor |
uuid | The current vbucket uuid |
last_persisted_snap_start | The last persisted snapshot start seqno for the vbucket |
last_persisted_snap_end | The last persisted snapshot end seqno for the vbucket |
Stat | Description |
---|---|
num_entries | Number of entries in the failover table of this vbucket |
erroneous_entries_erased | Number of erroneous entries erased from the failover table of this vbucket |
n:id | vb_uuid of the nth failover entry in the failover table of this vbucket |
n:seq | seqno of the nth failover entry in the failover table of this vbucket |
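Because the per-entry rows are keyed n:id and n:seq, reconstructing the failover table from a flat stats mapping takes a little parsing. A hedged sketch, assuming the stats arrive as a string-to-string dict:

```python
def parse_failover_table(stats):
    """Group n:id / n:seq stats into an ordered list of
    (vb_uuid, seqno) failover entries."""
    entries = {}
    for key, value in stats.items():
        if ":" not in key:
            continue  # skip scalar stats like num_entries
        index, field = key.split(":", 1)
        if not index.isdigit() or field not in ("id", "seq"):
            continue
        entries.setdefault(int(index), {})[field] = int(value)
    return [(entries[i]["id"], entries[i]["seq"]) for i in sorted(entries)]

stats = {"num_entries": "2", "0:id": "12345", "0:seq": "1000",
         "1:id": "67890", "1:seq": "500"}
parse_failover_table(stats)  # [(12345, 1000), (67890, 500)]
```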
Each stat begins with ep_dcpq: followed by a unique client_id and another colon. For example, if your client is named slave1, the created stat would be ep_dcpq:slave1:created.
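The naming rule is mechanical, so full per-connection stat keys can be built with simple string formatting; a small sketch using the slave1 example above:

```python
def dcp_stat_key(client_id, stat):
    """Build the full ep_dcpq stat name for a given DCP client."""
    return f"ep_dcpq:{client_id}:{stat}"

dcp_stat_key("slave1", "created")  # 'ep_dcpq:slave1:created'
```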
*** Consumer Connections
Stat | Description |
---|---|
created | Creation time for this DCP connection |
pending_disconnect | True if we’re hanging up on this client |
reserved | True if the dcp stream is reserved |
supports_ack | True if the connection uses flow control |
total_acked_bytes | The amount of bytes that the consumer has acked |
unacked_bytes | The amount of bytes the consumer has processed but not acked |
type | The connection type (producer or consumer) |
max_buffer_bytes | Size of flow control buffer |
paused | True if this client is blocked |
paused_reason | Description of why client is paused |
**** Per Stream Stats
Stat | Description |
---|---|
buffer_bytes | The amount of unprocessed bytes |
buffer_items | The amount of unprocessed items |
end_seqno | The seqno where this stream should end |
flags | The flags used to create this stream |
items_ready | Whether the stream has messages ready to send |
ready_queue_memory | Memory occupied by elements in the DCP readyQ |
opaque | The unique stream identifier |
snap_end_seqno | The end seqno of the last snapshot received |
snap_start_seqno | The start seqno of the last snapshot received |
start_seqno | The start seqno used to create this stream |
state | The stream state (pending, reading, or dead) |
vb_uuid | The vb uuid used to create this stream |
*** Producer Connections
Stat | Description |
---|---|
buf_backfill_bytes | The amount of bytes backfilled but not sent |
buf_backfill_items | The amount of items backfilled but not sent |
bytes_sent | The amount of unacked bytes sent to the consumer |
created | Creation time for this DCP connection |
flow_control | True if the connection uses flow control |
items_remaining | The amount of items remaining to be sent |
items_sent | The amount of items already sent to the consumer |
last_sent_time | The last time this connection sent a message |
last_receive_time | The last time this connection received a message |
max_buffer_bytes | The maximum amount of bytes that can be sent without receiving an ack from the consumer |
noop_enabled | Whether or not this connection sends noops |
noop_tx_interval | The time interval between noop messages |
noop_wait | Whether or not this connection is waiting for a noop response from the consumer |
pending_disconnect | True if we’re hanging up on this client |
priority | The connection priority for streaming data |
num_streams | Total number of streams in the connection in any state |
num_dead_streams | Total number of dead streams in the connection |
reserved | True if the dcp stream is reserved |
supports_ack | True if the connection uses flow control |
total_acked_bytes | The amount of bytes that have been acked by the consumer when flow control is enabled |
total_bytes_sent | The amount of bytes actually sent to the consumer |
total_uncompressed_data_size | Size of data sent to the consumer, before compression. Only present if compression is enabled |
type | The connection type (producer or consumer) |
unacked_bytes | The amount of bytes the consumer has not acked |
backfill_num_active | Number of active (running) backfills |
backfill_num_snoozing | Number of snoozing (paused) backfills |
backfill_num_pending | Number of pending (not running) backfills |
backfill_order | Order backfills should be scheduled |
paused | True if this client is blocked |
paused_reason | Description of why client is paused |
send_stream_end_on_client_close_stream | Send STREAM_END msg when DCP client closes stream |
**** Per Stream Stats
Stat | Description |
---|---|
backfill_disk_items | The amount of items read from disk during backfill |
backfill_mem_items | The amount of items read during backfill from memory |
backfill_sent | The amount of items sent to the consumer during the backfill phase |
end_seqno | The seqno to send mutations up to |
flags | The flags supplied in the stream request |
items_ready | Whether the stream has items ready to send |
last_sent_seqno | The last seqno sent by this stream |
last_sent_snap_end_seqno | The last snapshot end seqno sent by active stream |
last_read_seqno | The last seqno read by this stream from disk or memory |
ready_queue_memory | Memory occupied by elements in the DCP readyQ |
memory_phase | The amount of items sent during the memory phase |
opaque | The unique stream identifier |
snap_end_seqno | The last snapshot end seqno (Used if a consumer is |
resuming a stream) | |
snap_start_seqno | The last snapshot start seqno (Used if a consumer is |
resuming a stream) | |
start_seqno | The seqno to start sending mutations from |
state | The stream state (pending, backfilling, in-memory, |
takeover-send, takeover-wait, or dead) | |
vb_uuid | The vb uuid used in the stream request |
cur_snapshot_type | The type of the current snapshot being received |
cur_snapshot_start | The start seqno of the current snapshot being |
received | |
cur_snapshot_end | The end seqno of the current snapshot being received |
Aggregated dcp stats allow dcp connections to be logically grouped and aggregated together by prefixes.
For example, if all of your dcp connections started with xdcr: or replication:, you could call stats dcpagg : to request stats grouped by everything before the first : character, giving you a set for xdcr and a set for replication.
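The grouping rule above can be sketched as follows. This is a minimal illustration, not the server's implementation; the connection names and stat values are made up, and only the count and items_sent aggregates are shown.

```python
from collections import defaultdict

def aggregate_dcp_stats(conn_stats, separator=":"):
    """Group per-connection stats by everything before the first separator.

    conn_stats: {connection_name: {stat_name: numeric_value}} (illustrative shape).
    """
    groups = defaultdict(lambda: defaultdict(int))
    for name, stats in conn_stats.items():
        prefix = name.split(separator, 1)[0]   # e.g. "xdcr:conn1" -> "xdcr"
        groups[prefix]["count"] += 1           # [prefix]:count
        for stat, value in stats.items():
            groups[prefix][stat] += value      # e.g. [prefix]:items_sent
    return groups

# Hypothetical connections: two xdcr streams, one replication stream.
conns = {
    "xdcr:conn1": {"items_sent": 10},
    "xdcr:conn2": {"items_sent": 5},
    "replication:conn1": {"items_sent": 7},
}
agg = aggregate_dcp_stats(conns)
```

With these inputs, agg["xdcr"] holds a count of 2 and 15 items_sent, and agg["replication"] holds a count of 1 and 7 items_sent.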
[prefix]:count | Number of connections matching this prefix |
[prefix]:producer_count | Total producer connections with this prefix |
[prefix]:items_sent | Total items sent with this prefix |
[prefix]:items_remaining | Total items remaining to be sent with this |
prefix | |
[prefix]:total_bytes | Total number of bytes sent with this prefix |
[prefix]:total_uncompressed_data_size | Size of data before compression sent to the |
consumer with this prefix. Only present if | |
compression is enabled | |
[prefix]:backoff | Total number of backoff events |
ep_dcp_num_running_backfills | Total number of running backfills across all |
dcp connections | |
ep_dcp_max_running_backfills | Max running backfills we can have across all |
dcp connections | |
ep_dcp_dead_conn_count | Total dead connections |
Timing stats provide histogram data from high resolution timers over various operations within the system.
As this data is multi-dimensional, some parsing may be required for
machine processing. It’s somewhat human readable, but the stats
script mentioned in the Getting Started section above will do fancier
formatting for you.
Consider the following sample stats:
STAT disk_insert_8,16 9488
STAT disk_insert_16,32 290
STAT disk_insert_32,64 73
STAT disk_insert_64,128 86
STAT disk_insert_128,256 48
STAT disk_insert_256,512 2
STAT disk_insert_512,1024 12
STAT disk_insert_1024,2048 1
This tells you that disk_insert
took 8-16µs 9,488 times, 16-32µs
290 times, and so on.
The same stats displayed through the stats
CLI tool would look like
this:
disk_insert (10008 total)
   8us - 16us    : ( 94.80%) 9488 ###########################################
   16us - 32us   : ( 97.70%)  290 #
   32us - 64us   : ( 98.43%)   73
   64us - 128us  : ( 99.29%)   86
   128us - 256us : ( 99.77%)   48
   256us - 512us : ( 99.79%)    2
   512us - 1ms   : ( 99.91%)   12
   1ms - 2ms     : ( 99.92%)    1
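A small sketch of how such timing lines can be machine-processed: parse the "STAT name_lo,hi count" rows into buckets and compute the cumulative percentages the CLI tool displays. This is an illustrative parser for the textual format shown above, not the official stats script.

```python
def parse_timing_stats(lines):
    """Parse 'STAT disk_insert_8,16 9488' style lines into (lo, hi, count)."""
    buckets = []
    for line in lines:
        _, name, count = line.split()
        # Bucket bounds are the part after the last underscore, e.g. "8,16".
        lo, hi = name.rsplit("_", 1)[1].split(",")
        buckets.append((int(lo), int(hi), int(count)))
    return buckets

lines = ["STAT disk_insert_8,16 9488", "STAT disk_insert_16,32 290"]
buckets = parse_timing_stats(lines)

# Cumulative percentage, as in the CLI tool's left-hand column.
total = sum(c for _, _, c in buckets)
cumulative = 0
for lo, hi, count in buckets:
    cumulative += count
    pct = 100.0 * cumulative / total
```

Here the first bucket means the operation took 8-16µs 9,488 times, matching the sample.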
The following histograms are available from “timings” in the above form to describe when time was spent doing various things:
bg_wait | bg fetches waiting in the dispatcher queue |
bg_load | bg fetches waiting for disk |
set_with_meta | set_with_meta latencies |
access_scanner | access scanner run times |
checkpoint_remover | checkpoint remover run times |
item_pager | item pager run times |
expiry_pager | expiry pager run times |
pending_ops | client connections blocked for operations |
in pending vbuckets | |
storage_age | Analogous to ep_storage_age in main stats |
data_age | Analogous to ep_data_age in main stats |
get_cmd | servicing get requests |
arith_cmd | servicing incr/decr requests |
get_stats_cmd | servicing get_stats requests |
get_vb_cmd | servicing vbucket status requests |
set_vb_cmd | servicing vbucket set state commands |
del_vb_cmd | servicing vbucket deletion commands |
chk_persistence_cmd | waiting for checkpoint persistence |
notify_io | waking blocked connections |
paged_out_time | time (in seconds) objects are non-resident |
disk_insert | waiting for disk to store a new item |
disk_update | waiting for disk to modify an existing item |
disk_del | waiting for disk to delete an item |
disk_vb_del | waiting for disk to delete a vbucket |
disk_commit | waiting for a commit after a batch of updates |
item_alloc_sizes | Item allocation size counters (in bytes) |
bg_batch_size | Batch size for background fetches |
persistence_cursor_get_all_items | Time spent in fetching all items by |
persistence cursor from checkpoint queues | |
dcp_cursors_get_all_items | Time spent in fetching all items by all dcp |
cursors from checkpoint queues | |
sync_write_commit_majority | Commit duration for level=majority SyncWrites |
sync_write_commit_majority_and_persist_on_master | Commit duration for level=majorityPersistActive SyncWrites |
sync_write_commit_persist_to_majority | Commit duration for level=persistMajority SyncWrites |
The following histograms are available from “eviction” and describe execution frequencies and eviction thresholds. Note that these statistics are only valid for the hifi_mfu eviction policy.
ep_active_or_pending_frequency_values_evicted | Probabilistic count of frequencies |
that were evicted | |
ep_replica_frequency_values_evicted | Probabilistic count of frequencies |
that were evicted | |
ep_active_or_pending_frequency_values_snapshot | Snapshot of last frequency histogram |
ep_replica_frequency_values_snapshot | Snapshot of last frequency histogram |
The following histograms are available from “scheduler” and “runtimes” describing the scheduling overhead times and task runtimes incurred by various IO and Non-IO tasks respectively:
READ tasks | |
bg_fetcher_tasks | histogram of scheduling overhead/task |
runtimes for background fetch tasks | |
bg_fetcher_meta_tasks | histogram of scheduling overhead/task |
runtimes for background fetch meta tasks | |
vkey_stat_bg_fetcher_tasks | histogram of scheduling overhead/task |
runtimes for fetching item from disk for | |
vkey stat tasks | |
warmup_tasks | histogram of scheduling overhead/task |
runtimes for warmup tasks | |
WRITE tasks | |
vbucket_persist_high_tasks | histogram of scheduling overhead/task |
runtimes for snapshot vbucket state in | |
high priority tasks | |
vbucket_persist_low_tasks | histogram of scheduling overhead/task |
runtimes for snapshot vbucket state in | |
low priority tasks | |
vbucket_deletion_tasks | histogram of scheduling overhead/task |
runtimes for vbucket deletion tasks | |
flusher_tasks | histogram of scheduling overhead/task |
runtimes for flusher tasks | |
flush_all_tasks | histogram of scheduling overhead/task |
runtimes for flush all tasks | |
compactor_tasks | histogram of scheduling overhead/task |
runtimes for vbucket level compaction | |
tasks | |
statsnap_tasks | histogram of scheduling overhead/task |
runtimes for stats snapshot tasks | |
mutation_log_compactor_tasks | histogram of scheduling overhead/task |
runtimes for access log compaction tasks | |
AUXIO tasks | |
access_scanner_tasks | histogram of scheduling overhead/task |
runtimes for access scanner tasks | |
backfill_tasks | histogram of scheduling overhead/task |
runtimes for backfill tasks | |
NONIO tasks | |
conn_notification_tasks | histogram of scheduling overhead/task |
runtimes for connection notification | |
tasks | |
checkpoint_remover_tasks | histogram of scheduling overhead/task |
runtimes for checkpoint removal tasks | |
vb_memory_deletion_tasks | histogram of scheduling overhead/task |
runtimes for memory deletion of vbucket | |
tasks | |
checkpoint_stats_tasks | histogram of scheduling overhead/task |
runtimes for checkpoint stats tasks | |
item_pager_tasks | histogram of scheduling overhead/task |
runtimes for item pager tasks | |
hashtable_resize_tasks | histogram of scheduling overhead/task |
runtimes for hash table resizer tasks | |
pending_ops_tasks | histogram of scheduling overhead/task |
runtimes for processing dcp buffered | |
items tasks | |
conn_manager_tasks | histogram of scheduling overhead/task |
runtimes for dcp/tap connection manager | |
tasks | |
defragmenter_tasks | histogram of scheduling overhead/task |
runtimes for the in-memory defragmenter | |
tasks | |
workload_monitor_tasks | histogram of scheduling overhead/task |
runtimes for the workload monitor which | |
detects and sets the workload pattern |
Hash stats provide information on your vbucket hash tables.
Requesting these stats does affect performance, so don’t do it too
regularly, but it’s useful for debugging certain types of performance
issues. For example, if your hash table is tuned to have too few
buckets for the data load within it, the max_depth
will be too large
and performance will suffer.
avg_count | The average number of items per vbucket |
avg_max | The average max depth of a vbucket hash table |
avg_min | The average min depth of a vbucket hash table |
largest_max | The largest hash table depth in all vbuckets |
largest_min | The largest minimum hash table depth of all vbuckets |
max_count | The largest number of items in a vbucket |
min_count | The smallest number of items in a vbucket |
total_counts | The total number of items in all vbuckets |
It is also possible to get more detailed hash tables stats by using ‘hash detail’. This will print per-vbucket stats.
Each stat is prefixed with vb_
followed by a number, a colon, then
the individual stat name.
For example, the stat representing the size of the hash table for
vbucket 0 is vb_0:size
.
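The per-vbucket key scheme can be unpacked with a few lines of code. This is a hypothetical helper for the textual stat keys described above; the sample values (size, counted) are invented for illustration.

```python
def parse_vb_stat_key(key):
    """Split 'vb_0:size' into (vbucket id, stat name)."""
    vb_part, stat = key.split(":", 1)
    assert vb_part.startswith("vb_")
    return int(vb_part[3:]), stat

# Hypothetical 'hash detail' output, regrouped per vbucket.
stats = {"vb_0:size": 3079, "vb_0:counted": 12000, "vb_1:size": 3079}
per_vb = {}
for key, value in stats.items():
    vbid, stat = parse_vb_stat_key(key)
    per_vb.setdefault(vbid, {})[stat] = value

# Average chain depth: items found per hash bucket. A value far above 1
# suggests the table has too few buckets for its load.
avg_depth = per_vb[0]["counted"] / per_vb[0]["size"]
```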
state | The current state of this vbucket |
size | Number of hash buckets |
locks | Number of locks covering hash table operations |
min_depth | Minimum number of items found in a bucket |
max_depth | Maximum number of items found in a bucket |
reported | Number of items this hash table reports having |
counted | Number of items found while walking the table |
resized | Number of times the hash table resized |
mem_size | Running sum of memory used by each item |
mem_size_counted | Counted sum of current memory used by each item |
Checkpoint stats provide detailed information on the per-vbucket checkpoint data structure.
Like hash stats, requesting these stats has some impact on performance,
so please do not poll them from the server frequently.
Each stat is prefixed with vb_
followed by a number, a colon, and then
each stat name.
cursor_name:cursor_checkpoint_id | Checkpoint ID at which the cursor |
named ‘cursor_name’ is pointing now | |
cursor_name:cursor_distance | The distance of cursor from checkpoint |
begin | |
cursor_name:cursor_seqno | The seqno at which the cursor |
‘cursor_name’ is pointing now | |
cursor_name:cursor_op | The type of operation of the item pointed |
by cursor | |
cursor_name:num_visits | Number of times a batch of items have been |
drained from a checkpoint of ‘cursor_name’ | |
cursor_name:num_items_for_cursor | Number of items remaining for the cursor |
open_checkpoint_id | ID of the current open checkpoint |
num_conn_cursors | Number of referencing dcp/tap cursors |
num_checkpoint_items | Number of total items in a checkpoint |
datastructure | |
num_open_checkpoint_items | Number of items in the open checkpoint |
(empty item excluded) | |
num_checkpoints | Number of all checkpoints in the bucket, |
including all Vbuckets/CMs/Destroyers | |
num_checkpoints_pending_destruction | Number of checkpoints detached from CMs |
and owned by Destroyers | |
state | The state of the vbucket this checkpoint |
contains data for | |
persisted_checkpoint_id | The last persisted checkpoint number |
mem_usage | Total memory taken up by items in all |
checkpoints under given manager |
Additionally, each Checkpoint generates the following stats, which are prefixed with the vbucket and the id of the Checkpoint, e.g. “vb_0:id_52:state”
state | Checkpoint open or closed |
type | Type of checkpoint, disk or memory |
key_index_allocator_bytes | The number of bytes currently allocated to |
the key index(s) as returned by the | |
underlying std::allocator implementation, | |
including keys. | |
to_write_allocator_bytes | The number of bytes currently allocated to |
the toWrite queue as returned by the | |
underlying std::allocator implementation | |
mem_usage_queued_items | Size of all items queued in checkpoints, |
computed by checkpoint counters | |
mem_usage_queue_overhead | Bytes consumed by the toWrite struct |
internals, computed by checkpoint counters | |
mem_usage_key_index_overhead | Bytes consumed by the key index. Accounts |
both struct internals and keys. Computed | |
by checkpoint counters | |
num_items | Number of items queued in the checkpoint |
(empty item excluded) |
This provides various memory-related stats including some stats from jemalloc.
mem_used | Engine’s total memory usage |
mem_used_estimate | Engine’s total estimated memory usage |
This is a faster stat to read, but | |
lags mem_used as it’s only updated | |
when a threshold is crossed see | |
mem_used_merge_threshold | |
mem_used_merge_threshold | A threshold which triggers the merge |
of per-core memory used into mem_used | |
bytes | Engine’s total memory usage |
ep_kv_size | Memory used to store item metadata, |
keys and values, no matter the | |
vbucket’s state. If an item’s value | |
is ejected, this stat will be | |
decremented by the size of the | |
item’s value. | |
ep_value_size | Memory used to store values for |
resident keys | |
ep_overhead | Extra memory used by transient data |
like persistence queue, replication | |
queues, checkpoints, etc | |
ep_max_size | Max amount of data allowed in memory |
ep_mem_low_wat | Low water mark for auto-evictions |
ep_mem_low_wat_percent | Low water mark (as a percentage) |
ep_mem_high_wat | High water mark for auto-evictions |
ep_mem_high_wat_percent | High water mark (as a percentage) |
ep_oom_errors | Number of times unrecoverable OOMs |
happened while processing operations | |
ep_tmp_oom_errors | Number of times temporary OOMs |
happened while processing operations | |
ep_blob_num | The number of blob objects in the |
cache | |
ep_blob_overhead | The “unused” memory caused by the |
allocator returning bigger chunks | |
than requested | |
ep_storedval_size | Memory used by storedval objects |
ep_storedval_overhead | The “unused” memory caused by the |
allocator returning bigger chunks | |
than requested | |
ep_storedval_num | The number of storedval objects |
allocated | |
ep_item_num | The number of item objects allocated |
ep_arena_memory_allocated | The total memory allocated from the |
engine’s arena (same as | |
ep_arena:allocated below) | |
ep_arena_memory_resident | The resident set size of the engine’s |
arena. |
The following stats are found by querying jemalloc; definitions of the underlying jemalloc stats can be found in the jemalloc documentation.
ep_arena:allocated: | ep_arena:small.allocated + ep_arena:large.allocated |
ep_arena:arena: | The id of the arena registered to the bucket |
ep_arena:base: | This is “stats.arenas.<i>.base” from jemalloc where <i> is the bucket’s arena |
ep_arena:fragmentation_size: | ep_arena:resident - ep_arena:allocated |
ep_arena:internal: | This is “stats.arenas.<i>.internal” from jemalloc where <i> is the bucket’s arena |
ep_arena:large.allocated: | This is “stats.arenas.<i>.large.allocated” from jemalloc where <i> is the bucket’s arena |
ep_arena:mapped: | This is “stats.arenas.<i>.mapped” from jemalloc where <i> is the bucket’s arena |
ep_arena:resident: | This is “stats.arenas.<i>.resident” from jemalloc where <i> is the bucket’s arena |
ep_arena:retained: | This is “stats.arenas.<i>.retained” from jemalloc where <i> is the bucket’s arena |
ep_arena:small.allocated: | This is “stats.arenas.<i>.small.allocated” from jemalloc where <i> is the bucket’s arena |
ep_arena_global:allocated: | ep_arena_global:small.allocated + ep_arena_global:large.allocated |
ep_arena_global:arena: | The id of the arena used for global (non bucket) allocations. |
ep_arena_global:base: | See “ep_arena:” entry, this is the stat query but for the ‘global’ arena. |
ep_arena_global:fragmentation_size: | ep_arena_global:resident - ep_arena_global:allocated |
ep_arena_global:internal: | See “ep_arena:” entry, this is the stat query but for the ‘global’ arena. |
ep_arena_global:large.allocated: | See “ep_arena:” entry, this is the stat query but for the ‘global’ arena. |
ep_arena_global:mapped: | See “ep_arena:” entry, this is the stat query but for the ‘global’ arena. |
ep_arena_global:resident: | See “ep_arena:” entry, this is the stat query but for the ‘global’ arena. |
ep_arena_global:retained: | See “ep_arena:” entry, this is the stat query but for the ‘global’ arena. |
ep_arena_global:small.allocated: | See “ep_arena:” entry, this is the stat query but for the ‘global’ arena. |
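The derived entries in the table above follow directly from the jemalloc values: allocated is the sum of small and large allocation totals, and fragmentation_size is resident minus allocated. A minimal sketch, using invented byte counts:

```python
def derived_arena_stats(small_allocated, large_allocated, resident):
    """Compute the derived ep_arena stats described above."""
    allocated = small_allocated + large_allocated          # ep_arena:allocated
    fragmentation_size = resident - allocated              # ep_arena:fragmentation_size
    # Fragmentation as a fraction of resident memory (not an ep_engine stat,
    # just a convenient derived ratio for monitoring).
    fragmentation_ratio = fragmentation_size / resident if resident else 0.0
    return allocated, fragmentation_size, fragmentation_ratio

# Hypothetical values: 100 bytes small, 400 bytes large, 600 bytes resident.
allocated, frag, ratio = derived_arena_stats(100, 400, 600)
```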
key_cas | The keys current cas value | KV |
key_exptime | Expiration time from the epoch | KV |
key_flags | Flags for this key | KV |
key_is_dirty | If the value is not yet persisted | KV |
key_is_resident | If the value is resident in memory | KV |
key_valid | See description below | V |
key_vb_state | The vbucket state of this key | KV |
All of the above numeric statistics (cas, exptime, flags) are printed as decimal integers.
key_valid can have the following responses:
this_is_a_bug - Some case we didn’t take care of.
dirty - The value in memory has not been persisted yet.
length_mismatch - The key length in memory doesn’t match the length on disk.
data_mismatch - The data in memory doesn’t match the data on disk.
flags_mismatch - The flags in memory don’t match the flags on disk.
valid - The key is both on disk and in memory.
ram_but_not_disk - The value doesn’t exist yet on disk.
item_deleted - The item has been deleted.
Stats warmup
shows statistics related to warmup logic
ep_warmup | Shows if warmup is enabled / disabled |
ep_warmup_estimated_key_count | Estimated number of keys in database |
ep_warmup_estimated_value_count | Estimated number of values in database |
ep_warmup_state | The current state of the warmup thread |
ep_warmup_thread | Warmup thread status |
ep_warmup_key_count | Number of keys warmed up |
ep_warmup_value_count | Number of values warmed up |
ep_warmup_dups | Duplicates encountered during warmup |
ep_warmup_oom | OOMs encountered during warmup |
ep_warmup_time | Time (µs) spent by warming data |
ep_warmup_keys_time | Time (µs) spent by warming keys |
ep_warmup_mutation_log | Number of keys present in mutation log |
ep_warmup_access_log | Number of keys present in access log |
ep_warmup_min_items_threshold | Percentage of total items warmed up |
before we enable traffic | |
ep_warmup_min_memory_threshold | Percentage of max mem warmed up before |
we enable traffic |
These provide various low-level stats and timings from the underlying KV storage system and are useful for understanding various states of the storage system.
The following stats are available for all database engines:
open | Number of database open operations |
close | Number of database close operations |
readTime | Time spent in read operations |
readSize | Size of data in read operations |
writeTime | Time spent in write operations |
writeSize | Size of data in write operations |
delete | Time spent in delete() calls |
The following stats are available for the CouchStore database engine:
backend_type | Type of backend database engine |
commit | Time spent in CouchStore commit operation |
compaction | Time spent in compacting vbucket database file |
numLoadedVb | Number of Vbuckets loaded into memory |
lastCommDocs | Number of docs in the last commit |
failure_set | Number of failed set operations |
failure_get | Number of failed get operations |
failure_vbset | Number of failed vbucket set operations |
save_documents | Time spent in CouchStore save documents operation |
io_bg_fetch_docs_read | Number of documents (full and meta-only) fetched from disk |
io_bg_fetch_doc_bytes | Number of bytes read while fetching documents (key + value + rev_meta) |
io_flusher_write_amplification | Number of bytes written to disk during front-end flushing, divided by the document bytes for each document saved (key + metadata + value). |
io_total_write_amplification | Number of bytes written to disk during front-end flushing and compaction, divided by the document bytes for each document saved (key + metadata + value). |
io_num_write | Number of io write operations |
io_document_write_bytes | Number of document bytes written (key + value + rev_meta) |
io_total_read_bytes | Number of bytes read (total, including Couchstore B-Tree and other overheads) |
io_total_write_bytes | Number of bytes written (total, including Couchstore B-Tree and other overheads) |
io_compaction_read_bytes | Number of bytes read (compaction only, includes Couchstore B-Tree and other overheads) |
io_compaction_write_bytes | Number of bytes written (compaction only, includes Couchstore B-Tree and other overheads) |
block_cache_hits | Number of block cache hits in buffer cache provided by underlying store |
block_cache_misses | Number of block cache misses in buffer cache provided by underlying store |
getMultiFsReadCount | Number of filesystem read()s per getMulti() request |
getMultiFsReadPerDocCount | Number of filesystem read()s per getMulti() request, divided by the number of documents fetched; gives an average read() count per fetched document |
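The ratio stats in this table are simple quotients of the byte and call counters around them. A sketch with invented counter values (not real server output):

```python
def flusher_write_amplification(disk_bytes_written, document_bytes):
    """io_flusher_write_amplification: disk bytes written during flushing
    divided by the logical document bytes (key + metadata + value) saved."""
    return disk_bytes_written / document_bytes

def reads_per_document(fs_read_count, docs_fetched):
    """getMultiFsReadPerDocCount: filesystem read() calls per getMulti
    request divided by the number of documents fetched."""
    return fs_read_count / docs_fetched

# Hypothetical: 3000 bytes hit disk to persist 1000 document bytes,
# and 10 read() calls fetched 4 documents.
amp = flusher_write_amplification(3000, 1000)
per_doc = reads_per_document(10, 4)
```

A write amplification of 3.0 here would mean every logical document byte cost three bytes of disk writes, reflecting B-Tree and other storage overheads.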
KV Store Timing stats provide timing information from the underlying storage system. These stats are at the shard (group of partitions) level.
The following histograms are available from “kvtimings” in the form described in the Timings section above. These stats are prefixed with rw_<Shard number>:, indicating the time spent doing various things:
commit | time spent in commit operations |
compact | time spent in file compaction operations |
snapshot | time spent in VB state snapshot operations |
delete | time spent in delete operations |
save_documents | time spent in persisting documents in storage |
readTime | Time spent in read operations, measured from when the read was initially requested (bgFetch queued), until when the KVStore completes the read of that document. |
readSize | Size of data in read operations |
writeTime | time spent in writing to storage subsystem |
writeSize | sizes of writes given to storage subsystem |
saveDocCount | batch sizes of the save documents calls |
fsReadTime | time spent in doing filesystem reads |
fsWriteTime | time spent in doing filesystem writes |
fsSyncTime | time spent in doing filesystem sync operations |
fsReadSize | sizes of various filesystem reads issued |
fsWriteSize | sizes of various filesystem writes issued |
fsReadSeek | values of various seek operations in file |
flusherWriteAmplificationRatio | Write Amplification per saveDocs batch |
Information about the number of shards and the Executor pool is available as “workload” stats:
ep_workload:num_shards | number of shards or groups of partitions |
ep_workload:num_writers | number of threads that prioritize write ops |
ep_workload:num_readers | number of threads that prioritize read ops |
ep_workload:num_auxio | number of threads that prioritize aux io ops |
ep_workload:num_nonio | number of threads that prioritize non io ops |
ep_workload:num_sleepers | number of threads that are sleeping |
ep_workload:ready_tasks | number of global tasks that are ready to run |
Additionally, the following stats on the current state of the TaskQueues are also presented:
HiPrioQ_Writer:InQsize | count high priority bucket writer tasks waiting |
HiPrioQ_Writer:OutQsize | count high priority bucket writer tasks runnable |
HiPrioQ_Reader:InQsize | count high priority bucket reader tasks waiting |
HiPrioQ_Reader:OutQsize | count high priority bucket reader tasks runnable |
HiPrioQ_AuxIO:InQsize | count high priority bucket auxio tasks waiting |
HiPrioQ_AuxIO:OutQsize | count high priority bucket auxio tasks runnable |
HiPrioQ_NonIO:InQsize | count high priority bucket nonio tasks waiting |
HiPrioQ_NonIO:OutQsize | count high priority bucket nonio tasks runnable |
LowPrioQ_Writer:InQsize | count low priority bucket writer tasks waiting |
LowPrioQ_Writer:OutQsize | count low priority bucket writer tasks runnable |
LowPrioQ_Reader:InQsize | count low priority bucket reader tasks waiting |
LowPrioQ_Reader:OutQsize | count low priority bucket reader tasks runnable |
LowPrioQ_AuxIO:InQsize | count low priority bucket auxio tasks waiting |
LowPrioQ_AuxIO:OutQsize | count low priority bucket auxio tasks runnable |
LowPrioQ_NonIO:InQsize | count low priority bucket nonio tasks waiting |
LowPrioQ_NonIO:OutQsize | count low priority bucket nonio tasks runnable |
This provides the stats from the AUX dispatcher and non-IO dispatcher, and from all the reader and writer threads running for the specific bucket. Along with these stats, the job logs for each of the dispatchers and worker threads are also made available.
The following stats are available for the workers and dispatchers:
state | Thread’s current status: running, sleeping, etc. |
runtime | The amount of time since the thread started running |
task | The activity/job the thread is involved with at the moment |
The following stats are for individual job logs:
starttime | The timestamp when the job started |
runtime | Time it took for the job to run |
task | The activity/job the thread ran during that time |
Values for scopes and collections are available from a number of keys. The entire set of scopes/collections, or an individual scope or collection, can be interrogated using its name or id.
Available keys:
- Stats for all scopes or a single scope (using scope name as a key)
- Stats for all collections or a single collection (using collection name as a key)
- Stats for a single scope using the id as a key
- Stats for a single collection using the id as a key
Further details are available at vbucket granularity; an individual vbucket view is an optional argument.
“collections” and “collections-byid” return the following statistics. Most keys returned are prefixed with the scope-id and collection-id encoded as 0x-prefixed hexadecimal. For brevity, ‘sid’ and ‘cid’ are used for scope-id and collection-id.
sid:cid:disk_size | Approximate disk-usage of the collection. Note the sum of all collection disk-sizes does not equal the bucket disk usage |
sid:cid:items | Number of items stored in the collection. |
sid:cid:maxTTL | The Time-To-Live value for the collection, omitted if none defined. |
sid:cid:mem_used | Approximate memory-usage of the collection. Note the sum of all collection mem_used does not equal the bucket mem_used. |
sid:cid:name | The collection’s name. |
sid:cid:ops_delete | The number of delete operations performed against the collection. |
sid:cid:ops_get | The number of get operations performed against the collection. |
sid:cid:ops_store | The number of storage operations performed against the collection. |
sid:cid:scope_name | The name of the collection’s scope. |
manifest_uid | The uid of the last manifest accepted from the cluster, only returned when all collections are requested (no name or id provided) |
Note for disk-size and upgrade: an upgrade to ‘cheshire-cat’ means all existing data becomes owned by the _default collection; if the upgrade was off-line, the disk-size is initialised to the total disk used by the bucket.
“collections-details” returns vbucket collection data, an optional vbucket (decimal value) allows a single vbucket to be inspected. Keys returned are prefixed with the vbucket ID as “vb_x”, where x is a decimal value and may also include the collection-id encoded as a 0x hexadecimal value (cid used in table).
vb_x:cid:high_seqno | The high-seqno of the collection. |
vb_x:cid:items | The number of items the collection stores in this vbucket. |
vb_x:cid:persisted_high_seqno | The highest persisted seqno. |
vb_x:cid:scope | The collection’s scope (as an 0x id). |
vb_x:cid:maxTTL | The Time-To-Live value for the collection, omitted if none defined. |
vb_x:cid:start_seqno | The start seqno of the collection, the seqno when it was created. |
vb_x:collections | The number of collections the vbucket knows about. |
vb_x:manifest_uid | The id of the manifest last used to update the vbucket. |
“scopes” and “scopes-byid” return the following statistics. Stats related to the scope only are prefixed with the scope-id as a 0x-prefixed hexadecimal value; for collections within a scope they are prefixed with the scope-id and collection-id as 0x-prefixed hexadecimal values. For brevity, ‘sid’ and ‘cid’ are used for scope-id and collection-id.
When a specific scope is selected, each collection within the scope is returned. When the scope (no argument) key is used only the names of the collections in each scope are returned. The sid:cid stats returned within the scopes view are the same values (and definitions) as the keys/value returned from “collections” and “collections-byid”.
sid:cid:name | The name of a collection in the scope, multiple names may be returned. |
sid:collections | The count of collections in the scope. |
sid:disk_size | The sum of all collection ‘disk_size’. |
sid:items | The sum of all collection ‘items’. |
sid:mem_used | The sum of all collection ‘mem_used’. |
sid:name | The name of the scope. |
sid:ops_delete | The sum of all collection ‘ops_delete’. |
sid:ops_get | The sum of all collection ‘ops_get’. |
sid:ops_store | The sum of all collection ‘ops_store’. |
manifest_uid | The uid of the last manifest accepted from the cluster, only returned when all scopes are requested (no name or id provided) |
“scopes-details” returns vbucket scope data, an optional vbucket (decimal value) allows a single vbucket to be inspected. Keys returned are prefixed with the vbucket ID as “vb_x”, where x is a decimal value and may also include the scope/collection-id encoded as a 0x hexadecimal value (sid/cid used in table).
vb_x:scopes | The number of scopes. |
vb_x:sid | All of the known scope-ids returned, the value is the index position from the internal container |
vb_x:sid:cid:items | The item count of a collection, repeated for all collections. |
vb_x:manifest_uid | The id of the manifest last used to update the vbucket. |
Resets the list of stats below.
Reset Stats:
ep_bg_load |
ep_bg_wait |
ep_bg_max_load |
ep_bg_min_load |
ep_bg_max_wait |
ep_bg_min_wait |
ep_commit_time |
ep_flush_duration |
ep_flush_duration_highwat |
ep_io_bg_fetch_docs_read |
ep_io_num_write |
ep_io_bg_fetch_doc_bytes |
ep_io_write_bytes |
ep_items_expelled_from_checkpoints |
ep_items_rm_from_checkpoints |
ep_num_eject_failures |
ep_num_pager_runs |
ep_num_not_my_vbuckets |
ep_num_value_ejects |
ep_pending_ops_max |
ep_pending_ops_max_duration |
ep_pending_ops_total |
ep_vbucket_del_max_walltime |
pending_ops |
Reset Histograms:
bg_load |
bg_wait |
chk_persistence_cmd |
data_age |
del_vb_cmd |
disk_insert |
disk_update |
disk_del |
disk_vb_del |
disk_commit |
get_stats_cmd |
item_alloc_sizes |
get_vb_cmd |
notify_io |
pending_ops |
persistence_cursor_get_all_items |
dcp_cursors_get_all_items |
set_vb_cmd |
storage_age |
ep_active_or_pending_frequency_values_evicted |
ep_replica_frequency_values_evicted |
ep_active_or_pending_frequency_values_snapshot |
ep_replica_frequency_values_snapshot |
The difference between ep_storage_age
and ep_data_age
is somewhat subtle, but it becomes clearer when you consider that a
given record may be updated multiple times before hitting persistence.
ep_data_age
is how old the data we actually wrote is.
ep_storage_age
is how long the object has been waiting to be
persisted.
Opening the data store is broken into three distinct phases:
During the initialization phase, the server is not accepting connections or otherwise functional. This is often quick, but in a server crash can take some time to perform recovery of the underlying storage.
This time is made available via the ep_dbinit
stat.
After initialization, warmup begins. At this point, the server is capable of taking new writes and responding to reads. However, only records that have been pulled out of the storage or have been updated from other clients will be available for request.
(note that records read from persistence will not overwrite new records captured from the network)
During this phase, ep_warmup_thread
will report running
and
ep_warmed_up
will be increasing as records are being read.
Once complete, ep_warmed_up
will stop increasing and
ep_warmup_thread
will report complete
.
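The warmup progression above lends itself to a simple polling loop: watch ep_warmup_thread until it reports complete. This is a hypothetical sketch; get_stats stands in for whatever client call fetches the “warmup” stat group in your environment.

```python
import time

def wait_for_warmup(get_stats, timeout_s=300.0, interval_s=1.0):
    """Poll the 'warmup' stat group until ep_warmup_thread is 'complete'.

    get_stats(group) -> dict of stat name to string value (assumed shape).
    Returns True once warmup completes, False on timeout.
    """
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        stats = get_stats("warmup")
        if stats.get("ep_warmup_thread") == "complete":
            return True
        # Still warming up: ep_warmed_up would be increasing here.
        time.sleep(interval_s)
    return False
```

During the loop, ep_warmed_up can be read from the same stat group to report progress.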
The uuid stat allows clients to check the unique identifier created and assigned to the bucket when it was created. By looking at this, a client can verify that the bucket hasn’t been recreated since it was last used.