*: add debugging metrics

xiang90 committed Apr 26, 2016
1 parent 7161eee commit 6764509

Showing 7 changed files with 98 additions and 99 deletions.
131 changes: 65 additions & 66 deletions Documentation/metrics.md
@@ -1,46 +1,18 @@
# Metrics

etcd uses [Prometheus][prometheus] for metrics reporting. The metrics can be used for real-time monitoring and debugging. etcd does not persist its metrics; if a member restarts, the metrics will be reset.

The simplest way to see the available metrics is to cURL the metrics endpoint `/metrics`. The format is described [here](http://prometheus.io/docs/instrumenting/exposition_formats/).
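
The endpoint can also be read programmatically. Below is a minimal Go sketch, assuming an etcd member is listening on the default client address `http://127.0.0.1:2379`:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Fetch the Prometheus exposition-format text from a local etcd member.
	resp, err := http.Get("http://127.0.0.1:2379/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", body)
}
```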

Follow the [Prometheus getting started doc][prometheus-getting-started] to spin up a Prometheus server to collect etcd metrics.

The naming of metrics follows the suggested [Prometheus best practices][prometheus-naming]. A metric name has an `etcd` or `etcd_debugging` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).
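
For illustration, here is how the convention maps onto the Prometheus Go client; the counter below mirrors `proposal_failed_total` from the tables later in this document:

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// The fully qualified metric name is namespace + "_" + subsystem + "_" + name,
// so this counter is exposed as "etcd_debugging_server_proposal_failed_total".
var proposalFailed = prometheus.NewCounter(prometheus.CounterOpts{
	Namespace: "etcd_debugging",
	Subsystem: "server",
	Name:      "proposal_failed_total",
	Help:      "The total number of failed proposals.",
})

func main() {
	prometheus.MustRegister(proposalFailed)
}
```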

## etcd namespace metrics

The metrics under the `etcd` prefix are for monitoring and alerting. They are stable, high-level metrics. Any change to these metrics will be noted in the release notes.

### http requests

These metrics describe the requests (non-watch events) served by etcd members in non-proxy mode: total
incoming requests, request failures, and processing latency (including raft rounds for storage). They are useful for tracking
@@ -71,38 +43,9 @@

Show the 0.90-quantile latency (in seconds) of read/write (respectively) event handling across all members, with a window of `5m`.

### proxy

etcd members operating in proxy mode do not directly perform store operations. They forward all requests to cluster instances.

Tracking the rate of requests coming from a proxy allows one to pin down which machine is performing most reads/writes.

@@ -128,7 +71,63 @@

Number of failed requests on the proxy. This should be 0; spikes here indicate connectivity issues to the etcd cluster.

## etcd_debugging namespace metrics

The metrics under the `etcd_debugging` prefix are for debugging. They are implementation-dependent and volatile, and might be changed or removed without warning in new etcd releases. Some of these metrics may be moved to the `etcd` prefix once they become more stable.

### etcdserver

| Name | Description | Type |
|-----------------------------------------|--------------------------------------------------|-----------|
| file_descriptors_used_total | The total number of file descriptors used | Gauge |
| proposal_durations_seconds | The latency distributions of committing proposal | Histogram |
| pending_proposal_total | The total number of pending proposals | Gauge |
| proposal_failed_total | The total number of failed proposals | Counter |

Heavy file descriptor (`file_descriptors_used_total`) usage (i.e., near the process's file descriptor limit) indicates a potential file descriptor exhaustion issue. If the file descriptors are exhausted, etcd may panic because it cannot create new WAL files.

[Proposal][glossary-proposal] durations (`proposal_durations_seconds`) provide a histogram of proposal commit latency. The reported latency reflects network and disk IO delays in etcd.
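
As a hedged sketch of how such a histogram is fed (mirroring the definition in `etcdserver/metrics.go` shown later in this commit; the `observeProposal` helper is hypothetical):

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var proposeDurations = prometheus.NewHistogram(prometheus.HistogramOpts{
	Namespace: "etcd_debugging",
	Subsystem: "server",
	Name:      "proposal_durations_seconds",
	Help:      "The latency distributions of committing proposal.",
	// 14 exponential buckets from 1ms up to ~8.2s.
	Buckets: prometheus.ExponentialBuckets(0.001, 2, 14),
})

// observeProposal records one proposal's commit latency in seconds.
func observeProposal(start time.Time) {
	proposeDurations.Observe(time.Since(start).Seconds())
}

func main() {
	prometheus.MustRegister(proposeDurations)
	start := time.Now()
	// ... a proposal would be committed here ...
	observeProposal(start)
}
```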

The pending proposal count (`pending_proposal_total`) indicates how many proposals are queued for commit. A rising count suggests high client load or an unstable cluster.

Failed proposals (`proposal_failed_total`) are normally related to two issues: temporary failures around a leader election, or longer downtime caused by a loss of quorum in the cluster.

### wal

| Name | Description | Type |
|------------------------------------|--------------------------------------------------|-----------|
| fsync_durations_seconds | The latency distributions of fsync called by wal | Histogram |
| last_index_saved | The index of the last entry saved by wal | Gauge |

Abnormally high fsync duration (`fsync_durations_seconds`) indicates disk issues and might cause the cluster to be unstable.

### snapshot

| Name | Description | Type |
|--------------------------------------------|------------------------------------------------------------|-----------|
| snapshot_save_total_durations_seconds | The total latency distributions of save called by snapshot | Histogram |

Abnormally high snapshot duration (`snapshot_save_total_durations_seconds`) indicates disk issues and might cause the cluster to be unstable.

### rafthttp

| Name | Description | Type | Labels |
|-----------------------------------|--------------------------------------------|--------------|--------------------------------|
| message_sent_latency_seconds | The latency distributions of messages sent | HistogramVec | sendingType, msgType, remoteID |
| message_sent_failed_total          | The total number of failed messages sent    | CounterVec   | sendingType, msgType, remoteID |


Abnormally high message duration (`message_sent_latency_seconds`) indicates network issues and might cause the cluster to be unstable.

An increase in message failures (`message_sent_failed_total`) indicates more severe network issues and might cause the cluster to be unstable.

Label `sendingType` is the connection type used to send messages. `message`, `msgapp` and `msgappv2` use HTTP streaming, while `pipeline` issues an HTTP request for each message.

Label `msgType` is the type of raft message. `MsgApp` carries log replication messages; `MsgSnap` carries snapshot install messages; `MsgProp` carries forwarded proposals; the others maintain internal raft status. With a large snapshot, a long `MsgSnap` sending latency is expected. For other message types, given enough network bandwidth, latency should be low and comparable to ping latency.

Label `remoteID` is the member ID of the message destination.
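
To make the label scheme concrete, here is a sketch of recording one send with the Prometheus Go client; the label values are illustrative, not taken from a real cluster:

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var msgSentDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Namespace: "etcd_debugging",
		Subsystem: "rafthttp",
		Name:      "message_sent_latency_seconds",
		Help:      "message sent latency distributions.",
	},
	[]string{"sendingType", "msgType", "remoteID"},
)

func main() {
	prometheus.MustRegister(msgSentDuration)

	start := time.Now()
	// ... send a MsgApp over a "message" stream to a peer ...
	msgSentDuration.WithLabelValues("message", "MsgApp", "8211f1d0f64f3269").
		Observe(time.Since(start).Seconds())
}
```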

[glossary-proposal]: glossary.md#proposal
[prometheus]: http://prometheus.io/
[prometheus-getting-started]: http://prometheus.io/docs/introduction/getting_started/
[prometheus-naming]: http://prometheus.io/docs/practices/naming/
8 changes: 4 additions & 4 deletions etcdserver/metrics.go
@@ -24,29 +24,29 @@ import (
var (
// TODO: with label in v3?
proposeDurations = prometheus.NewHistogram(prometheus.HistogramOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "server",
Name: "proposal_durations_seconds",
Help: "The latency distributions of committing proposal.",
Buckets: prometheus.ExponentialBuckets(0.001, 2, 14),
})
proposePending = prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "server",
Name: "pending_proposal_total",
Help: "The total number of pending proposals.",
})
// This is the number of proposals that failed in the client's view.
// Such a proposal could still be committed later by raft.
proposeFailed = prometheus.NewCounter(prometheus.CounterOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "server",
Name: "proposal_failed_total",
Help: "The total number of failed proposals.",
})

fileDescriptorUsed = prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "server",
Name: "file_descriptors_used_total",
Help: "The total number of file descriptors used.",
28 changes: 14 additions & 14 deletions mvcc/metrics.go
@@ -21,87 +21,87 @@ import (
var (
rangeCounter = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "range_total",
Help: "Total number of ranges seen by this member.",
})

putCounter = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "put_total",
Help: "Total number of puts seen by this member.",
})

deleteCounter = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "delete_total",
Help: "Total number of deletes seen by this member.",
})

txnCounter = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "txn_total",
Help: "Total number of txns seen by this member.",
})

keysGauge = prometheus.NewGauge(
prometheus.GaugeOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "keys_total",
Help: "Total number of keys.",
})

watchStreamGauge = prometheus.NewGauge(
prometheus.GaugeOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "watch_stream_total",
Help: "Total number of watch streams.",
})

watcherGauge = prometheus.NewGauge(
prometheus.GaugeOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "watcher_total",
Help: "Total number of watchers.",
})

slowWatcherGauge = prometheus.NewGauge(
prometheus.GaugeOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "slow_watcher_total",
Help: "Total number of unsynced slow watchers.",
})

totalEventsCounter = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "events_total",
Help: "Total number of events sent by this member.",
})

pendingEventsGauge = prometheus.NewGauge(
prometheus.GaugeOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "pending_events_total",
Help: "Total number of pending events to be sent.",
})

indexCompactionPauseDurations = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "index_compaction_pause_duration_milliseconds",
Help: "Bucketed histogram of index compaction pause duration.",
@@ -111,7 +111,7 @@ var (

dbCompactionPauseDurations = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "db_compaction_pause_duration_milliseconds",
Help: "Bucketed histogram of db compaction pause duration.",
@@ -121,7 +121,7 @@ var (

dbCompactionTotalDurations = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "db_compaction_total_duration_milliseconds",
Help: "Bucketed histogram of db compaction total duration.",
@@ -130,7 +130,7 @@ var (
})

dbTotalSize = prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "mvcc",
Name: "db_total_size_in_bytes",
Help: "Total size of the underlying database in bytes.",
4 changes: 2 additions & 2 deletions rafthttp/metrics.go
@@ -29,7 +29,7 @@ var (
// time range than other type of messages.
msgSentDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "rafthttp",
Name: "message_sent_latency_seconds",
Help: "message sent latency distributions.",
@@ -39,7 +39,7 @@ var (
)

msgSentFailed = prometheus.NewCounterVec(prometheus.CounterOpts{
Namespace: "etcd",
Namespace: "etcd_debugging",
Subsystem: "rafthttp",
Name: "message_sent_failed_total",
Help: "The total number of failed messages sent.",
8 changes: 4 additions & 4 deletions snap/metrics.go
@@ -19,16 +19,16 @@ import "github.com/prometheus/client_golang/prometheus"
var (
// TODO: save_fsync latency?
saveDurations = prometheus.NewHistogram(prometheus.HistogramOpts{
Namespace: "etcd",
Subsystem: "snapshot",
Namespace: "etcd_debugging",
Subsystem: "snap",
Name: "save_total_durations_seconds",
Help: "The total latency distributions of save called by snapshot.",
Buckets: prometheus.ExponentialBuckets(0.001, 2, 14),
})

marshallingDurations = prometheus.NewHistogram(prometheus.HistogramOpts{
Namespace: "etcd",
Subsystem: "snapshot",
Namespace: "etcd_debugging",
Subsystem: "snap",
Name: "save_marshalling_durations_seconds",
Help: "The marshalling cost distributions of save called by snapshot.",
Buckets: prometheus.ExponentialBuckets(0.001, 2, 14),