fix: Restore sample configurations broken during initial migration #11276

Merged · 10 commits · Jun 22, 2022
45 changes: 45 additions & 0 deletions plugins/aggregators/derivative/README.md
@@ -154,6 +154,51 @@ with only occasional changes.

```toml @sample.conf
# Calculates a derivative for every field.
[[aggregators.derivative]]
## The period in which to flush the aggregator.
period = "30s"
##
## If true, the original metric will be dropped by the
## aggregator and will not get sent to the output plugins.
drop_original = false
##
## This aggregator will estimate a derivative for each field, which is
## contained in both the first and last metric of the aggregation interval.
## Without further configuration the derivative will be calculated with
## respect to the time difference between these two measurements in seconds.
## The formula applied is for every field:
##
## value_last - value_first
## derivative = --------------------------
## time_difference_in_seconds
##
## The resulting derivative will be named *fieldname_rate*. The suffix
## "_rate" can be configured by the *suffix* parameter. When using a
## derivation variable you can include its name for more clarity.
# suffix = "_rate"
##
## As an abstraction the derivative can be calculated not only by the time
## difference but by the difference of a field, which is contained in the
## measurement. This field is assumed to be monotonously increasing. This
## feature is used by specifying a *variable*.
## Make sure the specified variable is not filtered and exists in the metrics
## passed to this aggregator!
# variable = ""
##
## When using a field as the derivation parameter the name of that field will
## be used for the resulting derivative, e.g. *fieldname_by_parameter*.
##
## Note, that the calculation is based on the actual timestamp of the
## measurements. When there is only one measurement during that period, the
## measurement will be rolled over to the next period. The maximum number of
## such roll-overs can be configured with a default of 10.
# max_roll_over = 10
##
```
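
As an illustration of the time-based formula above (measurement, tag, field name, and values are made up for this sketch and not taken from the plugin documentation): with the default settings and a 30s period, a field moving from 1000 to 4000 between the first and last metric of the period yields (4000 - 1000) / 30 = 100, emitted with the default `_rate` suffix:

```text
# first and last metric seen within one 30s period (illustrative line protocol)
net,host=server01 bytes_recv=1000i 1655900000000000000
net,host=server01 bytes_recv=4000i 1655900030000000000

# aggregated output, assuming tags pass through unchanged: (4000 - 1000) / 30 = 100
net,host=server01 bytes_recv_rate=100 1655900030000000000
```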

To calculate a derivative for every field use

```toml
[[aggregators.derivative]]
## Specific Derivative Aggregator Arguments:

50 changes: 37 additions & 13 deletions plugins/aggregators/derivative/sample.conf
@@ -1,17 +1,41 @@
# Calculates a derivative for every field.
[[aggregators.derivative]]
## Specific Derivative Aggregator Arguments:

## Configure a custom derivation variable. Timestamp is used if none is given.
# variable = ""

## Suffix to add to the field name for the derivative name.
## The period in which to flush the aggregator.
period = "30s"
##
## If true, the original metric will be dropped by the
## aggregator and will not get sent to the output plugins.
drop_original = false
##
## This aggregator will estimate a derivative for each field, which is
## contained in both the first and last metric of the aggregation interval.
## Without further configuration the derivative will be calculated with
## respect to the time difference between these two measurements in seconds.
## The formula applied is for every field:
##
## value_last - value_first
## derivative = --------------------------
## time_difference_in_seconds
##
## The resulting derivative will be named *fieldname_rate*. The suffix
## "_rate" can be configured by the *suffix* parameter. When using a
## derivation variable you can include its name for more clarity.
# suffix = "_rate"

## Roll-Over last measurement to first measurement of next period
##
## As an abstraction the derivative can be calculated not only by the time
## difference but by the difference of a field, which is contained in the
## measurement. This field is assumed to be monotonously increasing. This
## feature is used by specifying a *variable*.
## Make sure the specified variable is not filtered and exists in the metrics
## passed to this aggregator!
# variable = ""
##
## When using a field as the derivation parameter the name of that field will
## be used for the resulting derivative, e.g. *fieldname_by_parameter*.
##
## Note, that the calculation is based on the actual timestamp of the
## measurements. When there is only one measurement during that period, the
## measurement will be rolled over to the next period. The maximum number of
## such roll-overs can be configured with a default of 10.
# max_roll_over = 10

## General Aggregator Arguments:

## calculate derivative every 30 seconds
period = "30s"
##
5 changes: 0 additions & 5 deletions plugins/inputs/cassandra/README.md
@@ -32,11 +32,6 @@ querying table metrics with a wildcard for the keyspace or table name.
```toml @sample.conf
# Read Cassandra metrics through Jolokia
[[inputs.cassandra]]
## DEPRECATED: The cassandra plugin has been deprecated. Please use the
## jolokia2 plugin instead.
##
## see https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2

context = "/jolokia/read"
## List of cassandra servers exposing jolokia read service
servers = ["myuser:mypassword@10.10.10.1:8778","10.10.10.2:8778",":8778"]
5 changes: 0 additions & 5 deletions plugins/inputs/cassandra/sample.conf
@@ -1,10 +1,5 @@
# Read Cassandra metrics through Jolokia
[[inputs.cassandra]]
## DEPRECATED: The cassandra plugin has been deprecated. Please use the
## jolokia2 plugin instead.
##
## see https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2

context = "/jolokia/read"
## List of cassandra servers exposing jolokia read service
servers = ["myuser:mypassword@10.10.10.1:8778","10.10.10.2:8778",":8778"]
1 change: 0 additions & 1 deletion plugins/inputs/kafka_consumer_legacy/README.md
@@ -12,7 +12,6 @@ instances of telegraf can read from the same topic in parallel.
## Configuration

```toml @sample.conf
## DEPRECATED: The 'kafka_consumer_legacy' plugin is deprecated in version 1.4.0, use 'inputs.kafka_consumer' instead, NOTE: 'kafka_consumer' only supports Kafka v0.8+.
# Read metrics from Kafka topic(s)
[[inputs.kafka_consumer_legacy]]
## topic(s) to consume
1 change: 0 additions & 1 deletion plugins/inputs/kafka_consumer_legacy/sample.conf
@@ -1,4 +1,3 @@
## DEPRECATED: The 'kafka_consumer_legacy' plugin is deprecated in version 1.4.0, use 'inputs.kafka_consumer' instead, NOTE: 'kafka_consumer' only supports Kafka v0.8+.
# Read metrics from Kafka topic(s)
[[inputs.kafka_consumer_legacy]]
## topic(s) to consume
1 change: 0 additions & 1 deletion plugins/inputs/logparser/README.md
@@ -49,7 +49,6 @@ Migration Example:
## Configuration

```toml @sample.conf
## DEPRECATED: The 'logparser' plugin is deprecated in version 1.15.0, use 'inputs.tail' with 'grok' data format instead.
# Read metrics off Arista LANZ, via socket
[[inputs.logparser]]
## Log files to parse.
1 change: 0 additions & 1 deletion plugins/inputs/logparser/sample.conf
@@ -1,4 +1,3 @@
## DEPRECATED: The 'logparser' plugin is deprecated in version 1.15.0, use 'inputs.tail' with 'grok' data format instead.
# Read metrics off Arista LANZ, via socket
[[inputs.logparser]]
## Log files to parse.
2 changes: 1 addition & 1 deletion plugins/inputs/riemann_listener/README.md
@@ -7,7 +7,7 @@ client that use riemann clients using riemann-protobuff format.

```toml @sample.conf
# Riemann protobuff listener
[[inputs.rimann_listener]]
[[inputs.riemann_listener]]
## URL to listen on
## Default is "tcp://:5555"
# service_address = "tcp://:8094"
2 changes: 1 addition & 1 deletion plugins/inputs/riemann_listener/sample.conf
@@ -1,5 +1,5 @@
# Riemann protobuff listener
[[inputs.rimann_listener]]
[[inputs.riemann_listener]]
## URL to listen on
## Default is "tcp://:5555"
# service_address = "tcp://:8094"
1 change: 0 additions & 1 deletion plugins/inputs/snmp_legacy/README.md
@@ -7,7 +7,6 @@ The SNMP input plugin gathers metrics from SNMP agents
## Configuration

```toml @sample.conf
# DEPRECATED! PLEASE USE inputs.snmp INSTEAD.
[[inputs.snmp_legacy]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
1 change: 0 additions & 1 deletion plugins/inputs/snmp_legacy/sample.conf
@@ -1,4 +1,3 @@
# DEPRECATED! PLEASE USE inputs.snmp INSTEAD.
[[inputs.snmp_legacy]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
26 changes: 15 additions & 11 deletions plugins/inputs/vsphere/README.md
@@ -19,10 +19,10 @@ Compatibility information is available from the govmomi project

## Configuration

```toml
# Read metrics from one or many vCenters
```toml @sample.conf
# Read metrics from one or many vCenters
[[inputs.vsphere]]
## List of vCenter URLs to be monitored. These three lines must be uncommented
## List of vCenter URLs to be monitored. These three lines must be uncommented
## and edited for the plugin to work.
vcenters = [ "https://vcenter.local/sdk" ]
username = "user@corp.local"
@@ -144,7 +144,7 @@ Compatibility information is available from the govmomi project
# datastore_metric_exclude = [] ## Nothing excluded by default
# datastore_instances = false ## false by default

## Datastores
## Datastores
# datastore_include = [ "/*/datastore/**"] # Inventory path to datastores to collect (by default all are collected)
# datastore_exclude = [] # Inventory paths to exclude
# datastore_metric_include = [] ## if omitted or empty, all metrics are collected
@@ -188,12 +188,6 @@ Compatibility information is available from the govmomi project
## preserve the full precision when averaging takes place.
# use_int_samples = true

## The number of vSphere 5 minute metric collection cycles to look back for non-realtime metrics. In
## some versions (6.7, 7.0 and possible more), certain metrics, such as cluster metrics, may be reported
## with a significant delay (>30min). If this happens, try increasing this number. Please note that increasing
## it too much may cause performance issues.
# metric_lookback = 3

## Custom attributes from vCenter can be very useful for queries in order to slice the
## metrics along different dimension and for forming ad-hoc relationships. They are disabled
## by default, since they can add a considerable amount of tags to the resulting metrics. To
@@ -205,19 +199,29 @@ Compatibility information is available from the govmomi project
# custom_attribute_include = []
# custom_attribute_exclude = ["*"]

## The number of vSphere 5 minute metric collection cycles to look back for non-realtime metrics. In
## some versions (6.7, 7.0 and possible more), certain metrics, such as cluster metrics, may be reported
## with a significant delay (>30min). If this happens, try increasing this number. Please note that increasing
## it too much may cause performance issues.
# metric_lookback = 3

## Optional SSL Config
# ssl_ca = "/path/to/cafile"
# ssl_cert = "/path/to/certfile"
# ssl_key = "/path/to/keyfile"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false

## The Historical Interval value must match EXACTLY the interval in the daily
# "Interval Duration" found on the VCenter server under Configure > General > Statistics > Statistic intervals
# historical_interval = "5m"
```

NOTE: To disable collection of a specific resource type, simply exclude all
metrics using the XX_metric_exclude. For example, to disable collection of VMs,
add this:

```toml @sample.conf
```toml
vm_metric_exclude = [ "*" ]
```
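
The same pattern should apply to the other resource types shown in the configuration above; for example, a sketch for skipping datastore metrics entirely (parameter name taken from the config above, value illustrative):

```toml
datastore_metric_exclude = [ "*" ]
```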
