Adjust getting started section to new config format
fabxc committed May 19, 2015
1 parent a40ad09 commit f733434
Showing 1 changed file with 74 additions and 106 deletions: `content/docs/introduction/getting_started.md`
@@ -22,7 +22,7 @@ git clone https://github.com/prometheus/prometheus.git
## Building Prometheus

Building Prometheus currently still requires a `make` step, as some parts of
-the source are autogenerated (protobufs, web assets, lexer/parser files).
+the source are autogenerated (web assets).

```language-bash
cd prometheus
@@ -44,43 +44,35 @@ manner about itself, it may also be used to scrape and monitor its own health.

While a Prometheus server which collects only data about itself is not very
useful in practice, it is a good starting example. Save the following basic
-Prometheus configuration as a file named `prometheus.conf`:
+Prometheus configuration as a file named `prometheus.yml`:

```
-# Global default settings.
-global: {
-  scrape_interval: "15s"     # By default, scrape targets every 15 seconds.
-  evaluation_interval: "15s" # By default, evaluate rules every 15 seconds.
-  # Attach these extra labels to all time series collected by this Prometheus instance.
-  labels: {
-    label: {
-      name: "monitor"
-      value: "tutorial-monitor"
-    }
-  }
-}
-
-# A job definition containing exactly one endpoint to scrape: Prometheus itself.
-job: {
-  # The job name is added as a label `job={job-name}` to any time series scraped from this job.
-  name: "prometheus"
-  # Override the global default and scrape targets from this job every 5 seconds.
-  scrape_interval: "5s"
-  # Let's define a group of static targets to scrape for this job. In this
-  # case, only one.
-  target_group: {
-    # These endpoints are scraped via HTTP.
-    target: "http://localhost:9090/metrics"
-  }
-}
+global:
+  scrape_interval: 15s     # By default, scrape targets every 15 seconds.
+  evaluation_interval: 15s # By default, evaluate rules every 15 seconds.
+  # scrape_timeout is set to the global default (10s).
+
+  # Attach these extra labels to all timeseries collected by this Prometheus instance.
+  labels:
+    monitor: 'codelab-monitor'
+
+# A scrape configuration containing exactly one endpoint to scrape:
+# Here it's Prometheus itself.
+scrape_configs:
+  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
+  - job_name: 'prometheus'
+
+    # Override the global default and scrape targets from this job every 5 seconds.
+    scrape_interval: 5s
+    scrape_timeout: 10s
+
+    target_groups:
+      - targets: ['localhost:9090']
```

-Prometheus configuration is supplied in an ASCII form of [protocol
-buffers](https://developers.google.com/protocol-buffers/docs/overview/). The
-[schema definition](https://github.com/prometheus/prometheus/blob/master/config/config.proto)
-has a complete documentation of all available configuration options.
+For a complete specification of configuration options, see the
+[configuration documentation](/docs/operating/configuration).
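
If you want a quick sanity check that the file is well-formed YAML before starting the server, any YAML parser will do. A minimal sketch, assuming Python with the PyYAML package is installed (this checks YAML syntax only, not Prometheus' own validation of the options):

```language-bash
# Parse prometheus.yml; a traceback means the YAML itself is malformed.
python -c 'import yaml; yaml.safe_load(open("prometheus.yml")); print("prometheus.yml parses as YAML")'
```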


## Starting Prometheus

@@ -90,7 +82,7 @@ Prometheus build directory and run:
```language-bash
# Start Prometheus.
# By default, Prometheus stores its database in /tmp/metrics (flag -storage.local.path).
-./prometheus -config.file=prometheus.conf
+./prometheus -config.file=prometheus.yml
```
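
Once the server is up, you can also fetch the raw metrics it exposes about itself, which is the same endpoint the scrape configuration above points at. A minimal check, assuming the default listen port of 9090:

```language-bash
# Print the first few lines of Prometheus' own /metrics endpoint.
curl -s http://localhost:9090/metrics | head
```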

Prometheus should start up and it should show a status page about itself at
@@ -107,7 +99,7 @@ environment variable to a value similar to the number of available CPU
cores:

```language-bash
-GOMAXPROCS=8 ./prometheus -config.file=prometheus.conf
+GOMAXPROCS=8 ./prometheus -config.file=prometheus.yml
```
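
Rather than hard-coding a number, you can derive it from the machine. A small sketch, assuming a Linux host where the coreutils `nproc` command is available:

```language-bash
# Set GOMAXPROCS to the number of CPU cores reported by nproc.
GOMAXPROCS=$(nproc) ./prometheus -config.file=prometheus.yml
```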

Blindly setting `GOMAXPROCS` to a high value can be
@@ -201,36 +193,26 @@ endpoints to a single job, adding extra labels to each group of targets. In
this example, we will add the `group="production"` label to the first group of
targets, while adding `group="canary"` to the second.

-To achieve this, add the following job definition to your `prometheus.conf` and
+To achieve this, add the following job definition to your `prometheus.yml` and
restart your Prometheus instance:

```
-job: {
-  name: "example-random"
-  scrape_interval: "5s"
-
-  # The "production" targets for this job.
-  target_group: {
-    target: "http://localhost:8080/metrics"
-    target: "http://localhost:8081/metrics"
-
-    labels: {
-      label: {
-        name: "group"
-        value: "production"
-      }
-    }
-  }
-
-  # The "canary" targets for this job.
-  target_group: {
-    target: "http://localhost:8082/metrics"
-
-    labels: {
-      label: {
-        name: "group"
-        value: "canary"
-      }
-    }
-  }
-}
+scrape_configs:
+  - job_name: 'example-random'
+
+    # Override the global default and scrape targets from this job every 5 seconds.
+    scrape_interval: 5s
+    scrape_timeout: 10s
+
+    target_groups:
+      - targets: ['localhost:8080', 'localhost:8081']
+        labels:
+          group: 'production'
+
+      - targets: ['localhost:8082']
+        labels:
+          group: 'canary'
```
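
Before checking in Prometheus itself, you can confirm that the example targets are serving metrics at all. A quick sketch, assuming the example RPC servers from the previous section are still running on ports 8080, 8081, and 8082:

```language-bash
# The example targets expose their metrics over plain HTTP, just like Prometheus does.
curl -s http://localhost:8080/metrics | grep rpc_durations | head
```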

Go to the expression browser and verify that Prometheus now has information
@@ -263,52 +245,38 @@ job_service:rpc_durations_microseconds_count:avg_rate5m = avg(rate(rpc_durations
```

To make Prometheus pick up this new rule, add a `rule_files` statement to the
-global configuration section in your `prometheus.conf`. The config should now
+global configuration section in your `prometheus.yml`. The config should now
look like this:

```
-# Global default settings.
-global: {
-  scrape_interval: "15s"     # By default, scrape targets every 15 seconds.
-  evaluation_interval: "15s" # By default, evaluate rules every 15 seconds.
-
-  # Attach these extra labels to all time series collected by this Prometheus instance.
-  labels: {
-    label: {
-      name: "monitor"
-      value: "tutorial-monitor"
-    }
-  }
-
-  # Load and evaluate rules in this file every 'evaluation_interval' seconds. This field may be repeated.
-  rule_file: "prometheus.rules"
-}
-
-job: {
-  name: "example-random"
-  scrape_interval: "5s"
-
-  # The "production" targets for this job.
-  target_group: {
-    target: "http://localhost:8080/metrics"
-    target: "http://localhost:8081/metrics"
-
-    labels: {
-      label: {
-        name: "group"
-        value: "production"
-      }
-    }
-  }
-
-  # The "canary" targets for this job.
-  target_group: {
-    target: "http://localhost:8082/metrics"
-
-    labels: {
-      label: {
-        name: "group"
-        value: "canary"
-      }
-    }
-  }
-}
+global:
+  scrape_interval: 15s     # By default, scrape targets every 15 seconds.
+  evaluation_interval: 15s # By default, evaluate rules every 15 seconds.
+  # scrape_timeout is set to the global default (10s).
+
+  # Attach these extra labels to all timeseries collected by this Prometheus instance.
+  labels:
+    monitor: 'codelab-monitor'
+
+rule_files:
+  - 'prometheus.rules'
+
+scrape_configs:
+  - job_name: 'example-random'
+
+    # Override the global default and scrape targets from this job every 5 seconds.
+    scrape_interval: 5s
+    scrape_timeout: 10s
+
+    target_groups:
+      - targets: ['localhost:8080', 'localhost:8081']
+        labels:
+          group: 'production'
+
+      - targets: ['localhost:8082']
+        labels:
+          group: 'canary'
```
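
The restart itself is the same invocation as before; this assumes `prometheus.rules` sits next to `prometheus.yml`, matching the relative path given under `rule_files`:

```language-bash
# Stop the running instance (Ctrl-C) and start it again with the updated config.
./prometheus -config.file=prometheus.yml
```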

Restart Prometheus with the new configuration and verify that a new time series