2 changes: 1 addition & 1 deletion Dockerfile
@@ -3,7 +3,7 @@ ADD . /src
WORKDIR /src
RUN go get -t github.com/stretchr/testify/suite
RUN go get -d -v -t
-RUN go test --cover ./... --run UnitTest
+RUN go test --cover ./... --run UnitTest -p 1
RUN CGO_ENABLED=0 GOOS=linux go build -v -o docker-flow-monitor


7 changes: 6 additions & 1 deletion docs/config.md
@@ -103,8 +103,13 @@ curl `[IP_OF_ONE_OF_SWARM_NODES]:8080/v1/docker-flow-monitor/reconfigure?scrapeP

Please consult [Prometheus Configuration](https://prometheus.io/docs/operating/configuration/) for more information about the available options.

-## Scrapes
+## Scrape Secret Configuration

Additional scrapes can be added through files prefixed with `scrape_`. By default, all such files located in `/run/secrets` are automatically added to the `scrape_configs` section of the configuration. The directory can be changed by setting the `CONFIGS_DIR` environment variable to a different value.

The simplest way to add scrape configs is to use Docker [secrets](https://docs.docker.com/engine/swarm/secrets/) or [configs](https://docs.docker.com/engine/swarm/configs/).
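
For example, a scrape config for a hypothetical service could be added as a Docker secret. This is a minimal sketch: the `my-service` name, port, and YAML content are placeholders, and the secret must also be attached to the `monitor` service so that it shows up in `/run/secrets`:

```bash
echo 'job_name: "my-service"
static_configs:
  - targets:
      - "my-service:8080"' | \
    docker secret create scrape_my-service -
```

Because the secret name starts with `scrape_`, its content is merged into the `scrape_configs` section on the next reconfigure.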


## Scrape Label Configuration

When using a version of [Docker Flow Swarm Listener](https://github.com/vfarcic/docker-flow-swarm-listener) (DFSL) newer than `18.02.06-31`, you can configure DFSL to send node hostnames to `Docker Flow Monitor` (DFM) by setting `DF_INCLUDE_NODE_IP_INFO` to `true` in the DFSL environment. DFM will automatically display the node hostname as a label on each Prometheus target. The `DF_SCRAPE_TARGET_LABELS` environment variable allows additional labels to be displayed. For example, if a service has the deploy labels `com.df.env=prod` and `com.df.domain=frontend`, you can set `DF_SCRAPE_TARGET_LABELS=env,domain` in DFM to display the `prod` and `frontend` labels in Prometheus.
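
For illustration, the relevant fragments of a stack file might look like the following sketch (abbreviated; the label list is an assumption matching the example above):

```yaml
  swarm-listener:
    image: vfarcic/docker-flow-swarm-listener
    environment:
      - DF_INCLUDE_NODE_IP_INFO=true

  monitor:
    image: vfarcic/docker-flow-monitor
    environment:
      - DF_SCRAPE_TARGET_LABELS=env,domain
```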
Binary file added docs/img/flexiable-labeling-targets-page.png
110 changes: 110 additions & 0 deletions docs/tutorial-flexible-labeling.md
@@ -0,0 +1,110 @@
# Flexible Labeling with Docker Flow Monitor

*Docker Flow Monitor* and *Docker Flow Swarm Listener* can be configured to allow for more flexible labeling of exporters. Please read the [Running Docker Flow Monitor](tutorial.md) tutorial before reading this one. This tutorial focuses on configuring the stacks to allow for flexible labeling.

## Setting Up A Cluster

!!! info
    Feel free to skip this section if you already have a Swarm cluster that can be used for this tutorial.

We'll create a Swarm cluster consisting of three nodes created with Docker Machine.

```bash
git clone https://github.com/vfarcic/docker-flow-monitor.git

cd docker-flow-monitor

./scripts/dm-swarm.sh

eval $(docker-machine env swarm-1)
```

## Deploying Docker Flow Monitor

We will deploy the [stacks/docker-flow-monitor-flexible-labels.yml](https://github.com/vfarcic/docker-flow-monitor/blob/master/stacks/docker-flow-monitor-flexible-labels.yml) stack, which contains three services: `monitor`, `alert-manager`, and `swarm-listener`. The `swarm-listener` service includes an additional environment variable, `DF_INCLUDE_NODE_IP_INFO=true`, which configures `swarm-listener` to send node and IP information to `monitor`.

The `monitor` service includes the environment variable `DF_SCRAPE_TARGET_LABELS=env,metricType`, which sets up flexible labeling for exporters. If an exporter defines the deploy label `com.df.env` or `com.df.metricType`, that label will be used by `monitor`.
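
Abbreviated, the relevant parts of those two service definitions look like this (consult the linked stack file for the full definitions):

```yaml
  swarm-listener:
    image: vfarcic/docker-flow-swarm-listener
    environment:
      - DF_INCLUDE_NODE_IP_INFO=true

  monitor:
    image: vfarcic/docker-flow-monitor
    environment:
      - DF_SCRAPE_TARGET_LABELS=env,metricType
```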

Let's deploy the `monitor` stack:

```bash
docker network create -d overlay monitor

docker stack deploy \
    -c stacks/docker-flow-monitor-flexible-labels.yml \
    monitor
```

## Collecting Metrics and Defining Alerts

We will deploy the exporters stack defined in [stacks/exporters-tutorial-flexible-labels.yml](https://github.com/vfarcic/docker-flow-monitor/blob/master/stacks/exporters-tutorial-flexible-labels.yml), which contains two services: `cadvisor` and `node-exporter`.

The definition of the `cadvisor` service contains additional deploy labels:

```yaml
cadvisor:
  image: google/cadvisor
  networks:
    - monitor
  ...
  deploy:
    mode: global
    labels:
      ...
      - com.df.scrapeNetwork=monitor
      - com.df.env=prod
      - com.df.metricType=system
```

The `com.df.scrapeNetwork` deploy label tells `swarm-listener` to use `cadvisor`'s IP on the `monitor` network. This is important because the `monitor` service uses the `monitor` network to scrape `cadvisor`. The `com.df.env=prod` and `com.df.metricType=system` deploy labels configure flexible labeling for `cadvisor`.
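
Under the hood, `monitor` writes such targets into Prometheus `file_sd` JSON files, one per service, combining the flexible labels with the `node` and `service` labels. A resulting file could look roughly like the following sketch (the address, port, hostname, and service name are illustrative):

```json
[
  {
    "targets": ["10.0.0.5:8080"],
    "labels": {
      "env": "prod",
      "metricType": "system",
      "node": "swarm-2",
      "service": "exporter_cadvisor"
    }
  }
]
```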

The second service, `node-exporter`, is also configured with flexible labels:

```yaml
node-exporter:
  image: basi/node-exporter
  networks:
    - monitor
  ...
  deploy:
    mode: global
    labels:
      ...
      - com.df.scrapeNetwork=monitor
      - com.df.env=dev
      - com.df.metricType=system
```

Let's deploy the `exporter` stack:

```bash
docker stack deploy \
    -c stacks/exporters-tutorial-flexible-labels.yml \
    exporter
```

Please wait until the services in the stack are up and running. You can check their status by executing `docker stack ps exporter`.

Now we can open the *Prometheus* targets page from a browser.

> If you're a Windows user, Git Bash might not be able to use the `open` command. If that's the case, replace the `open` command with `echo`. As a result, you'll get the full address that should be opened directly in your browser of choice.

```bash
open "http://$(docker-machine ip swarm-1):9090/targets"
```

You should see a targets page similar to the following:

![Flexible Labeling Targets Page](img/flexiable-labeling-targets-page.png)

Each service is labeled with its associated `com.df.env` or `com.df.metricType` deploy label. In addition, the `node` label shows the hostname of the node each task is running on.
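
These labels can be used directly in queries. For example, a hypothetical PromQL query that narrows cAdvisor's container memory metric to production system metrics (the metric name depends on the exporter):

```
container_memory_usage_bytes{env="prod", metricType="system"}
```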

## What Now?

*Docker Flow Monitor*'s flexible labeling feature provides more information about your services. Please consult the documentation for any additional information you might need. Feel free to open [an issue](https://github.com/vfarcic/docker-flow-monitor/issues) if you require additional info, if you find a bug, or if you have a feature request.

Before you go, please remove the cluster we created and free those resources for something else.

```bash
docker-machine rm -f swarm-1 swarm-2 swarm-3
```
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -4,6 +4,7 @@ pages:
- Tutorial:
  - Running Docker Flow Monitor: tutorial.md
  - Auto-Scaling Services Using Instrumented Metrics: auto-scaling.md
  - Flexible Labeling with Docker Flow Monitor: tutorial-flexible-labeling.md
- Configuration: config.md
- Usage: usage.md
- Migration Guide: migration.md
67 changes: 66 additions & 1 deletion prometheus/config.go
@@ -2,6 +2,7 @@ package prometheus

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/url"
	"os"
@@ -18,14 +19,17 @@ import (
// WriteConfig creates Prometheus configuration at configPath and writes alerts into /etc/prometheus/alert.rules
func WriteConfig(configPath string, scrapes map[string]Scrape, alerts map[string]Alert) {
	c := &Config{}
	fileSDDir := "/etc/prometheus/file_sd"
	alertRulesPath := "/etc/prometheus/alert.rules"

	configDir := filepath.Dir(configPath)
	FS.MkdirAll(configDir, 0755)
	FS.MkdirAll(fileSDDir, 0755)
	c.InsertScrapes(scrapes)

	if len(alerts) > 0 {
		logPrintf("Writing to alert.rules")
-		afero.WriteFile(FS, "/etc/prometheus/alert.rules", []byte(GetAlertConfig(alerts)), 0644)
+		afero.WriteFile(FS, alertRulesPath, []byte(GetAlertConfig(alerts)), 0644)
		c.RuleFiles = []string{"alert.rules"}
	}

@@ -35,6 +39,7 @@ func WriteConfig(configPath string, scrapes map[string]Scrape, alerts map[string
logPrintf("Unable to insert alertmanager url %s into prometheus config", alertmanagerURL)
}
}
c.CreateFileStaticConfig(scrapes, fileSDDir)

for _, e := range os.Environ() {
envSplit := strings.SplitN(e, "=", 2)
@@ -98,6 +103,9 @@ func (c *Config) InsertScrapes(scrapes map[string]Scrape) {
		if len(metricsPath) == 0 {
			metricsPath = "/metrics"
		}
		// Scrapes that carry node info are handled by CreateFileStaticConfig.
		if s.NodeInfo != nil && len(*s.NodeInfo) > 0 {
			continue
		}
		if s.ScrapeType == "static_configs" {
			newScrape = &ScrapeConfig{
				ServiceDiscoveryConfig: ServiceDiscoveryConfig{
@@ -152,6 +160,63 @@ func (c *Config) InsertScrapesFromDir(dir string) {

}

// CreateFileStaticConfig writes a file_sd JSON file for every scrape that
// carries node information and registers a file-based scrape config for each.
func (c *Config) CreateFileStaticConfig(scrapes map[string]Scrape, fileSDDir string) {

	staticFiles := map[string]struct{}{}
	for _, s := range scrapes {
		fsc := FileStaticConfig{}
		if s.NodeInfo == nil {
			continue
		}
		// Create one target group per node, labeled with the scrape labels
		// plus the node hostname and the service name.
		for n := range *s.NodeInfo {
			tg := TargetGroup{}
			tg.Targets = []string{fmt.Sprintf("%s:%d", n.Addr, s.ScrapePort)}
			tg.Labels = map[string]string{}
			if s.ScrapeLabels != nil {
				for k, v := range *s.ScrapeLabels {
					tg.Labels[k] = v
				}
			}
			tg.Labels["node"] = n.Name
			tg.Labels["service"] = s.ServiceName
			fsc = append(fsc, &tg)
		}

		if len(fsc) == 0 {
			continue
		}

		fscBytes, err := json.Marshal(fsc)
		if err != nil {
			continue
		}
		filePath := fmt.Sprintf("%s/%s.json", fileSDDir, s.ServiceName)
		afero.WriteFile(FS, filePath, fscBytes, 0644)
		newScrape := &ScrapeConfig{
			ServiceDiscoveryConfig: ServiceDiscoveryConfig{
				FileSDConfigs: []*SDConfig{{
					Files: []string{filePath},
				}},
			},
			JobName: s.ServiceName,
		}
		c.ScrapeConfigs = append(c.ScrapeConfigs, newScrape)
		staticFiles[filePath] = struct{}{}
	}

	// Remove static config files that were not regenerated in this run.
	currentStaticFiles, err := afero.Glob(FS, fmt.Sprintf("%s/*.json", fileSDDir))
	if err != nil {
		return
	}
	for _, file := range currentStaticFiles {
		if _, ok := staticFiles[file]; !ok {
			FS.Remove(file)
		}
	}
}

func normalizeScrapeFile(content []byte) []byte {
	spaceCnt := 0
	for i, c := range content {