Add docker logs support to the Elastic Log Driver #19531

Merged: 14 commits, Jul 9, 2020
Changes from 1 commit
2 changes: 1 addition & 1 deletion x-pack/dockerlogbeat/config.json
@@ -18,7 +18,7 @@
"name": "LOG_DIR",
"description": "Mount for local log cache",
"destination": "/var/log/docker",
"source": "/var/log",
"source": "/var/lib/docker",
"type": "none",
"options": [
"rw",
27 changes: 22 additions & 5 deletions x-pack/dockerlogbeat/docs/configuration.asciidoc
@@ -118,7 +118,7 @@ for more information about the environment variables.
[float]
[[local-log-opts]]
=== Configuring the local log
This plugin fully supports `docker logs`, and it maintains a local log spool in the event that upstream ES connections are down. Unfortunately, due to the limitations in the docker plugin API, we can't "clean up" log files when a container is destroyed. The plugin mounts the `/var/log` directory on the host to write logs to `/var/log/containers` on host. To change this directory, you must change the mount inside the plugin:
This plugin fully supports `docker logs`, and it maintains a local copy of logs that can be read without a connection to Elasticsearch. The plugin mounts the `/var/log` directory on the host to write logs to `/var/log/containers` on the host. If you want to change the log location on the host, you must change the mount inside the plugin:

1. Disable the plugin:
+
@@ -127,7 +127,7 @@ This plugin fully supports `docker logs`, and it maintains a local log spool in
docker plugin disable elastic/{log-driver-alias}:{version}
----

2. Set the debug level:
2. Set the bindmount directory:
+
["source","sh",subs="attributes"]
----
@@ -142,7 +142,23 @@ docker plugin set elastic/{log-driver-alias}:{version} LOG_DIR.source=NEW_LOG_LO
docker plugin enable elastic/{log-driver-alias}:{version}
----

In situations where logs can't be easily managed, for example, Docker for Mac, you can also configure the plugin to remove log files when a container is stopped. This will prevent you from reading logs on a stopped container, but it will rotate logs without user intervention. To enable removal of logs for stopped containers, you must change the `DESTROY_LOGS_ON_STOP` environment variable:

The local log also supports the `max-file`, `max-size` and `compress` options that are https://docs.docker.com/config/containers/logging/json-file/#options[a part of the Docker default file logger]. For example:

["source","sh",subs="attributes"]
----
docker run --log-driver=elastic/{log-driver-alias}:{version} \
--log-opt endpoint="myhost:9200" \
--log-opt user="myusername" \
--log-opt password="mypassword" \
--log-opt max-file=10 \
--log-opt max-size=5M \
--log-opt compress=true \
-it debian:jessie /bin/bash
----


In situations where logs can't be easily managed, you can also configure the plugin to remove log files when a container is stopped. This will prevent you from reading logs on a stopped container, but it will rotate logs without user intervention. To enable removal of logs for stopped containers, you must change the `DESTROY_LOGS_ON_STOP` environment variable:

1. Disable the plugin:
+
@@ -151,7 +167,7 @@ In situations where logs can't be easily managed, for example, Docker for Mac, y
docker plugin disable elastic/{log-driver-alias}:{version}
----

2. Set the debug level:
2. Enable log removal:
+
["source","sh",subs="attributes"]
----
@@ -164,4 +180,5 @@ docker plugin set elastic/{log-driver-alias}:{version} DESTROY_LOGS_ON_STOP=true
["source","sh",subs="attributes"]
----
docker plugin enable elastic/{log-driver-alias}:{version}
----
----

1 change: 1 addition & 0 deletions x-pack/dockerlogbeat/handlers.go
@@ -121,6 +121,7 @@ func readLogHandler(pm *pipelinemanager.PipelineManager) func(w http.ResponseWri
w.Header().Set("Content-Type", "application/x-json-stream")
wf := ioutils.NewWriteFlusher(w)
io.Copy(wf, stream)
stream.Close()
wf.Close()

} //end func
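The hunk above sits inside the handler that serves `docker logs`: the plugin copies the container's log stream into the HTTP response through `ioutils.NewWriteFlusher` so output reaches the reader as it is produced. Below is a minimal, self-contained Go sketch of that streaming pattern; `flushWriter`, `streamLogs`, and the faked stream are illustrative stand-ins, not the plugin's actual types, and the real handler gets its stream from the pipeline manager via the plugin SDK.

```go
package main

import (
	"io"
	"net/http"
	"strings"
)

// flushWriter plays the role of docker's ioutils.WriteFlusher: flush the HTTP
// response after every write so a follower sees log lines as they arrive.
type flushWriter struct {
	w http.ResponseWriter
	f http.Flusher
}

func (fw flushWriter) Write(p []byte) (int, error) {
	n, err := fw.w.Write(p)
	if fw.f != nil {
		fw.f.Flush()
	}
	return n, err
}

// streamLogs stands in for readLogHandler: the real handler reads the
// container's log entries from the pipeline manager; here the stream is
// faked with a strings.Reader.
func streamLogs(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/x-json-stream")
	stream := io.NopCloser(strings.NewReader(`{"line":"hello from the container"}` + "\n"))
	defer stream.Close()

	f, _ := w.(http.Flusher)
	io.Copy(flushWriter{w: w, f: f}, stream)
}

func main() {
	http.HandleFunc("/LogDriver.ReadLogs", streamLogs)
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```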
15 changes: 14 additions & 1 deletion x-pack/dockerlogbeat/main.go
@@ -7,6 +7,7 @@ package main
import (
"fmt"
"os"
"strconv"

"github.com/docker/go-plugins-helpers/sdk"

@@ -41,6 +42,14 @@ func genNewMonitoringConfig() (*common.Config, error) {
return cfg, nil
}

func setDestroyLogsOnStop() (bool, error) {
setting, ok := os.LookupEnv("DESTROY_LOGS_ON_STOP")
if !ok {
return false, nil
}
return strconv.ParseBool(setting)
}

func fatal(format string, vs ...interface{}) {
fmt.Fprintf(os.Stderr, format, vs...)
os.Exit(1)
@@ -60,7 +69,11 @@ func main() {
fatal("error starting log handler: %s", err)
}

pipelines := pipelinemanager.NewPipelineManager(logcfg)
logDestroy, err := setDestroyLogsOnStop()
if err != nil {
fatal("DESTROY_LOGS_ON_STOP must be 'true' or 'false': %s", err)
}
pipelines := pipelinemanager.NewPipelineManager(logDestroy)

sdkHandler := sdk.NewHandler(`{"Implements": ["LoggingDriver"]}`)
// Create handlers for startup and shutdown of the log driver
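The new `setDestroyLogsOnStop` helper reads `DESTROY_LOGS_ON_STOP` with `strconv.ParseBool`, and `main` now treats a parse failure as fatal. Here is a standalone sketch of how that parsing behaves for a few values; the helper is copied from the diff above, while the printing harness is only for illustration.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// Copied from the diff: an unset variable means "keep logs"; anything else
// must be a value strconv.ParseBool understands.
func setDestroyLogsOnStop() (bool, error) {
	setting, ok := os.LookupEnv("DESTROY_LOGS_ON_STOP")
	if !ok {
		return false, nil
	}
	return strconv.ParseBool(setting)
}

func main() {
	for _, v := range []string{"", "true", "1", "False", "yes"} {
		if v == "" {
			os.Unsetenv("DESTROY_LOGS_ON_STOP")
		} else {
			os.Setenv("DESTROY_LOGS_ON_STOP", v)
		}
		got, err := setDestroyLogsOnStop()
		fmt.Printf("DESTROY_LOGS_ON_STOP=%q -> %v (err: %v)\n", v, got, err)
	}
}
```

Values such as `1`, `t`, `TRUE`, `0`, and `false` parse cleanly; anything else (for example `yes`) returns an error, which in the driver aborts startup with the new fatal message.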
22 changes: 13 additions & 9 deletions x-pack/dockerlogbeat/pipelinemanager/clientLogReader.go
@@ -25,13 +25,18 @@ import (
// There's a many-to-one relationship between clients and pipelines.
// Each container with the same config will get its own client to the same pipeline.
type ClientLogger struct {
logFile *pipereader.PipeReader
client beat.Client
pipelineHash uint64
closer chan struct{}
// logFile is the FIFO reader that reads from the docker container stdio
logFile *pipereader.PipeReader
// client is the libbeat client object that sends logs upstream
client beat.Client
// pipelineHash is a hash of the libbeat publisher pipeline config
pipelineHash uint64
// ContainerMeta is the metadata object for the container we get from docker
ContainerMeta logger.Info
logger *logp.Logger
logSpool logger.Logger
// logger is an error message logger
logger *logp.Logger
// localLog manages the local JSON logs for containers
localLog logger.Logger
}

// newClientFromPipeline creates a new Client logger with a FIFO reader and beat client
@@ -55,9 +60,8 @@ func newClientFromPipeline(pipeline beat.PipelineConnector, inputFile *pipereade
return &ClientLogger{logFile: inputFile,
client: client,
pipelineHash: hash,
closer: make(chan struct{}),
ContainerMeta: info,
logSpool: localLog,
localLog: localLog,
logger: clientLogger}, nil
}

@@ -107,7 +111,7 @@ func (cl *ClientLogger) publishLoop(reader chan logdriver.LogEntry) {
return
}

cl.logSpool.Log(constructLogSpoolMsg(entry))
cl.localLog.Log(constructLogSpoolMsg(entry))
line := strings.TrimSpace(string(entry.Line))

cl.client.Publish(beat.Event{
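The renamed `localLog` field and the new struct comments describe a dual write path: `publishLoop` writes each entry to the local JSON log (so `docker logs` can read it) and then publishes it through the libbeat client. A toy sketch of that shape, using stand-in interfaces (`localSink`, `beatSink`, `printLocal`, `printBeat`, `publish`) rather than the real docker `logger.Logger` and `beat.Client` types:

```go
package main

import "fmt"

// localSink and beatSink are stand-ins for the two destinations named in the
// ClientLogger comments: docker's json-file logger and the libbeat client.
type localSink interface {
	Log(line string) error
}

type beatSink interface {
	Publish(line string)
}

type printLocal struct{}

func (printLocal) Log(line string) error {
	fmt.Println("local json log:", line)
	return nil
}

type printBeat struct{}

func (printBeat) Publish(line string) {
	fmt.Println("publish upstream:", line)
}

// publish mirrors the shape of publishLoop: write the entry to the local log
// first so `docker logs` works, then publish it to the beat client.
func publish(local localSink, client beatSink, entries []string) {
	for _, entry := range entries {
		if err := local.Log(entry); err != nil {
			fmt.Println("error writing local log:", err)
		}
		client.Publish(entry)
	}
}

func main() {
	publish(printLocal{}, printBeat{}, []string{"starting up", "ready"})
}
```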
27 changes: 19 additions & 8 deletions x-pack/dockerlogbeat/pipelinemanager/pipelineManager.go
@@ -53,16 +53,19 @@ type PipelineManager struct {
clientLogger map[string]logger.Logger
// logDirectory is the bindmount for local container logs
logDirectory string
// destroyLogsOnStop indicates for the client to remove log files when a container stops
destroyLogsOnStop bool
}

// NewPipelineManager creates a new Pipeline map
func NewPipelineManager(logCfg *common.Config) *PipelineManager {
func NewPipelineManager(logDestroy bool) *PipelineManager {
return &PipelineManager{
Logger: logp.NewLogger("PipelineManager"),
pipelines: make(map[uint64]*Pipeline),
clients: make(map[string]*ClientLogger),
clientLogger: make(map[string]logger.Logger),
logDirectory: "/var/log/docker/containers",
Logger: logp.NewLogger("PipelineManager"),
pipelines: make(map[uint64]*Pipeline),
clients: make(map[string]*ClientLogger),
clientLogger: make(map[string]logger.Logger),
logDirectory: "/var/log/docker/containers",
destroyLogsOnStop: logDestroy,
}
}

@@ -111,12 +114,20 @@ func (pm *PipelineManager) CreateClientWithConfig(containerConfig ContainerOutpu

// Why is this empty by default? What should be here? Who knows!
if info.LogPath == "" {
info.LogPath = filepath.Join(pm.logDirectory, info.ContainerID)
info.LogPath = filepath.Join(pm.logDirectory, info.ContainerID, fmt.Sprintf("%s-json.log", info.ContainerID))
}
err = os.MkdirAll(filepath.Dir(info.LogPath), 0755)
if err != nil {
return nil, errors.Wrap(err, "error creating directory for local logs")
}
// set a default log size
if _, ok := info.Config["max-size"]; !ok {
info.Config["max-size"] = "10M"
}
// set a default log count
if _, ok := info.Config["max-file"]; !ok {
info.Config["max-file"] = "5"
}

localLog, err := jsonfilelog.New(info)
if err != nil {
@@ -264,7 +275,7 @@ func (pm *PipelineManager) removeLogger(info logger.Info) {
}
logger.Close()
delete(pm.clientLogger, info.ContainerID)
if os.Getenv("DESTROY_LOGS_ON_STOP") == "true" {
if pm.destroyLogsOnStop {
pm.removeLogFile(info.ContainerID)
}
}
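`CreateClientWithConfig` now fills in `max-size` and `max-file` defaults before handing the options to `jsonfilelog.New`, so the local logs rotate even when a container passes no log-opts. A small sketch of that defaulting, with a hypothetical `applyLocalLogDefaults` helper (the PR inlines this logic rather than extracting a function):

```go
package main

import "fmt"

// applyLocalLogDefaults mirrors the defaulting added to CreateClientWithConfig:
// containers that do not pass max-size / max-file log-opts fall back to
// 10M x 5 rotated local log files.
func applyLocalLogDefaults(config map[string]string) map[string]string {
	if _, ok := config["max-size"]; !ok {
		config["max-size"] = "10M"
	}
	if _, ok := config["max-file"]; !ok {
		config["max-file"] = "5"
	}
	return config
}

func main() {
	// A container started with only --log-opt max-size=5M keeps its own size
	// limit but picks up the default file count.
	fmt.Println(applyLocalLogDefaults(map[string]string{"max-size": "5M"}))
	// No log-opts at all: both defaults apply.
	fmt.Println(applyLocalLogDefaults(map[string]string{}))
}
```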
6 changes: 4 additions & 2 deletions x-pack/dockerlogbeat/readme.md
@@ -52,7 +52,9 @@ The logs are in `/var/log/docker`. If you want to make the logs useful, you need

## Local logs

This plugin fully supports `docker logs`, and it maintains a local log spool in the event that upstream ES connections are down. Unfortunately, due to the limitations in the docker plugin API, we can't "clean up" log files when a container is destroyed. The plugin mounts the `/var/log` directory on the host to write logs. This mount point can be changed via [Docker](https://docs.docker.com/engine/reference/commandline/plugin_set/#change-the-source-of-a-mount). The plugin can also be configured to do a "hard" cleanup and destroy logs when a container stops. To enable this, set the `DESTROY_LOGS_ON_STOP` environment var inside the plugin:
This plugin fully supports `docker logs`, and it maintains a local copy of logs that can be read without a connection to Elasticsearch. Unfortunately, due to the limitations in the docker plugin API, we can't "clean up" log files when a container is destroyed. The plugin mounts the `/var/log` directory on the host to write logs. This mount point can be changed via [Docker](https://docs.docker.com/engine/reference/commandline/plugin_set/#change-the-source-of-a-mount). The plugin can also be configured to do a "hard" cleanup and destroy logs when a container stops. To enable this, set the `DESTROY_LOGS_ON_STOP` environment var inside the plugin:
```
docker plugin set d805664c550e DESTROY_LOGS_ON_STOP=true
```
```

You can also set `max-file`, `max-size` and `compress` via `--log-opts`