
Docker ELK stack

Based on the official Elasticsearch, Logstash, and Kibana Docker images:

Brings up a single Elasticsearch node, together with an instance of Logstash and Kibana, via docker-compose.

Contents

  1. Requirements
  2. Getting started
  3. Configuration
  4. Storage
    • Elasticsearch data persistence
  5. Extensibility
  6. JVM tuning

Requirements

Host setup

  1. Install Docker version 1.10.0+
  2. Install Docker Compose version 1.6.0+
  3. Clone this repository

Getting Started

Bringing up the stack

Note: if you have switched branches or updated a base image, you may need to run docker-compose build first.

Change into the docker-elk directory that's been created and then start the ELK stack using docker-compose:

$ docker-compose up

To run this stack in the background (detached mode - my personal preference):

$ docker-compose up -d

Kibana is exposed on localhost, port 5601: http://localhost:5601

By default, the stack exposes the following ports:

  • 5000: Logstash TCP input
  • 9200: Elasticsearch HTTP
  • 9300: Elasticsearch TCP transport
  • 5601: Kibana

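These mappings come from the ports sections of the services in docker-compose.yml. As a sketch (the exact service definitions in this repository may differ):

```yaml
services:
  logstash:
    ports:
      - "5000:5000"   # TCP input
  elasticsearch:
    ports:
      - "9200:9200"   # HTTP API
      - "9300:9300"   # TCP transport
  kibana:
    ports:
      - "5601:5601"   # web UI
```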
Now that the stack is running, you will want to inject some log entries. The shipped Logstash configuration allows you to send content via TCP:

$ nc localhost 5000 < /path/to/logfile.log

Initial setup

Default Kibana index pattern creation

When Kibana launches for the first time, it is not configured with any index pattern.

Via the Kibana web UI

NOTE: You need to inject data into Logstash before being able to configure a Logstash index pattern via the Kibana web UI. Then all you have to do is hit the Create button.

Refer to Connect Kibana with Elasticsearch for detailed instructions about the index pattern configuration.

On the command line

Create an index pattern via the Kibana API:

$ curl -XPOST -D- 'http://localhost:5601/api/saved_objects/index-pattern' \
    -H 'Content-Type: application/json' \
    -H 'kbn-version: 6.1.0' \
    -d '{"attributes":{"title":"logstash-*","timeFieldName":"@timestamp"}}'

The created pattern will automatically be marked as the default index pattern as soon as the Kibana UI is opened for the first time.

Configuration

NOTE: Configuration is not dynamically reloaded, you will need to restart the stack after any change in the configuration of a component.

Kibana Configuration

Kibana is configured via its kibana.yml file, mounted into the container inside the docker-compose file:

volumes:
  - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro

Logstash Configuration

The Logstash configuration is stored in logstash/config/logstash.yml and is mounted read-only, along with the pipeline directory:

volumes:
  - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
  - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
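The mounted pipeline directory holds the Logstash pipeline definition. A minimal sketch matching the behaviour described in this README (TCP input on port 5000, output to the Elasticsearch service) might look like the following; the actual file shipped in logstash/pipeline may differ:

```
input {
  tcp {
    port => 5000
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```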

Elasticsearch Configuration

The Elasticsearch configuration is stored in elasticsearch/config/elasticsearch.yml and is mounted read-only, so any changes made while the cluster is up will not persist.

volumes:
  - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro

How can I scale out the Elasticsearch cluster?

Scaling out the cluster is a to-do for this build (or a fork of it); refer to the upstream Scaling out Elasticsearch instructions.

Storage

Elasticsearch Data Persistence

With this configuration, data stored in Elasticsearch persists across container restarts.

Inside the docker-compose.yml:

volumes:
  - ~/docker/elasticsearch/data:/usr/share/elasticsearch/data:rw

This will store Elasticsearch data inside your home directory under docker/elasticsearch/data.

NOTE: beware of these OS-specific considerations:

  • Linux: the unprivileged elasticsearch user is used within the Elasticsearch image, therefore the mounted data directory must be owned by the uid 1000.
  • macOS: the default Docker for Mac configuration allows mounting files from /Users/, /Volumes/, /private/, and /tmp exclusively. Follow the instructions from the documentation to add more locations.
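On Linux, the host directory can be prepared along these lines (DATA_DIR is just an illustrative variable; match it to the host path in your volume mapping):

```shell
# Host directory backing the Elasticsearch data volume (illustrative path).
DATA_DIR="${DATA_DIR:-$HOME/docker/elasticsearch/data}"
mkdir -p "$DATA_DIR"
# The Elasticsearch image runs as the unprivileged uid 1000, so the mounted
# directory must be owned by that uid; run as root if necessary:
#   chown -R 1000:1000 "$DATA_DIR"
```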

Extensibility

Plugins

To add plugins to any ELK component you have to:

  1. Add a RUN statement to the corresponding Dockerfile (e.g. RUN logstash-plugin install logstash-filter-json)
  2. Add the associated plugin configuration to the service configuration (e.g. Logstash input/output)
  3. Rebuild the images using the docker-compose build command
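As a sketch of step 1 for Logstash, the corresponding Dockerfile could look like this (the base image tag shown here is illustrative; use the one already referenced by this repository):

```dockerfile
FROM docker.elastic.co/logstash/logstash:6.1.0
RUN logstash-plugin install logstash-filter-json
```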

How can I enable the provided extensions?

A few extensions are available inside the extensions directory. These extensions provide features which are not part of the standard Elastic stack, but can be used to enrich it with extra integrations.

The documentation for these extensions is provided inside each individual subdirectory, on a per-extension basis. Some of them require manual changes to the default ELK configuration.

JVM tuning

Logstash and Elasticsearch are each configured with a 256 MB JVM heap:

environment:
  LS_JAVA_OPTS: "-Xmx256m -Xms256m"
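The snippet above controls the Logstash heap. Elasticsearch's heap is set the same way through its own environment variable; a sketch (check docker-compose.yml for the exact service definition):

```yaml
environment:
  ES_JAVA_OPTS: "-Xmx256m -Xms256m"
```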
