Run the Elastic stack with Docker Compose.
It gives you the ability to analyze any data set by using the searching/aggregation capabilities of Elasticsearch and the visualization power of Kibana.
Uses the official Docker images from Elastic, plus tooling built from source (such as the healthcheck utility described below).
This repository is based on deviantony/docker-elk, but adapted to my own requirements. The main goal of this project is to run a production-ready single-node Elasticsearch instance.
Compared to the original repo:

- Using the original container images. This time I don't use plugins, so I see no point in building custom images.
- Using the `basic` license by default.
- Enabled bootstrap checks.
- Enabled TLS and X-Pack security features.
- Configured container memory ulimits according to Elasticsearch documentation.
- Added healthcheck scripts.
- Added Logstash pipelines config file binding.
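
For reference, the memory ulimits configuration looks roughly like this in Compose (a sketch; the values in this repo's `docker-compose.yml` are authoritative):

```yaml
elastic:
  ulimits:
    # Allow the JVM to lock its memory and avoid swapping
    memlock:
      soft: -1
      hard: -1
    # Elasticsearch needs a high open-file-descriptor limit
    nofile:
      soft: 65536
      hard: 65536
```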
- Docker Engine version 18.06 or newer
- Docker Compose version 1.26.0 or newer
- 3 GB of RAM

ℹ️ The following instructions assume that you are using Docker Compose V2. If you use the legacy `docker-compose`, use `docker-compose` instead of `docker compose`.
ℹ️ Especially on Linux, make sure your user has the required permissions to
interact with the Docker daemon.
ℹ️ Adjust the Java heap size to match your requirements.
If you are using the legacy Hyper-V mode of Docker Desktop for Windows, ensure File Sharing is enabled for the `C:` drive.
The default configuration of Docker Desktop for Mac allows mounting files from `/Users/`, `/Volume/`, `/private/`, `/tmp` and `/var/folders` exclusively. Make sure the repository is cloned in one of those locations, or follow the instructions from the documentation to add more locations.
Increase the virtual memory map count:

```sh
$ sudo sysctl -w vm.max_map_count=262144
```
To persist this setting across reboots, add a configuration file to the `/etc/sysctl.d/` dir with this setting.
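
For example, a minimal drop-in file:

```sh
# /etc/sysctl.d/99-elasticsearch.conf (hypothetical filename)
vm.max_map_count=262144
```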
Check the Elastic docs for more information.
- Clone this repository onto the Docker host.
- Follow the TLS setup instructions in `tls/README.md`.
- Enable built-in system accounts:
- Start Elasticsearch with `docker compose up -d elastic`
- After a few seconds, run the following command to generate passwords for the built-in system accounts:

  ```sh
  docker compose exec elastic bin/elasticsearch-setup-passwords auto --batch -u https://localhost:9200
  ```
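
  As an optional sanity check, the generated `elastic` password should now authenticate against the cluster health endpoint (`--insecure` matches the self-signed certificates used in this setup):

  ```sh
  # Expect the cluster health JSON in response
  curl --insecure --user elastic:${ELASTIC_PASSWORD} "https://localhost:9200/_cluster/health?pretty"
  ```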
- Add the `logstash_writer` role and the `logstash_internal` user if needed with POST requests.

  ℹ️ Replace the variables below with your values.

  ```sh
  # Create role
  curl --insecure \
       --user elastic:${ELASTIC_PASSWORD} \
       --request POST \
       --header "Content-Type: application/json" \
       --data '{"cluster":["manage_index_templates","monitor","manage_ilm"],"indices":[{"names":["logs-generic-default","logstash-*","ecs-logstash-*"],"privileges":["write","create","create_index","manage","manage_ilm"]},{"names":["logstash","ecs-logstash"],"privileges":["write","manage"]}]}' \
       https://localhost:9200/_security/role/logstash_writer

  # Create user (double quotes so the shell expands ${LOGSTASH_INTERNAL_PASSWD})
  curl --insecure \
       --user elastic:${ELASTIC_PASSWORD} \
       --request POST \
       --header "Content-Type: application/json" \
       --data "{\"password\":\"${LOGSTASH_INTERNAL_PASSWD}\",\"roles\":[\"logstash_writer\"]}" \
       https://localhost:9200/_security/user/logstash_internal
  ```
- Add the `remote_logging_agent` role and the `beats_writer` user if needed with POST requests.

  ℹ️ Replace the variables below with your values.

  ```sh
  # Create role
  curl --insecure \
       --user elastic:${ELASTIC_PASSWORD} \
       --request POST \
       --header "Content-Type: application/json" \
       --data '{"cluster":["manage_index_templates","manage_ingest_pipelines","monitor","manage_ilm","manage_pipeline"],"indices":[{"names":["logs-*","filebeat-*","metrics-*","metricbeat-*"],"privileges":["write","create","create_index","manage","manage_ilm"]}]}' \
       https://localhost:9200/_security/role/remote_logging_agent

  # Create user (double quotes so the shell expands ${BEATS_WRITER_PASSWD})
  curl --insecure \
       --user elastic:${ELASTIC_PASSWORD} \
       --request POST \
       --header "Content-Type: application/json" \
       --data "{\"password\":\"${BEATS_WRITER_PASSWD}\",\"roles\":[\"remote_logging_agent\",\"remote_monitoring_agent\"]}" \
       https://localhost:9200/_security/user/beats_writer
  ```
- Fill in the generated passwords in the following files:
  - `.env`
  - `logstash/pipeline/main.conf`
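
  For reference, the `logstash_internal` password typically lands in the `elasticsearch` output of `logstash/pipeline/main.conf`. A minimal sketch (the host and option set are assumptions; check the actual file):

  ```conf
  output {
    elasticsearch {
      hosts => "https://elastic:9200"
      user => "logstash_internal"
      password => "<generated logstash_internal password>"
    }
  }
  ```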
- Start Elasticsearch with `docker compose up -d elastic`
- Fill in the `.env` file.
- Load the Filebeat and Metricbeat Kibana settings with:
  ```sh
  docker compose run filebeat setup -E output.elasticsearch.username=elastic -E output.elasticsearch.password=${your_elastic_root_password} -c config/filebeat.docker.yml --strict.perms=false
  docker compose run metricbeat setup -E output.elasticsearch.username=elastic -E output.elasticsearch.password=${your_elastic_root_password} -c config/metricbeat.docker.yml --strict.perms=false
  ```
- Start the services with:

  ```sh
  docker compose up
  ```

  You can also run all services in the background (detached mode) by adding the `-d` flag to the above command.
There are two network drivers that can be used with Docker Compose: `bridge` and `host`.

- `bridge`: adds a virtual network and passes through selected ports. It also provides internal domain names (`elastic`, `kibana`, etc.). Unfortunately, it introduces some routing overhead.
- `host`: simply uses the host network. No network isolation, no internal domains, no overhead.
According to Rally testing with the `metricbeat` race, there is no significant performance difference between the two.
Using the host network:

To use the host network for the Elastic stack, remove the `networks` and `ports` sections from the `docker-compose.yml` file and add the `network_mode: host` key to the services that should use the host network driver. All services can be used with host network mode.

When Elasticsearch is set to use the host network, change `elasticsearch.hosts` to `localhost` in both the Kibana and Logstash configs.
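
For a single service, the change looks like this (a sketch; apply it to every service you want on the host network):

```yaml
elastic:
  network_mode: host
  # the service-level `networks` and `ports` sections must be removed
```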
Check the Docker Compose reference for more information.
To stay in sync with the remote repo, it's recommended to keep all local changes in `docker-compose.override.yml`.

The override file has the same format as the Compose file, but it doesn't have to specify every section: just the overrides. More info in the Docker docs.
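
For example, a minimal override file that only changes the Logstash heap might look like this (a sketch):

```yaml
# docker-compose.override.yml
services:
  logstash:
    environment:
      LS_JAVA_OPTS: -Xmx1g -Xms1g
```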
Elasticsearch data is persisted inside a volume by default.
In order to entirely shut down the stack and remove all persisted data, use the following Docker Compose command:

```sh
$ docker compose down -v
```
Give Kibana about a minute to initialize, then access the Kibana web UI by opening http://localhost:5601 in a web browser and use the following credentials to log in:
- user: elastic
- password: <your generated elastic password>
When Kibana launches for the first time, it is not configured with any index pattern.
ℹ️ You need to inject data into Logstash before being able to configure a Logstash index pattern via the Kibana web UI.
Navigate to the Discover view of Kibana from the left sidebar. You will be prompted to create an index pattern. Enter `logstash-*` to match the Logstash indices, then, on the next page, select `@timestamp` as the time filter field. Finally, click Create index pattern and return to the Discover view to inspect your log entries.
Refer to Connect Kibana with Elasticsearch and Creating an index pattern for detailed instructions about the index pattern configuration.
Create an index pattern via the Kibana API:
```sh
$ curl -XPOST -D- 'http://localhost:5601/api/saved_objects/index-pattern' \
    -H 'Content-Type: application/json' \
    -H 'kbn-version: 8.1.2' \
    -u elastic:<your generated elastic password> \
    -d '{"attributes":{"title":"logstash-*","timeFieldName":"@timestamp"}}'
```
The created pattern will automatically be marked as the default index pattern as soon as the Kibana UI is opened for the first time.
ℹ️ Configuration is not dynamically reloaded; you will need to restart individual components after any configuration change.
Learn more about the security of the Elastic stack at Secure the Elastic Stack.
The Elasticsearch configuration is stored in `elastic/elasticsearch.yml`.
You can also specify the options you want to override by setting environment variables inside the Compose file:
```yaml
elastic:
  environment:
    network.host: _non_loopback_
    cluster.name: my-cluster
```
Please refer to the following documentation page for more details about how to configure Elasticsearch inside Docker containers: Install Elasticsearch with Docker.
The Kibana default configuration is stored in `kibana/config/kibana.yml`.
It's highly recommended to use Kibana over a secure TLS connection. There are two ways to achieve that:
- Set up a reverse proxy (like Nginx) in front of Kibana.
- Set up TLS in Kibana itself.
You can find the Kibana TLS setup instructions in `tls/README.md`.
Please refer to the following documentation page for more details about how to configure Kibana inside Docker containers: Install Kibana with Docker.
ℹ️ Do not use the `logstash_system` user inside the Logstash pipeline file; it does not have sufficient permissions to create indices. Follow the instructions at Configuring Security in Logstash to create a user with suitable roles.
The Logstash configuration is stored in `logstash/logstash.yml`, and the Logstash pipelines configuration is in `logstash/pipelines.yml`.
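
For reference, a minimal `logstash/pipelines.yml` looks like this (a sketch; the pipeline id and container path are assumptions based on the `main.conf` file mentioned earlier):

```yaml
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/main.conf"
```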
Please refer to the following documentation page for more details about how to configure Logstash inside Docker containers: Configuring Logstash for Docker.
Filebeat and Metricbeat are used for Elastic stack monitoring. Referenced docs:
- Collecting Elasticsearch monitoring data with Metricbeat
- Collecting Elasticsearch log data with Filebeat
- Good unofficial article
Beats can be configured with the `beats/filebeat.docker.yml` file or with Docker labels. However, for some reason X-Pack monitoring configured with labels doesn't work.
Please refer to the Filebeat documentation for more details about how to configure Filebeat inside Docker containers.
Fleet is a new way to manage log shippers. Instead of a bundle of Beats, we can now use a single service called Elastic Agent, and Fleet is the management server for Elastic Agents.
Since it's impossible to preconfigure Kibana for the Fleet server with environment variables, use the web UI to configure Fleet, and then fill `FLEET_SERVER_POLICY_ID` and `FLEET_SERVER_SERVICE_TOKEN` with your values.
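
The resulting `.env` entries look like this (placeholder values):

```sh
# Values obtained from the Fleet UI after configuring the Fleet server
FLEET_SERVER_POLICY_ID=<your-fleet-server-policy-id>
FLEET_SERVER_SERVICE_TOKEN=<your-service-token>
```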
The Elastic package registry is a service that Kibana and the Fleet system use to fetch integration packages. It's usually optional, but it is required when the Fleet system runs isolated from the official Elastic registry.
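
A self-hosted registry can be added as a Compose service along these lines (a sketch, not part of this repo's Compose file; the image tag and the `xpack.fleet.registryUrl` Kibana setting are assumptions to verify against the Elastic air-gapped install docs):

```yaml
package-registry:
  image: docker.elastic.co/package-registry/distribution:8.1.2
  ports:
    - "8080:8080"
# Then point Kibana at it in kibana.yml:
#   xpack.fleet.registryUrl: "http://package-registry:8080"
```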
Follow the instructions from the Wiki: Scaling out Elasticsearch
The repo contains healthcheck bash scripts and a utility built with Go. You can choose one of them, or not use service healthchecks at all.

Usage: `healthcheck [options] [elastic | kibana | logstash] [host]`

By default the tool is configured for the default repo settings (HTTPS for Elasticsearch, default ports, invalid certs ignored).
- To use basic auth, add the `-u <username>` (default: `remote_monitoring_user`) and `-p <password>` flags.
- The trigger status can be set as a RegExp with the `-s` flag, e.g. `healthcheck -s 'green|yellow' elastic`.
- A non-default hostname/scheme is accepted, e.g. `healthcheck elastic http://elastic`.
To wire the healthchecks into the services (see the sketch after this list):

- Add a mount point for each script to the corresponding service.
- Change `healthcheck: test: "CMD"` to the service's healthcheck script.
- Change the checked endpoint and the username/password.
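
Put together, a service healthcheck might look like this (a sketch; the script path and timing values are assumptions):

```yaml
elastic:
  volumes:
    # hypothetical script path — use the actual script location in this repo
    - ./healthcheck/elastic.sh:/usr/local/bin/healthcheck.sh:ro
  healthcheck:
    test: ["CMD", "healthcheck.sh"]
    interval: 30s
    timeout: 10s
    retries: 5
```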
To add plugins to any Elastic stack component, you have to:
- Create a `Dockerfile` for the service you want to add the plugin to.
- Add a `RUN` statement to the corresponding `Dockerfile` (e.g. `RUN logstash-plugin install logstash-filter-json`):
  ```dockerfile
  # https://www.docker.elastic.co/
  # ARG is required before FROM so that ${LOGSTASH_VERSION} expands
  ARG LOGSTASH_VERSION
  FROM docker.elastic.co/logstash/logstash:${LOGSTASH_VERSION}

  # Add your logstash plugins setup here
  RUN logstash-plugin install logstash-filter-json
  ```
- Add the associated plugin code configuration to the service configuration (e.g. Logstash input/output).
- Add the following to the Docker Compose section of the service you want to apply the plugin to (e.g. Logstash):

  ```yaml
  build:
    context: logstash/
  ```
- (Re)Build the images using the `docker compose build` command.
By default, both Elasticsearch and Logstash start with 1/4 of the total host memory allocated to the JVM Heap Size.
The startup scripts for Elasticsearch and Logstash can append extra JVM options from the value of an environment variable, allowing the user to adjust the amount of memory that can be used by each component:
| Service       | Environment variable |
|---------------|----------------------|
| Elasticsearch | `ES_JAVA_OPTS`       |
| Logstash      | `LS_JAVA_OPTS`       |
For example, to increase the maximum JVM Heap Size for Logstash:
```yaml
logstash:
  environment:
    LS_JAVA_OPTS: -Xmx1g -Xms1g
```
As for the Java Heap memory (see above), you can specify JVM options to enable JMX and map the JMX port on the Docker host.
Update the `{ES,LS}_JAVA_OPTS` environment variable with the following content (I've mapped the JMX service on port 18080; you can change that). Do not forget to update the `-Djava.rmi.server.hostname` option with the IP address of your Docker host (replace `DOCKER_HOST_IP`):
```yaml
logstash:
  environment:
    LS_JAVA_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=18080 -Dcom.sun.management.jmxremote.rmi.port=18080 -Djava.rmi.server.hostname=DOCKER_HOST_IP -Dcom.sun.management.jmxremote.local.only=false
```
There are currently no plans to support Docker Swarm mode.