This is a highly configurable Elasticsearch (v1.5.0) Docker image, built using Docker's automated build process and published to the public Docker Hub Registry. It has optional AWS EC2 discovery.
It usually serves as the back-end for a Logstash instance, with Kibana as the front-end, forming what is commonly referred to as an ELK stack.
To start a basic container using ephemeral storage:

```sh
docker run --name %p \
  --publish 9200:9200 \
  --publish 9300:9300 \
  cgswong/elasticsearch
```
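Once the container is up, you can confirm the node is responding (this assumes the default port mapping above and that Docker is running locally; substitute the Docker host's address otherwise):

```sh
# Query the root endpoint on the mapped client port; a healthy node
# returns a JSON document with the node and cluster details.
curl http://localhost:9200
```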
Within the container the data (`/esvol/data`), log (`/esvol/logs`) and config (`/esvol/config`) directories are exposed as volumes. To start a default container with attached persistent/shared storage for data:
```sh
mkdir -p /es/data
docker run --rm --name %p \
  --publish 9200:9200 \
  --publish 9300:9300 \
  --volume /es/data:/esvol/data \
  cgswong/elasticsearch
```
Attaching persistent storage ensures that the data is retained across container restarts (with some obvious caveats). It is recommended that this be done via a data container, preferably backed by an AWS S3 bucket or other externalized, distributed persistent storage.
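As a sketch of the data-container approach (the container names `es-data` and `es` here are illustrative, not part of the image):

```sh
# Create a data-only container that owns the data volume
# (any small image such as busybox will do).
docker create --name es-data --volume /esvol/data busybox /bin/true

# Run Elasticsearch with its data volume taken from the data container.
docker run --rm --name es \
  --publish 9200:9200 \
  --publish 9300:9300 \
  --volumes-from es-data \
  cgswong/elasticsearch
```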
A few plugins are installed, namely:

- **BigDesk**: Provides live charts and statistics for an Elasticsearch cluster. Open a web browser and navigate to `http://localhost:9200/_plugin/bigdesk/`; BigDesk will load and auto-connect to the ES node. You may need to change the `localhost` and `9200` port to the correct values for your environment/setup.
- **Elasticsearch Head**: A web front end for an Elasticsearch cluster. Open `http://localhost:9200/_plugin/head/` and it will run as a plugin within the Elasticsearch cluster.
- **Curator**: Helps with management of indices. You can learn more at the Elasticsearch Curator documentation site: http://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html
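For example, a typical index-pruning invocation might look like the sketch below. The exact subcommands and flags depend on the Curator version installed in the image, so check the documentation linked above before relying on this:

```sh
# Hypothetical: delete daily indices older than 30 days
# (Curator 3.x-style CLI; host/port assume the default mapping).
curator --host localhost --port 9200 delete indices \
  --older-than 30 --time-unit days --timestring '%Y.%m.%d'
```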
The following environment variables can be used to configure the container via the Docker `-e` (or `--env`) flag:

- `ES_CFG_URL`: Download an external Elasticsearch configuration file for use.
- `ES_PORT`: Use to change from the default client port of 9200.
- `ES_CLUSTER`: The name of the Elasticsearch cluster; the default is "es01".
- `ES_DISCOVERY`: Set to "ec2" to enable AWS EC2 discovery, and also set `AWS_ACCESS_KEY`, `AWS_SECRET_KEY` and `AWS_S3_BUCKET`.
- `AWS_S3_BUCKET`: The AWS S3 bucket to use for snapshot backups.
- `AWS_ACCESS_KEY`: The AWS access key to be used for discovery. Not required if the instance profile has EC2 `DescribeInstances` permissions.
- `AWS_SECRET_KEY`: The AWS secret key to be used for discovery. Not required if the instance profile has EC2 `DescribeInstances` permissions.
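Putting the variables together, a run with EC2 discovery enabled might look like the following (the key values and bucket name are placeholders, not working credentials):

```sh
# Placeholders: substitute your own credentials and bucket name.
docker run --name es \
  --publish 9200:9200 \
  --publish 9300:9300 \
  --env ES_CLUSTER=es01 \
  --env ES_DISCOVERY=ec2 \
  --env AWS_ACCESS_KEY=AKIA... \
  --env AWS_SECRET_KEY=... \
  --env AWS_S3_BUCKET=my-es-snapshots \
  cgswong/elasticsearch
```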
Any port within a Docker image must be appropriately exposed (and mapped) on the Docker host. To avoid port conflicts, use a service discovery mechanism and pass the correct hostname/IP and port on the Docker host to remote containers/hosts. Also, if using your own configuration file, you can either set the appropriate values within the file, or make use of variable substitution with the variables above (review the default file in the image for the expected format).
The following volumes are exposed for Docker host volume mounts using the `-v` Docker command line option:

- `/esvol/config`: Elasticsearch configuration file, `elasticsearch.yml`. The image also supports using a downloadable external configuration file specified via the `ES_CFG_URL` environment variable.
- `/esvol/data`: Elasticsearch data files.
- `/esvol/logs`: Elasticsearch log files.
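For instance, to mount all three volumes from host directories (the host paths under `/es` are illustrative):

```sh
# Create host directories and map each one onto its container volume.
mkdir -p /es/config /es/data /es/logs
docker run --name es \
  --publish 9200:9200 \
  --publish 9300:9300 \
  --volume /es/config:/esvol/config \
  --volume /es/data:/esvol/data \
  --volume /es/logs:/esvol/logs \
  cgswong/elasticsearch
```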
The container must be able to access any URL provided; otherwise it will exit with a failure code.
Sample systemd unit files have been provided to show how service discovery could be achieved using this image, assuming the same is being done for the other components in the ELK stack. The examples use etcd and DNS as the service registries though there are other options.
Please refer to the appropriate systemd unit file for further details.
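As a rough illustration of the shape such a unit takes (this sketch omits the etcd/DNS registration steps; the unit name and wiring are illustrative, so refer to the shipped unit files for the actual examples — note that `%p` is the systemd prefix-name specifier, which is why it appears in the `docker run` examples above):

```ini
[Unit]
Description=Elasticsearch container
Requires=docker.service
After=docker.service

[Service]
# Remove any stale container before starting (%p expands to the unit prefix).
ExecStartPre=-/usr/bin/docker kill %p
ExecStartPre=-/usr/bin/docker rm %p
ExecStart=/usr/bin/docker run --rm --name %p \
  --publish 9200:9200 --publish 9300:9300 \
  cgswong/elasticsearch
ExecStop=/usr/bin/docker stop %p

[Install]
WantedBy=multi-user.target
```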