diff --git a/docs/ADVANCED.md b/docs/ADVANCED.md index b87a9d54..a4be3ae9 100644 --- a/docs/ADVANCED.md +++ b/docs/ADVANCED.md @@ -5,8 +5,8 @@ Let's dive into the nitty-gritty of how to tweak the setup of your containerized ## Navigation * [Runtime configuration](#runtime-configuration) - * [Using default.yml](#using-defaultyml) - * [Configuration specs for default.yml](#configuration-specs-for-defaultyml) + * [Using `default.yml`](#using-defaultyml) + * [Configuration specs for `default.yml`](#configuration-specs-for-defaultyml) * [Global variables](#global-variables) * [Configure Splunk](#configure-splunk) * [Configured app installation paths](#configure-app-installation-paths) @@ -38,24 +38,30 @@ The purpose of the `default.yml` is to define a standard set of variables that c #### Generation The image contains a script to enable dynamic generation of this file automatically. Run the following command to generate a `default.yml`: -``` +```bash $ docker run --rm -it splunk/splunk:latest create-defaults > default.yml ``` You can also pre-seed some settings based on environment variables during this `default.yml` generation process. For example, you can define `SPLUNK_PASSWORD` with the following command: -``` +```bash $ docker run --rm -it -e SPLUNK_PASSWORD= splunk/splunk:latest create-defaults > default.yml ``` #### Usage When starting the docker container, the `default.yml` can be mounted in `/tmp/defaults/default.yml` or fetched dynamically with `SPLUNK_DEFAULTS_URL`. Ansible provisioning will read in and honor these settings. Environment variables specified at runtime will take precedence over anything defined in `default.yml`. 
-``` +```bash # Volume-mounting option -$ docker run -d -p 8000:8000 -v default.yml:/tmp/defaults/default.yml -e SPLUNK_START_ARGS=--accept-license -e SPLUNK_PASSWORD= splunk/splunk:latest +$ docker run -d -p 8000:8000 -e "SPLUNK_PASSWORD=" \ + -e "SPLUNK_START_ARGS=--accept-license" \ + -v $(pwd)/default.yml:/tmp/defaults/default.yml \ + splunk/splunk:latest # URL option -$ docker run -d -p 8000:8000 -v -e SPLUNK_DEFAULTS_URL=http://company.net/path/to/default.yml -e SPLUNK_START_ARGS=--accept-license -e SPLUNK_PASSWORD= splunk/splunk:latest +$ docker run -d -p 8000:8000 -e "SPLUNK_PASSWORD=" \ + -e "SPLUNK_START_ARGS=--accept-license" \ + -e "SPLUNK_DEFAULTS_URL=http://company.net/path/to/default.yml" \ + splunk/splunk:latest ``` ### Configuration specs for default.yml @@ -65,7 +71,7 @@ $ docker run -d -p 8000:8000 -v -e SPLUNK_DEFAULTS_URL=http://company.net/path/t Variables at the root level influence the behavior of everything in the container, as they have global scope. Example: -``` +```yaml --- retry_num: 100 ``` @@ -79,7 +85,9 @@ retry_num: 100 The major object `splunk` in the YAML file contains variables that control how Splunk operates. Sample: -``` + +```yaml +--- splunk: opt: /opt home: /opt/splunk @@ -98,7 +106,9 @@ splunk: # hec.token is used only for ingestion (receiving Splunk events) token: smartstore: null + ... ``` + | Variable Name | Description | Parent Object | Default Value | Required for Standalone | Required for Search Head Clustering | Required for Index Clustering | | --- | --- | --- | --- | --- | --- | --- | @@ -124,12 +134,15 @@ splunk: The `app_paths` section under `splunk` controls how apps are installed inside the container. Sample: -``` +```yaml +--- +splunk: app_paths: default: /opt/splunk/etc/apps shc: /opt/splunk/etc/shcluster/apps idxc: /opt/splunk/etc/master-apps httpinput: /opt/splunk/etc/apps/splunk_httpinput + ... 
``` | Variable Name | Description | Parent Object | Default Value | Required for Standalone | Required for Search Head Clustering | Required for Index Clustering | @@ -144,12 +157,15 @@ Sample: Search Head Clustering is configured using the `shc` section under `splunk`. Sample: -``` +```yaml +--- +splunk: shc: enable: false secret: replication_factor: 3 replication_port: 9887 + ... ``` | Variable Name | Description | Parent Object | Default Value | Required for Standalone | Required for Search Head Clustering | Required for Index Clustering | @@ -164,12 +180,15 @@ Sample: Indexer Clustering is configured using the `idxc` section under `splunk`. Sample: -``` +```yaml +--- +splunk: idxc: secret: search_factor: 2 replication_factor: 3 replication_port: 9887 + ... ``` | Variable Name | Description | Parent Object | Default Value | Required for Standalone| Required for Search Head Clustering | Required for Index Clustering | @@ -181,16 +200,22 @@ Sample: ## Install apps Apps can be installed by using the `SPLUNK_APPS_URL` environment variable when creating the Splunk container: -``` -$ docker run -it --name splunk -e SPLUNK_START_ARGS=--accept-license -e SPLUNK_PASSWORD= -e SPLUNK_APPS_URL=http://company.com/path/to/app.tgz splunk/splunk:latest +```bash +$ docker run --name splunk -e "SPLUNK_PASSWORD=" \ + -e "SPLUNK_START_ARGS=--accept-license" \ + -e "SPLUNK_APPS_URL=http://company.com/path/to/app.tgz" \ + -it splunk/splunk:latest ``` See the [full app installation guide](advanced/APP_INSTALL.md) to learn how to specify multiple apps and how to install apps in a distributed environment. 
## Apply Splunk license Licenses can be added with the `SPLUNK_LICENSE_URI` environment variable when creating the Splunk container: -``` -$ docker run -it --name splunk -e SPLUNK_START_ARGS=--accept-license -e SPLUNK_PASSWORD= -e SPLUNK_LICENSE_URI=http://company.com/path/to/splunk.lic splunk/splunk:latest +```bash +$ docker run --name splunk -e "SPLUNK_PASSWORD=" \ + -e "SPLUNK_START_ARGS=--accept-license" \ + -e "SPLUNK_LICENSE_URI=http://company.com/path/to/splunk.lic" \ + -it splunk/splunk:latest ``` See the [full license installation guide](advanced/LICENSE_INSTALL.md) to learn how to specify multiple licenses and how to use a central, containerized license manager. @@ -200,8 +225,8 @@ When Splunk boots, it registers all the config files in various locations on the Using the Splunk Docker image, users can also create their own config files, following the same INI file format that drives Splunk. This is a power-user/admin-level feature, as invalid config files can break or prevent start-up of your Splunk installation. -User-specified config files are set in `default.yml` by creating a `conf` key under `splunk`, in the format below: -``` +User-specified config files are set in `default.yml` by creating a `conf` key under `splunk`, in the format below: +```yaml --- splunk: conf: @@ -217,7 +242,7 @@ splunk: This generates a file `user-prefs.conf`, owned by the correct Splunk user and group and located in the given directory (in this case, `/opt/splunkforwarder/etc/users/admin/user-prefs/local`). Following INI format, the contents of `user-prefs.conf` will resemble the following: -``` +```ini [general] search_syntax_highlighting = dark default_namespace = appboilerplate @@ -235,7 +260,7 @@ This is a capability only available for indexer clusters (cluster_master + index The Splunk Docker image supports SmartStore in a bring-your-own backend storage provider format. 
Due to the complexity of this option, SmartStore is only enabled if you specify all the parameters in your `default.yml` file. Sample configuration that persists *all* indexes (default) with a SmartStore backend: -``` +```yaml --- splunk: smartstore: @@ -259,20 +284,22 @@ The SmartStore cache manager controls data movement between the indexer and the * The `index` stanza corresponds to [indexes.conf options](https://docs.splunk.com/Documentation/Splunk/latest/admin/Indexesconf). This example defines cache settings and retention policy: -``` -smartstore: - cachemanager: - max_cache_size: 500 - max_concurrent_uploads: 7 - index: - - indexName: custom_index - remoteName: my_storage - scheme: http - remoteLocation: my_storage.net - maxGlobalDataSizeMB: 500 - maxGlobalRawDataSizeMB: 200 - hotlist_recency_secs: 30 - hotlist_bloom_filter_recency_hours: 1 +```yaml +splunk: + smartstore: + cachemanager: + max_cache_size: 500 + max_concurrent_uploads: 7 + index: + - indexName: custom_index + remoteName: my_storage + scheme: http + remoteLocation: my_storage.net + maxGlobalDataSizeMB: 500 + maxGlobalRawDataSizeMB: 200 + hotlist_recency_secs: 30 + hotlist_bloom_filter_recency_hours: 1 + ... ``` ## Use a deployment server @@ -291,7 +318,7 @@ To secure network traffic from one Splunk instance to another (e.g. forwarders t If you are enabling SSL on one tier of your Splunk topology, it's likely all instances will need it. To achieve this, generate your server and CA certificates and add them to the `default.yml`, which gets shared across all Splunk docker containers. Sample `default.yml` snippet to configure Splunk TCP with SSL: -``` +```yaml splunk: ... s2s: @@ -312,7 +339,7 @@ Building your own images from source is possible, but neither supported nor reco The supplied `Makefile` in the root of this project contains commands to control the build: 1. Fork the [docker-splunk GitHub repository](https://github.com/splunk/docker-splunk/) 1. 
Clone your fork using git and create a branch off develop - ``` + ```bash $ git clone git@github.com:YOUR_GITHUB_USERNAME/docker-splunk.git $ cd docker-splunk ``` @@ -351,14 +378,12 @@ The `splunk/common-files` directory contains a Dockerfile that extends the base ``` $ make minimal-redhat-8 ``` - * **Bare image** Build a full Splunk base image *without* Ansible. ``` $ make bare-redhat-8 ``` - * **Full image** Build a full Splunk base image *with* Ansible. diff --git a/docs/ARCHITECTURE.md b/docs/ARCHITECTURE.md index 387b387e..649b17e7 100644 --- a/docs/ARCHITECTURE.md +++ b/docs/ARCHITECTURE.md @@ -1,5 +1,5 @@ ## Architecture -From a design perspective, the containers brought up with the `docker-splunk` images are meant to provision themselves locally and asynchronously. The execution flow of the provisioning process is meant to gracefully handle interoperability in this manner, while also maintaining idempotency and reliability. +From a design perspective, the containers brought up with the `docker-splunk` images are meant to provision themselves locally and asynchronously. The execution flow of the provisioning process is meant to gracefully handle interoperability in this manner, while also maintaining idempotency and reliability. ## Navigation @@ -9,7 +9,7 @@ From a design perspective, the containers brought up with the `docker-splunk` im * [Supported platforms](#supported-platforms) ## Networking -By default, the Docker image exposes a variety of ports for both external interaction as well as internal use. +By default, the Docker image exposes a variety of ports for both external interaction as well as internal use. 
``` EXPOSE 8000 8065 8088 8089 8191 9887 9997 ``` @@ -28,11 +28,13 @@ Below is a table detailing the purpose of each port, which can be used as a refe ## Design -##### Remote networking -Particularly when bringing up distributed Splunk topologies, there is a need for one Splunk instances to make a request against another Splunk instance in order to construct the cluster. These networking requests are often prone to failure, as when Ansible is executed asyncronously there are no guarantees that the requestee is online/ready to receive the message. +#### Remote networking +Particularly when bringing up distributed Splunk topologies, there is a need for one Splunk instance to make a request against another Splunk instance in order to construct the cluster. These networking requests are often prone to failure, as when Ansible is executed asynchronously there are no guarantees that the requestee is online/ready to receive the message. While developing new playbooks that require remote Splunk-to-Splunk connectivity, we employ the use of `retry` and `delay` options for tasks. For instance, in this example below, we add indexers as search peers of individual search head. To overcome error-prone networking, we have retry counts with delays embedded in the task. There are also break-early conditions that maintain idempotency so we can progress if successful: -``` + + +```yaml - name: Set all indexers as search peers command: "{{ splunk.exec }} add search-server https://{{ item }}:{{ splunk.svc_port }} -auth {{ splunk.admin_user }}:{{ splunk.password }} -remoteUsername {{ splunk.admin_user }} -remotePassword {{ splunk.password }}" become: yes @@ -49,9 +51,12 @@ While developing new playbooks that require remote Splunk-to-Splunk connectivity no_log: "{{ hide_password }}" when: "'splunk_indexer' in groups" ``` + Another utility you can add when creating new plays is an implicit wait. 
For more information on this, see the `roles/splunk_common/tasks/wait_for_splunk_instance.yml` play which will wait for another Splunk instance to be online before making any connections against it. -``` + + +```yaml - name: Check Splunk instance is running uri: url: https://{{ splunk_instance_address }}:{{ splunk.svc_port }}/services/server/info?output_mode=json @@ -68,6 +73,7 @@ Another utility you can add when creating new plays is an implicit wait. For mor ignore_errors: true no_log: "{{ hide_password }}" ``` + ## Supported platforms At the current time, this project only officially supports running Splunk Enterprise on `debian:stretch-slim`. We do have plans to incorporate other operating systems and Windows in the future. diff --git a/docs/CONTRIBUTING.md b/docs/CONTRIBUTING.md index 99206283..f0efa8d2 100644 --- a/docs/CONTRIBUTING.md +++ b/docs/CONTRIBUTING.md @@ -116,7 +116,7 @@ There are multiple types of tests. The location of the test code varies with typ $ make medium-tests ``` -3. **Large:** Exercises the entire system, end-to-end; used to identify crucial performance and basic functionality that will be run for every code check-in and commit; may launch or interact with services in a datacenter, preferably with a staging environment to avoid affecting production +3. 
**Large:** Exercises the entire system, end-to-end; used to identify crucial performance and basic functionality that will be run for every code check-in and commit; may launch or interact with services in a data center, preferably with a staging environment to avoid affecting production ``` $ make large-tests ``` diff --git a/docs/EXAMPLES.md b/docs/EXAMPLES.md index 9131081c..0179fcc2 100644 --- a/docs/EXAMPLES.md +++ b/docs/EXAMPLES.md @@ -14,7 +14,7 @@ Note that for more complex scenarios, we will opt to use a [Docker compose file] * [...with any app](#create-standalone-with-app) * [...with a SplunkBase app](#create-standalone-with-splunkbase-app) * [...with SSL enabled](#create-standalone-with-ssl-enabled) - * [...with a Free license](#create-standalone-with-free-license) + * [...with a Splunk Free license](#create-standalone-with-splunk-free-license) * [Create standalone and universal forwarder](#create-standalone-and-universal-forwarder) * [Create heavy forwarder](#create-heavy-forwarder) * [Create heavy forwarder and deployment server](#create-heavy-forwarder-and-deployment-server) @@ -27,13 +27,16 @@ Note that for more complex scenarios, we will opt to use a [Docker compose file] ## Create standalone from CLI Execute the following to bring up your deployment: -``` -$ docker run --name so1 --hostname so1 -p 8000:8000 -e "SPLUNK_PASSWORD=" -e "SPLUNK_START_ARGS=--accept-license" -it splunk/splunk:latest +```bash +$ docker run --name so1 --hostname so1 -p 8000:8000 \ + -e "SPLUNK_PASSWORD=" \ + -e "SPLUNK_START_ARGS=--accept-license" \ + -it splunk/splunk:latest ``` ## Create standalone from compose -
docker-compose.yml +
docker-compose.yml

```yaml version: "3.6" @@ -48,7 +51,7 @@ services: ports: - 8000 ``` -
+

Execute the following to bring up your deployment: ``` @@ -56,9 +59,9 @@ $ SPLUNK_PASSWORD= docker-compose up -d ``` ## Create standalone with license -Adding a Splunk Enterprise license can be done in multiple ways. Please review the following compose files below to see how it can be achieved, either with a license hosted on a webserver or with a license file as a direct mount. +Adding a Splunk Enterprise license can be done in multiple ways. Review the following compose files below to see how it can be achieved, either with a license hosted on a webserver or with a license file as a direct mount. -
docker-compose.yml - license from URL +
docker-compose.yml - license from URL

```yaml version: "3.6" @@ -74,9 +77,9 @@ services: ports: - 8000 ``` -
+

-
docker-compose.yml - license from file +
docker-compose.yml - license from file

```yaml version: "3.6" @@ -94,8 +97,7 @@ services: volumes: - ./splunk.lic:/tmp/license/splunk.lic ``` -
- +

Execute the following to bring up your deployment: ``` @@ -103,9 +105,9 @@ $ SPLUNK_PASSWORD= docker-compose up -d ``` ## Create standalone with HEC -To learn more about what the HTTP event collector (HEC) is and how to use it, please review the documentation [here](https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector). +To learn more about the HTTP Event Collector (HEC) and how to use it, see [Set up and use HTTP Event Collector](https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector). -
docker-compose.yml +
docker-compose.yml

```yaml version: "3.6" @@ -121,7 +123,7 @@ services: ports: - 8000 ``` -
+

Execute the following to bring up your deployment: ``` @@ -129,7 +131,7 @@ $ SPLUNK_PASSWORD= docker-compose up -d ``` To validate HEC is provisioned properly and functional: -``` +```bash $ curl -k https://localhost:8088/services/collector/event -H "Authorization: Splunk abcd1234" -d '{"event": "hello world"}' {"text": "Success", "code": 0} ``` @@ -137,7 +139,7 @@ $ curl -k https://localhost:8088/services/collector/event -H "Authorization: Spl ## Create standalone with app Splunk apps can also be installed using this Docker image. -
docker-compose.yml +
docker-compose.yml

```yaml version: "3.6" @@ -153,7 +155,7 @@ services: ports: - 8000 ``` -
+

Execute the following to bring up your deployment: ``` @@ -163,7 +165,7 @@ $ SPLUNK_PASSWORD= docker-compose up -d ## Create standalone with SplunkBase app Apps showcased on SplunkBase can also be installed using this Docker image. -
docker-compose.yml +
docker-compose.yml

```yaml version: "3.6" @@ -175,13 +177,13 @@ services: environment: - SPLUNK_START_ARGS=--accept-license - SPLUNK_APPS_URL=https://splunkbase.splunk.com/app/2890/release/4.1.0/download - - SPLUNKBASE_USERNAME= + - SPLUNKBASE_USERNAME=<username> - SPLUNKBASE_PASSWORD - SPLUNK_PASSWORD ports: - 8000 ``` -
+

Execute the following to bring up your deployment: ``` @@ -190,12 +192,12 @@ $ SPLUNKBASE_PASSWORD= SPLUNK_PASSWORD= docker-co ## Create standalone with SSL enabled To enable SSL over SplunkWeb, you'll first need to generate your self-signed certificates. Please see the [Splunk docs](https://docs.splunk.com/Documentation/Splunk/latest/Security/Self-signcertificatesforSplunkWeb) on how to go about doing this. For the purposes of local development, you can use: -``` +```bash openssl req -x509 -newkey rsa:4096 -passout pass:abcd1234 -keyout /home/key.pem -out /home/cert.pem -days 365 -subj /CN=localhost ``` Once you have your certificates available, you can execute the following to bring up your deployment with SSL enabled on the Splunk Web UI: -``` +```bash $ docker run --name so1 --hostname so1 -p 8000:8000 \ -e "SPLUNK_HTTP_ENABLESSL=true" \ -e "SPLUNK_HTTP_ENABLESSL_CERT=/home/cert.pem" \ @@ -207,18 +209,22 @@ $ docker run --name so1 --hostname so1 -p 8000:8000 \ -it splunk/splunk:latest ``` -## Create Standalone with Free license +## Create standalone with Splunk Free license [Splunk Free](https://docs.splunk.com/Documentation/Splunk/latest/Admin/MoreaboutSplunkFree) is the totally free version of Splunk software. The Free license lets you index up to 500 MB per day and will never expire. Execute the following to bring up a Splunk Free standalone environment: -``` -$ docker run --name so1 --hostname so1 -p 8000:8000 -e SPLUNK_PASSWORD= -e SPLUNK_START_ARGS=--accept-license -e SPLUNK_LICENSE_URI=Free -it splunk/splunk:latest +```bash +$ docker run --name so1 --hostname so1 -p 8000:8000 \ + -e "SPLUNK_PASSWORD=" \ + -e "SPLUNK_START_ARGS=--accept-license" \ + -e "SPLUNK_LICENSE_URI=Free" \ + -it splunk/splunk:latest ``` ## Create standalone and universal forwarder You can also enable distributed deployments. In this case, we can create a Splunk universal forwarder running in a container to stream logs to a Splunk standalone, also running in a container. -
docker-compose.yml +
docker-compose.yml

```yaml version: "3.6" @@ -261,7 +267,7 @@ services: - 8000 - 8089 ``` -
+

Execute the following to bring up your deployment: ``` @@ -271,7 +277,7 @@ $ SPLUNK_PASSWORD= docker-compose up -d ## Create heavy forwarder The following will allow you spin up a forwarder, and stream its logs to an independent, external indexer located at `idx1-splunk.company.internal`, as long as that hostname is reachable on your network. -
docker-compose.yml +
docker-compose.yml

```yaml version: "3.6" @@ -299,7 +305,7 @@ services: ports: - 1514 ``` -
+

Execute the following to bring up your deployment: ``` @@ -309,7 +315,7 @@ $ SPLUNK_PASSWORD= docker-compose up -d ## Create heavy forwarder and deployment server The following will allow you spin up a forwarder, and stream its logs to an independent, external indexer located at `idx1-splunk.company.internal`, as long as that hostname is reachable on your network. Additionally, it brings up a deployment server, which will download an app and distribute it to the heavy forwarder. -
docker-compose.yml +
docker-compose.yml

```yaml version: "3.6" @@ -352,7 +358,7 @@ services: - SPLUNK_APPS_URL=https://artifact.company.internal/splunk_app.tgz - SPLUNK_PASSWORD ``` -
+

Execute the following to bring up your deployment: ``` @@ -366,7 +372,8 @@ $ docker run -it -e SPLUNK_PASSWORD= splunk/splunk:latest create-defau ``` Additionally, review the `docker-compose.yml` below to understand how linking Splunk instances together through roles and environment variables is accomplished: -
docker-compose.yml + +
docker-compose.yml

```yaml version: "3.6" @@ -481,7 +488,7 @@ services: volumes: - ./default.yml:/tmp/defaults/default.yml ``` -
+

Execute the following to bring up your deployment: ``` @@ -495,7 +502,8 @@ $ docker run -it -e SPLUNK_PASSWORD= splunk/splunk:latest create-defau ``` Additionally, review the `docker-compose.yml` below to understand how linking Splunk instances together through roles and environment variables is accomplished: -
docker-compose.yml + +
docker-compose.yml

```yaml version: "3.6" @@ -611,7 +619,7 @@ services: volumes: - ./default.yml:/tmp/defaults/default.yml ``` -
+

Execute the following to bring up your deployment: ``` @@ -625,7 +633,8 @@ $ docker run -it -e SPLUNK_PASSWORD= splunk/splunk:latest create-defau ``` Additionally, review the `docker-compose.yml` below to understand how linking Splunk instances together through roles and environment variables is accomplished: -
docker-compose.yml + +
docker-compose.yml

```yaml version: "3.6" @@ -812,7 +821,7 @@ services: volumes: - ./default.yml:/tmp/defaults/default.yml ``` -
+

Execute the following to bring up your deployment: ``` @@ -820,7 +829,8 @@ $ docker-compose up -d ``` ## Enable root endpoint on SplunkWeb -
docker-compose.yml + +
docker-compose.yml

```yaml version: "3.6" @@ -836,7 +846,7 @@ services: ports: - 8000 ``` -
+

Execute the following to bring up your deployment: ``` @@ -846,7 +856,8 @@ $ SPLUNK_PASSWORD= docker-compose up -d Then, visit SplunkWeb on your browser with the root endpoint in the URL, such as `http://localhost:8000/splunkweb`. ## Create sidecar forwarder -
k8s-sidecar.yml + +
k8s-sidecar.yml

```yaml apiVersion: v1 @@ -878,7 +889,7 @@ spec: - name: shared-data emptyDir: {} ``` -
+

Execute the following to bring up your deployment: ``` @@ -888,4 +899,4 @@ $ kubectl apply -f k8s-sidecar.yml After your pod is ready, the universal forwarder will be reading the logs generated by your app via the shared volume mount. In the ideal case, your app is generating the logs while the forwarder is reading them and streaming the output to a separate Splunk instance located at splunk.company.internal. ## More -There are a variety of Docker compose scenarios in the `docker-splunk` repo [here](https://github.com/splunk/docker-splunk/tree/develop/test_scenarios). Please feel free to use any of those for reference in terms of different topologies! +There are a variety of Docker compose scenarios in the `docker-splunk` repo [here](https://github.com/splunk/docker-splunk/tree/develop/test_scenarios). Feel free to use any of those for reference in deploying different topologies! diff --git a/docs/INTRODUCTION.md b/docs/INTRODUCTION.md index 976e2743..3c4beeaa 100644 --- a/docs/INTRODUCTION.md +++ b/docs/INTRODUCTION.md @@ -1,24 +1,24 @@ ## The Need for Containers -Splunk Enterprise is most commonly deployed with dedicated hardware, and in configurations to support the size of your organization. Expanding your Splunk Enterprise service using only dedicated hardware involves procuring new hardware, installing the operating system, installing and then configuring Splunk Enterprise. Expanding to meet the needs of your users rapidly becomes difficult and overly complex in this model. +Splunk Enterprise is most commonly deployed with dedicated hardware, and in configurations to support the size of your organization. Expanding your Splunk Enterprise service using only dedicated hardware involves procuring new hardware, installing the operating system, installing and then configuring Splunk Enterprise. Expanding to meet the needs of your users rapidly becomes difficult and overly complex in this model. 
The overhead of this operation normally leads people down the path of creating virtual machines using a hypervisor. A hypervisor provides a significant improvement to the speed of spinning up more compute resources, but comes with one major drawback: the overhead of running multiple operating systems on one host. - + ## The Advent of Docker In recent years, [Docker](https://www.docker.com) has become the de-facto tool designed make it easier to create, deploy, and run applications through the use of containers. -Containers allow an application to be the only process that runs in a VM-like, isolated environment. Unlike a hypervisor, a container-based system does not require the use of a guest operating system. This allows a single host to dedicate more resources towards the application. +Containers allow an application to be the only process that runs in a VM-like, isolated environment. Unlike a hypervisor, a container-based system does not require the use of a guest operating system. This allows a single host to dedicate more resources towards the application. For more information on how containers or Docker works, we'll [let Docker do the talking](https://www.docker.com/resources/what-container). - + The Splunk user community has asked us to support containerization as a platform for running Splunk. The promise of running applications in a microservice-oriented architecture evangelizes the principles of infrastructure-as-code and declarative directives, and we aimed to bring those benefits with the work in this codebase. This project delivers on that request: to provide the rich functionality that Splunk Enterprise offers with the user-friendliness and production-readiness of container-native software. ## History -In 2015, Denis Gladkikh (@outcoldman) created an open-source GitHub repository for installing Splunk Enterprise, Splunk Universal Forwarder and Splunk Light inside containers. 
+In 2015, Denis Gladkikh ([@outcoldman](https://github.com/outcoldman)) created an open-source GitHub repository for installing Splunk Enterprise, Splunk Universal Forwarder, and Splunk Light inside containers. -Universal Forwarders and standalone instances were being brought online at a rapid pace, which introduced a new level of complexity into the enterprise environment. In 2018, a new container image was created to improve the flexibility with which Splunk Enterprise could be operated in larger and more dynamic environments. Splunk's new container can now start with a small environment and grow with the deployment. This however has caused a divergence from the open-source community edition of the Splunk Enterprise container. +Universal Forwarders and standalone instances were being brought online at a rapid pace, which introduced a new level of complexity into the enterprise environment. In 2018, a new container image was created to improve the flexibility with which Splunk Enterprise could be operated in larger and more dynamic environments. The new Splunk container can now start with a small environment and grow with the deployment. This, however, has caused a divergence from the open-source community edition of the Splunk Enterprise container. -As a result, containers for Splunk Enterprise versions prior to 7.1 can not be used with, or in conjunction with, this new version as it is not backward compatible. 
We are also unable to support version updates from any prior container to the current version released with Splunk Enterprise and Splunk Universal Forwarder 7.2, as the older versions are not forward compatible. We are sorry for any inconvenience this may cause. diff --git a/docs/SECURITY.md b/docs/SECURITY.md index 6f1ae709..ab195c90 100644 --- a/docs/SECURITY.md +++ b/docs/SECURITY.md @@ -5,59 +5,42 @@ This section will cover various security considerations when using the Splunk En The Splunk Enterprise and Universal Forwarder containers may be started using one of the following three user accounts: -* `splunk` (most secure): This user has no privileged access and cannot use `sudo` to change to another user account. -It is a member of the `ansible` group, which enables it to run the embedded playbooks at startup. When using the -`splunk` user, all processes will run as this user. Note that you must set the `SPLUNK_HOME_OWNERSHIP_ENFORCEMENT` -environment variable to `false` when starting as this user. ***Recommended for production*** +* `splunk` (most secure): This user has no privileged access and cannot use `sudo` to change to another user account. It is a member of the `ansible` group, which enables it to run the embedded playbooks at startup. When using the `splunk` user, all processes will run as this user. The `SPLUNK_HOME_OWNERSHIP_ENFORCEMENT` environment variable must be set to `false` when starting as this user. ***Recommended for production*** -* `ansible` (middle ground): This user is a member of the `sudo` group and able to execute `sudo` commands without a -password. It uses privileged access at startup only to perform certain actions which cannot be performed by regular -users (see below). After startup, `sudo` access will automatically be removed from the `ansible` user if the -environment variable `STEPDOWN_ANSIBLE_USER` is set to `true`. 
***This is the default user account*** +* `ansible` (middle ground): This user is a member of the `sudo` group and able to execute `sudo` commands without a password. It uses privileged access at startup only to perform certain actions which cannot be performed by regular users (see below). After startup, `sudo` access will automatically be removed from the `ansible` user if the environment variable `STEPDOWN_ANSIBLE_USER` is set to `true`. ***This is the default user account*** -* `root` (least secure): This is a privileged user running with UID of `0`. Some customers may want to use this for -forwarder processes that require access to log files which cannot be read by any other user. ***This is not recommended*** +* `root` (least secure): This is a privileged user running with UID of `0`. Some customers may want to use this for forwarder processes that require access to log files which cannot be read by any other user. ***This is not recommended*** ### After Startup ### By default, the primary Splunk processes will always run as the unprivileged user and group `splunk`, -irregardless of which user account the containers are started with. You can override this by changing the following: +regardless of which user account the containers are started with. You can override this by changing the following: * User: `splunk.user` variable in your `default.yml` template, or the `SPLUNK_USER` environment variable * Group: `splunk.group` variable in your `default.yml` template, or the `SPLUNK_GROUP` environment variable Note that the containers are built with the `splunk` user having UID `41812` and the `splunk` group having GID `41812`. -You may want to override these settings to ensure that Splunk forwarder processes have access to read your log files. -For example, you can ensure that all processes run as `root` by starting as the `root` user with the environment -variable `SPLUNK_USER` also set to `root` (this is not recommended). 
+You may want to override these settings to ensure that Splunk forwarder processes have access to read your log files. For example, you can ensure that all processes run as `root` by starting as the `root` user with the environment variable `SPLUNK_USER` also set to `root` (this is not recommended). ### Privileged Features ### -Certain features supported by the Splunk Enterprise and Universal Forwarder containers require that they are started -with privileged access using either the `ansible` or `root` user accounts. +Certain features supported by the Splunk Enterprise and Universal Forwarder containers require that they are started with privileged access using either the `ansible` or `root` user accounts. #### Splunk Home Ownership #### -By default, at startup the containers will ensure that all files located under the `SPLUNK_HOME` directory -(`/opt/splunk`) are owned by user `splunk` and group `splunk`. This helps to ensure that the Splunk processes are -able to read and write any external volumes mounted for `/opt/splunk/etc` and `/opt/splunk/var`. While all supported -versions of the docker engine will automatically set proper ownership for these volumes, external orchestration systems +By default, at startup the containers will ensure that all files located under the `SPLUNK_HOME` directory (`/opt/splunk`) are owned by user `splunk` and group `splunk`. This helps to ensure that the Splunk processes are able to read and write any external volumes mounted for `/opt/splunk/etc` and `/opt/splunk/var`. While all supported versions of the docker engine will automatically set proper ownership for these volumes, external orchestration systems typically will require extra steps. -If you know that this step is unnecessary, you can disable it by setting the `SPLUNK_HOME_OWNERSHIP_ENFORCEMENT` -environment variable to `false`. Note that this must be disabled when starting containers with the `splunk` user -account. 
+If you know that this step is unnecessary, you can disable it by setting the `SPLUNK_HOME_OWNERSHIP_ENFORCEMENT` environment variable to `false`. This must be disabled when starting containers with the `splunk` user account. #### Package Installation #### -The `JAVA_VERSION` environment variable can be used to automatically install OpenJDK at startup time. This feature -requires starting as a privileged user account. +The `JAVA_VERSION` environment variable can be used to automatically install OpenJDK at startup time. This feature requires starting as a privileged user account. ### Kubernetes Users ### -For Kubernetes, we recommend using the `fsGroup` [Security Context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) -to ensure that all Pods are able to write to your Persistent Volumes. For example: +For Kubernetes, we recommend using the `fsGroup` [Security Context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) to ensure that all Pods are able to write to your Persistent Volumes. For example: ``` apiVersion: v1 @@ -77,8 +60,7 @@ spec: ... ``` -This can be used to create a Splunk Enterprise Pod running as the unprivileged `splunk` user which is able to securely -read and write from any Persistent Volumes that are created for it. +This can be used to create a Splunk Enterprise Pod running as the unprivileged `splunk` user which is able to securely read and write from any Persistent Volumes that are created for it. Red Hat OpenShift users can leverage the built-in `nonroot` [Security Context Constraint](https://docs.openshift.com/container-platform/3.9/admin_guide/manage_scc.html) to run Pods with the above Security Context: diff --git a/docs/SETUP.md b/docs/SETUP.md index 85085647..acbd7229 100644 --- a/docs/SETUP.md +++ b/docs/SETUP.md @@ -25,8 +25,10 @@ This section explains how to start basic standalone and distributed deployments. 
Start a single containerized instance of Splunk Enterprise with the command below, replacing `` with a password string that conforms to the [Splunk Enterprise password requirements](https://docs.splunk.com/Documentation/Splunk/latest/Security/Configurepasswordsinspecfile). -``` -$ docker run -it -p 8000:8000 -e "SPLUNK_PASSWORD=" -e "SPLUNK_START_ARGS=--accept-license" splunk/splunk:latest +```bash +$ docker run -p 8000:8000 -e "SPLUNK_PASSWORD=" \ + -e "SPLUNK_START_ARGS=--accept-license" \ + -it splunk/splunk:latest ``` This command does the following: @@ -51,8 +53,11 @@ $ docker network create --driver bridge --attachable skynet #### Splunk Enterprise Start a single, standalone instance of Splunk Enterprise in the network created above, replacing `` with a password string that conforms to the [Splunk Enterprise password requirements](https://docs.splunk.com/Documentation/Splunk/latest/Security/Configurepasswordsinspecfile). -``` -$ docker run -it --network skynet --name so1 --hostname so1 -p 8000:8000 -e "SPLUNK_PASSWORD=" -e "SPLUNK_START_ARGS=--accept-license" splunk/splunk:latest +```bash +$ docker run --network skynet --name so1 --hostname so1 -p 8000:8000 \ + -e "SPLUNK_PASSWORD=" \ + -e "SPLUNK_START_ARGS=--accept-license" \ + -it splunk/splunk:latest ``` This command does the following: @@ -67,8 +72,12 @@ After the container starts up successfully, you can access Splunk Web at ` with a password string that conforms to the [Splunk Enterprise password requirements](https://docs.splunk.com/Documentation/Splunk/latest/Security/Configurepasswordsinspecfile). 
-``` -$ docker run -it --network skynet --name uf1 --hostname uf1 -e "SPLUNK_PASSWORD=" -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_STANDALONE_URL=so1" splunk/universalforwarder:latest +```bash +$ docker run --network skynet --name uf1 --hostname uf1 \ + -e "SPLUNK_PASSWORD=" \ + -e "SPLUNK_START_ARGS=--accept-license" \ + -e "SPLUNK_STANDALONE_URL=so1" \ + -it splunk/universalforwarder:latest ``` This command does the following: diff --git a/docs/STORAGE_OPTIONS.md b/docs/STORAGE_OPTIONS.md index c5e9eeae..98998640 100644 --- a/docs/STORAGE_OPTIONS.md +++ b/docs/STORAGE_OPTIONS.md @@ -1,15 +1,10 @@ ## Data Storage ## -This section will cover examples of different options for configuring data persistence. This includes both indexed data and -configuration items. Splunk only supports data persistence to volumes mounted outside of the container. Data persistence for -folders inside of the container is not supported. The following are intended as only as examples and unofficial guidelines. +This section will cover examples of different options for configuring data persistence. This includes both indexed data and configuration items. Splunk only supports data persistence to volumes mounted outside of the container. Data persistence for folders inside of the container is not supported. The following are intended only as examples and unofficial guidelines. ### Storing indexes and search artifacts ### -Splunk Enterprise, by default, Splunk Enterprise uses the var directory for indexes, search artifacts, etc. In the public image, the Splunk Enterprise -home directory is /opt/splunk, and the indexes are configured to run under var/. If you want to persist the indexed -data, then mount an external directory into the container under this folder. +By default, Splunk Enterprise uses the var directory for indexes, search artifacts, etc.
In the public image, the Splunk Enterprise home directory is /opt/splunk, and the indexes are configured to run under var/. If you want to persist the indexed data, then mount an external directory into the container under this folder. -If you do not want to modify or persist any configuration changes made outside of what has been defined in the docker -image file, then use the following steps for your service. +If you do not want to modify or persist any configuration changes made outside of what has been defined in the docker image file, then use the following steps for your service. #### Step 1: Create a named volume #### To create a simple named volume in your Docker environment, run the following command @@ -19,7 +14,7 @@ docker volume create so1-var See Docker's official documentation for more complete instructions and additional options. #### Step 2: Define the docker compose YAML and start the service#### -Using the Docker Compose format, save the following contents into a docker-compose.yml file +Using the Docker Compose format, save the following contents into a docker-compose.yml file: ``` version: "3.6" @@ -51,31 +46,22 @@ services: - so1-var:/opt/splunk/var ``` -This mounts only the contents of /opt/splunk/var, so anything outside of this folder will not persist. Any configuration changes will not -remain when the container exits. Note that changes will persist between starting and stopping a container. See -Docker's documentation for more discussion on the difference between starting, stopping, and exiting if the difference -between them is unclear. +This mounts only the contents of /opt/splunk/var, so anything outside of this folder will not persist. Any configuration changes will not remain when the container exits. Note that changes will persist between starting and stopping a container. See the Docker documentation for more discussion on the difference between starting, stopping, and exiting if the difference between them is unclear. 
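The compose file body is elided in the hunk above; for orientation, here is a minimal sketch of the kind of `docker-compose.yml` this section describes (the image tag, hostname, and port mapping are assumptions, not taken from the diff):

```yaml
version: "3.6"

volumes:
  so1-var:                          # named volume created in Step 1

services:
  so1:
    image: splunk/splunk:latest     # assumed tag; pin a specific version in practice
    hostname: so1
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=<password>
    ports:
      - 8000:8000
    volumes:
      - so1-var:/opt/splunk/var     # only var/ persists; etc/ changes are lost on exit
```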
-In the same directory as docker-compose.yml run the following command +In the same directory as `docker-compose.yml`, run the following command to start the service. ``` docker-compose up ``` -to start the service. #### Viewing the contents of the volume #### -To view the data outside of the container run +To view the data outside of the container run: ``` docker volume inspect so1-var ``` The output of that command should list where the data is stored. ### Storing indexes, search artifacts, and configuration changes ### -In this section, we build off of the previous example to save the configuration as well. This can make it easier to save modified -configurations, but simultaneously allows configuration drift to occur. If you want to keep configuration drift from -happening, but still want to be able to persist some of the data, you can save off the specific "local" folders that -you want the data to be persisted for (such as etc/system/local). However, be careful when doing this because you will -both know what folders you need to save off and the number of volumes can proliferate rapidly - depending on the -deployment. Please take the "Administrating Splunk" through Splunk Education prior to attempting this configuration. +In this section, we build off of the previous example to save the configuration as well. This can make it easier to save modified configurations, but simultaneously allows configuration drift to occur. If you want to keep configuration drift from happening, but still want to be able to persist some of the data, you can save off the specific "local" folders that you want the data to be persisted for (such as etc/system/local). However, be careful when doing this: you must know which folders need to be saved off, and the number of volumes can proliferate rapidly depending on the deployment. Please take the "Administering Splunk" course through Splunk Education prior to attempting this configuration.
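The "specific local folders" approach mentioned above is not shown in the docs themselves; a purely illustrative compose fragment, with hypothetical volume names and folder choices, might look like:

```yaml
services:
  so1:
    volumes:
      - so1-var:/opt/splunk/var                         # indexes and search artifacts
      - so1-system-local:/opt/splunk/etc/system/local   # one "local" folder chosen for persistence
```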
In these examples, we will assume that the entire etc folder is being mounted into the container. @@ -86,9 +72,8 @@ docker volume create so1-etc ``` See Docker's official documentation for more complete instructions and additional options. -#### Step 2: Define the docker compose YAML #### -Notice that this differs from the previous example by adding in the so1-etc volume references. -In the following example, save the following data into a file named docker-compose.yml +#### Step 2: Define the Docker Compose YAML #### +Notice that this differs from the previous example by adding the so1-etc volume references. Save the following data into a file named `docker-compose.yml`. ``` version: "3.6" @@ -122,15 +107,12 @@ services: - so1-etc:/opt/splunk/etc ``` -In the directory where the docker-compose.yml file is saved, run +In the same directory as `docker-compose.yml`, run the following command to start the service: ``` docker-compose up ``` -to start the service. -When the volume is mounted the data will persist after the container exits. If a container has exited and restarted, -but no data shows up, then check the volume definition and verify that the container did not create a new volume -or that the volume mounted is in the same location. +When the volume is mounted, the data will persist after the container exits. If a container has exited and restarted, but no data shows up, then check the volume definition and verify that the container did not create a new volume or that the volume mounted is in the same location. #### Viewing the contents of the volume #### To view the etc directory outside of the container run one or both of the commands @@ -140,8 +122,7 @@ docker volume inspect so1-etc The output of that command should list the directory associated with the volume mount.
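The compose file for this step is also elided in the hunk; per the text, it matches the earlier example with the so1-etc references added. A sketch of just those additions (the volume name comes from the text, the layout is assumed):

```yaml
# additions relative to the previous docker-compose.yml
volumes:
  so1-etc:                          # new named volume from Step 1

services:
  so1:
    volumes:
      - so1-etc:/opt/splunk/etc     # mounted alongside so1-var:/opt/splunk/var
```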
#### Volume Mount Guidelines #### -**Do not mount the same folder into two different Splunk Enterprise instances, this can cause inconsistencies in the -indexed data and undefined behavior within Splunk Enterprise itself.** +Do not mount the same folder into two different Splunk Enterprise instances. This can cause inconsistencies in the indexed data and undefined behavior within Splunk Enterprise itself. ### Upgrading Splunk instances in your containers ### Upgrading Splunk instances requires volumes to be mounted for /opt/splunk/var and /opt/splunk/etc. @@ -150,9 +131,9 @@ Upgrading Splunk instances requires volumes to be mounted for /opt/splunk/var an Follow the named volume creation tutorial above in order to have /opt/splunk/var and /opt/splunk/etc mounted for persisting data. #### Step 2: Update your yaml file with a new image and SPLUNK_UPGRADE=true #### -In the same yaml file you initially used to deploy Splunk instances, update the specified image to the next version of Splunk image. Then, set **SPLUNK_UPGRADE=true** in the environment of all containers you wish to upgrade. Make sure to state relevant named volumes so persisted data can be mounted to a new container. +In the same yaml file you initially used to deploy Splunk instances, update the specified image to the next version of Splunk image. Then, set `SPLUNK_UPGRADE=true` in the environment of all containers you wish to upgrade. Make sure to state relevant named volumes so persisted data can be mounted to a new container. -Below is an example yaml with SPLUNK_UPGRADE=true +Below is an example yaml with `SPLUNK_UPGRADE=true`: ``` version: "3.6" @@ -188,9 +169,9 @@ services: ``` #### Step 3: Deploy your containers using the updated yaml #### -Similar to how you initially deployed your containers, run the command with the updated yaml that contains a reference to the new image and SPLUNK_UPGRADE=true in the environment. Make sure that you do NOT destory previously existing network and volumes. 
After running the command with the yaml file, your containers should be recreated with the new version of Splunk and persisted data properly mounted to /opt/splunk/var and /opt/splunk/etc. +Similar to how you initially deployed your containers, run the command with the updated yaml that contains a reference to the new image and `SPLUNK_UPGRADE=true` in the environment. Make sure that you do NOT destroy the previously existing network and volumes. After running the command with the yaml file, your containers should be recreated with the new version of Splunk and persisted data properly mounted to /opt/splunk/var and /opt/splunk/etc. #### Different types of volumes #### Using named volume is recommended so it is easier to attach and detach volumes to different Splunk instances while persisting your data. If you use anonymous volumes, Docker gives them random and unique names so you can still reuse anonymous volumes on different containers. If you use bind mounts, make sure that the mounts are setup properly to persist /opt/splunk/var and opt/splunk/etc. Starting new containers without proper mounts will result in a loss of your data. -Note [Docker Volume Documentation](https://docs.docker.com/storage/volumes/#create-and-manage-volumes) for more details about managing volumes. +See [Create and manage volumes](https://docs.docker.com/storage/volumes/#create-and-manage-volumes) in the Docker documentation for more information.
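Tying the upgrade steps above together, a sketch of the relevant compose changes (the new image tag is a placeholder; the volume names follow the earlier examples):

```yaml
services:
  so1:
    image: splunk/splunk:<new-version>   # bumped image for the upgrade
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=<password>
      - SPLUNK_UPGRADE=true              # triggers the in-place upgrade
    volumes:
      - so1-var:/opt/splunk/var          # reuse the existing named volumes
      - so1-etc:/opt/splunk/etc
```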
diff --git a/docs/SUMMARY.md b/docs/SUMMARY.md deleted file mode 100644 index 7442747b..00000000 --- a/docs/SUMMARY.md +++ /dev/null @@ -1,15 +0,0 @@ -# Splunk Enterprise / Splunk Universal Forwarder Container Documentation - -* [Getting Started](INTRODUCTION.md) - * [Prerequisites](SETUP.md) - * [Install](SETUP.md#install) - * [Run](SETUP.md#run) - * [Installing a Splunk Enterprise License](LICENSE_INSTALL.md) -* [Advanced Usage](ADVANCED.md) - * [Environment Variables](ADVANCED.md#valid-enterprise-environment-variables) - * [Smartstore](ADVANCED.md#enabling-smartstore) -* [Storing Data](STORAGE_OPTIONS.md) -* [FAQ / Troubleshooting](TROUBLESHOOTING.md) -* [Contributing](CONTRIBUTING.md) -* [Licensing](LICENSING.md) -* [Changelog](CHANGELOG.md) diff --git a/docs/TROUBLESHOOTING.md b/docs/TROUBLESHOOTING.md index b67d652f..3af58d15 100644 --- a/docs/TROUBLESHOOTING.md +++ b/docs/TROUBLESHOOTING.md @@ -49,8 +49,7 @@ $ docker logs -f ``` #### Interactive shell -If your container is still running but in a bad state, you can try to debug by putting yourself within the context of that process. - +If your container is still running but in a bad state, you can try to debug by putting yourself within the context of that process. To gain interactive shell access to the container's runtime as the splunk user, you can run: ``` @@ -115,10 +114,10 @@ ok: [localhost] META: ran handlers Thursday 21 February 2019 00:50:56 +0000 (0:00:01.148) 0:00:01.185 ***** ``` -With the above, you'll notice how much more rich and verbose the Ansible output becomes, simply by adding more verbosity to the actual Ansible execution. +With the above, you'll notice how much richer and more verbose the Ansible output becomes, simply by adding more verbosity to the actual Ansible execution. #### No-provision -The `no-provision` is a fairly useless supported command - after launching the container, it won't run Ansible so Splunk will not get installed or even setup.
Instead, it tails a file to keep the instance up and running. +The `no-provision` is a fairly useless supported command: after launching the container, it won't run Ansible, so Splunk will not get installed or even set up. Instead, it tails a file to keep the instance up and running. This `no-provision` keyword is an argument that gets passed into the container's entrypoint script, so you can use it in the following manner: ``` @@ -163,4 +162,4 @@ $ docker cp :/opt/splunk/ - SPLUNK_INDEXER_URL= - - SPLUNK_SEARCH_HEAD_URL= + - SPLUNK_SEARCH_HEAD_URL= - SPLUNK_SEARCH_HEAD_CAPTAIN_URL= - SPLUNK_CLUSTER_MASTER_URL= - SPLUNK_ROLE= @@ -104,7 +104,7 @@ Acceptable roles for SPLUNK_ROLE are as follows: * splunk_cluster_master * splunk_heavy_forwarder -For more information about these roles, refer to [Splunk Splexicon](https://docs.splunk.com/splexicon). +For more information about these roles, refer to the [Splunk Splexicon](https://docs.splunk.com/splexicon). After creating a Compose file, you can start an entire cluster with `docker-compose`: ``` @@ -137,7 +137,9 @@ In the above example, the container id is `bbbe650dd544`. So, the `docker logs` ``` docker logs -f bbbe650dd544 ``` -As Ansible runs, the results from each play can be seen on the screen as well as writen to an ansible.log file stored inside the container. +As Ansible runs, the results from each play can be seen on the screen, as well as written to an `ansible.log` file stored inside the container. + + ``` PLAY [localhost] *************************************************************** @@ -163,8 +165,11 @@ Wednesday 29 August 2018 09:28:29 +0000 (0:00:00.123) 0:01:23.447 ****** changed: [localhost] => (item=USERNAME) changed: [localhost] => (item=PASSWORD) ``` + + Once Ansible has finished running, a summary screen will be displayed.
+ ``` PLAY RECAP ********************************************************************* localhost : ok=12 changed=6 unreachable=0 failed=1 @@ -196,7 +201,9 @@ Stopping Splunk helpers... Done. ``` -It's important to call out the `RECAP` line, as it's the biggest indicator if Splunk Enterprise was configured correctly. In this example, there was a failure during the container creation. The offending play is: + + +It's important to call out the `RECAP` line, as it's the biggest indicator of whether Splunk Enterprise was configured correctly. In this example, there was a failure during container creation. The offending play is: ``` TASK [Splunk_cluster_master : Set indexer discovery] *************************** diff --git a/docs/advanced/LICENSE_INSTALL.md b/docs/advanced/LICENSE_INSTALL.md index 9e4ef32f..20f338ab 100644 --- a/docs/advanced/LICENSE_INSTALL.md +++ b/docs/advanced/LICENSE_INSTALL.md @@ -1,7 +1,7 @@ ## Installing a Splunk Enterprise License -Splunk's Docker image supports the ability to bring your own Enterprise license. By default, the image includes the ability to use up to the trial license. Please see the documentation for more information on what [additional features and capabilities are unlocked with a full Enterprise license](https://docs.splunk.com/Documentation/Splunk/latest/Admin/HowSplunklicensingworks) +The Splunk Docker image supports the ability to bring your own Enterprise license. By default, the image includes the ability to use up to the trial license. Please see the documentation for more information on what [additional features and capabilities are unlocked with a full Enterprise license](https://docs.splunk.com/Documentation/Splunk/latest/Admin/HowSplunklicensingworks). -There are primarily two different ways to apply a license when starting your container: either through a file/directory volume-mounted inside the container, or through an external URL for dynamic downloads.
The enviroment variable `SPLUNK_LICENSE_URI` supports both of these methods. +There are primarily two different ways to apply a license when starting your container: either through a file/directory volume-mounted inside the container, or through an external URL for dynamic downloads. The environment variable `SPLUNK_LICENSE_URI` supports both of these methods. ## Navigation @@ -96,7 +96,7 @@ $ SPLUNK_PASSWORD= docker stack deploy --compose-file=docker-compose.y ``` ## Splunk Free license -Not to be confused with an actual free Splunk enterprise license, but [Splunk Free](https://docs.splunk.com/Documentation/Splunk/latest/Admin/MoreaboutSplunkFree) is a product offering that enables the power of Splunk with a never-expiring but ingest-limited license. By default, when you create a Splunk environment using this Docker container, it will enable a Splunk Trial license which is good for 30 days from the start of your instance. With Splunk Free, you can create a full developer environment of Splunk for any personal, sustained usage. +Not to be confused with an actual free Splunk Enterprise license, [Splunk Free](https://docs.splunk.com/Documentation/Splunk/latest/Admin/MoreaboutSplunkFree) is a product offering that enables the power of Splunk with a never-expiring but ingest-limited license. By default, when you create a Splunk environment using this Docker container, it will enable a Splunk Trial license, which is good for 30 days from the start of your instance. With Splunk Free, you can create a full developer environment of Splunk for any personal, sustained usage. To bring up a single instance using Splunk Free, you can run the following command: ``` diff --git a/docs/index.md b/docs/index.md index 5065574e..b048e012 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,13 +1,13 @@ # Welcome to the docker-splunk documentation! -Welcome to Splunk's official documentation on containerizing Splunk Enterprise and Splunk Universal Forwarder deployments with Docker.
+Welcome to the official Splunk documentation on containerizing Splunk Enterprise and Splunk Universal Forwarder deployments with Docker. -#### What is Splunk Enterprise? +### What is Splunk Enterprise? [Splunk Enterprise](https://www.splunk.com/en_us/software/splunk-enterprise.html) is a platform for operational intelligence. Our software lets you collect, analyze, and act upon the untapped value of big data that your technology infrastructure, security systems, and business applications generate. It gives you insights to drive operational performance and business results. Learn more about the features and capabilities of [Splunk Products](https://www.splunk.com/en_us/software.html) and how you can [bring them into your organization](https://www.splunk.com/en_us/enterprise-data-platform.html). -#### What is docker-splunk? +### What is docker-splunk? This is the official source code repository for building Docker images of Splunk Enterprise and Splunk Universal Forwarder. By introducing containerization, we can marry the ideals of infrastructure-as-code and declarative directives to manage and run Splunk and its other product offerings. This repository should be used by people interested in running Splunk in their container orchestration environments. With this Docker image, we support running a standalone development Splunk instance as easily as running a full-fledged distributed production cluster, all while maintaining the best practices and recommended standards of operating Splunk at scale. @@ -16,7 +16,7 @@ The provisioning of these disjoint containers is handled by the [splunk-ansible] --- -#### Table of Contents +### Table of Contents * [Introduction](INTRODUCTION.md) * [Getting Started](SETUP.md)