Lightning-fast deployment of Splunk for simple testing and evaluation.
This code is experimental and unsupported, but it is under active development and rests on the strong foundation of a production-grade module: the Puppet Approved puppet/splunk module maintained by Vox Pupuli.
- Description
- Setup - The basics of getting started with splunk_qd
- Usage - Configuration options and additional functionality
- Limitations - OS compatibility, etc.
- Development - Guide for contributing to the module
The premise of this project is to provide a facility for bringing online new instances of Splunk Enterprise, with the option of independently managing the Universal Forwarder and installing add-ons, for the purposes of development, testing major upgrades, or product evaluation. To keep usage as simple as possible while still implementing a complete deployment workflow, we chose to focus on Puppet Bolt rather than classic Puppet. We re-use and depend on the Vox Pupuli puppet/splunk module whenever appropriate, so that any installation initially deployed through this method can be promoted to a production install and continuously maintained by Puppet safely, with few changes if any.
This means that what you'll find in this project is a Bolt Plan that deploys the different components of a Splunk Enterprise environment by applying Puppet manifests through Bolt's agentless functionality, with some glue in between that might not fit well into Puppet's preference for managing desired state but is well suited to the use case of rapid initial deployment.
This project is not intended to take over management of existing Splunk Enterprise installations, but some functions can co-exist with a live installation, and the automation underpinning everything is robust enough to be repurposed, e.g. for onboarding and upgrading the Universal Forwarder.
Some functionality requires that you have a splunkbase account and are able to obtain add-on or app archives from it; if you want splunk_qd to install them, there is no way to automate fetching them from splunkbase. The module does include two add-ons, though: one for Linux/Unix and another for Windows, with the intention of providing a fully encompassing test drive experience for new users of Splunk Enterprise.
In all cases you need to have Puppet Bolt installed and be familiar with it, have SSH and WinRM access to your hosts, and hold administrative privileges on them so you can run escalated commands. This directory serves as a fresh Bolt project directory where you can construct an `inventory.yaml` file, which can be highly specific if you plan on customizing a test-drive-oriented deployment for evaluation. An example `Puppetfile` can be found in the project's examples directory; the modules listed in it are the minimum requirements for splunk_qd.
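As a sketch, a minimal Puppetfile for this project might look roughly like the following. The exact module set and versions are assumptions; the example Puppetfile shipped in the repository is the authoritative list.

```ruby
# Hypothetical minimal Puppetfile -- consult the example in the project's
# examples directory for the real, pinned module list.
mod 'puppet-splunk'       # the Vox Pupuli module splunk_qd builds on
mod 'puppetlabs-stdlib'   # common dependency of most Forge modules
mod 'puppet-archive'      # used for fetching and unpacking archives
```

Running `bolt puppetfile install` against such a file places the modules under `Boltdir/modules`.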
Scenario 1

Description: I have an existing Splunk Enterprise infrastructure and would like to automate the deployment and configuration of a specific version of the Splunk Universal Forwarder on a set of nodes.
Steps:
- Run the `splunk_qd` plan, providing a list of targets and the `deployment_server` parameter, to configure nodes to retrieve add-on configurations from an existing, fully configured instance of Splunk Enterprise:

  ```
  bolt plan run splunk_qd deployment_server=splunk.example.com --targets db1.example.com,web5.example.com,dns3.example.com
  ```
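If you work with the same set of forwarder nodes repeatedly, they could instead be grouped in a Bolt `inventory.yaml` and referenced by group name. A sketch, where the group name and host names are purely illustrative:

```yaml
# Hypothetical inventory fragment -- the group name is an example,
# not something splunk_qd prescribes.
groups:
  - name: new_forwarders
    targets:
      - db1.example.com
      - web5.example.com
      - dns3.example.com
```

With that in place, `bolt plan run splunk_qd deployment_server=splunk.example.com --targets new_forwarders` addresses the whole group.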
Scenario 2

Description: I want to deploy and configure a set of nodes running the Splunk Universal Forwarder to send the data captured by the Splunk Add-on for Microsoft Windows and Splunk Add-on for Unix and Linux to a freshly deployed installation of Splunk Enterprise so I can evaluate the software.
Steps:
- In your CLI of choice, browse to the splunk_qd repository you’ve downloaded or cloned from GitHub.
- Run `bolt --version` to validate that Bolt is installed successfully. This guide was validated on version 1.37.0, but any recent version of Bolt should work.
- Run `bolt puppetfile install` and Bolt will install all the Forge content necessary to complete this guide into `Boltdir/modules`, referencing the Puppetfile in the Boltdir.
- Next, we'll tell Bolt which machines to work with using any number of inventory targets. If you already have infrastructure suitable for deploying Splunk, copy `Boltdir/examples/inventory.yaml` to `Boltdir/inventory.yaml` and continue to the next step. Alternatively, if you're a Terraform user, you'll find an example .tf Plan and integrated Bolt inventory.yaml in `Boltdir/examples/terraform`. Copy `Boltdir/examples/terraform/inventory.yaml` to `Boltdir/inventory.yaml` and continue to the next step.
- Open `Boltdir/inventory.yaml` in your editor of choice.
- Modify `config.ssh.user` to the correct login user for your hosts
- Modify `config.winrm.user` to the correct login user for your hosts
- Modify `config.winrm.password` to the correct login password for your hosts
- Set the value of `groups.name['search'].targets` to the fully qualified domain name or IP address of the node you want to install Splunk Enterprise on
- Find the nested `targets` parameter under `groups.name['forwarder'].groups.name['linux_forwarders']` and modify the array of nodes so it contains the fully qualified domain names or IP addresses of the Linux nodes on which you wish to manage the Splunk Universal Forwarder
- Find the nested `targets` parameter under `groups.name['forwarder'].groups.name['windows_forwarders']` and modify the array of nodes so it contains the fully qualified domain names or IP addresses of the Windows nodes on which you wish to manage the Splunk Universal Forwarder
- The example `inventory.yaml` file we started with has an `addons` variable set within each group, which is where add-on installation is defined; it is currently set up to source add-ons for both sets of nodes from within the module
- After you've made your configuration changes, write and close `inventory.yaml`
- Now you should be ready to run the following command:

  ```
  bolt plan run splunk_qd mode=testdrive
  ```
- After a couple of minutes, Bolt should have successfully deployed Splunk Enterprise, configured apps and add-ons, and connected other infrastructure to Splunk by deploying forwarders. Visit the FQDN or IP address of the machine you associated with the search group in step 9 on port 8000 and log in with the stock default admin/changeme credentials.
- Well done! You've successfully automated the deployment of Splunk Enterprise in minutes. The Bolt Plan underpinning this guide supports SSL configuration with Let's Encrypt, password management, and other options for enterprise deployments. Have a look at the Plan documentation and play around with specifying different options using `bolt plan run splunk_qd param=value`.
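Pulling the editing steps above together, a trimmed-down test-drive inventory might look roughly like this. Host names, users, and any structure beyond the keys the steps mention are illustrative; the shipped example inventory is authoritative.

```yaml
# Hypothetical sketch of a test-drive inventory.yaml -- names are examples.
groups:
  - name: search
    targets:
      - splunk.example.com        # Splunk Enterprise host
  - name: forwarder
    groups:
      - name: linux_forwarders
        targets:
          - web5.example.com
      - name: windows_forwarders
        targets:
          - win1.example.com
        config:
          transport: winrm
          winrm:
            user: Administrator
            password: example-password
config:
  ssh:
    user: centos
```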
Scenario 3

Description: I want to deploy and configure a set of nodes running the Splunk Universal Forwarder to send the data captured by a set of add-ons and apps of my choosing to a freshly deployed installation of Splunk Enterprise so I can evaluate the software.
Steps:
- Follow steps 1 through 8 of Scenario 2
- The example `inventory.yaml` file we started with has an `addons` variable set within each group, which is where add-on installation is defined. It was originally set up to source add-ons for both sets of nodes from within the module, but in this scenario you'll only use it for guidance and will instead obtain your own add-ons
- To install add-ons you must first obtain them from splunkbase in .tgz format; the add-ons used in the example `inventory.yaml` are the Splunk Add-on for Unix and Linux and the Splunk Add-on for Microsoft Windows
- Once you've downloaded the add-ons you need to discover their installation names. This is done by expanding each .tgz archive, opening the `app.manifest` within the resulting directory, and noting the value of `info.id.name`
- The installation name for the Splunk Add-on for Unix and Linux obtained in step 3 can be found on line 27 of the example `inventory.yaml`, where it is set to `Splunk_TA_nix`; you'll find similar on line 53, `Splunk_TA_windows`
- Once you know an add-on's installation name and have set it as the value of `name`, set the `filename` key to the name of the original archive downloaded from splunkbase for Linux-based add-ons. For Windows add-ons, first re-archive them as .zip archives, because the .tgz format is not well supported on Windows
- Configure inputs by adding entries into the `inputs` hash. Each add-on input is a hash of input name and a sub-hash of settings, with keys being the setting names and values being what each setting should be set to. (DON'T STOP HERE: there are a couple more steps below the following example)

  Example

  The following entry from `inputs.conf`:

  ```
  [monitor:///var/log]
  whitelist = (\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\.out)
  blacklist = (lastlog|anaconda\.syslog)
  disabled = false
  ```

  becomes the following when converted to the `inventory.yaml` format:

  ```yaml
  monitor:///var/log:
    whitelist: (\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\.out)
    blacklist: (lastlog|anaconda\.syslog)
    disabled: false
  ```
- After you've configured all your add-ons and inputs, write and close `inventory.yaml`
- Copy the add-on archive(s) to `$boltdir/site-modules/splunk_qd/files/addons/`
- Now you should be ready to run the following command:

  ```
  bolt plan run splunk_qd mode=testdrive
  ```
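The inputs.conf-to-inventory conversion shown in the steps above is mechanical, so if you have many stanzas to migrate it can be scripted. A minimal sketch in Python follows; the helper is hypothetical and not part of splunk_qd, and it assumes simple `key = value` stanzas with no line continuations.

```python
# Hypothetical helper (not part of splunk_qd) that converts inputs.conf
# stanzas into the nested-hash form used by the example inventory.yaml.

def inputs_conf_to_dict(text):
    """Parse [stanza] blocks of 'key = value' lines into a nested dict."""
    result, stanza = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        if line.startswith("[") and line.endswith("]"):
            stanza = line[1:-1]           # e.g. monitor:///var/log
            result[stanza] = {}
        elif "=" in line and stanza is not None:
            key, _, value = line.partition("=")
            result[stanza][key.strip()] = value.strip()
    return result

def to_inventory_yaml(inputs):
    """Render the dict as the indented fragment inventory.yaml expects."""
    lines = []
    for stanza, settings in inputs.items():
        lines.append(f"{stanza}:")
        for key, value in settings.items():
            lines.append(f"  {key}: {value}")
    return "\n".join(lines)

conf = """
[monitor:///var/log]
whitelist = (\\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\\.out)
blacklist = (lastlog|anaconda\\.syslog)
disabled = false
"""
print(to_inventory_yaml(inputs_conf_to_dict(conf)))
```

Paste the printed fragment under the relevant add-on's `inputs` key, preserving the surrounding indentation.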
If you are familiar with and keen on using Terraform, you'll find the manifests we used when developing Scenarios 2 and 3 in the `$boltdir/site-modules/splunk_qd/examples/terraform` directory, as well as a sample `inventory.yaml` that uses the Terraform inventory plugin.
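For orientation, a Bolt inventory entry backed by the Terraform inventory plugin looks roughly like the fragment below. The state directory, resource type, and attribute mapping here are assumptions for illustration; the sample shipped in the examples directory is the authoritative version.

```yaml
# Hypothetical fragment using Bolt's Terraform inventory plugin --
# directory and resource names are examples only.
groups:
  - name: linux_forwarders
    targets:
      - _plugin: terraform
        dir: examples/terraform          # where terraform state lives
        resource_type: aws_instance.forwarder
        target_mapping:
          uri: public_ip                 # instance attribute to connect to
```

This lets Bolt resolve targets from Terraform state at run time instead of hard-coding host names.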
By design we depend on the puppet-splunk module, so we are limited to the deployment targets it supports and must adhere to its opinions on search head, indexer, and forwarder configuration.
All contributions should adhere to Puppet 5 or greater compatible syntax and best practices, and should work when executed through the latest release of Puppet Bolt. In addition, code cannot depend on the existence of a Puppet Server or PuppetDB.