
Elastic Cloud Provider for Terraform


Requirements

Installation

As simple as:

curl -sfL https://raw.githubusercontent.com/SkySoft-ATM/terraform-provider-elastic/master/install.sh | sh
chmod +x ./bin/terraform-provider-elastic
mkdir -p ~/.terraform.d/plugins/hashicorp.com/skysoft-atm/elastic/0.0.2/linux_amd64
mv ./bin/terraform-provider-elastic ~/.terraform.d/plugins/hashicorp.com/skysoft-atm/elastic/0.0.2/linux_amd64/
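
Note that with Terraform 0.13 and later, the provider must also be declared in your configuration so Terraform resolves it from the local plugin directory created above. A minimal sketch, assuming the hashicorp.com/skysoft-atm/elastic source address implied by the installation path:

terraform {
  required_providers {
    elastic = {
      source  = "hashicorp.com/skysoft-atm/elastic"
      version = "0.0.2"
    }
  }
}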

Then

terraform init

Using the provider

provider "elastic" {
  kibana_url = var.kibana_url
  cloud_auth = var.cloud_auth
}

where kibana_url is the Kibana URL exposing the Logstash pipeline API, and cloud_auth the credentials used to authenticate against the Kibana API. Please note that at this stage only Basic Authentication is supported; the provider should not be configured with an externally managed identity.
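
For reference, a minimal sketch of the matching variable declarations (the names are taken from the snippet above; marking cloud_auth as sensitive, supported since Terraform 0.14, keeps the credentials out of plan output):

variable "kibana_url" {
  description = "Kibana URL exposing the Logstash pipeline API"
  type        = string
}

variable "cloud_auth" {
  description = "Basic Authentication credentials in user:password form"
  type        = string
  sensitive   = true
}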

Upgrading the provider

The elastic provider doesn't upgrade automatically once you've started using it. After a new release you can run

terraform init -upgrade

to upgrade to the latest stable version of the elastic provider.
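
If you want -upgrade to pick up new releases only within a given series, a version constraint can be added to the provider declaration. A sketch, reusing the source address assumed in the installation section:

terraform {
  required_providers {
    elastic = {
      source  = "hashicorp.com/skysoft-atm/elastic"
      version = "~> 0.0.2"
    }
  }
}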

Creating pipeline resources

resource "elastic_logstash_pipeline" "test" {
  pipeline_id = "test"
  pipeline = "input { stdin {} } output { stdout {} }"
  description = "My so great pipeline"
  settings { // Required even if empty (default values will be used)
    	batch_delay				= 50
    	batch_size 				= 125
	workers 				= 1
	queue_checkpoint_writes 		= 1024
	queue_max_bytes 			= "1gb"
	queue_type 				= "memory"
  } 
}

The pipeline definition can be tedious to maintain as an inline string, so the native Terraform templatefile function can be used instead. The example below illustrates its usage:

resource "elastic_logstash_pipeline" "test" {
  pipeline_id = "test"
  pipeline = templatefile("${path.module}/pipeline.conf", {
    CLOUD_ID   = var.cloud_id
    CLOUD_AUTH = var.cloud_auth
  })
  description = "My so great pipeline"
  settings { // Required even if empty (default values will be used)
    	batch_delay				= 50
    	batch_size 				= 125
	workers 				= 1
	queue_checkpoint_writes 		= 1024
	queue_max_bytes 			= "1gb"
	queue_type 				= "memory"
  } 
}

An example of pipeline.conf is available in the repository.
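
As a minimal sketch, a template compatible with the variables passed above might look like the following. The elasticsearch output options shown are standard Logstash settings; ${CLOUD_ID} and ${CLOUD_AUTH} are replaced by templatefile when Terraform renders the file:

# pipeline.conf - rendered by templatefile before being sent to the API
input {
  stdin {}
}

output {
  elasticsearch {
    cloud_id   => "${CLOUD_ID}"
    cloud_auth => "${CLOUD_AUTH}"
  }
}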

Using data sources

data "elastic_logstash_pipeline" "filebeat" {
  pipeline_id = "filebeat"
}
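
The data source can then be referenced like any other Terraform object; a minimal sketch, where the exported pipeline attribute is an assumption mirroring the resource schema above:

output "filebeat_pipeline_definition" {
  // pipeline is assumed to expose the pipeline definition string
  value = data.elastic_logstash_pipeline.filebeat.pipeline
}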