
terraform-aws-kibana

The module creates Kibana for the Elasticsearch cluster.

Usage

Prerequisites

An Elasticsearch cluster is a natural prerequisite for Kibana. However, the Elasticsearch cluster itself requires certain AWS resources. Check the elasticsearch module documentation for those.

Elasticsearch cluster

The Elasticsearch cluster requires two terraform apply runs: one with bootstrap_mode = true and another with bootstrap_mode = false. Use the following Terraform snippet to provision the cluster (a variable sketch for driving the two runs follows the snippet).

module "elasticsearch" {
  source  = "infrahouse/elasticsearch/aws"
  version = "~> 0.6"
  providers = {
    aws     = aws
    aws.dns = aws
  }
  cluster_name         = "some-cluster-name"
  cluster_master_count = 3
  cluster_data_count   = 1
  environment          = "development"
  internet_gateway_id  = module.service-network.internet_gateway_id
  key_pair_name        = aws_key_pair.test.key_name
  subnet_ids           = module.service-network.subnet_private_ids
  zone_id              = var.zone_id
  bootstrap_mode       = var.bootstrap_mode
}
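
A minimal sketch of driving the two-phase apply with a variable, as referenced above. The variable name matches the bootstrap_mode input wired into the module call; the description and default shown here are assumptions, not part of the module.

variable "bootstrap_mode" {
  description = "Set to true for the first apply, then false for the second."
  type        = bool
  default     = true # assumption: start in bootstrap mode
}

# First run:  terraform apply -var="bootstrap_mode=true"
# Second run: terraform apply -var="bootstrap_mode=false"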

Kibana

Once the Elasticsearch cluster is ready (and by "ready" I mean master and data nodes are up and running), you can provision Kibana.

module "kibana" {
  source  = "infrahouse/kibana/aws"
  version = "~> 0.1"
  providers = {
    aws     = aws
    aws.dns = aws
  }
  asg_subnets                = module.service-network.subnet_private_ids
  elasticsearch_cluster_name = "some-cluster-name"
  elasticsearch_url          = var.elasticsearch_url
  internet_gateway_id        = module.service-network.internet_gateway_id
  kibana_system_password     = module.elasticsearch.kibana_system_password
  load_balancer_subnets      = module.service-network.subnet_public_ids
  ssh_key_name               = aws_key_pair.test.key_name
  zone_id                    = var.zone_id
}

Note the inputs:

  • asg_subnets - these are subnet IDs where the autoscaling group with EC2 instances for the Kibana ECS cluster will be created. They need to be private subnets - you don't want to expose the instances to the Internet.
  • load_balancer_subnets - these are subnet IDs where the load balancer will be created. They can be public, but I recommend deploying the load balancer in private subnets and configuring VPN access for users who need Kibana (see the sketch after this list).
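
For reference, a sketch of the same kibana module call following that recommendation, with the load balancer kept in the private subnets (VPN access to those subnets is assumed):

module "kibana" {
  source  = "infrahouse/kibana/aws"
  version = "~> 0.1"
  providers = {
    aws     = aws
    aws.dns = aws
  }
  asg_subnets                = module.service-network.subnet_private_ids
  elasticsearch_cluster_name = "some-cluster-name"
  elasticsearch_url          = var.elasticsearch_url
  internet_gateway_id        = module.service-network.internet_gateway_id
  kibana_system_password     = module.elasticsearch.kibana_system_password
  # Load balancer placed in the private subnets, per the recommendation above.
  load_balancer_subnets      = module.service-network.subnet_private_ids
  ssh_key_name               = aws_key_pair.test.key_name
  zone_id                    = var.zone_id
}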

The kibana module outputs the URL where the Kibana UI is available. Use the elastic username and its password to access Kibana for the first time.
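
If you want those values printed after terraform apply, a minimal sketch exposing the module outputs listed below (the descriptions and the sensitive flag are assumptions):

output "kibana_url" {
  description = "URL where the Kibana UI is available."
  value       = module.kibana.kibana_url
}

output "kibana_username" {
  description = "Username for the first login."
  value       = module.kibana.kibana_username
}

output "kibana_password" {
  description = "Password for the first login."
  value       = module.kibana.kibana_password
  sensitive   = true # assumption: treat the password as sensitive
}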

Requirements

| Name | Version |
|------|---------|
| terraform | ~> 1.5 |
| aws | ~> 5.11 |

Providers

| Name | Version |
|------|---------|
| aws | ~> 5.11 |
| random | n/a |

Modules

| Name | Source | Version |
|------|--------|---------|
| kibana | infrahouse/ecs/aws | ~> 2.2 |

Resources

| Name | Type |
|------|------|
| random_string.kibana-encryptionKey | resource |
| aws_route53_zone.kibana_zone | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| asg_subnets | Auto Scaling Group Subnets. | list(string) | n/a | yes |
| elasticsearch_cluster_name | Elasticsearch cluster name. | string | n/a | yes |
| elasticsearch_url | URL of Elasticsearch masters. | string | n/a | yes |
| environment | Name of environment. | string | "development" | no |
| internet_gateway_id | Internet gateway id. Usually created by 'infrahouse/service-network/aws' | string | n/a | yes |
| kibana_system_password | Password for kibana_system user. This user is an Elasticsearch built-in user. | string | n/a | yes |
| load_balancer_subnets | Load Balancer Subnets. | list(string) | n/a | yes |
| ssh_key_name | ssh key name installed in ECS host instances. | string | n/a | yes |
| zone_id | Zone where DNS records will be created for the service and certificate validation. | string | n/a | yes |

Outputs

| Name | Description |
|------|-------------|
| kibana_password | n/a |
| kibana_url | n/a |
| kibana_username | n/a |