Note: you will not find a more complete package than this on GitHub (for an Amazon ELK stack).
This Terraform code will deploy:
- An Amazon ES domain (VPC access)
- 2 EC2 instances
- An internal NLB
- A Route53 private subdomain with records for the Logstash input and the ES endpoint URL
For creating your own AMI
Logstash runs in Docker using the official image.
Port 5044 is used for the Filebeat input.
The docker-compose.yaml will also:
- install the Amazon ES output plugin and create the Logstash pipeline and config volumes
- set the timezone to Asia/Singapore (change this to your own TZ)
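For reference, a compose file along those lines might look like the sketch below. The image tag, host paths, and the plugin-install command are assumptions; check them against the actual docker-compose.yaml in this repo.

```yaml
# Minimal sketch only; image tag, host paths, and plugin-install command are assumptions.
version: "3"
services:
  logstash:
    image: docker.elastic.co/logstash/logstash-oss:7.9.3   # official image; pick your own tag
    restart: always
    ports:
      - "5044:5044"                                         # Filebeat input
    environment:
      - TZ=Asia/Singapore                                   # change to your own timezone
    volumes:
      - /opt/logstash/pipeline:/usr/share/logstash/pipeline # pipeline files (host path is an assumption)
      - /opt/logstash/config:/usr/share/logstash/config     # config files (host path is an assumption)
    # install the Amazon ES output plugin before starting Logstash
    command: >
      bash -c "bin/logstash-plugin install logstash-output-amazon_es && bin/logstash"
```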
Launch an EC2 instance (minimum t3a.small, Ubuntu 18.04 LTS).
Install Docker, then use the docker-compose.yaml to set up and launch the Logstash container.
Put your pipeline and config files on the instance, using the same paths referenced in the yaml.
If the "pipeline" and "config" directories do not exist at that path, create them.
For your ES output (in the pipeline file), point it to the private subdomain (aws-elasticsearch.elkdev.com) instead of the actual ES endpoint URL.
Also add this to the Logstash pipeline output: ssl_certificate_verification => false
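Put together, the output section of the pipeline could look roughly like the sketch below. It assumes the amazon_es output plugin installed above; the region and index name are placeholders, and the option names should be checked against the plugin version you use.

```
output {
  amazon_es {
    hosts  => ["aws-elasticsearch.elkdev.com"]    # the private subdomain, not the ES endpoint URL
    region => "us-east-1"                         # assumption: set your own region
    index  => "filebeat-%{+YYYY.MM.dd}"           # placeholder index name
    ssl_certificate_verification => false         # the subdomain does not match the ES certificate
  }
}
```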
Capture the AMI and use the ID in your TF deployment
After the Terraform deployment:
- copy the ES endpoint URL into the CNAME record of the ES subdomain (aws-elasticsearch.elkdev.com)
- copy the NLB DNS name into the CNAME record of the Logstash subdomain (logstash.elkdev.com)
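If you would rather manage those two records in Terraform than copy the values in manually, they could be sketched roughly as below; the hosted zone resource name and the module output name are hypothetical, so adjust them to whatever this repo actually exposes.

```hcl
# Hypothetical sketch; zone resource and module output names are assumptions.
resource "aws_route53_record" "es" {
  zone_id = aws_route53_zone.elkdev.zone_id   # private hosted zone for elkdev.com (hypothetical name)
  name    = "aws-elasticsearch.elkdev.com"
  type    = "CNAME"
  ttl     = 300
  records = [module.aws_es.endpoint]          # hypothetical module output for the ES endpoint
}

resource "aws_route53_record" "logstash" {
  zone_id = aws_route53_zone.elkdev.zone_id
  name    = "logstash.elkdev.com"
  type    = "CNAME"
  ttl     = 300
  records = [aws_lb.logstash.dns_name]        # NLB DNS name (resource name is an assumption)
}
```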
All values must be defined. Defaults are provided where needed for the following:
- cluster_config
- ebs_options
- encrypt_at_rest (note: you need at least an R5 instance type to support encryption at rest)
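For orientation, those defaults might take roughly the shape below, following the aws_elasticsearch_domain argument layout; the actual defaults live in the module's variables.tf, so treat every value here as a placeholder.

```hcl
# Placeholder values only; check the module's variables.tf for the real defaults.
cluster_config = {
  instance_type  = "r5.large.elasticsearch"   # R5 or larger is needed for encryption at rest
  instance_count = 2
}

ebs_options = {
  ebs_enabled = true
  volume_size = 10                            # GiB; see variable "ebs_options_volume_size"
}

encrypt_at_rest = {
  enabled = true
}
```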
module "aws_security_group":
ports
cidr_blocks
description
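A hypothetical call to that module, using the variable names above (the source path and value shapes are assumptions):

```hcl
module "aws_security_group" {
  source      = "./modules/aws_security_group"   # path is an assumption
  ports       = [5044, 443]                      # example: Filebeat input and HTTPS
  cidr_blocks = ["10.0.0.0/16"]                  # example VPC CIDR
  description = "ELK dev security group"
}
```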
module "ec2-logstash":
instance_type = "instance.size"
name = "logstash-dev"
department = "your-department"
owner = "your name here"
project = "elk-project"
ticket = "CS-xxx"
module "aws_es" :
vpc_options
subnet_ids = ["subnet-yoursubnet1", "subnet-yoursubnet2"]
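The vpc_options input mirrors the vpc_options block of aws_elasticsearch_domain; a sketch of how the module call might be filled in (the source path and the security group reference are assumptions):

```hcl
module "aws_es" {
  source = "./modules/aws_es"   # path is an assumption
  vpc_options = {
    subnet_ids         = ["subnet-yoursubnet1", "subnet-yoursubnet2"]
    security_group_ids = [module.aws_security_group.sg_id]   # hypothetical output name
  }
}
```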
resource "aws_instance" "ec2-logstash1" and "ec2-logstash2":
ami = "ami-"
subnet_id = "subnet-"
availability_zone = "us-east-1a"
key_name = "instance key"
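Filled in, each of the two instance resources would look roughly like this; the AMI ID is the one captured earlier, and the idea that instance_type is passed in from the module variable is an assumption.

```hcl
resource "aws_instance" "ec2-logstash1" {
  ami               = "ami-xxxxxxxxxxxxxxxxx"   # the Logstash AMI captured earlier (placeholder)
  instance_type     = var.instance_type         # assumption: supplied by the module variable
  subnet_id         = "subnet-yoursubnet1"
  availability_zone = "us-east-1a"
  key_name          = "instance key"
}
```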
resource "aws_lb"
subnets = ["subnet-yoursubnet1","subnet-yoursubnet2"]
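Since the load balancer is an internal NLB, the resource would be along these lines; only the subnets come from this README, while the name and remaining arguments are assumptions.

```hcl
resource "aws_lb" "logstash" {
  name               = "logstash-nlb"   # name is an assumption
  internal           = true             # internal NLB, as described above
  load_balancer_type = "network"
  subnets            = ["subnet-yoursubnet1", "subnet-yoursubnet2"]
}
```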
variable "nlb_config"
variable "tg_config"
variable "domain_name"
variable "ebs_options_volume_size"
All default values are provided (change them when needed).
Uncomment and define the vpc_id default if you do not want to enter it during terraform apply.
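For example, with the default uncommented (the VPC ID below is a placeholder):

```hcl
variable "vpc_id" {
  type    = string
  default = "vpc-xxxxxxxxxxxxxxxxx"   # placeholder: your own VPC ID
}
```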
module "aws_security_group":
ports
cidr_blocks
description
Note: this is for archiving your logs to S3.
Create your bucket first and put the bucket ARN into the "Resource" element:
"Resource": [
"arn:aws:s3:::yourbucketarn"