This fork answers a couple of issues I encountered with the original design.
Do you want to:

- use Auth0 or Google OAuth for authentication, with more providers (AWS Cognito, Facebook, etc.) coming? File an issue if you want one.
- not deal with authentication at all, and use strong firewall rules to define access?
- use Ubuntu 16.10 with up-to-date packages?
- provide your team with a snap-in Docker container preconfigured to capture stderr/stdout and ship it to Kinesis? [todo]

Then this fork is for you!
ELK stands for Elasticsearch, Logstash and Kibana, and is promoted by Elastic as a "devops" logging solution.
This implementation of an ELK stack (Elasticsearch 2.x, Logstash 2.x, Kibana 4.x) is designed to run in an AWS EC2 VPC and is secured using Google OAuth 2.0. It consists of one or more instances behind an Elastic Load Balancer (ELB) running the following components:
- Kibana 4.x
- Sense
- Logtrail
- Elasticsearch 2.x
- ES Head
- Logstash 2.x indexer
- Logcabin
- Node.js application proxy
Only the Elasticsearch HTTP, Logstash indexer, and application proxy ports are exposed on the ELB, and all requests to the application proxy for Kibana or Elasticsearch are authenticated using Google OAuth.
Elasticsearch is configured to listen on the network but is firewalled to the ElkHttpSecurityGroup; use this group to allow your client systems to update Elasticsearch. Dynamic scripting is disabled (the Elasticsearch default since version 1.4.3) to address security concerns around remote code execution.
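For reference, the relevant Elasticsearch 2.x settings live in elasticsearch.yml. The values below are the secure defaults, shown only so you can audit them (or re-enable scripting deliberately) on your own build:

```yaml
# elasticsearch.yml -- Elasticsearch 2.x scripting settings.
# false is the secure default: inline and indexed (dynamic) scripts are refused.
script.inline: false
script.indexed: false
```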
The ELB requires a health check to ensure instances in the load balancer are healthy. To achieve this, access to the root URL for Elasticsearch is available, unauthenticated, at the path /__es.
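You can probe the same unauthenticated path by hand once the stack is up. This sketch only prints the command, since the ELB DNS name is a placeholder you need to fill in:

```shell
# Builds a health-check request against the unauthenticated /__es path.
# INSERT-ELB-DNS-NAME-HERE is a placeholder for your stack's ELB DNS name.
ELB_DNS="INSERT-ELB-DNS-NAME-HERE"
HEALTHCHECK_URL="http://$ELB_DNS/__es"
echo "curl -s $HEALTHCHECK_URL"   # run the printed command against your stack
```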
Shipping logs to the ELK stack via TCP is left as an exercise for the user; however, example configurations are included in the repo under the /examples directory.
A very simple configuration that reads from stdin, tails a log file, then echoes to stdout and forwards to the ELK stack is below:
```shell
$ logstash --debug -e '
input { stdin { } file { path => "/var/log/system.log" } }
output { stdout { } tcp { host => "INSERT-ELB-DNS-NAME-HERE" port => 6379 codec => json_lines } }'
```
Logstash is also set up to ingest logs via a Kinesis stream using the logstash-input-kinesis plugin.
You can find the Kinesis stream information in the CloudFormation stack output.
The expected input codec is json.
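A minimal input stanza for that plugin might look like the following sketch. The stream name is a placeholder for the value in the stack output, the region is illustrative, and the option names should be checked against your installed version of logstash-input-kinesis:

```
input {
  kinesis {
    kinesis_stream_name => "INSERT-KINESIS-STREAM-NAME-HERE"
    region              => "eu-west-1"   # illustrative; use your stack's region
    codec               => json { }
  }
}
```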
This ELK stack assumes your AWS VPC is configured as per AWS guidelines, which is to have a public and a private subnet in each availability zone for the region. See the Your VPC and Subnets guide for more information.
The easiest way to ensure you have the required VPC setup is to delete your existing VPC, if possible, and then use the Start VPC Wizard, which will create a correctly configured VPC for you.
A. Test the template with:

```shell
aws cloudformation validate-template --template-body file:///$PATH/$TO/elk-stack/cloudformation/ELK_Stack_Multi_AZ_in_Private_VPC.json
```

B. Kick off CloudFormation:

1. Upload the CloudFormation template via the console, or
2. use the CLI:

```shell
aws cloudformation create-stack --stack-name <your-stack-name> --template-body file:///$PATH/$TO/elk-stack/cloudformation/ELK_Stack_Multi_AZ_in_Private_VPC.json
```
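As a sketch of passing parameters on the command line (the stack name and CIDR value are illustrative; AllowedHttpCidr is one of the template's parameters), the command is only printed here so you can review it before running it:

```shell
# Sketch: builds a parameterised create-stack command and prints it for review.
# Stack name and CIDR value are illustrative; AllowedHttpCidr is a template parameter.
TEMPLATE="cloudformation/ELK_Stack_Multi_AZ_in_Private_VPC.json"
CMD="aws cloudformation create-stack \
  --stack-name my-elk-stack \
  --template-body file://$TEMPLATE \
  --parameters ParameterKey=AllowedHttpCidr,ParameterValue=10.0.0.0/8"
echo "$CMD"   # review, then run once the values match your environment
```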
You can choose to use Google OAuth or Auth0 to configure authentication. Additionally, the public load balancer is firewalled to the AllowedHttpCidr parameter in the CloudFormation template.
In Auth0 I recommend creating a new Client for testing this. In our environment we created a client specifically for administrative, testing, etc. functions.

- In the Client -> Settings page, add the public URL + '/authorize' for ELK (found in the Outputs tab in CloudFormation).
- For the AllowedDomain, choose myclientname.auth0.com.
- Add your OAuthClientId and OAuthClientSecret.
- Try to log in!
- Go to the Google Developer Console and create a new client ID for a web application. You can leave the URLs as they are and update them once the ELK stack has been created. Take note of the Client ID and Client Secret as you will need them in the next step.
- Enable the "Google+ API" for your new client. This is the only Google API needed.
- Launch the ELK stack using the AWS console or the aws command-line tool and enter the required parameters. Note that some parameters, like providing a Route53 Hosted Zone Name to create a DNS alias for the public ELB, are optional.
- Once the ELK stack has launched, revisit the Google Developer Console and update the URLs, copying the output for GoogleOAuthRedirectURL to AUTHORIZED REDIRECT URIS, and the same URL, but without the path, to AUTHORIZED JAVASCRIPT ORIGINS.
Once the instance is available you should see Kibana configured for Logstash.
To configure Logstash clients for your minions, take a look in scripts/.

- Please note that billing features can take 5 minutes or more to appear on an m1.large. You can, however, check that indexing is working using the ES head plugin as new entries are indexed.
The following elasticsearch plugins are installed:
- AWS Cloud plugin - uses AWS API for the unicast discovery mechanism
- elasticsearch-head - web frontend for elasticsearch cluster
The "head" plugin web page is available at proxied (i.e. authenticated) endpoints based on how the ELK stack is deployed:

- Head -> http://<ELB>/__es/_plugin/head/

Sense is also installed if you prefer a more DSL/RESTful approach.
This ELK stack CloudFormation template takes many parameters; explanations for each are shown when launching the stack. Note that Billing Dashboards, Route 53 DNS, EBS volumes and S3 snapshots are optional.
Logstash grok patterns can be tested online at https://grokdebug.herokuapp.com/
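For example, a hypothetical filter stanza (the pattern and field names are illustrative, not from this repo) that splits a "LEVEL message" line into fields would be:

```
filter {
  grok {
    # Parses lines like "ERROR disk full" into 'level' and 'msg' fields.
    # LOGLEVEL and GREEDYDATA are standard grok patterns.
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
```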
The Kibana dashboards are configured via the GUI.
Guardian ELK Stack Cloudformation Templates and Logcabin Proxy
Copyright 2014-2016 Guardian News & Media
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.