Traildash is a simple, yet powerful, dashboard for AWS CloudTrail logs.
To quote AWS:
> AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.

AWS charges a few dollars a month for CloudTrail for a typical organization.
The data in CloudTrail is essential, but it's unfortunately trapped in many tiny JSON files stored in AWS S3. Traildash grabs those files, stores them in ElasticSearch, and presents a Kibana dashboard so you can analyze recent activity in your AWS account.
- Who terminated my EC2 instance?
- When was that Route53 entry changed?
- What idiot added 0.0.0.0/0 to the security group?
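Each of these questions maps to a filter on standard CloudTrail record fields (eventName, requestParameters, userIdentity, and so on). As an illustrative sketch — the field names below come from the CloudTrail record format, and the query body is a hypothetical example, not something Traildash generates — the last question might look like this as an ElasticSearch query:

```python
import json

# Sketch of an ElasticSearch query-DSL body matching CloudTrail events
# that opened a security group to 0.0.0.0/0. Field names follow the
# CloudTrail record format; the exact nesting of requestParameters
# varies by API call, so treat this as an illustration.
query = {
    "query": {
        "bool": {
            "must": [
                {"term": {"eventName": "AuthorizeSecurityGroupIngress"}},
                {"match": {"requestParameters.cidrIp": "0.0.0.0/0"}},
            ]
        }
    }
}

print(json.dumps(query, indent=2))
```

Kibana builds queries like this for you from dashboard filters; you rarely need to write them by hand.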
- Customizable Kibana dashboards for your CloudTrail logs
- Easy to setup: under 15 minutes
- Self-contained Kibana 3.1.2 release
- HTTPS server with custom SSL cert/key or optional self-signed cert
- Easy-to-deploy Linux/OSX binaries, or a Docker image
- ElasticSearch proxy ensures your logs are secure and read-only
- No need to open direct access to your ElasticSearch instance
- Helps to achieve PCI and HIPAA compliance in the cloud
Configure the Traildash Docker container with a few environment variables, and you're off to the races.
- Fill in the "XXX" blanks and run with docker:

```
docker run -i -d -p 7000:7000 \
  -e "AWS_ACCESS_KEY_ID=XXX" \
  -e "AWS_SECRET_ACCESS_KEY=XXX" \
  -e "AWS_SQS_URL=https://XXX" \
  -e "AWS_REGION=XXX" \
  -e "DEBUG=1" \
  -v /home/traildash:/var/lib/elasticsearch/ \
  appliedtrust/traildash
```

- Open http://localhost:7000/ in your browser
- AWS_SQS_URL: AWS SQS URL.
AWS credentials can be provided by any of the following:

- IAM roles/profiles (see Setup Traildash in AWS)
- Environment variables: AWS_ACCESS_KEY_ID (AWS Key ID) and AWS_SECRET_ACCESS_KEY (AWS Secret Key)
- Config file (SDK standard format) at ~/.aws/credentials:

```
[default]
aws_access_key_id = ACCESS_KEY
aws_secret_access_key = SECRET_KEY
region = AWS_REGION
```
- AWS_REGION: AWS Region (SQS and S3 regions must match; default: us-east-1).
- WEB_LISTEN: Listen IP and port for the web interface (default: 0.0.0.0:7000).
- ES_URL: ElasticSearch URL (default: http://localhost:9200).
- DEBUG: Enable debugging output.
- SSL_MODE:
  - "off": disable HTTPS and use HTTP (default)
  - "custom": use a custom key/cert stored in ".tdssl/key.pem" and ".tdssl/cert.pem"
  - "selfSigned": use the key/cert in ".tdssl", generating a self-signed cert if none exists
We recommend using the appliedtrust/traildash docker container for convenience, as it includes a bundled ElasticSearch instance. If you'd like to run your own ElasticSearch instance, or simply don't want to use Docker, it's easy to run from the command-line. The traildash executable is configured with environment variables rather than CLI flags - here's an example:
```
export AWS_ACCESS_KEY_ID=AKIXXX
export AWS_SECRET_ACCESS_KEY=XXX
export AWS_SQS_URL=XXX
export AWS_REGION=us-east-1
export WEB_LISTEN=0.0.0.0:7000
export ES_URL=http://localhost:9200
export DEBUG=1
traildash
```

To check which version you're running:

```
traildash --version
```
- AWS CloudTrail creates a new log file, stores it in S3, and notifies an SNS topic.
- The SNS topic notifies a dedicated SQS queue about the new log file in S3.
- Traildash polls the SQS queue and downloads new log files from S3.
- Traildash loads the new log files into a local ElasticSearch instance.
- Kibana provides beautiful dashboards to view the logs stored in ElasticSearch.
- Traildash protects access to ElasticSearch, ensuring logs are read-only.
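The hand-off in the first three steps can be illustrated with a short sketch: each SQS message body is an SNS envelope whose Message field contains CloudTrail's S3 notification (a bucket name plus a list of new object keys). The sample message below is fabricated for illustration:

```python
import json

# A fabricated SQS message body, shaped like the SNS envelope that
# CloudTrail notifications arrive in: the "Message" field holds a JSON
# string naming the S3 bucket and the new log file keys.
sqs_body = json.dumps({
    "Type": "Notification",
    "Message": json.dumps({
        "s3Bucket": "my-cloudtrail-bucket",
        "s3ObjectKey": [
            "AWSLogs/123456789012/CloudTrail/us-east-1/2015/01/01/log.json.gz"
        ],
    }),
})

def log_files(body):
    """Extract (bucket, key) pairs to download from S3."""
    notification = json.loads(json.loads(body)["Message"])
    bucket = notification["s3Bucket"]
    return [(bucket, key) for key in notification["s3ObjectKey"]]

for bucket, key in log_files(sqs_body):
    print(bucket, key)
```

Traildash performs this parse on every message it receives from the queue, then fetches each key from S3 and indexes it into ElasticSearch.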
- Turn on CloudTrail in each region, telling CloudTrail to create a new S3 bucket and SNS topic:
- If your Traildash instance will be launched in a different AWS account, you must add a bucket policy to your CloudTrail bucket allowing that account access.
{
"Id": "AllowTraildashAccountAccess",
"Statement": [
{
"Sid": "AllowTraildashBucketAccess",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::<your-bucket-name>",
"Principal": {
"AWS": [
"<TRAILDASH ACCOUNT ID>"
]
}
},
{
"Sid": "AllowTraildashObjectAccess",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::<your-bucket-name>/*",
"Principal": {
"AWS": [
"<TRAILDASH ACCOUNT ID>"
]
}
}
]
}
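If you manage AWS resources in code, the policy above can be generated and sanity-checked before you apply it. The function below is a hypothetical helper (the bucket name and account ID are placeholders), not part of Traildash:

```python
import json

def bucket_policy(bucket, traildash_account_id):
    """Build the CloudTrail bucket policy shown above for a given
    bucket name and the AWS account ID running Traildash."""
    return {
        "Id": "AllowTraildashAccountAccess",
        "Statement": [
            {
                "Sid": "AllowTraildashBucketAccess",
                "Action": ["s3:ListBucket"],
                "Effect": "Allow",
                "Resource": "arn:aws:s3:::%s" % bucket,
                "Principal": {"AWS": [traildash_account_id]},
            },
            {
                "Sid": "AllowTraildashObjectAccess",
                "Action": ["s3:GetObject"],
                "Effect": "Allow",
                "Resource": "arn:aws:s3:::%s/*" % bucket,
                "Principal": {"AWS": [traildash_account_id]},
            },
        ],
    }

print(json.dumps(bucket_policy("example-bucket", "123456789012"), indent=2))
```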
- Switch to SNS in your AWS console to view your SNS topic and edit the topic policy:
- Restrict topic access to only allow SQS subscriptions. If your Traildash instance is launched in the same AWS account, leave "Only me" checked for subscriptions and continue on to the next step. If you need Traildash in an outside account to access this topic, allow subscriptions from the AWS account ID that owns your Traildash SQS queue.
- Switch to SQS in your AWS console and create a new SQS queue (the default options are fine).
- With your new SQS queue selected, click Queue Actions and "Subscribe Queue to SNS Topic"
- Enter the ARN of your SNS Topic and click Subscribe. Repeat for each CloudTrail SNS Topic you have created. If you encounter any errors in this step, ensure you have created the correct permissions on each SNS topic.
- Create a managed IAM policy with the following policy document, filling in the S3 bucket name and SQS queue ARN from above.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowS3BucketAccess",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::[YOUR CLOUDTRAIL S3 BUCKET NAME]/*"
]
},
{
"Sid": "AllowSQS",
"Effect": "Allow",
"Action": [
"sqs:DeleteMessage",
"sqs:ReceiveMessage"
],
"Resource": [
"[YOUR SQS ARN]"
]
}
]
}
- Create a new EC2 instance role in IAM and attach your Traildash policy to it. Note: use of IAM roles is not required, but it is strongly recommended as a security best practice.
- Be sure to select this role when launching your Traildash instance!
Traildash will only pull in data added after the above has been configured, so if you have logs from before then, you will need to backfill that data. To make that easier, you can use the provided backfill.py Python script to notify Traildash of the older data.

The script relies on the same environment variables mentioned above, but also requires an AWS_S3_BUCKET variable with the name of the S3 bucket that holds your CloudTrail files. The script also requires more permissions than Traildash itself, as it needs to list the files in the S3 bucket and add items to the SQS queue.

The only dependency outside of Python itself is the AWS library, Boto3. It can be installed by running pip install boto3.
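Conceptually, backfill.py replays the SNS-style notifications for objects that already exist in the bucket. The sketch below uses hypothetical names (the real script uses Boto3 to list the bucket and send to SQS) and only shows how such a notification body is shaped:

```python
import json

def backfill_message(bucket, keys):
    """Build an SNS-style envelope like the ones CloudTrail sends,
    so Traildash will pick up pre-existing log files from SQS."""
    return json.dumps({
        "Type": "Notification",
        "Message": json.dumps({"s3Bucket": bucket, "s3ObjectKey": list(keys)}),
    })

# With Boto3, a body like this would be sent for each batch of existing
# keys, e.g.:
#   sqs.send_message(QueueUrl=queue_url, MessageBody=backfill_message(bucket, keys))
msg = backfill_message(
    "my-cloudtrail-bucket",
    ["AWSLogs/123456789012/CloudTrail/us-east-1/2015/01/01/log.json.gz"],
)
print(msg)
```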
- Fork the project
- Add your feature
- If you are adding new functionality, document it in README.md
- Add some tests if able.
- Push the branch up to GitHub.
- Send a pull request to the appliedtrust/traildash project.
This project uses glock for managing 3rd party dependencies. You'll need to install glock into your workspace before hacking on traildash.
```
$ git clone <your fork>
$ glock sync github.com/appliedtrust/traildash
$ make
```
To cross-compile, you'll need to follow these steps first: http://dave.cheney.net/2012/09/08/an-introduction-to-cross-compilation-with-go
MIT