Terraform module for configuring Lambda functions that export AWS CloudWatch Logs log groups to S3.

The default log-related service provided by AWS is CloudWatch Logs. There are many advantages to collecting the logs generated by each AWS resource and, if necessary, by each application in CloudWatch Logs. However, CloudWatch Logs does not provide a direct way to configure backups for long-term storage in S3.

If you want long-term storage in S3, one option is to configure a subscription filter on the log group and stream it to S3 through Kinesis. Alternatively, a manual backup method is supported: the export function of CloudWatch Logs, which extracts files to S3 for a chosen time range and a chosen destination bucket. With this function you can extract files for any desired period when needed, but it has two drawbacks: backups cannot be scheduled periodically, and Server Side Encryption (SSE) cannot be applied to the files backed up to S3.

To compensate for these shortcomings, I created a Lambda function that periodically extracts and backs up the contents of CloudWatch Logs. In addition, this Lambda function is designed for centralized log backup in a multi-account environment, rather than operating only on CloudWatch Logs in a single account.
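Under the hood, CloudWatch Logs exports are driven by the CreateExportTask API. A minimal boto3 sketch of a one-off export (the log group and bucket names are illustrative assumptions, not values from this module):

```python
from datetime import datetime, timedelta, timezone

def export_window(days):
    """Return (fromTime, to) in epoch milliseconds covering the last `days` days,
    the unit CreateExportTask expects."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    return int(start.timestamp() * 1000), int(end.timestamp() * 1000)

def run_export(log_group, bucket, prefix, days=1):
    import boto3  # imported lazily so export_window stays testable without AWS
    logs = boto3.client("logs")
    from_ms, to_ms = export_window(days)
    # One export task per log group; CloudWatch Logs allows one active task at a time.
    logs.create_export_task(
        taskName=log_group.strip("/").replace("/", "-"),
        logGroupName=log_group,
        fromTime=from_ms,
        to=to_ms,
        destination=bucket,
        destinationPrefix=prefix,
    )
```

Note that exported objects are written by the CloudWatch Logs service itself, which is why SSE cannot be applied at export time, as described above.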
When the CloudWatchLogsToLogS3Export Lambda function is executed, an SQS queue for each collection account and a Consumer Lambda function that consumes each queue are created automatically according to the execution variable settings.
- CloudWatchLogsToLogS3Export Function : The main function, invoked on a periodic schedule.
- log-sqs-&lt;each account Profile Name&gt;-collect-fifoSQSConsumer Function : A consumer function that extracts logs asynchronously, using the information in the queue to process log groups that exceed the Lambda execution time limit. It acts as the consumer for the SQS queue. One instance of this function is created automatically by Terraform for each account required in the initial configuration, and the function name for each account is derived from the profile name set when running Terraform.
- AppLogSSEEventForPutObject Function : Encrypts export files that were written without Server Side Encryption (SSE); it is triggered by the PutObject event of S3.
- CloudWatchLogsExportToS3 Consumer Layer : The consumer functions are allocated individually, one per account and per SQS queue, but they all share the same logic. That common logic is packaged as a layer so the same code can be reused in every consumer function.
- SQS Queue Naming : log-sqs-&lt;each account Profile Name&gt;-collect.fifo When the CloudWatchLogsToLogS3Export Lambda function is invoked, it automatically creates this queue in SQS.
- Consumer Source S3 Bucket : &lt;each account Profile Name&gt;-s3-logs-collector-lambda
  - Consumer/lambda_function.zip
  - Layer/python.zip
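A minimal sketch of what such a consumer handler might look like: it parses export jobs from the SQS-triggered event and creates one export task per message. The message fields shown are assumptions for illustration, not the module's actual schema:

```python
import json

def parse_messages(event):
    """Extract the JSON bodies from an SQS-triggered Lambda event."""
    return [json.loads(record["body"]) for record in event.get("Records", [])]

def handler(event, context):
    import boto3  # imported lazily so parse_messages stays testable without AWS
    logs = boto3.client("logs")
    for msg in parse_messages(event):
        # Hypothetical message fields: logGroupName, fromTime, to, destination, prefix.
        logs.create_export_task(
            taskName=msg["logGroupName"].strip("/").replace("/", "-"),
            logGroupName=msg["logGroupName"],
            fromTime=msg["fromTime"],
            to=msg["to"],
            destination=msg["destination"],
            destinationPrefix=msg["prefix"],
        )
```

Because each account's queue drives its own consumer, a log group that would exceed the main function's 15-minute limit is simply handed off as a message and exported asynchronously.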
{
  "loggingBucketName" : "Integrated log bucket name",
  "roleArn" : "Role ARN for log collection by account",
  "Prefix" : "Consolidated log bucket object prefix",
  "QueueOptions" : {
    "VisibilityTimeout" : SQS visibility timeout,
    "MessageRetentionPeriod" : SQS message retention period,
    "KmsMasterKeyId" : "KMS key alias for SQS messages",
    "KmsDataKeyReusePeriodSeconds" : SQS KMS data key reuse period in seconds,
    "ConsumerOptions" : {
      "BucketName" : "Consumer Lambda layer source bucket",
      "ObjectKey" : "Consumer Lambda layer source object key",
      "LayerArn" : "Consumer Lambda layer version ARN",
      "BatchSize" : SQS batch size
    }
  }
}
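For reference, a filled-in example of this execution variable (all values are hypothetical):

```json
{
  "loggingBucketName" : "central-log-bucket",
  "roleArn" : "arn:aws:iam::111111111111:role/CollectCloudWatchLogsRole",
  "Prefix" : "account-a",
  "QueueOptions" : {
    "VisibilityTimeout" : 900,
    "MessageRetentionPeriod" : 86400,
    "KmsMasterKeyId" : "alias/sqs-log-collect",
    "KmsDataKeyReusePeriodSeconds" : 300,
    "ConsumerOptions" : {
      "BucketName" : "account-a-s3-logs-collector-lambda",
      "ObjectKey" : "Layer/python.zip",
      "LayerArn" : "arn:aws:lambda:ap-northeast-2:111111111111:layer:CloudWatchLogsExportToS3:1",
      "BatchSize" : 10
    }
  }
}
```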
To apply the collection schedule, create a CloudWatch Event Rule for each account whose CloudWatch Logs you want to collect.

- Access the CloudWatch > Events > Rules screen
- Click the Create rule button
- Select Event Source > Schedule
- Select Cron expression and enter the cron expression
- Click the Targets > Add target button
- Select Targets > Lambda function
- Select Function > the CloudWatchLogsToLogS3Export function
- Enter the execution variable JSON for each account in Configure input > Constant (JSON text) (see the CloudWatchLogsToLogS3Export Lambda function execution variable definition above)
- Click the Configure details button
- Enter a Rule Name
- Check the Enabled state
- Click the Update rule button
- Schedule creation is complete
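The console steps above can also be scripted with boto3. A hedged sketch (the rule name, cron expression, function ARN, and the abbreviated input JSON are placeholders):

```python
import json

def build_rule_input(logging_bucket, role_arn, prefix):
    """Build the Constant (JSON text) input for the export Lambda (abbreviated:
    QueueOptions omitted for brevity)."""
    return json.dumps({
        "loggingBucketName": logging_bucket,
        "roleArn": role_arn,
        "Prefix": prefix,
    })

def create_schedule(rule_name, cron, function_arn, input_json):
    import boto3  # imported lazily so build_rule_input stays testable without AWS
    events = boto3.client("events")
    events.put_rule(Name=rule_name,
                    ScheduleExpression=f"cron({cron})",
                    State="ENABLED")
    # The constant JSON becomes the `event` argument of the Lambda handler.
    events.put_targets(Rule=rule_name,
                       Targets=[{"Id": "1", "Arn": function_arn, "Input": input_json}])
```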
When CloudWatchLogsToLogS3Export is executed, the SQS queue and Consumer Lambda function for the corresponding account are created automatically according to the execution settings, and logs are collected.

- The CloudWatchLogsToLogS3Export Lambda can be modified directly in the Lambda edit screen or by downloading the source locally. Because the Event Rule references the $LATEST version of the Lambda, any modification to $LATEST is reflected in the schedule in real time.
How to modify the Layer function for the Consumer?

- Before modifying the layer function, delete the Consumer Lambda functions that were created from the existing layer.
- Compress the modified layer source into a python.zip file and upload it as Layer/python.zip in the bucket for the Consumer Lambda source.
  - s3://&lt;log account Profile Name&gt;-s3-logs-collector-lambda/Layer/python.zip
- Access the Lambda > Layers screen
- Click the layer named CloudWatchLogsExportToS3
- Click the Create version button
- Enter the necessary description in Description
- Choose Upload a file from Amazon S3
- Enter the S3 link URL
  - https://s3.&lt;Your Selected Region&gt;.amazonaws.com/&lt;each account Profile Name&gt;-s3-logs-collector-lambda/Layer/python.zip
- Select python3.8 under Compatible runtimes
- Click the Create button
- Copy the new layer version ARN
- In CloudWatch > Events > Rules, open Targets of the registered schedule > Lambda function > Configure input > Constant (JSON text), and change QueueOptions > ConsumerOptions > LayerArn in the JSON to the copied layer version ARN
- Repeat the previous step for all registered schedules
- Layer modification is complete
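The console steps above map to a single boto3 call. A sketch (the bucket and key follow the naming conventions above; everything else is an assumption):

```python
def layer_source(profile_name):
    """S3 location of the layer zip, following the bucket naming convention above."""
    return {"S3Bucket": f"{profile_name}-s3-logs-collector-lambda",
            "S3Key": "Layer/python.zip"}

def publish_layer(profile_name, description):
    import boto3  # imported lazily so layer_source stays testable without AWS
    lam = boto3.client("lambda")
    resp = lam.publish_layer_version(
        LayerName="CloudWatchLogsExportToS3",
        Description=description,
        Content=layer_source(profile_name),
        CompatibleRuntimes=["python3.8"],
    )
    # Paste this ARN into each schedule's QueueOptions > ConsumerOptions > LayerArn.
    return resp["LayerVersionArn"]
```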
- AWS managed policies
  - AmazonSQSFullAccess
  - AWSLambdaFullAccess
- Custom policies
- AWSLambdaBasicExecutionRole
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:[Execution Region]:[Run As Account ID]:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:[Execution Region]:[Run As Account ID]:log-group:/aws/lambda/CloudWatchLogsToLogS3Export:*",
        "arn:aws:logs:[Execution Region]:[Run As Account ID]:log-group:/aws/lambda/CloudWatchLogsToLogS3ExportJob:*"
      ]
    }
  ]
}
- CloudWatchLogsAssumeRolePolicy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::[Export target account ID]:role/CollectCloudWatchLogsRole",
        ...
        "arn:aws:iam::[Export target account ID]:role/CollectCloudWatchLogsRole"
      ]
    }
  ]
}
- CloudWatchLogsToLogS3ExportPolicy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CancelExportTask",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "logs:CreateExportTask",
        "s3:AbortMultipartUpload",
        "logs:DescribeLogGroups",
        "s3:PutObjectVersionAcl",
        "logs:DescribeSubscriptionFilters",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::[log bucket name]/*",
        "arn:aws:logs:[Execution Region]:[Execution account ID]:log-group:*"
      ]
    }
  ]
}
- KMSKeyPolicy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": "[KMS Key ARN for SQS]"
    }
  ]
}
- Trust Relationship Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::[Consolidated log account ID]:role/service-role/[Integration log account collection Lambda execution permission Role name]"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
- Custom policies
- CollectCloudWatchLogsPolicy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeExportTasks",
        "logs:CancelExportTask"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateExportTask",
        "s3:PutObject",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "logs:DescribeLogGroups",
        "s3:AbortMultipartUpload",
        "logs:DescribeSubscriptionFilters",
        "s3:PutObjectVersionAcl",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:logs:[Export target Region]:[Export target account ID]:log-group:*",
        "arn:aws:s3:::[Unified log bucket name]/[Object Prefix]/*"
      ]
    }
  ]
}
When you use the CloudWatch Logs export to back up logs, Server Side Encryption (SSE) cannot be applied to the exported S3 objects. Therefore, configure a separate Lambda function that applies SSE to each object, triggered by the PutObject event of the S3 bucket, so that SSE and Object Lock are applied.
The KMS key ARN used for SSE must be set in the /app/cloudwatch/log/export/s3/kms/arn key of Parameter Store.
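A sketch of how such an SSE-applying function might work: read the KMS key ARN from Parameter Store, then re-copy the object onto itself with SSE-KMS. The event shape assumes a CloudTrail-based CloudWatch Events rule for S3 object-level operations:

```python
def sse_copy_args(bucket, key, kms_arn):
    """copy_object arguments that rewrite an object in place with SSE-KMS."""
    return {"Bucket": bucket, "Key": key,
            "CopySource": {"Bucket": bucket, "Key": key},
            "ServerSideEncryption": "aws:kms",
            "SSEKMSKeyId": kms_arn}

def handler(event, context):
    import boto3  # imported lazily so sse_copy_args stays testable without AWS
    ssm = boto3.client("ssm")
    s3 = boto3.client("s3")
    kms_arn = ssm.get_parameter(
        Name="/app/cloudwatch/log/export/s3/kms/arn")["Parameter"]["Value"]
    # CloudWatch Events wraps the S3 API call in event["detail"] (assumed shape).
    params = event["detail"]["requestParameters"]
    s3.copy_object(**sse_copy_args(params["bucketName"], params["key"], kms_arn))
```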
- Access the CloudWatch > Events > Rules screen of each integrated log account
- Click the Create rule button
- Select Event Source > Event Pattern
- Select Build event pattern to match events by service
- Select Service Name > Simple Storage Service (S3)
- Select Event Type > Object Level Operations
- Select Specific operation(s)
- Select PutObject
- Select Specific bucket(s) by name
- Enter the log bucket name for each integrated log account
  - &lt;log account Profile Name&gt;-s3-app-logs
- Click the Targets > Add target button
- Select Lambda function
- Select Function > the AppLogSSEEventForPutObject function
- Click the Configure details button
- Enter a Rule Name
- Select the State Enabled checkbox
- Click the Update rule button
- Event Rule setting is complete
- AWSLambdaBasicExecutionRole
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": "arn:aws:logs:[Execution Region]:[Execution account ID]:*"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:[Execution Region]:[Execution account ID]:log-group:/aws/lambda/[AppLogSSEEventForPutObject Lambda Function Name]:*"
]
}
]
}
- AWSLambdaS3ExecutionRole
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::*"
}
]
}
- ParameterStorePolicy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ssm:DescribeParameters"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ssm:GetParameter"
],
"Resource": "arn:aws:ssm:[Execution Region]:[Execution account ID]:parameter/app/cloudwatch/log/export/s3/kms/arn"
}
]
}
- S3CopyAndDeletePolicy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObjectAcl",
"s3:GetObject",
"s3:DeleteObjectVersion",
"s3:PutObjectVersionAcl",
"s3:GetObjectVersionAcl",
"s3:PutObjectLegalHold",
"s3:GetObjectLegalHold",
"s3:DeleteObject",
"s3:PutObjectAcl",
"s3:GetObjectVersion",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::[log bucket name]",
"arn:aws:s3:::[log bucket name]/*"
]
}
]
}
- SSEKMSPolicy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "[KMS Key ARN for S3 SSE]"
}
]
}