This project creates an AWS Lambda function that ships logs from files stored in an S3 bucket to Logz.io.
To deploy this project, click the button that matches the region you wish to deploy your stack to:
Specify the stack details as per the table below, check the checkboxes, and select **Create stack**.
Parameter | Description | Required/Default |
---|---|---|
`logzioListener` | The Logz.io listener URL for your region. (For more details, see the regions page.) | Required |
`logzioToken` | Your Logz.io log shipping token. | Required |
`logLevel` | Log level for the Lambda function. Can be one of: `debug`, `info`, `warn`, `error`, `fatal`, `panic`. | Default: `info` |
`logType` | The log type you'll use with this Lambda. This is shown in your logs under the `type` field in Kibana. Logz.io applies parsing based on the log type. | Default: `s3_hook` |
`includePathsRegexes` | Comma-separated list of regexes that match the paths you'd like to pull logs from. This field is mutually exclusive with `excludePathsRegexes`. | - |
`excludePathsRegexes` | Comma-separated list of regexes that match the paths you don't want to pull logs from. This field is mutually exclusive with `includePathsRegexes`. | - |
`pathToFields` | Fields from the path to your logs directory that you want to add to the logs. For example, `org-id/aws-type/account-id` will add each of the fields `org-id`, `aws-type`, and `account-id` to the logs fetched from the directory that this path refers to. | - |
Give the stack a few minutes to deploy.
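If you prefer to deploy from a script rather than the console button, here is a minimal boto3 sketch; the stack name, template URL, and parameter values are placeholders you would replace with your own:

```python
# Minimal sketch: deploy the S3 Hook stack with boto3 instead of the console button.
# The template URL and parameter values below are placeholders, not the real ones.
import boto3

cf = boto3.client("cloudformation", region_name="us-east-1")

cf.create_stack(
    StackName="logzio-s3-hook",  # the Lambda function gets the same name as the stack
    TemplateURL="https://example.com/s3-hook.yaml",  # placeholder; use the template URL for your region
    Parameters=[
        {"ParameterKey": "logzioListener", "ParameterValue": "https://listener.logz.io:8071"},
        {"ParameterKey": "logzioToken", "ParameterValue": "<<LOG-SHIPPING-TOKEN>>"},
        {"ParameterKey": "logType", "ParameterValue": "s3_hook"},
    ],
    # Equivalent to checking the IAM acknowledgement checkboxes in the console.
    Capabilities=["CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"],
)
```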
Once your Lambda function is ready, you'll need to manually add a trigger, due to CloudFormation limitations.
Go to the function's page and click **Add trigger**.
Then choose **S3** as the trigger and fill in:
- **Bucket**: your bucket name.
- **Event type**: choose **All object create events**.
- Leave **Prefix** and **Suffix** empty.

Confirm the checkbox, and click **Add**.
That's it. Your function is configured. From now on, uploading new files to your bucket will trigger the function, and the logs will be sent to your Logz.io account.
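If you'd rather attach the trigger with a script than through the console, a boto3 sketch along these lines should work; the bucket name, function name, and account ID are placeholders:

```python
# Sketch: attach the S3 trigger to the Lambda programmatically.
# All names below are placeholders; replace them with your stack's actual values.
import boto3

BUCKET = "my-log-bucket"            # placeholder bucket name
FUNCTION_NAME = "logzio-s3-hook"    # the Lambda shares its name with the stack
ACCOUNT_ID = "123456789012"         # placeholder AWS account ID

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

# Allow S3 to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="s3-hook-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{BUCKET}",
    SourceAccount=ACCOUNT_ID,
)

function_arn = lambda_client.get_function(FunctionName=FUNCTION_NAME)["Configuration"]["FunctionArn"]

# All object create events, with no prefix or suffix filter.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {"LambdaFunctionArn": function_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
```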
If you want to pull logs from specific paths within the bucket, use the `includePathsRegexes` variable. Conversely, if there are particular paths within the bucket that you don't want to pull logs from, use the `excludePathsRegexes` variable. These fields are mutually exclusive.
Both variables should contain a comma-separated list of regular expressions that correspond to the paths from which you want to extract logs (`includePathsRegexes`) or the paths you want to exclude when extracting logs (`excludePathsRegexes`).
Note: your Lambda function will still be triggered every time a new object is added to your bucket. However, if the key does not match the regexes, the function will quit without sending the logs.
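To illustrate how the matching behaves, here is a small Python sketch (not the actual implementation, which runs inside the Lambda); the regexes and keys are made up:

```python
import re

# Illustrative only: filtering object keys against a comma-separated regex list.
INCLUDE_PATHS_REGEXES = "prod/.*,staging/app1/.*"  # made-up example value

def should_process(key: str, regexes_csv: str) -> bool:
    """Return True if the object key matches any regex in the list."""
    patterns = [p for p in regexes_csv.split(",") if p]
    return any(re.search(p, key) for p in patterns)

print(should_process("prod/service-a/file.log", INCLUDE_PATHS_REGEXES))  # True: matched, logs are sent
print(should_process("dev/service-a/file.log", INCLUDE_PATHS_REGEXES))   # False: the function quits
```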
If you want to use your objects' path as extra fields in your logs, you can do so with `pathToFields`.
For example, say your objects are under the path `oi-3rfEFA4/AWSLogs/2378194514/file.log`, where `oi-3rfEFA4` is the org id, `AWSLogs` is the AWS type, and `2378194514` is the account id. Setting `pathToFields` to `org-id/aws-type/account-id` will add the following fields to the logs: `org-id`: `oi-3rfEFA4`, `aws-type`: `AWSLogs`, `account-id`: `2378194514`.
Important notes about `pathToFields`:
- It will override a field with the same key, if one exists.
- For the feature to work, you need to set `pathToFields` from the root of the bucket.
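To make the mapping concrete, here is an illustrative Python sketch of the behavior described above (not the actual implementation):

```python
# Illustrative only: how a pathToFields value maps path segments to log fields.
def map_path_to_fields(object_key: str, path_to_fields: str) -> dict:
    keys = path_to_fields.split("/")
    # Take one path segment per field key, starting from the root of the bucket.
    values = object_key.split("/")[: len(keys)]
    return dict(zip(keys, values))

fields = map_path_to_fields("oi-3rfEFA4/AWSLogs/2378194514/file.log",
                            "org-id/aws-type/account-id")
print(fields)
# {'org-id': 'oi-3rfEFA4', 'aws-type': 'AWSLogs', 'account-id': '2378194514'}
```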
S3 Hook will automatically parse logs in the following cases:
- The object's path contains the phrase `cloudtrail` (case insensitive).
If you want to ship Control Tower logs, you'll need to deploy an additional stack after deploying the S3 Hook stack. For more details, click here.
Changelog:
- 0.4.2:
  - Bug fixes:
    - Support all needed file extensions: `.gz`, `.zip`, `.txt`, `.log`, `.json` (other file formats are treated as plain text).
    - Validate `pathToFields` to ensure correspondence between the number of keys and values.
- 0.4.1:
  - Upgrade the Control Tower runtime to `provided.al2023` to ensure compatibility with AWS.
- 0.4.0:
  - Extend log processing to include `.txt`, `.log`, and `.json` files; exclude unsupported file types for improved efficiency.
  - Update the AWS Lambda runtime to `provided.al2023` for enhanced performance and compatibility.
- 0.3.0:
  - Add a new option to exclude paths from which logs won't be pulled.
  - Rename field `pathsRegexes` to `includePathsRegexes`.
- 0.2.1:
  - Bug fix: remove redundant usage of Content-Type.
- 0.2.0:
  - CloudFormation template changes:
    - The Lambda function name is now the same as the stack name.
    - The IAM Role has a unique name.
    - The IAM Role allows the `GetObject` action for all buckets.
    - Remove the `bucketName` parameter.
- 0.1.0:
  - Add ability to filter paths with a regex list in field `pathsRegexes`.
  - Add ability to map the bucket path to log fields with `pathToFields`.
  - Add support for Control Tower.
  - Automatically detect and parse CloudTrail logs.
- 0.0.2:
  - Bug fix: decode folder names for folders with special characters.
- 0.0.1: Initial release.