Feature/eja eli 304 push cloudwatch alarms to itoc splunk #276
Conversation
…cific eventbridge feed is in api-layer, so that we can add additional Splunk Firehose feeds in future if needed (i.e. keep the module generic)
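For illustration, a minimal sketch of how a generic module like this might be consumed from the api-layer stack. The module name, source path and variable names below are assumptions for the sketch, not the actual code in this PR:

```hcl
# Hypothetical module call from the api-layer stack. The module only provisions
# the Firehose -> Splunk plumbing; the EventBridge feed lives in the main stack,
# so further Splunk Firehose feeds can reuse the same module later.
module "splunk_firehose" {
  source = "../modules/splunk-firehose" # assumed module path

  splunk_hec_endpoint = var.itoc_splunk_hec_endpoint # assumed variable names
  splunk_hec_token    = var.itoc_splunk_hec_token
}
```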
target_key_id = aws_kms_key.firehose_splunk_cmk.key_id
}

resource "aws_kinesis_firehose_delivery_stream" "splunk_delivery_stream" {
This part of the code is basically setting up a new Firehose stream with the Splunk endpoint. The module doesn't deal with getting logs/alarms into Firehose; that's handled in the main stack (via eventbridge.tf).
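As a rough sketch of what that module-side resource looks like. This is hedged: only the hec_endpoint_type and s3_backup_mode values come from the diff context above, and it assumes hashicorp/aws provider v5+, where s3_configuration nests inside splunk_configuration; the name, variables and IAM/S3 references are illustrative:

```hcl
# Sketch only: Firehose delivery stream pointed at a Splunk HEC endpoint.
resource "aws_kinesis_firehose_delivery_stream" "splunk_delivery_stream" {
  name        = "itoc-splunk-alarms" # hypothetical name
  destination = "splunk"

  splunk_configuration {
    hec_endpoint      = var.splunk_hec_endpoint # assumed variables
    hec_token         = var.splunk_hec_token
    hec_endpoint_type = "Event"
    s3_backup_mode    = "FailedEventsOnly"

    # Records that fail to reach Splunk are backed up to S3 (see the comment below)
    s3_configuration {
      role_arn   = aws_iam_role.firehose_backup_role.arn # assumed resources
      bucket_arn = aws_s3_bucket.splunk_failed_events.arn
    }
  }
}
```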
hec_endpoint_type = "Event"
s3_backup_mode    = "FailedEventsOnly"

s3_configuration {
Note this bit: if we fail to deliver a record to Splunk, we put it in a bucket for further investigation. We could add an alarm for this, so it's called out both on our console and in ITOC Splunk, but we'll leave that to a future ticket.
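For that future ticket, one option would be an alarm on Firehose's Splunk delivery metric. A hedged sketch; the alarm name, threshold and period are assumptions, not anything agreed in this PR:

```hcl
# Hypothetical future alarm: fire when Firehose stops successfully delivering
# to Splunk, so the failure is visible on our console as well.
resource "aws_cloudwatch_metric_alarm" "splunk_delivery_failures" {
  alarm_name  = "firehose-splunk-delivery-failures" # hypothetical name
  namespace   = "AWS/Firehose"
  metric_name = "DeliveryToSplunk.Success"
  dimensions = {
    DeliveryStreamName = aws_kinesis_firehose_delivery_stream.splunk_delivery_stream.name
  }
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 1
  comparison_operator = "LessThanThreshold"
  threshold           = 1
  treat_missing_data  = "notBreaching"
}
```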
role_arn = aws_iam_role.eventbridge_firehose_role.arn

# Transform the CloudWatch alarm event into a format suitable for Splunk
input_transformer {
I've left the transformation pretty minimal, as I think we'd want ITOC to feed back on 'version 1' of these logs once they're in their Splunk.
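To make "minimal transformation" concrete, a hedged sketch of what the EventBridge target in eventbridge.tf can look like. The rule name, extracted paths and template are illustrative, not the merged code; only the role_arn reference and the comment come from the diff context above:

```hcl
# Illustrative only: EventBridge rule target that forwards alarm state changes
# to the Firehose stream, flattening the event to a few fields for Splunk.
resource "aws_cloudwatch_event_target" "alarms_to_splunk_firehose" {
  rule     = aws_cloudwatch_event_rule.alarm_state_change.name # assumed rule
  arn      = aws_kinesis_firehose_delivery_stream.splunk_delivery_stream.arn
  role_arn = aws_iam_role.eventbridge_firehose_role.arn

  # Transform the CloudWatch alarm event into a format suitable for Splunk
  input_transformer {
    input_paths = {
      alarm  = "$.detail.alarmName"
      state  = "$.detail.state.value"
      reason = "$.detail.state.reason"
      time   = "$.time"
    }
    input_template = <<-EOF
      {"alarm": "<alarm>", "state": "<state>", "reason": "<reason>", "time": "<time>"}
    EOF
  }
}
```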
…ush-cloudwatch-alarms-to-itoc-splunk
* provisioned concurrency
* enable dead letter queue
* enhanced monitoring
* lambda function versioning
* provision concurrency - alias version fix
* removed checkov as we implemented dead letter queue
* prod conditions and github roles
* github roles
* fix for corrupt kms policy
* create queue
* get the latest function for concurrent provisioning
* lambda versioning for provisioned concurrency
* dlq is not for RequestResponse (sync)
* checkov skip for dlq
robbailiff2
left a comment
Had a good look through and it looks good to me.
Description
https://nhsd-jira.digital.nhs.uk/browse/ELI-304
Adds the means to forward CloudWatch alarms to ITOC Splunk.
Context
We want close monitoring of actual issues without exposing PID or other sensitive information to a team that doesn't need it.
Type of changes
Checklist
Sensitive Information Declaration
To ensure the utmost confidentiality and protect your and others' privacy, we kindly ask you NOT to include PII (Personally Identifiable Information) / PID (Personally Identifiable Data) or any other sensitive data in this PR (Pull Request) or the codebase changes. We will remove any PR that contains sensitive information. We really appreciate your cooperation in this matter.