diff --git a/README.md b/README.md index ea87563..4ba651e 100644 --- a/README.md +++ b/README.md @@ -61,11 +61,11 @@ The following is the JSON output for an EC2 instance; see how MetaHub organize

 # Context

-In **MetaHub**, context refers to information about the affected resources like their configuration, logs, tags, organizations, and more.
+In **MetaHub**, context refers to information about the affected resources, like their configuration, associations, logs, tags, account, and more.

 MetaHub doesn't stop at the affected resource but analyzes any associated or attached resources. For instance, if there is a security finding on an EC2 instance, MetaHub will not only analyze the instance but also the security groups attached to it, including their rules. MetaHub will examine the IAM roles that the affected resource is using and the policies attached to those roles for any issues. It will analyze the EBS volumes attached to the instance and determine if they are encrypted. It will also analyze the Auto Scaling Groups that the instance is associated with and how. MetaHub will also analyze the VPC, Subnets, and other resources associated with the instance.

-The **Context** module has the capability to retrieve information from the affected resources, affected accounts, and other related resources that are connected. The context module has five main parts: `config` (which includes `associations` as well), `tags`, `cloudtrail`, and `account`. By default, `config` and `tags` are enabled. You can choose which modules to enable using the opeion `--context`. Each of these keys will be added under the affected resource in the output with their outpus.
+The **Context** module can retrieve information from the affected resources, the affected accounts, and every associated resource. The context module has five main parts: `config` (which also includes `associations`), `tags`, `cloudtrail`, and `account`. You can choose what to query using the option `--context`; by default, only `config` and `tags` are enabled. Each of these keys will be added under the affected resource in the output with their outputs.

 ## Config

@@ -79,7 +79,7 @@ Under the `associations` key, you will find all the associated resources of the affe

 Associations are key to understanding the context and impact of your security findings, as well as their exposure.

-You can filter your findings based on Config outputs using the option: `--mh-filters-config {True/False}`. See [Config Filtering](#config-filtering).
+You can filter your findings based on Associations outputs using the option: `--mh-filters-config {True/False}`. See [Config Filtering](#config-filtering).

 ## Tags

@@ -129,63 +129,63 @@ The following are the impact criteria that MetaHub evaluates by default:

 **Exposure** evaluates how the affected resource is exposed to other networks. For example, if the affected resource is public, if it is part of a VPC, if it has a public IP, or if it is protected by a firewall or a security group.

-| **Possible Statuses**   | **Description**                                                                                                  |
-| ----------------------- | ---------------------------------------------------------------------------------------------------------------- |
-| 🔴 effectively-public   | The resource is effectively public from the Internet.                                                           |
-| 🟠 restricted-public    | The resource is public, but there is a restriction like a Security Group.                                       |
-| 🟠 unrestricted-private | The resource is private but unrestricted, like an open security group.                                          |
-| 🟠 launch-public        | These are resources that can launch other resources as public. For example, an Auto Scaling group or a Subnet.  |
-| 🟢 restricted           | The resource is restricted.                                                                                      |
-| 🔵 unknown              | The resource couldn't be checked                                                                                 |
+| **Possible Statuses**   | **Value** | **Description**                                                                                                  |
+| ----------------------- | :-------: | ---------------------------------------------------------------------------------------------------------------- |
+| 🔴 effectively-public   |   100%    | The resource is effectively public from the Internet.                                                           |
+| 🟠 restricted-public    |    40%    | The resource is public, but there is a restriction like a Security Group.                                       |
+| 🟠 unrestricted-private |    30%    | The resource is private but unrestricted, like an open security group.                                          |
+| 🟠 launch-public        |    10%    | These are resources that can launch other resources as public. For example, an Auto Scaling group or a Subnet.  |
+| 🟢 restricted           |    0%     | The resource is restricted.                                                                                      |
+| 🔵 unknown              |     -     | The resource couldn't be checked.                                                                                |

 ## Access

 **Access** evaluates the resource policy layer. MetaHub checks every available policy, including IAM Managed policies, IAM Inline policies, Resource Policies, Bucket ACLs, and any association to other resources like IAM Roles, whose policies are also analyzed. An unrestricted policy is not only an issue for the policy itself; it affects any other resource that is using it.

-| **Possible Statuses**      | **Description**                                                                                                                                |
-| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
-| 🔴 unrestricted            | The principal is unrestricted, without any condition or restriction.                                                                          |
-| 🔴 untrusted-principal     | The principal is an AWS Account, not part of your trusted accounts.                                                                           |
-| 🟠 unrestricted-principal  | The principal is not restricted, defined with a wildcard. It could be conditions restricting it or other restrictions like s3 public blocks.  |
-| 🟠 cross-account-principal | The principal is from another AWS account.                                                                                                    |
-| 🟠 unrestricted-actions    | The actions are defined using wildcards.                                                                                                      |
-| 🟠 dangerous-actions       | Some dangerous actions are defined as part of this policy.                                                                                    |
-| 🟠 unrestricted-service    | The policy allows an AWS service as principal without restriction.                                                                            |
-| 🟢 restricted              | The policy is restricted.                                                                                                                      |
-| 🔵 unknown                 | The policy couldn't be checked.                                                                                                                |
+| **Possible Statuses**      | **Value** | **Description**                                                                                                                                |
+| -------------------------- | :-------: | ---------------------------------------------------------------------------------------------------------------------------------------------- |
+| 🔴 unrestricted            |   100%    | The principal is unrestricted, without any condition or restriction.                                                                          |
+| 🔴 untrusted-principal     |    70%    | The principal is an AWS Account, not part of your trusted accounts.                                                                           |
+| 🟠 unrestricted-principal  |    40%    | The principal is not restricted, defined with a wildcard. There could be conditions or other restrictions, like S3 public access blocks.      |
+| 🟠 cross-account-principal |    30%    | The principal is from another AWS account.                                                                                                    |
+| 🟠 unrestricted-actions    |    30%    | The actions are defined using wildcards.                                                                                                      |
+| 🟠 dangerous-actions       |    30%    | Some dangerous actions are defined as part of this policy.                                                                                    |
+| 🟠 unrestricted-service    |    10%    | The policy allows an AWS service as principal without restriction.                                                                            |
+| 🟢 restricted              |    0%     | The policy is restricted.                                                                                                                      |
+| 🔵 unknown                 |     -     | The policy couldn't be checked.                                                                                                                |

 ## Encryption

 **Encryption** evaluates the different encryption layers based on each resource type. For example, for some resources it evaluates if `at_rest` and `in_transit` encryption configurations are both enabled.

-| **Possible Statuses** | **Description** |
-| --------------------- | --------------- |
-| 🔴 unencrypted        |                 |
-| 🟢 encrypted          |                 |
-| 🔵 unknown            |                 |
+| **Possible Statuses** | **Value** | **Description**                                                       |
+| --------------------- | :-------: | --------------------------------------------------------------------- |
+| 🔴 unencrypted        |   100%    | The resource is not fully encrypted.                                  |
+| 🟢 encrypted          |    0%     | The resource is fully encrypted, including any of its associations.   |
+| 🔵 unknown            |     -     | The resource encryption couldn't be checked.                          |

 ## Status

 **Status** evaluates the status of the affected resource in terms of attachment or functioning. For example, for an EC2 Instance we evaluate if the resource is running, stopped, or terminated, but for resources like EBS Volumes and Security Groups, we evaluate if those resources are attached to any other resource.

-| **Possible Statuses** | **Description** |
-| --------------------- | --------------- |
-| 🟠 attached           |                 |
-| 🟠 running            |                 |
-| 🟢 not-attached       |                 |
-| 🟢 not-running        |                 |
-| 🔵 unknown            |                 |
+| **Possible Statuses** | **Value** | **Description**                                             |
+| --------------------- | :-------: | ----------------------------------------------------------- |
+| 🟠 attached           |   100%    | The resource supports attachment and is attached.           |
+| 🟠 running            |   100%    | The resource supports running and is running.               |
+| 🟢 not-attached       |    0%     | The resource supports attachment, and it is not attached.   |
+| 🟢 not-running        |    0%     | The resource supports running, and it is not running.       |
+| 🔵 unknown            |     -     | The resource couldn't be checked for status.                |

 ## Environment

 **Environment** evaluates the environment defined for the affected resource. Supported environments are `production`, `staging`, and `development`. MetaHub evaluates the environment based on the tags of the affected resource. You can define your own tagging strategy in the configuration file (see [Customizing Configuration](#customizing-configuration)).

-| **Possible Statuses** | **Description** |
-| --------------------- | --------------- |
-| 🟠 production         |                 |
-| 🟢 staging            |                 |
-| 🟢 development        |                 |
-| 🔵 unknown            |                 |
+| **Possible Statuses** | **Value** | **Description**                                    |
+| --------------------- | :-------: | -------------------------------------------------- |
+| 🟠 production         |   100%    | It is a production resource.                       |
+| 🟢 staging            |    30%    | It is a staging resource.                          |
+| 🟢 development        |    0%     | It is a development resource.                      |
+| 🔵 unknown            |     -     | The resource couldn't be checked for environment.  |

 ## Findings Scoring

@@ -203,6 +203,8 @@ SUM(HIGH (3) / CRITICAL (4) + LOW (0.5) / CRITICAL (4)) = 0.875

 # Architecture

+**MetaHub** reads your security findings from AWS Security Hub or any ASFF-compatible security scanner. It then queries the affected resources directly in the affected account to provide additional context. Based on that context, it calculates its impact. Finally, it generates different outputs based on your needs.
+

Diagram
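+As a sketch of that flow in a single run (illustrative only; the exact flag values are examples, and each option is documented in its own section below):
+
+```sh
+./metahub --inputs securityhub --context config tags --output-modes json-full html
+```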

@@ -243,7 +245,15 @@ When investigating findings, you may need to update security findings altogether

 # Customizing Configuration

-**MetaHub** uses configuration files that let you customize some checks behaviors, default filters, and more. The configuration files are located in [lib/config/](lib/config). You can edit them using your favorite text editor.
+**MetaHub** uses configuration files that let you customize the behavior of some checks, default filters, and more. The configuration files are located in [lib/config/](lib/config).
+
+Things you can customize:
+
+- [lib/config/configuration.py](lib/config/configuration.py): This file contains the default configuration for MetaHub. You can change the default filters, the default output modes, and more.
+
+- [lib/config/resources.py](lib/config/resources.py): This file contains definitions for every resource type, like which CloudTrail events to look for.
+
+- [lib/config/impact.yaml](lib/config/impact.yaml): This file contains the definitions for the impact criteria and their values.

 # Run with Python

@@ -253,11 +263,11 @@ Requirements can be installed in your system manually (using pip3) or using a Py

 ## Run it using Python Virtual Environment

-1. Clone the repository: `git clone git@github.com:gabrielsoltz/meta hub.git`
+1. Clone the repository: `git clone git@github.com:gabrielsoltz/metahub.git`
 2. Change to repository dir: `cd metahub`
-3. Create a virtual environment for this project: `python3 -m venv venv/meta hub`
-4. Activate the virtual environment you just created: `source venv/meta hub/bin/activate`
-5. Install meta hub requirements: `pip3 install -r requirements.txt`
+3. Create a virtual environment for this project: `python3 -m venv venv/metahub`
+4. Activate the virtual environment you just created: `source venv/metahub/bin/activate`
+5. Install MetaHub requirements: `pip3 install -r requirements.txt`
 6. Run: `./metahub -h`
 7. Deactivate your virtual environment after you finish with: `deactivate`

@@ -277,7 +287,7 @@ The available tagging for MetaHub containers are the following:

 For running from the public registry, you can run the following command:

-```
+```sh
 docker run -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h
 ```

@@ -287,7 +297,7 @@ If you are already logged into the AWS host machine, you can seamlessly use the

 For instance, you can run the following command:

-```
+```sh
 docker run -e AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h
 ```

@@ -297,7 +307,7 @@ On the other hand, if you are not logged in on the host machine, you will need t

 Or you can also build it locally:

-```
+```sh
 git clone git@github.com:gabrielsoltz/metahub.git
 cd metahub
 docker build -t metahub .

@@ -325,7 +335,7 @@ The terraform code for deploying the Lambda function is provided under the `terr

 Just run the following commands:

-```
+```sh
 cd terraform
 terraform init
 terraform apply

@@ -408,7 +418,7 @@ export AWS_SESSION_TOKEN= "XXXXXXXXX"

 This is the minimum IAM policy you need to read and write from AWS Security Hub. If you don't want to update your findings with MetaHub, you can remove the `securityhub:BatchUpdateFindings` action.

-```
+```json
 {
   "Version": "2012-10-17",
   "Statement": [

@@ -430,7 +440,7 @@ This is the minimum IAM policy you need to read and write from AWS Security Hub.

 # Configuring Context

-- If you are running MetaHub for a multiple AWS Account setup (AWS Security Hub is aggregating findings from multiple AWS Accounts), you must provide the role to assume for Context queries because the affected resources are not in the same AWS Account that the AWS Security Hub findings. The `--mh-assume-role` will be used to connect with the affected resources directly in the affected account. This role needs to have enough policies for being able to describe resources.
+If you are running MetaHub for a multiple AWS Account setup (AWS Security Hub is aggregating findings from multiple AWS Accounts), you must provide the role to assume for Context queries, because the affected resources are not in the same AWS Account as the AWS Security Hub findings. The `--mh-assume-role` option will be used to connect with the affected resources directly in the affected account. This role needs a policy with enough permissions to describe those resources.

 ## IAM Policy for Context

@@ -459,16 +469,16 @@ If you want to read from an input ASFF file, you need to use the options:

 You can also combine AWS Security Hub findings with input ASFF files by specifying both inputs:

 ```sh
-./metahub.py --inputs file-asff security hub --input-asff path/to/the/file.json.asff
+./metahub.py --inputs file-asff securityhub --input-asff path/to/the/file.json.asff
 ```

 When using a file as input, you can't use the option `--sh-filters` for filtering findings, as this option relies on the AWS API for filtering. You can't use the options `--update-findings` or `--enrich-findings`, as those findings are not in AWS Security Hub. If you are reading from both sources at the same time, only the findings from AWS Security Hub will be updated.

 # Output Modes

-**MetaHub** can store the outputs in different formats. By default, all output modes are enabled: `json-short`, `json-full`, `json-statistics`, `json-inventory`, `html`, `CSV`, and `xlsx`.
+**MetaHub** can generate different programmatic and visual outputs. By default, all output modes are enabled: `json-short`, `json-full`, `json-statistics`, `json-inventory`, `html`, `csv`, and `xlsx`.

-The Outputs will be saved in the folder `metahub/outputs` with the execution date.
+The outputs will be saved in the `outputs/` folder with the execution date.

 If you want only to generate a specific output mode, you can use the option `--output-modes` with the desired output mode.
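+For example, to generate only the HTML report (a sketch; the mode names are the ones listed above):
+
+```sh
+./metahub --output-modes html
+```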
@@ -496,16 +506,22 @@ If you want to generate `json-short` and `json-full` outputs, you can use:

 Shows all finding titles together under each affected resource, along with the `AwsAccountId`, `Region`, and `ResourceType`:

 ```
-    "arn:aws:sagemaker:us-east-1:ofuscated:notebook-instance/obfuscated": {
-      "findings": [
-        "SageMaker.2 SageMaker notebook instances should be launched in a custom VPC",
-        "SageMaker.3 Users should not have root access to SageMaker notebook instances",
-        "SageMaker.1 Amazon SageMaker notebook instances should not have direct internet access"
-      ],
-      "AwsAccountId": "obfuscated",
-      "Region": "us-east-1",
-      "ResourceType": "AwsSageMakerNotebookInstance"
-    },
+"arn:aws:sagemaker:us-east-1:obfuscated:notebook-instance/obfuscated": {
+  "findings": [
+    "SageMaker.2 SageMaker notebook instances should be launched in a custom VPC",
+    "SageMaker.3 Users should not have root access to SageMaker notebook instances",
+    "SageMaker.1 Amazon SageMaker notebook instances should not have direct internet access"
+  ],
+  "AwsAccountId": "obfuscated",
+  "Region": "us-east-1",
+  "ResourceType": "AwsSageMakerNotebookInstance",
+  "config": {},
+  "associations": {},
+  "tags": {},
+  "cloudtrail": {},
+  "account": {},
+  "impact": {}
+},
 ```

 ### JSON-Full

 Shows all findings with all their data. Findings are organized by ResourceId (ARN). For each finding, you will also get: `SeverityLabel`, `Workflow`, `RecordState`, `Compliance`, `Id`, and `ProductArn`:

 ```
-    "arn:aws:sagemaker:eu-west-1:ofuscated:notebook-instance/obfuscated": {
-      "findings": [
-        {
-          "SageMaker.3 Users should not have root access to SageMaker notebook instances": {
-            "SeverityLabel": "HIGH",
-            "Workflow": {
-              "Status": "NEW"
-            },
-            "RecordState": "ACTIVE",
-            "Compliance": {
-              "Status": "FAILED"
-            },
-            "Id": "arn:aws:security hub:eu-west-1:ofuscated:subscription/aws-foundational-security-best-practices/v/1.0.0/SageMaker.3/finding/12345-0193-4a97-9ad7-bc7c1730eec6",
-            "ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
-          }
-        },
-        {
-          "SageMaker.2 SageMaker notebook instances should be launched in a custom VPC": {
-            "SeverityLabel": "HIGH",
-            "Workflow": {
-              "Status": "NEW"
-            },
-            "RecordState": "ACTIVE",
-            "Compliance": {
-              "Status": "FAILED"
-            },
-            "Id": "arn:aws:security hub:eu-west-1:ofuscated:subscription/aws-foundational-security-best-practices/v/1.0.0/SageMaker.2/finding/12345-e8e1-4915-9881-965104b0aabf",
-            "ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
-          }
-        },
-        {
-          "SageMaker.1 Amazon SageMaker notebook instances should not have direct internet access": {
-            "SeverityLabel": "HIGH",
-            "Workflow": {
-              "Status": "NEW"
-            },
-            "RecordState": "ACTIVE",
-            "Compliance": {
-              "Status": "FAILED"
-            },
-            "Id": "arn:aws:security hub:eu-west-1:ofuscated:subscription/aws-foundational-security-best-practices/v/1.0.0/SageMaker.1/finding/12345-3a21-4016-a8e5-f5173b44e90a",
-            "ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
-          }
+"arn:aws:sagemaker:eu-west-1:obfuscated:notebook-instance/obfuscated": {
+  "findings": [
+    {
+      "SageMaker.3 Users should not have root access to SageMaker notebook instances": {
+        "SeverityLabel": "HIGH",
+        "Workflow": {
+          "Status": "NEW"
+        },
+        "RecordState": "ACTIVE",
+        "Compliance": {
+          "Status": "FAILED"
+        },
+        "Id": "arn:aws:securityhub:eu-west-1:obfuscated:subscription/aws-foundational-security-best-practices/v/1.0.0/SageMaker.3/finding/12345-0193-4a97-9ad7-bc7c1730eec6",
+        "ProductArn": "arn:aws:securityhub:eu-west-1::product/aws/securityhub"
       }
-        ],
-        "AwsAccountId": "obfuscated",
-        "Region": "eu-west-1",
-        "ResourceType": "AwsSageMakerNotebookInstance"
-    },
+    },
+    {
+      "SageMaker.2 SageMaker notebook instances should be launched in a custom VPC": {
+        "SeverityLabel": "HIGH",
+        "Workflow": {
+          "Status": "NEW"
+        },
+        "RecordState": "ACTIVE",
+        "Compliance": {
+          "Status": "FAILED"
+        },
+        "Id": "arn:aws:securityhub:eu-west-1:obfuscated:subscription/aws-foundational-security-best-practices/v/1.0.0/SageMaker.2/finding/12345-e8e1-4915-9881-965104b0aabf",
+        "ProductArn": "arn:aws:securityhub:eu-west-1::product/aws/securityhub"
+      }
+    },
+    {
+      "SageMaker.1 Amazon SageMaker notebook instances should not have direct internet access": {
+        "SeverityLabel": "HIGH",
+        "Workflow": {
+          "Status": "NEW"
+        },
+        "RecordState": "ACTIVE",
+        "Compliance": {
+          "Status": "FAILED"
+        },
+        "Id": "arn:aws:securityhub:eu-west-1:obfuscated:subscription/aws-foundational-security-best-practices/v/1.0.0/SageMaker.1/finding/12345-3a21-4016-a8e5-f5173b44e90a",
+        "ProductArn": "arn:aws:securityhub:eu-west-1::product/aws/securityhub"
+      }
+    }
+  ],
+  "AwsAccountId": "obfuscated",
+  "Region": "eu-west-1",
+  "ResourceType": "AwsSageMakerNotebookInstance",
+  "config": {},
+  "associations": {},
+  "tags": {},
+  "cloudtrail": {},
+  "account": {},
+  "impact": {}
+},
 ```

 ### JSON-Inventory

@@ -637,8 +659,7 @@ HTML Reports are interactive in many ways:

 ## CSV

-You can create rich HT
-ML reports of your findings, adding your context as part of them.
+You can create CSV reports of your findings, adding your context as part of them.

csv-example

@@ -664,7 +685,7 @@ For example, you can generate an HTML output with Tags and add "Owner" and "Envi

 # Filters

-You can filter the findings and resources that you get from Security Hub in different ways and combine all of them to get exactly what you are looking for, then re-use those filters to create alerts.
+You can filter the security findings and resources that you get from your source in different ways, combine those filters to get exactly what you are looking for, and then re-use them to create alerts.

 - [Security Hub Filtering](#security-hub-filtering)
 - [Security Hub Filtering using YAML templates](#security-hub-filtering-using-yaml-templates)
diff --git a/lib/AwsHelpers.py b/lib/AwsHelpers.py
index 3e88c58..ecec463 100644
--- a/lib/AwsHelpers.py
+++ b/lib/AwsHelpers.py
@@ -7,8 +7,10 @@
     ProfileNotFound,
 )

+from lib.config.configuration import assume_role_duration

-def assume_role(logger, aws_account_number, role_name, duration=3600):
+
+def assume_role(logger, aws_account_number, role_name, duration=assume_role_duration):
     """
     Assumes the provided role in each account and returns the session
     :param aws_account_number: AWS Account Number
diff --git a/lib/config/configuration.py b/lib/config/configuration.py
index 18483e9..d860323 100644
--- a/lib/config/configuration.py
+++ b/lib/config/configuration.py
@@ -53,18 +53,18 @@
 # tag_ENVIRONMENT = {"TAG-KEY": ["TAG-VALUE1", "TAG-VALUE2", "TAG-VALUE3"]}
 tags_production = {
     "Environment": ["Production", "production", "prd"],
-    "Env": ["production"],
-    "environment": ["prd"],
+    "Env": ["Production", "production", "prd"],
+    "environment": ["Production", "production", "prd"],
 }
 tags_staging = {
     "Environment": ["Staging", "staging", "stg"],
-    "Env": ["stg"],
-    "environment": ["stg"],
+    "Env": ["Staging", "staging", "stg"],
+    "environment": ["Staging", "staging", "stg"],
 }
 tags_development = {
     "Environment": ["Development", "development", "dev"],
-    "Env": ["dev"],
-    "environment": ["dev"],
+    "Env": ["Development", "development", "dev"],
+    "environment": ["Development", "development", "dev"],
 }

@@ -83,3 +83,16 @@
 # Decide if you want to output as part of the findings the whole json resource policy
 output_resource_policy = True
+
+# Output directory
+outputs_dir = "outputs/"
+
+# Output file name date format
+outputs_time_str = "%Y%m%d-%H%M%S"
+
+# ---------------------------------- #
+# Other Configurations               #
+# ---------------------------------- #
+
+# Assume role duration in seconds
+assume_role_duration = 3600
diff --git a/lib/config/impact.yaml b/lib/config/impact.yaml
index 7b87ca3..95f8cc1 100644
--- a/lib/config/impact.yaml
+++ b/lib/config/impact.yaml
@@ -17,8 +17,8 @@ status:
       score: 1
   - not-running:
       score: 0
-  - unknown:
-      score: 0
+  # - unknown:
+  #     score: 0

 exposure:
   weight: 25
@@ -33,8 +33,8 @@ exposure:
       score: 0.1
   - restricted:
       score: 0
-  - unknown:
-      score: 0
+  # - unknown:
+  #     score: 0

 access:
   weight: 25
@@ -55,8 +55,8 @@ access:
       score: 0.1
   - restricted:
       score: 0
-  - unknown:
-      score: 0
+  # - unknown:
+  #     score: 0

 encryption:
   weight: 10
@@ -65,8 +65,8 @@ encryption:
       score: 1
   - encrypted:
       score: 0
-  - unknown:
-      score: 0
+  # - unknown:
+  #     score: 0

 environment:
   weight: 15
@@ -77,5 +77,5 @@ environment:
       score: 0.3
   - development:
       score: 0
-  - unknown:
-      score: 0
+  # - unknown:
+  #     score: 0
diff --git a/lib/context/context.py b/lib/context/context.py
index 2bacf4f..973a1fb 100644
--- a/lib/context/context.py
+++ b/lib/context/context.py
@@ -11,12 +11,21 @@

 class Context:
-    def __init__(self, logger, finding, 
mh_filters_config, mh_filters_tags, mh_role): + def __init__( + self, + logger, + finding, + mh_filters_config, + mh_filters_tags, + mh_role, + cached_associated_resources, + ): self.logger = logger self.parse_finding(finding) self.get_session(mh_role) self.mh_filters_config = mh_filters_config self.mh_filters_tags = mh_filters_tags + self.cached_associated_resources = cached_associated_resources # Move to Config: self.drilled_down = True @@ -95,7 +104,7 @@ def get_context_config(self): # Execute Drilled if self.drilled_down: try: - hnld.execute_drilled_metachecks() + hnld.execute_drilled_metachecks(self.cached_associated_resources) except (AttributeError, Exception) as err: if "should return None" in str(err): self.logger.info( diff --git a/lib/context/resources/AwsCloudFrontDistribution.py b/lib/context/resources/AwsCloudFrontDistribution.py index 5a187d6..4a97f6a 100644 --- a/lib/context/resources/AwsCloudFrontDistribution.py +++ b/lib/context/resources/AwsCloudFrontDistribution.py @@ -38,7 +38,7 @@ def parse_finding(self, finding, drilled): self.resource_id = ( finding["Resources"][0]["Id"].split("/")[-1] if not drilled - else drilled.split("/")[-11] + else drilled.split("/")[-1] ) self.resource_arn = finding["Resources"][0]["Id"] if not drilled else drilled diff --git a/lib/context/resources/AwsEc2Instance.py b/lib/context/resources/AwsEc2Instance.py index 16cc890..4b08e1f 100644 --- a/lib/context/resources/AwsEc2Instance.py +++ b/lib/context/resources/AwsEc2Instance.py @@ -41,8 +41,12 @@ def parse_finding(self, finding, drilled): self.account = finding["AwsAccountId"] self.partition = finding["Resources"][0]["Id"].split(":")[1] self.resource_type = finding["Resources"][0]["Type"] - self.resource_arn = finding["Resources"][0]["Id"] - self.resource_id = finding["Resources"][0]["Id"].split("/")[1] + self.resource_id = ( + finding["Resources"][0]["Id"].split("/")[-1] + if not drilled + else drilled.split("/")[-1] + ) + self.resource_arn = finding["Resources"][0]["Id"] if not drilled else drilled # Describe Functions diff --git a/lib/context/resources/Base.py b/lib/context/resources/Base.py index f453cbb..feb8b53 100644 --- a/lib/context/resources/Base.py +++ b/lib/context/resources/Base.py @@ -56,148 +56,254 @@ def output_checks(self): return mh_values_checks, mh_matched_checks - def execute_drilled_metachecks(self): + def execute_drilled_metachecks(self, cached_associated_resources): # Optimize drilled context by keeping a cache of drilled resources - self.drilled_cache = {} - - def execute(resources, MetaCheck): - for resource in resources: - if resource not in self.drilled_cache: - self.logger.info( - "Running Drilled Context for resource {} from resource: {}".format( - resource, self.resource_arn - ) + self.drilled_cache = cached_associated_resources + + def execute(r, MetaCheck): + if r not in self.drilled_cache: + self.logger.info( + "Running Drilled Context for resource {} from resource: {}".format( + r, self.resource_arn ) - try: - resource_drilled = MetaCheck( - self.logger, - self.finding, - False, - self.sess, - drilled=resource, - ) - resource_drilled_output = ( - resource_drilled.output_checks_drilled() + ) + try: + resource_drilled = MetaCheck( + self.logger, + self.finding, + False, + self.sess, + drilled=r, + ) + resource_drilled_output = resource_drilled.output_checks_drilled() + self.drilled_cache[r] = resource_drilled_output + + except (AttributeError, Exception) as err: + if "should return None" in str(err): + self.logger.info( + "Not Found Drilled resource %s from 
resource: %s", + r, + self.resource_arn, ) - resources[resource] = resource_drilled_output - self.drilled_cache[resource] = resource_drilled_output - - # Double Drill (IAM Roles >> IAM Policies) - if ( - hasattr(self, "iam_roles") - and self.iam_roles - and hasattr(resource_drilled, "iam_policies") - and resource_drilled.iam_policies - and self.resource_type != "AwsIamPolicy" - ): - from lib.context.resources.AwsIamPolicy import ( - Metacheck as IamPolicyMetacheck, - ) - - execute(resource_drilled.iam_policies, IamPolicyMetacheck) - - # Double Drill (Subnets >> Route Table) - if ( - hasattr(self, "subnets") - and self.subnets - and hasattr(resource_drilled, "route_tables") - and resource_drilled.route_tables - ): - from lib.context.resources.AwsEc2RouteTable import ( - Metacheck as RouteTableMetacheck, - ) - - execute(resource_drilled.route_tables, RouteTableMetacheck) - - except (AttributeError, Exception) as err: - if "should return None" in str(err): - self.logger.info( - "Not Found Drilled resource %s from resource: %s", - resource, - self.resource_arn, - ) - else: - self.logger.error( - "Error Running Drilled MetaChecks for resource %s from resource: %s - %s", - resource, - self.resource_arn, - err, - ) - resources[resource] = False - self.drilled_cache[resource] = False - else: - self.logger.info( - "Ignoring (already checked) Drilled MetaChecks for resource {} from resource: {}".format( - resource, self.resource_arn + else: + self.logger.error( + "Error Running Drilled MetaChecks for resource %s from resource: %s - %s", + r, + self.resource_arn, + err, ) + resource_drilled = False + resource_drilled_output = False + self.drilled_cache[r] = False + else: + self.logger.info( + "Ignoring (already checked) Drilled MetaChecks for resource {} from resource: {}".format( + r, self.resource_arn ) - resources[resource] = self.drilled_cache[resource] - - # Security Groups - if hasattr(self, "security_groups") and self.security_groups: - from lib.context.resources.AwsEc2SecurityGroup import ( - Metacheck as SecurityGroupMetacheck, - ) - - execute(self.security_groups, SecurityGroupMetacheck) - - # IAM Roles - if hasattr(self, "iam_roles") and self.iam_roles: - from lib.context.resources.AwsIamRole import ( - Metacheck as AwsIamRoleMetaCheck, - ) - - execute(self.iam_roles, AwsIamRoleMetaCheck) - - # IAM Policies - if hasattr(self, "iam_policies") and self.iam_policies: - from lib.context.resources.AwsIamPolicy import ( - Metacheck as IamPolicyMetacheck, - ) - - execute(self.iam_policies, IamPolicyMetacheck) - - # AutoScaling Groups - if hasattr(self, "autoscaling_groups") and self.autoscaling_groups: - from lib.context.resources.AwsAutoScalingAutoScalingGroup import ( - Metacheck as AwsAutoScalingAutoScalingGroupMetacheck, - ) + ) + resource_drilled = False + resource_drilled_output = self.drilled_cache[r] + + return resource_drilled_output, resource_drilled + + def check_associated_resources(resource, level): + # print ("Level: {}, Resource: {}".format(level, resource.resource_arn)) + + # Security Groups + if ( + hasattr(resource, "iam_users") + and resource.iam_users + and self.resource_type != "AwsIamUser" + ): + from lib.context.resources.AwsIamUser import ( + Metacheck as AwsIamUserMetacheck, + ) - execute(self.autoscaling_groups, AwsAutoScalingAutoScalingGroupMetacheck) + for r, v in list(resource.iam_users.items()): + resource_drilled_output, resource_drilled = execute( + r, AwsIamUserMetacheck + ) + resource.iam_users[r] = resource_drilled_output + if level < 1 and resource_drilled: + 
check_associated_resources(resource_drilled, level + 1) + + # Security Groups + if ( + hasattr(resource, "security_groups") + and resource.security_groups + and self.resource_type != "AwsEc2SecurityGroup" + ): + from lib.context.resources.AwsEc2SecurityGroup import ( + Metacheck as SecurityGroupMetacheck, + ) - # Volumes - if hasattr(self, "volumes") and self.volumes: - from lib.context.resources.AwsEc2Volume import Metacheck as VolumeMetacheck + for r, v in list(resource.security_groups.items()): + resource_drilled_output, resource_drilled = execute( + r, SecurityGroupMetacheck + ) + resource.security_groups[r] = resource_drilled_output + if level < 1 and resource_drilled: + check_associated_resources(resource_drilled, level + 1) + + # IAM Roles + if ( + hasattr(resource, "iam_roles") + and resource.iam_roles + and self.resource_type != "AwsIamRole" + ): + from lib.context.resources.AwsIamRole import ( + Metacheck as AwsIamRoleMetaCheck, + ) - execute(self.volumes, VolumeMetacheck) + for r, v in list(resource.iam_roles.items()): + resource_drilled_output, resource_drilled = execute( + r, AwsIamRoleMetaCheck + ) + resource.iam_roles[r] = resource_drilled_output + if level < 2 and resource_drilled: + check_associated_resources(resource_drilled, level + 1) + + # IAM Policies + if ( + hasattr(resource, "iam_policies") + and resource.iam_policies + and self.resource_type != "AwsIamPolicy" + ): + from lib.context.resources.AwsIamPolicy import ( + Metacheck as IamPolicyMetacheck, + ) - # VPC - if hasattr(self, "vpcs") and self.vpcs: - from lib.context.resources.AwsEc2Vpc import Metacheck as VpcMetacheck + for r, v in list(resource.iam_policies.items()): + resource_drilled_output, resource_drilled = execute( + r, IamPolicyMetacheck + ) + resource.iam_policies[r] = resource_drilled_output + if level < 1 and resource_drilled: + check_associated_resources(resource_drilled, level + 1) + + # AutoScaling Groups + if ( + hasattr(resource, "autoscaling_groups") + and resource.autoscaling_groups + and self.resource_type != "AwsAutoScalingAutoScalingGroup" + ): + from lib.context.resources.AwsAutoScalingAutoScalingGroup import ( + Metacheck as AwsAutoScalingAutoScalingGroupMetacheck, + ) - execute(self.vpcs, VpcMetacheck) + for r, v in list(resource.autoscaling_groups.items()): + resource_drilled_output, resource_drilled = execute( + r, AwsAutoScalingAutoScalingGroupMetacheck + ) + resource.autoscaling_groups[r] = resource_drilled_output + if level < 1 and resource_drilled: + check_associated_resources(resource_drilled, level + 1) + + # Volumes + if ( + hasattr(resource, "volumes") + and resource.volumes + and self.resource_type != "AwsEc2Volume" + ): + from lib.context.resources.AwsEc2Volume import ( + Metacheck as VolumeMetacheck, + ) - # Subnets - if hasattr(self, "subnets") and self.subnets: - from lib.context.resources.AwsEc2Subnet import Metacheck as SubnetMetacheck + for r, v in list(resource.volumes.items()): + resource_drilled_output, resource_drilled = execute( + r, VolumeMetacheck + ) + resource.volumes[r] = resource_drilled_output + if level < 1 and resource_drilled: + check_associated_resources(resource_drilled, level + 1) + + # VPC + if ( + hasattr(resource, "vpcs") + and resource.vpcs + and self.resource_type != "AwsEc2Vpc" + ): + from lib.context.resources.AwsEc2Vpc import Metacheck as VpcMetacheck + + for r, v in list(resource.vpcs.items()): + resource_drilled_output, resource_drilled = execute(r, VpcMetacheck) + resource.vpcs[r] = resource_drilled_output + if level < 1 and 
resource_drilled: + check_associated_resources(resource_drilled, level + 1) + + # Subnets + if ( + hasattr(resource, "subnets") + and resource.subnets + and self.resource_type != "AwsEc2Subnet" + ): + from lib.context.resources.AwsEc2Subnet import ( + Metacheck as SubnetMetacheck, + ) - execute(self.subnets, SubnetMetacheck) + for r, v in list(resource.subnets.items()): + resource_drilled_output, resource_drilled = execute( + r, SubnetMetacheck + ) + resource.subnets[r] = resource_drilled_output + if level < 1 and resource_drilled: + check_associated_resources(resource_drilled, level + 1) + + # Route Tables + if ( + hasattr(resource, "route_tables") + and resource.route_tables + and self.resource_type != "AwsEc2RouteTable" + ): + from lib.context.resources.AwsEc2RouteTable import ( + Metacheck as RouteTableMetacheck, + ) - # Route Tables - if hasattr(self, "route_tables") and self.route_tables: - from lib.context.resources.AwsEc2RouteTable import ( - Metacheck as RouteTableMetacheck, - ) + for r, v in list(resource.route_tables.items()): + resource_drilled_output, resource_drilled = execute( + r, RouteTableMetacheck + ) + resource.route_tables[r] = resource_drilled_output + if level < 1 and resource_drilled: + check_associated_resources(resource_drilled, level + 1) + + # Api Gateway V2 Api + if ( + hasattr(resource, "api_gwv2_apis") + and resource.api_gwv2_apis + and self.resource_type != "AwsApiGatewayV2Api" + ): + from lib.context.resources.AwsApiGatewayV2Api import ( + Metacheck as ApiGatewayV2ApiMetacheck, + ) - execute(self.route_tables, RouteTableMetacheck) + for r, v in list(resource.api_gwv2_apis.items()): + resource_drilled_output, resource_drilled = execute( + r, ApiGatewayV2ApiMetacheck + ) + resource.api_gwv2_apis[r] = resource_drilled_output + if level < 1 and resource_drilled: + check_associated_resources(resource_drilled, level + 1) + + # Instances + if ( + hasattr(resource, "instances") + and resource.instances + and self.resource_type != "AwsEc2Instance" + ): + from lib.context.resources.AwsEc2Instance import ( + Metacheck as AwsEc2InstanceMetacheck, + ) - # Api Gateway V2 Api - if hasattr(self, "api_gwv2_apis") and self.api_gwv2_apis: - from lib.context.resources.AwsApiGatewayV2Api import ( - Metacheck as ApiGatewayV2ApiMetacheck, - ) + for r, v in list(resource.instances.items()): + resource_drilled_output, resource_drilled = execute( + r, AwsEc2InstanceMetacheck + ) + resource.instances[r] = resource_drilled_output + if level < 1 and resource_drilled: + check_associated_resources(resource_drilled, level + 1) - execute(self.api_gwv2_apis, ApiGatewayV2ApiMetacheck) + check_associated_resources(self, 0) def output_checks_drilled(self): mh_values_checks = {} diff --git a/lib/findings.py b/lib/findings.py index 9d29992..659869f 100644 --- a/lib/findings.py +++ b/lib/findings.py @@ -1,5 +1,206 @@ +from concurrent.futures import CancelledError, ThreadPoolExecutor, as_completed +from threading import Lock + +from alive_progress import alive_bar + from lib.context.context import Context -from lib.securityhub import parse_finding +from lib.helpers import confirm_choice, print_table +from lib.impact.impact import Impact +from lib.securityhub import SecurityHub, parse_finding +from lib.statistics import generate_statistics + + +def update_findings( + logger, + mh_findings, + update, + sh_account, + sh_role, + sh_region, + update_filters, + sh_profile, + actions_confirmation, +): + sh = SecurityHub(logger, sh_region, sh_account, sh_role, sh_profile) + if confirm_choice( + "Are you 
sure you want to update all findings?", actions_confirmation + ): + update_multiple = sh.update_findings_workflow(mh_findings, update_filters) + update_multiple_ProcessedFinding = [] + update_multiple_UnprocessedFindings = [] + for update in update_multiple: + for ProcessedFinding in update["ProcessedFindings"]: + logger.info("Updated Finding : " + ProcessedFinding["Id"]) + update_multiple_ProcessedFinding.append(ProcessedFinding) + for UnprocessedFinding in update["UnprocessedFindings"]: + logger.error( + "Error Updating Finding: " + + UnprocessedFinding["FindingIdentifier"]["Id"] + + " Error: " + + UnprocessedFinding["ErrorMessage"] + ) + update_multiple_UnprocessedFindings.append(UnprocessedFinding) + return update_multiple_ProcessedFinding, update_multiple_UnprocessedFindings + return [], [] + + +def enrich_findings( + logger, + mh_findings, + sh_account, + sh_role, + sh_region, + sh_profile, + actions_confirmation, +): + sh = SecurityHub(logger, sh_region, sh_account, sh_role, sh_profile) + if confirm_choice( + "Are you sure you want to enrich all findings?", actions_confirmation + ): + update_multiple = sh.update_findings_meta(mh_findings) + update_multiple_ProcessedFinding = [] + update_multiple_UnprocessedFindings = [] + for update in update_multiple: + for ProcessedFinding in update["ProcessedFindings"]: + logger.info("Updated Finding : " + ProcessedFinding["Id"]) + update_multiple_ProcessedFinding.append(ProcessedFinding) + for UnprocessedFinding in update["UnprocessedFindings"]: + logger.error( + "Error Updating Finding: " + + UnprocessedFinding["FindingIdentifier"]["Id"] + + " Error: " + + UnprocessedFinding["ErrorMessage"] + ) + update_multiple_UnprocessedFindings.append(UnprocessedFinding) + return update_multiple_ProcessedFinding, update_multiple_UnprocessedFindings + return [], [] + + +def generate_findings( + logger, + sh_filters, + sh_region, + sh_account, + sh_profile, + sh_role, + context, + mh_role, + mh_filters_config, + mh_filters_tags, + inputs, + asff_findings, + banners, +): + mh_findings = {} + mh_findings_not_matched_findings = {} + mh_findings_short = {} + mh_inventory = [] + AwsAccountData = {} + + # We keep a dictionary to avoid to process the same resource more than once + cached_associated_resources = {} + + findings = [] + if "file-asff" in inputs and asff_findings: + findings.extend(asff_findings) + print_table("Input ASFF findings found: ", len(asff_findings), banners=banners) + if "securityhub" in inputs: + sh = SecurityHub(logger, sh_region, sh_account, sh_role, sh_profile) + sh_findings = sh.get_findings(sh_filters) + findings.extend(sh_findings) + print_table("Security Hub findings found: ", len(sh_findings), banners=banners) + + resource_locks = {} + + def process_finding(finding): + # Get the resource_arn from the finding + resource_arn, finding_parsed = parse_finding(finding) + # Get the lock for this resource + # To Do: If more than one finding for the same account, account_context could execute more than once for the same account + # Split the findings by account and execute the account_context only once per account + lock = resource_locks.get(resource_arn) + + # If the lock does not exist, create it + if lock is None: + lock = Lock() + resource_locks[resource_arn] = lock + + # Acquire the lock for this resource + with lock: + # Now process the finding + return evaluate_finding( + logger, + finding, + mh_findings, + mh_findings_not_matched_findings, + mh_inventory, + mh_findings_short, + AwsAccountData, + mh_role, + mh_filters_config, + 
mh_filters_tags,
+                context,
+                cached_associated_resources,
+            )
+
+    with alive_bar(title="-> Analyzing findings...", total=len(findings)) as bar:
+        try:
+            executor = (
+                ThreadPoolExecutor()
+            )  # create executor outside of context manager
+            # Create future tasks
+            futures = {
+                executor.submit(process_finding, finding) for finding in findings
+            }
+
+            try:
+                # Process futures as they complete
+                for future in as_completed(futures):
+                    (
+                        mh_findings,
+                        mh_findings_not_matched_findings,
+                        mh_findings_short,
+                        mh_inventory,
+                        AwsAccountData,
+                    ) = future.result()
+                    bar()
+
+            except KeyboardInterrupt:
+                print(
+                    "Keyboard interrupt detected, shutting down all tasks, please wait..."
+                )
+                for future in futures:
+                    future.cancel()  # cancel each future
+
+                # Wait for all futures to be cancelled
+                for future in as_completed(futures):
+                    try:
+                        future.result()  # this will raise a CancelledError if the future was cancelled
+                    except CancelledError:
+                        pass
+
+        except KeyboardInterrupt:
+            print("Keyboard interrupt detected during shutdown. Exiting...")
+        finally:
+            executor.shutdown(
+                wait=False
+            )  # shutdown executor without waiting for all threads to finish
+
+    mh_statistics = generate_statistics(mh_findings)
+
+    # Add Impact
+    imp = Impact(logger)
+    for resource_arn, resource_values in mh_findings.items():
+        impact_checks = imp.generate_impact_checks(resource_arn, resource_values)
+        mh_findings[resource_arn]["impact"] = mh_findings_short[resource_arn][
+            "impact"
+        ] = impact_checks
+    for resource_arn, resource_values in mh_findings.items():
+        impact_scoring = imp.generate_impact_scoring(resource_arn, resource_values)
+        mh_findings[resource_arn]["impact"]["score"] = mh_findings_short[resource_arn][
+            "impact"
+        ]["score"] = impact_scoring
+    return mh_findings, mh_findings_short, mh_inventory, mh_statistics


 def evaluate_finding(
@@ -14,6 +215,7 @@ def evaluate_finding(
     mh_filters_config,
     mh_filters_tags,
     context_options,
+    cached_associated_resources,
 ):
     mh_matched = False
     resource_arn, finding_parsed = parse_finding(finding)
@@ -31,9 +233,19 @@ def evaluate_finding(
     elif resource_arn in mh_findings_not_matched_findings:
         mh_matched = False
     elif context_options:
-        context = Context(logger, finding, mh_filters_config, mh_filters_tags, mh_role)
+        context = Context(
+            logger,
+            finding,
+            mh_filters_config,
+            mh_filters_tags,
+            mh_role,
+            cached_associated_resources,
+        )
         if "config" in context_options:
             mh_config, mh_checks_matched = context.get_context_config()
+            # Get and cache the associations for this resource
+            if mh_config:
+                cached_associated_resources.update(get_associations(mh_config))
         else:
             mh_config = False
             mh_checks_matched = True
@@ -88,22 +300,23 @@ def evaluate_finding(
     mh_findings[resource_arn]["AwsAccountId"] = mh_findings_short[resource_arn][
         "AwsAccountId"
     ] = finding["AwsAccountId"]
-    # Add Context
-    if mh_config:
-        mh_findings[resource_arn].update(mh_config)
-        mh_findings_short[resource_arn].update(mh_config)
-    else:
-        mh_findings[resource_arn]["config"] = False
-        mh_findings_short[resource_arn]["config"] = False
-    mh_findings[resource_arn]["tags"] = mh_findings_short[resource_arn][
-        "tags"
-    ] = mh_tags
-    mh_findings[resource_arn]["account"] = mh_findings_short[resource_arn][
-        "account"
-    ] = mh_account
-    mh_findings[resource_arn]["cloudtrail"] = mh_findings_short[resource_arn][
-        "cloudtrail"
-    ] = mh_trails
+    if context_options:
+        # Add Context
+        if mh_config:
+            mh_findings[resource_arn].update(mh_config)
+            mh_findings_short[resource_arn].update(mh_config)
+        else:
+            
mh_findings[resource_arn]["config"] = False + mh_findings_short[resource_arn]["config"] = False + mh_findings[resource_arn]["tags"] = mh_findings_short[resource_arn][ + "tags" + ] = mh_tags + mh_findings[resource_arn]["account"] = mh_findings_short[resource_arn][ + "account" + ] = mh_account + mh_findings[resource_arn]["cloudtrail"] = mh_findings_short[ + resource_arn + ]["cloudtrail"] = mh_trails # Add Findings mh_findings_short[resource_arn]["findings"].append( @@ -118,3 +331,24 @@ def evaluate_finding( mh_inventory, AwsAccountData, ) + + +# From each resource, get the associations, so we can cache them and avoid to get them again +def get_associations(resource): + associations_all = {} + + def get_associations_recursively(dictionary, parent_key=""): + for key, value in dictionary.items(): + if isinstance(value, dict): + if key == "associations": + for atype, associations in value.items(): + if isinstance(associations, dict): + for association, association_values in associations.items(): + if association_values: + associations_all[association] = association_values + get_associations_recursively( + value, f"{parent_key}.{key}" if parent_key else key + ) + + get_associations_recursively(resource) + return associations_all diff --git a/lib/main.py b/lib/main.py index b2425e0..e8c4eb4 100755 --- a/lib/main.py +++ b/lib/main.py @@ -1,18 +1,13 @@ import json -from concurrent.futures import CancelledError, ThreadPoolExecutor, as_completed from sys import argv, exit -from threading import Lock -from time import strftime -from alive_progress import alive_bar from rich.columns import Columns from rich.console import Console from lib.AwsHelpers import get_account_alias, get_account_id, get_region from lib.config.configuration import sh_default_filters -from lib.findings import evaluate_finding +from lib.findings import enrich_findings, generate_findings, update_findings from lib.helpers import ( - confirm_choice, generate_rich, get_logger, get_parser, @@ -22,202 +17,7 @@ print_title_line, test_python_version, ) -from lib.impact.impact import Impact from lib.outputs import generate_outputs -from lib.securityhub import SecurityHub, parse_finding -from lib.statistics import generate_statistics - -OUTPUT_DIR = "outputs/" -TIMESTRF = strftime("%Y%m%d-%H%M%S") - - -def generate_findings( - logger, - sh_filters, - sh_region, - sh_account, - sh_profile, - sh_role, - context, - mh_role, - mh_filters_config, - mh_filters_tags, - inputs, - asff_findings, - banners, -): - mh_findings = {} - mh_findings_not_matched_findings = {} - mh_findings_short = {} - mh_inventory = [] - AwsAccountData = {} - - findings = [] - if "file-asff" in inputs and asff_findings: - findings.extend(asff_findings) - print_table("Input ASFF findings found: ", len(asff_findings), banners=banners) - if "securityhub" in inputs: - sh = SecurityHub(logger, sh_region, sh_account, sh_role, sh_profile) - sh_findings = sh.get_findings(sh_filters) - findings.extend(sh_findings) - print_table("Security Hub findings found: ", len(sh_findings), banners=banners) - - resource_locks = {} - - def process_finding(finding): - # Get the resource_arn from the finding - resource_arn, finding_parsed = parse_finding(finding) - # Get the lock for this resource - # To Do: If more than one finding for the same account, account_context could execute more than once for the same account - # Split the findings by account and execute the account_context only once per account - lock = resource_locks.get(resource_arn) - - # If the lock does not exist, create it - if 
lock is None: - lock = Lock() - resource_locks[resource_arn] = lock - - # Acquire the lock for this resource - with lock: - # Now process the finding - return evaluate_finding( - logger, - finding, - mh_findings, - mh_findings_not_matched_findings, - mh_inventory, - mh_findings_short, - AwsAccountData, - mh_role, - mh_filters_config, - mh_filters_tags, - context, - ) - - with alive_bar(title="-> Analizing findings...", total=len(findings)) as bar: - try: - executor = ( - ThreadPoolExecutor() - ) # create executor outside of context manager - # Create future tasks - futures = { - executor.submit(process_finding, finding) for finding in findings - } - - try: - # Process futures as they complete - for future in as_completed(futures): - ( - mh_findings, - mh_findings_not_matched_findings, - mh_findings_short, - mh_inventory, - AwsAccountData, - ) = future.result() - bar() - - except KeyboardInterrupt: - print( - "Keyboard interrupt detected, shutting down all tasks, please wait..." - ) - for future in futures: - future.cancel() # cancel each future - - # Wait for all futures to be cancelled - for future in as_completed(futures): - try: - future.result() # this will raise a CancelledError if the future was cancelled - except CancelledError: - pass - - except KeyboardInterrupt: - print("Keyboard interrupt detected during shutdown. Exiting...") - finally: - executor.shutdown( - wait=False - ) # shutdown executor without waiting for all threads to finish - - mh_statistics = generate_statistics(mh_findings) - - # Add Impact - imp = Impact(logger) - for resource_arn, resource_values in mh_findings.items(): - impact_checks = imp.generate_impact_checks(resource_arn, resource_values) - mh_findings[resource_arn]["impact"] = mh_findings_short[resource_arn][ - "impact" - ] = impact_checks - for resource_arn, resource_values in mh_findings.items(): - impact_scoring = imp.generate_impact_scoring(resource_arn, resource_values) - mh_findings[resource_arn]["impact"]["score"] = mh_findings_short[resource_arn][ - "impact" - ]["score"] = impact_scoring - return mh_findings, mh_findings_short, mh_inventory, mh_statistics - - -def update_findings( - logger, - mh_findings, - update, - sh_account, - sh_role, - sh_region, - update_filters, - sh_profile, - actions_confirmation, -): - sh = SecurityHub(logger, sh_region, sh_account, sh_role, sh_profile) - if confirm_choice( - "Are you sure you want to update all findings?", actions_confirmation - ): - update_multiple = sh.update_findings_workflow(mh_findings, update_filters) - update_multiple_ProcessedFinding = [] - update_multiple_UnprocessedFindings = [] - for update in update_multiple: - for ProcessedFinding in update["ProcessedFindings"]: - logger.info("Updated Finding : " + ProcessedFinding["Id"]) - update_multiple_ProcessedFinding.append(ProcessedFinding) - for UnprocessedFinding in update["UnprocessedFindings"]: - logger.error( - "Error Updating Finding: " - + UnprocessedFinding["FindingIdentifier"]["Id"] - + " Error: " - + UnprocessedFinding["ErrorMessage"] - ) - update_multiple_UnprocessedFindings.append(UnprocessedFinding) - return update_multiple_ProcessedFinding, update_multiple_UnprocessedFindings - return [], [] - - -def enrich_findings( - logger, - mh_findings, - sh_account, - sh_role, - sh_region, - sh_profile, - actions_confirmation, -): - sh = SecurityHub(logger, sh_region, sh_account, sh_role, sh_profile) - if confirm_choice( - "Are you sure you want to enrich all findings?", actions_confirmation - ): - update_multiple = 
sh.update_findings_meta(mh_findings) - update_multiple_ProcessedFinding = [] - update_multiple_UnprocessedFindings = [] - for update in update_multiple: - for ProcessedFinding in update["ProcessedFindings"]: - logger.info("Updated Finding : " + ProcessedFinding["Id"]) - update_multiple_ProcessedFinding.append(ProcessedFinding) - for UnprocessedFinding in update["UnprocessedFindings"]: - logger.error( - "Error Updating Finding: " - + UnprocessedFinding["FindingIdentifier"]["Id"] - + " Error: " - + UnprocessedFinding["ErrorMessage"] - ) - update_multiple_UnprocessedFindings.append(UnprocessedFinding) - return update_multiple_ProcessedFinding, update_multiple_UnprocessedFindings - return [], [] def count_mh_findings(mh_findings): @@ -432,7 +232,7 @@ def main(args): print_table("Log Level: ", str(args.log_level), banners=banners) # Generate Findings - print_title_line("Generating Findings", banners=banners) + print_title_line("Reading Findings", banners=banners) ( mh_findings, mh_findings_short, diff --git a/lib/outputs.py b/lib/outputs.py index f7761d0..ab4e6b1 100644 --- a/lib/outputs.py +++ b/lib/outputs.py @@ -5,16 +5,16 @@ import jinja2 import xlsxwriter +from lib.config.configuration import outputs_dir, outputs_time_str from lib.helpers import print_table -OUTPUT_DIR = "outputs/" -TIMESTRF = strftime("%Y%m%d-%H%M%S") +TIMESTRF = strftime(outputs_time_str) def generate_output_json( mh_findings_short, mh_findings, mh_inventory, mh_statistics, json_mode, args ): - WRITE_FILE = f"{OUTPUT_DIR}metahub-{json_mode}-{TIMESTRF}.json" + WRITE_FILE = f"{outputs_dir}metahub-{json_mode}-{TIMESTRF}.json" with open(WRITE_FILE, "w", encoding="utf-8") as f: json.dump( { @@ -32,7 +32,7 @@ def generate_output_json( def generate_output_csv( output, config_columns, tag_columns, account_columns, impact_columns, args ): - WRITE_FILE = f"{OUTPUT_DIR}metahub-{TIMESTRF}.csv" + WRITE_FILE = f"{outputs_dir}metahub-{TIMESTRF}.csv" with open(WRITE_FILE, "w", encoding="utf-8", newline="") as output_file: colums = [ "Resource ID", @@ -110,7 +110,7 @@ def generate_output_csv( def generate_output_xlsx( output, config_columns, tag_columns, account_columns, impact_columns, args ): - WRITE_FILE = f"{OUTPUT_DIR}metahub-{TIMESTRF}.xlsx" + WRITE_FILE = f"{outputs_dir}metahub-{TIMESTRF}.xlsx" # Create a workbook and add a worksheet workbook = xlsxwriter.Workbook(WRITE_FILE) worksheet = workbook.add_worksheet("findings") @@ -223,7 +223,7 @@ def generate_output_html( impact_columns, args, ): - WRITE_FILE = f"{OUTPUT_DIR}metahub-{TIMESTRF}.html" + WRITE_FILE = f"{outputs_dir}metahub-{TIMESTRF}.html" templateLoader = jinja2.FileSystemLoader(searchpath="./") templateEnv = jinja2.Environment(loader=templateLoader, autoescape=True) TEMPLATE_FILE = "lib/html/template.html"
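The outputs.py hunks above read these two values from lib/config/configuration.py instead of hard-coding them. A minimal standalone sketch of the resulting file-naming pattern (the two constants are inlined here for illustration; in the repo they are imported from lib.config.configuration):

```python
# Sketch of the configurable output naming introduced above.
from time import strftime

outputs_dir = "outputs/"            # mirrors lib/config/configuration.py
outputs_time_str = "%Y%m%d-%H%M%S"  # mirrors lib/config/configuration.py

# Same shape as the WRITE_FILE f-strings in lib/outputs.py
TIMESTRF = strftime(outputs_time_str)
write_file = f"{outputs_dir}metahub-json-short-{TIMESTRF}.json"
print(write_file)  # e.g. outputs/metahub-json-short-20240101-120000.json
```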