Merge d8b66bc into 41da6b5
ryandeivert committed Apr 3, 2020
2 parents 41da6b5 + d8b66bc commit 8d952d5
Showing 64 changed files with 572 additions and 879 deletions.
3 changes: 3 additions & 0 deletions conf/global.json
@@ -13,6 +13,9 @@
  ],
  "scheduled_query_locations": [
    "scheduled_queries"
+ ],
+ "publisher_locations": [
+   "publishers"
  ]
  },
"infrastructure": {
4 changes: 3 additions & 1 deletion conf/lambda.json
@@ -50,8 +50,10 @@
  "subnet_ids": []
  }
  },
- "athena_partition_refresh_config": {
+ "athena_partitioner_config": {
  "concurrency_limit": 10,
+ "memory": 128,
+ "timeout": 300,
  "file_format": null,
  "log_level": "info"
  },
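This hunk renames the ``athena_partition_refresh_config`` key to ``athena_partitioner_config``, so a ``conf/lambda.json`` written for an earlier release will still carry the legacy name. A minimal migration sketch (the helper and its key handling are illustrative, not part of StreamAlert):

```python
import json

LEGACY_KEY = 'athena_partition_refresh_config'
NEW_KEY = 'athena_partitioner_config'

def migrate_lambda_config(config):
    """Rename the legacy Athena partitioner key in-place, if present."""
    if LEGACY_KEY in config and NEW_KEY not in config:
        config[NEW_KEY] = config.pop(LEGACY_KEY)
    return config

# Example: a pre-3.2.0 style config fragment
cfg = json.loads('{"athena_partition_refresh_config": {"concurrency_limit": 10}}')
migrate_lambda_config(cfg)
print(list(cfg))  # → ['athena_partitioner_config']
```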
2 changes: 1 addition & 1 deletion docs/source/apps.rst
@@ -150,7 +150,7 @@ The recommended process is to deploy both the `apps` function and the `classifie

.. code-block:: bash
- python manage.py deploy --function classifier apps
+ python manage.py deploy --functions classifier apps
Authorizing the Slack App
2 changes: 1 addition & 1 deletion docs/source/architecture.rst
@@ -40,7 +40,7 @@ configured `outputs <outputs.html>`_. All alerts implicitly include a Firehose o
an S3 bucket that can be queried with Athena. Alerts will be retried indefinitely until they are
successfully delivered, at which point they will be removed from the DynamoDB table.

- 6. An "athena partition refresh" Lambda function runs periodically to onboard new StreamAlert data
+ 6. An Athena Partitioner Lambda function runs periodically to onboard new StreamAlert data
and alerts into their respective Athena databases for historical search.

Other StreamAlert components include DynamoDB tables and Lambda functions for optional rule
4 changes: 4 additions & 0 deletions docs/source/config-global.rst
@@ -69,6 +69,9 @@ Configuration
  ],
  "scheduled_query_locations": [
    "scheduled_queries"
+ ],
+ "publisher_locations": [
+   "publishers"
  ]
  }
}
@@ -82,6 +85,7 @@ Options
``matcher_locations`` Yes ``["matchers"]`` List of local paths where ``matchers`` are defined
``rule_locations`` Yes ``["rules"]`` List of local paths where ``rules`` are defined
``scheduled_query_locations`` Yes ``["scheduled_queries"]`` List of local paths where ``scheduled_queries`` are defined
+ ``publisher_locations`` Yes ``["publishers"]`` List of local paths where ``publishers`` are defined
============================= ============= ========================= ===============


24 changes: 12 additions & 12 deletions docs/source/deployment.rst
@@ -35,20 +35,20 @@ To deploy new changes for all AWS Lambda functions:

.. code-block:: bash
- python manage.py deploy --function all
+ python manage.py deploy
Optionally, to deploy changes for only a specific AWS Lambda function:

.. code-block:: bash
- python manage.py deploy --function alert
- python manage.py deploy --function alert_merger
- python manage.py deploy --function apps
- python manage.py deploy --function athena
- python manage.py deploy --function classifier
- python manage.py deploy --function rule
- python manage.py deploy --function rule_promo
- python manage.py deploy --function threat_intel_downloader
+ python manage.py deploy --functions alert
+ python manage.py deploy --functions alert_merger
+ python manage.py deploy --functions apps
+ python manage.py deploy --functions athena
+ python manage.py deploy --functions classifier
+ python manage.py deploy --functions rule
+ python manage.py deploy --functions rule_promo
+ python manage.py deploy --functions threat_intel_downloader
To apply infrastructure level changes (additional Kinesis Shards, new CloudTrails, etc), run:

@@ -95,8 +95,8 @@ to point to the previous version:

.. code-block:: bash
- python manage.py rollback --function rule
- python manage.py rollback --function alert
- python manage.py rollback --function all
+ python manage.py rollback --functions rule
+ python manage.py rollback --functions alert
+ python manage.py rollback
This is helpful to quickly revert changes to Lambda functions, e.g. if a bad rule was deployed.
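The renamed ``--functions`` flag is plural because it accepts one or more function names in a single invocation (as the ``classifier apps`` example elsewhere in this commit shows). A hedged ``argparse`` sketch of how such a flag behaves — not StreamAlert's actual parser code:

```python
from argparse import ArgumentParser

parser = ArgumentParser(prog='manage.py')
subparsers = parser.add_subparsers(dest='command')

deploy = subparsers.add_parser('deploy')
# nargs='+' collects one or more names; omitting the flag falls back to everything
deploy.add_argument('--functions', nargs='+', default=['all'])

args = parser.parse_args(['deploy', '--functions', 'classifier', 'apps'])
print(args.functions)  # → ['classifier', 'apps']
print(parser.parse_args(['deploy']).functions)  # → ['all']
```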
6 changes: 3 additions & 3 deletions docs/source/getting-started.rst
@@ -109,7 +109,7 @@ Deploy

.. code-block:: bash
- "athena_partition_refresh_config": {
+ "athena_partitioner_config": {
"concurrency_limit": 10,
"file_format": "parquet",
"log_level": "info"
@@ -237,7 +237,7 @@ alerts on any usage of the root AWS account. Change the rule decorator to:
python manage.py build
# Deploy a new version of all of the Lambda functions with the updated rule and config files
- python manage.py deploy --function all
+ python manage.py deploy
.. note:: Use ``build`` and ``deploy`` to apply any changes to StreamAlert's
configuration or Lambda functions, respectively. Some changes (like this example) require both.
@@ -284,7 +284,7 @@ dropdown on the left and preview the ``alerts`` table:
:target: _images/athena-alerts-search.png

(Here, my name prefix is ``testv2``.) If no records are returned, look for errors
- in the Athena Partition Refresh function or try invoking it directly.
+ in the Athena Partitioner function or try invoking it directly.

And there you have it! Ingested log data is parsed, classified, and scanned by the rules engine.
Any resulting alerts are delivered to your configured output(s) within a matter of minutes.
26 changes: 13 additions & 13 deletions docs/source/historical-search.rst
@@ -6,7 +6,7 @@ StreamAlert historical search feature is backed by Amazon S3 and `Athena <https:
By default, StreamAlert will send all alerts to S3 and those alerts will be searchable in Athena table. StreamAlert
users have option to enable historical search feature for data as well.

- As of StreamAlert v3.1.0, a new field, ``file_format``, has been added to ``athena_partition_refresh_config``
+ As of StreamAlert v3.1.0, a new field, ``file_format``, has been added to ``athena_partitioner_config``
in ``conf/lamba.json``, defaulting to ``null``. This field allows users to configure how the data processed
by the Classifier is stored in S3 bucket, either in ``parquet`` or ``json``.

@@ -39,7 +39,7 @@ The pipeline is:

#. StreamAlert creates an Athena Database, alerts kinesis Firehose and ``alerts`` table during initial deployment
#. Optionally create Firehose resources and Athena tables for historical data retention
- #. S3 events will be sent to an SQS that is mapped to the Athena Partition Refresh Lambda function
+ #. S3 events will be sent to an SQS that is mapped to the Athena Partitioner Lambda function
#. The Lambda function adds new partitions when there are new alerts or data saved in S3 bucket via Firehose
#. Alerts, and optionally data, are available for searching via Athena console or the Athena API
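Step 4 of the pipeline above — registering new partitions as objects land in S3 — can be sketched roughly as follows. The Hive-style ``dt=`` key layout and all names here are assumptions for illustration, not StreamAlert's exact partitioning scheme:

```python
import re

def partition_ddl(database, table, bucket, key):
    """Build an Athena ALTER TABLE statement for a Hive-style dt= partition."""
    match = re.search(r'dt=([0-9-]+)', key)
    if not match:
        raise ValueError(f'no dt= partition component in {key!r}')
    # Partition location is the key's parent "directory" in the bucket
    prefix = key.rsplit('/', 1)[0]
    return (
        f'ALTER TABLE {database}.{table} ADD IF NOT EXISTS '
        f"PARTITION (dt = '{match.group(1)}') LOCATION 's3://{bucket}/{prefix}/'"
    )

print(partition_ddl('streamalert', 'alerts', 'my-bucket',
                    'alerts/dt=2020-04-03-10/output.parquet'))
```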

@@ -50,30 +50,30 @@ Alerts Search
*************

* Review the settings for the :ref:`Alerts Firehose Configuration <alerts_firehose_configuration>` and
- the :ref:`Athena Partition Refresh<configure_athena_partition_refresh_lambda>` function. Note that
+ the :ref:`Athena Partitioner<configure_athena_partitioner_lambda>` function. Note that
the Athena database and alerts table are created automatically when you first deploy StreamAlert.
- * If the ``file_format`` value within the :ref:`Athena Partition Refresh<configure_athena_partition_refresh_lambda>`
+ * If the ``file_format`` value within the :ref:`Athena Partitioner<configure_athena_partitioner_lambda>`
function config is set to ``parquet``, you can run the ``MSCK REPAIR TABLE alerts`` command in
Athena to load all available partitions and then alerts can be searchable. Note, however, that the
``MSCK REPAIR`` command cannot load new partitions automatically.
* StreamAlert includes a Lambda function to automatically add new partitions for Athena tables when
- the data arrives in S3. See :ref:`configure_athena_partition_refresh_lambda`
+ the data arrives in S3. See :ref:`configure_athena_partitioner_lambda`

.. code-block:: bash
{
- "athena_partition_refresh_config": {
+ "athena_partitioner_config": {
"concurrency_limit": 10,
"file_format": "parquet",
"log_level": "info"
}
}
- * Deploy the Athena Partition Refresh Lambda function
+ * Deploy the Athena Partitioner Lambda function

.. code-block:: bash
- python manage.py deploy --function athena
+ python manage.py deploy --functions athena
* Search alerts in `Athena Console <https://console.aws.amazon.com/athena>`_

@@ -99,7 +99,7 @@ It is optional to store data in S3 bucket and available for search in Athena tab

.. code-block:: bash
- python manage.py deploy --function classifier
+ python manage.py deploy --functions classifier
* Search data `Athena Console <https://console.aws.amazon.com/athena>`_

@@ -109,7 +109,7 @@ It is optional to store data in S3 bucket and available for search in Athena tab
.. image:: ../images/athena-data-search.png


- .. _configure_athena_partition_refresh_lambda:
+ .. _configure_athena_partitioner_lambda:

*************************
Configure Lambda Settings
@@ -120,8 +120,8 @@ Open ``conf/lambda.json``, and fill in the following options:
=================================== ======== ==================== ===========
Key Required Default Description
----------------------------------- -------- -------------------- -----------
- ``enabled`` Yes ``true`` Enables/Disables the Athena Partition Refresh Lambda function
- ``enable_custom_metrics`` No ``false`` Enables/Disables logging of metrics for the Athena Partition Refresh Lambda function
+ ``enabled`` Yes ``true`` Enables/Disables the Athena Partitioner Lambda function
+ ``enable_custom_metrics`` No ``false`` Enables/Disables logging of metrics for the Athena Partitioner Lambda function
``log_level`` No ``info`` The log level for the Lambda function, can be either ``info`` or ``debug``. Debug will help with diagnosing errors with polling SQS or sending Athena queries.
``memory`` No ``128`` The amount of memory (in MB) allocated to the Lambda function
``timeout`` No ``60`` The maximum duration of the Lambda function (in seconds)
@@ -134,7 +134,7 @@ Key Required Default Descriptio
.. code-block:: json
{
- "athena_partition_refresh_config": {
+ "athena_partitioner_config": {
"log_level": "info",
"memory": 128,
"buckets": {
2 changes: 1 addition & 1 deletion docs/source/rule-promotion.rst
@@ -76,7 +76,7 @@ function code.

.. code-block:: bash
- python manage.py deploy --function rule_promo
+ python manage.py deploy --functions rule_promo
.. note::

2 changes: 1 addition & 1 deletion docs/source/rule-staging.rst
@@ -83,7 +83,7 @@ staged during a deploy. To allow for this, the Rules Engine can be deployed with

.. code-block:: bash
- python manage.py deploy --function rule --skip-rule-staging
+ python manage.py deploy --functions rule --skip-rule-staging
This will force all new rules to send to user-defined outputs immediately upon deploy, bypassing
the default staging period. Alternatively, the ``--stage-rules`` and ``--unstage-rules`` flags
12 changes: 10 additions & 2 deletions manage.py
@@ -27,8 +27,9 @@
import sys

  from streamalert import __version__ as version
+ from streamalert_cli.config import DEFAULT_CONFIG_PATH
  from streamalert_cli.runner import cli_runner, StreamAlertCLICommandRepository
- from streamalert_cli.utils import generate_subparser
+ from streamalert_cli.utils import DirectoryType, generate_subparser


def build_parser():
@@ -51,7 +52,6 @@ def build_parser():
{} [command] --help
"""

parser = ArgumentParser(
formatter_class=RawDescriptionHelpFormatter,
prog=__file__
@@ -71,6 +71,14 @@
action='store_true'
)

+ parser.add_argument(
+     '-c',
+     '--config-dir',
+     default=DEFAULT_CONFIG_PATH,
+     help='Path to directory containing configuration files',
+     type=DirectoryType()
+ )

# Dynamically generate subparsers, and create a 'commands' block for the prog description
command_block = []
subparsers = parser.add_subparsers(dest='command', required=True)
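The new ``--config-dir`` flag is validated with ``DirectoryType``, whose implementation is not shown in this diff. A plausible minimal sketch of such an ``argparse`` type callable (an assumption, not the actual ``streamalert_cli.utils`` code):

```python
import os
from argparse import ArgumentTypeError

class DirectoryType:
    """Argparse 'type' callable that accepts only paths to existing directories."""

    def __call__(self, value):
        if not os.path.isdir(value):
            raise ArgumentTypeError(f'{value!r} is not an existing directory')
        return value

# Mirrors the diff: parser.add_argument('-c', '--config-dir', type=DirectoryType())
print(DirectoryType()('.'))  # → .
```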
File renamed without changes.
7 changes: 4 additions & 3 deletions rules/matchers/matchers.py → matchers/default.py
@@ -4,7 +4,7 @@
is specific for the `prod` environment, we can define a matcher
and add it to our rules' `matchers` keyword argument:
- from rules.matchers import matchers
+ from matchers import default
@rule('root_logins', logs=['osquery:differential'], matchers=[matchers.prod],
outputs=['pagerduty:sample-integration'])
@@ -14,13 +14,16 @@
@rule('root_logins', logs=['osquery:differential'],
matchers=[matchers.prod, matchers.pci], outputs=['pagerduty:sample-integration'])
"""


class AwsGuardDutyMatcher:
"""A class contains matchers for AWS GuardDuty service"""

@classmethod
def guard_duty(cls, rec):
return rec['detail-type'] == 'GuardDuty Finding'


class OsqueryMatcher:
"""A class defines contains matchers for Osquery events"""

@@ -33,12 +36,10 @@ class OsqueryMatcher:
'runlevel'
}


@classmethod
def added(cls, rec):
return rec['action'] == 'added'


@classmethod
def user_login(cls, rec):
"""Capture user logins from the osquery last table
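With this change, matchers live as classmethods on namespace classes instead of module-level functions, but they remain plain callables that take a record and return a boolean. A quick behavioral sketch (the records are made up for illustration):

```python
class OsqueryMatcher:
    """Namespace class mirroring the structure introduced in this diff."""

    @classmethod
    def added(cls, rec):
        return rec['action'] == 'added'

# Classmethods are ordinary callables, so they can be passed in a rule's matchers list
print(OsqueryMatcher.added({'action': 'added'}))    # → True
print(OsqueryMatcher.added({'action': 'removed'}))  # → False
```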
2 changes: 1 addition & 1 deletion rules/community/cloudtrail/cloudtrail_aws_config.py
@@ -1,5 +1,5 @@
"""Alert on AWS Config"""
- from rules.matchers.matchers import AwsConfigMatcher
+ from matchers.default import AwsConfigMatcher
from streamalert.shared.rule import rule


2 changes: 1 addition & 1 deletion rules/community/guardduty/guard_duty_all.py
@@ -1,5 +1,5 @@
"""Alert on GuardDuty"""
- from rules.matchers.matchers import AwsGuardDutyMatcher
+ from matchers.default import AwsGuardDutyMatcher
from streamalert.shared.rule import rule


2 changes: 1 addition & 1 deletion rules/community/osquery/ssh_login_activity.py
@@ -1,5 +1,5 @@
"""Detect ssh login activity based on osquery last table"""
- from rules.matchers.matchers import OsqueryMatcher
+ from matchers.default import OsqueryMatcher
from streamalert.shared.rule import rule


2 changes: 1 addition & 1 deletion streamalert/__init__.py
@@ -1,2 +1,2 @@
"""StreamAlert version."""
- __version__ = '3.1.2'
+ __version__ = '3.2.0'
File renamed without changes.