Merge pull request #963 from gravwell/dev
v5.4.7: Dev to Master
david-fritz-gravwell committed Apr 4, 2024
2 parents 267873e + 8c98209 commit 9b94344
Showing 19 changed files with 257 additions and 247 deletions.
304 changes: 96 additions & 208 deletions 3rdparty-frontend-licenses.txt

Large diffs are not rendered by default.

8 changes: 6 additions & 2 deletions _static/versions.json
@@ -1,10 +1,14 @@
[
{
"name": "v5.4.6 (latest)",
"version": "v5.4.6",
"name": "v5.4.7 (latest)",
"version": "v5.4.7",
"url": "https://docs.gravwell.io/",
"preferred": true
},
{
"version": "v5.4.6",
"url": "https://docs.gravwell.io/v5.4.6/"
},
{
"version": "v5.4.5",
"url": "https://docs.gravwell.io/v5.4.5/"
6 changes: 1 addition & 5 deletions alerts/alerts.md
@@ -41,11 +41,7 @@ If the owner of the *alert* does not have permission to ingest (either via the `

The "Max Events" configuration option is an important safeguard against accidentally sending yourself thousands of emails. Basically, when a dispatcher fires, Gravwell will only process *up to* Max Events results from the search. Suppose you have a scheduled search dispatcher which normally generates one or two results, which are emailed out via a flow consumer. If a new data source is added and the scheduled search suddenly returns thousands of results each time, you could be getting thousands of emails -- unless you've been cautious and set Max Events to a low value!

Gravwell sets a very low default for Max Events, because it is extremely easy to misjudge your dispatchers and generate too many events! The option can go up to 8192, which should be more than enough; if you need more events per dispatcher trigger, alerts might not be the right solution for that particular use case.

```{note}
Setting Max Events to 0 is equivalent to setting it to 8192, the max value
```
Gravwell recommends setting a low value (e.g. 16) for Max Events because it is extremely easy to misjudge your dispatchers and generate too many events! The option can go up to 8192, which should be more than enough; if you need more events per dispatcher trigger, alerts might not be the right solution for that particular use case.
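
In effect, Max Events is simply a cap on how many results from a dispatcher are handed to consumers. A minimal sketch of the idea (hypothetical Python, not Gravwell source; `handle` stands in for a flow consumer such as an email node):

```python
MAX_EVENTS = 16  # a deliberately low cap, per the recommendation above

def handle(event):
    print("would email:", event)  # stand-in for a flow consumer

def dispatch(results):
    # Only the first MAX_EVENTS results are processed, so a runaway
    # search cannot trigger thousands of downstream actions.
    for event in results[:MAX_EVENTS]:
        handle(event)

dispatch(list(range(5000)))  # a sudden flood of results: only 16 are handled
```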

### Search Retention

55 changes: 55 additions & 0 deletions changelog/5.4.7.md
@@ -0,0 +1,55 @@
# Changelog for version 5.4.7

## Released 04 April 2024

## Gravwell

### Additions

* Added a new option to the [HTTP Flow node](/flows/nodes/http) to allow interpretation of `Content-Type` and response casting.
* Added optional keys to the [Throttle Flow node](/flows/nodes/throttle) so that users can throttle based on a value in an Alert.
* Added the ability for non-admin users to mass delete Alerts that they own.
* Added icons for Alerts to improve sharing visibility.
* Added icons for Alerts to improve visibility when associating with Scheduled Searches and Flows.
* Added repair logic for indexer storage headers that have failed.
* Allowed duplicated structured data in syslog.
* Introduced a new option to sort certain charts by field or magnitude.


### Bug Fixes

* Fixed an issue with sorting Persistent Searches.
* Fixed an issue where stale searches would be displayed in Persistent Searches.
* Fixed an issue where filters were not persisted for Persistent Searches.
* Fixed an issue where replication would fail if there was a folder in the storage location that was not named as expected.
* Fixed an issue where word cracking requests in Query Studio would fail after the websocket encountered an error or closed.
* Fixed an issue where the association between a Scheduled Search and an Alert was lost after editing the Scheduled Search.
* Fixed an issue where Query Library would prompt the user to save even if no changes had been made.
* Fixed an issue where a Flow would incorrectly indicate it had been edited after saving.
* Fixed an issue where debugging a Flow would prompt the user unnecessarily.
* Fixed an issue where the cursor position sometimes appeared incorrect for text input.
* Fixed an issue where a websocket was still available after the Search capability was removed from a user.
* Fixed an issue where a non-admin user was able to make an Extractor global via an API request.
* Improved behavior in memory-limited environments.
* Improved error handling and logging for impersonation failures when debugging a Flow owned by another user.
* Improved performance on the search History page.
* Improved performance related to ingest reader timeouts when there is a large number of endpoints with dead connections.
* Improved startup time when many replicated shards are present.
* Improved shard restoration logic.
* Improved logging around Scheduled Search retries.
* Improved logging for indexer startup and shutdown.


## Ingesters

### Bug Fixes

* Fixed an issue where the Attach directive was missing some entries if those entries were cached.
* Improved logging when negotiating tags.


## Kits

### Bug Fixes

* Fixed a syntax error in the GlobalProtect dashboard for the PaloAlto kit.
3 changes: 2 additions & 1 deletion changelog/list.md
@@ -7,7 +7,7 @@
maxdepth: 1
caption: Current Release
---
5.4.6 <5.4.6>
5.4.7 <5.4.7>
```

## Previous Versions
@@ -18,6 +18,7 @@ maxdepth: 1
caption: Previous Releases
---
5.4.6 <5.4.6>
5.4.5 <5.4.5>
5.4.4 <5.4.4>
5.4.3 <5.4.3>
2 changes: 1 addition & 1 deletion conf.py
@@ -21,7 +21,7 @@
project = "Gravwell"
copyright = f"Gravwell, Inc. {date.today().year}"
author = "Gravwell, Inc."
release = "v5.4.6"
release = "v5.4.7"

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
10 changes: 10 additions & 0 deletions flows/nodes/http.md
@@ -11,6 +11,7 @@ This node can perform HTTP requests with considerable flexibility. It can send i
* `Max Result Size`: If set, the node will read no more than this many *bytes* in the HTTP response.
* `Skip TLS Verify`: If set to true, the node will not validate the TLS certificate of the HTTP server. Use with caution!
* `Request Headers`: May be used to set headers on the outgoing HTTP request, such as authentication tokens.
* `Decode Body`: If set to true, the node will attempt to decode the response body into a structure.

## Output

@@ -20,6 +21,7 @@ The HTTP node sets several items in the outgoing payload:
* `status`: A string representation of the HTTP status, e.g. "200 OK".
* `statuscode`: The numeric HTTP response code, e.g. 200.


## Example

This example runs a Gravwell query (`tag=gravwell limit 2`) and sends the results to a simple HTTP server listening for POST requests.
@@ -39,3 +41,11 @@ Body: <15>1 2022-03-15T23:27:16.315639Z web1.floren.lan webserver 0 - [gw@1 host
```

Note the headers; the `Accept-Encoding` and `User-Agent` headers are automatically set by the search agent.

### Example With Decode Body

The HTTP node can return data of any type. By default, the node stores the response data as an array of bytes; when working with binary data or fetching a remote file to use as a resource, this is the most useful form. In other cases, you may want the flow to natively process the HTTP response so it can handle the response data more easily, for example to display text data better or to access fields in a JSON response.

This example uses the HTTP node, with `Decode Body` set to true, to perform an HTTP GET request against a remote API that returns JSON data. Because `Decode Body` is enabled, the HTTP node inspects the `Content-Type` of the HTTP response and determines whether it can decode the response into an object. Since the remote endpoint returns a JSON object, the node decodes it into the payload as structured data, which can then be processed in the flow. Decoded response payloads make it easy to take the response from one API and use it to build a request to another without manually decoding the HTTP response.
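
Conceptually, `Decode Body` works something like the sketch below (hypothetical Python, not Gravwell's implementation; the actual set of content types the node can decode may differ):

```python
import json

def decode_body(content_type: str, body: bytes):
    # Pick a payload representation based on the response's Content-Type.
    if content_type.startswith("application/json"):
        return json.loads(body)                        # structured object
    if content_type.startswith("text/"):
        return body.decode("utf-8", errors="replace")  # plain string
    return body                                        # fall back to raw bytes

resp = decode_body("application/json", b'{"name": "web1", "count": 2}')
print(resp["count"])  # fields are directly usable downstream, no manual parsing
```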

![](http_api_example.png)
Binary file added flows/nodes/http_api_example.png
Binary file added flows/nodes/throttle-alert.png
Binary file added flows/nodes/throttle-flow1.png
Binary file added flows/nodes/throttle-flow2.png
Binary file added flows/nodes/throttle-keyed-output.png
Binary file added flows/nodes/throttle-keyed-output2.png
Binary file added flows/nodes/throttle-scheduled-search.png
58 changes: 57 additions & 1 deletion flows/nodes/throttle.md
@@ -2,15 +2,26 @@

The Throttle node allows you to control how often certain nodes within a flow are executed. For instance, suppose you want to run a query every minute to check for a particular event, but you don't want to send out an *email* about that event more than once an hour. Injecting a Throttle node in front of the Email node accomplishes that.

The Throttle node can operate in *basic* mode, where it allows execution at most once in a given duration, or in *keyed* mode, where it allows execution once per duration for **each different combination of key values**. Keyed mode is explained further below.

## Configuration

* `Duration`, required: how long to wait between executions. The node will block any downstream nodes from executing if it has been less than Duration since the last time it allowed execution.
* `Keys`, optional: a list of variables to use as keys. The contents of the specified variables will be checked at run time; execution will only be allowed to continue if that particular set of values has *not* been seen in the specified duration.

## Output

The node does not modify the payload.

## Example
## Throttling Modes

To operate in **basic mode**, set only the `Duration` config. In this mode, the Throttle node will allow execution to continue to "downstream" nodes once per duration. This can be useful when your flow runs frequently, perhaps every minute, to check for rare events, but you don't want to take *action* on those events more than once in a given time period.

To operate in **keyed mode**, set the `Duration` and then select one or more variables in the `Keys` config. At runtime, the Throttle node will read the values of each of those variables. It will then check when that particular combination was last seen. If the time delta exceeds `Duration`, execution is allowed to continue. This mode is especially useful when working with [Alerts](/alerts/alerts).
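
A minimal sketch of the idea behind both modes (hypothetical Python, not Gravwell's implementation): record the last execution time per key combination and allow execution only when `Duration` has elapsed; basic mode is simply the case with an empty key.

```python
import time

class Throttle:
    def __init__(self, duration_seconds: float):
        self.duration = duration_seconds
        self.last_seen = {}  # key combination -> last execution time

    def allow(self, *key_values) -> bool:
        key = tuple(key_values)  # empty tuple in basic (un-keyed) mode
        now = time.monotonic()
        last = self.last_seen.get(key)
        if last is not None and now - last < self.duration:
            return False  # this combination fired too recently
        self.last_seen[key] = now
        return True

throttle = Throttle(duration_seconds=3600)        # one hour
print(throttle.allow("Simple Relay", "uuid-1"))   # True: first time seen
print(throttle.allow("Simple Relay", "uuid-1"))   # False: within the hour
print(throttle.allow("File Follower", "uuid-2"))  # True: different combination
```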

## Examples

### Basic Throttling

This example runs a query which checks for ingesters disconnecting; if any are found, it generates a message listing them and sends that message to a Mattermost channel. The flow is configured to run *once a minute*, but to avoid spamming Mattermost we will only send a message *hourly* at most.

@@ -44,3 +55,48 @@ Bounced ingesters:
And the [Mattermost Message](mattermost) node sends the results to a Mattermost channel:

![](throttle-output.png)

### Keyed Throttling

The previous example has a weakness: it notifies no more than once per hour, meaning if one ingester goes offline shortly after another, we won't find out about that second ingester's problem until nearly an hour later.

We can make up for this deficiency by combining [Alerts](/alerts/alerts) and *keyed throttling*.

First, we create a scheduled search using a modified version of the query above, configuring it to run every minute over the last hour of data. This query returns one entry for each unique ingester that goes offline:

```gravwell
tag=gravwell syslog Hostname Message~"Ingest routine exiting" Structured.ingester Structured.ingesterversion Structured.ingesteruuid Structured.client
| alias Hostname indexer
| unique ingesteruuid
| table ingester ingesteruuid
```

![](throttle-scheduled-search.png)

Then we create an alert with that search as a dispatcher, and define a schema with some of the fields we care about:

![](throttle-alert.png)

Last, we create a flow which consumes the output of that alert. Recall that when an alert triggers a flow, it triggers it once per line from the scheduled search results, meaning this flow will run once for every ingester. We create the flow and associate it with the alert:

![](throttle-flow1.png)

Then lay out the nodes as seen below:

![](throttle-flow2.png)

Note that we have referenced `event.Contents.ingester` and `event.Contents.ingesteruuid` in the Keys configuration for the Throttle node. This tells it to allow execution once per hour for every combination seen in those two variables (which come from the alert we defined above).
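
In terms of the `Throttle` sketch from the Throttling Modes section, each flow run effectively performs a check like this (hypothetical Python; the payload values are invented for illustration):

```python
# Continuing the Throttle sketch from the Throttling Modes section above.
# (In the real node, the last-seen state persists across flow runs.)
throttle = Throttle(duration_seconds=3600)  # Duration: one hour

# One flow run per alert event; these payload values are invented.
payload = {"event": {"Contents": {
    "ingester": "Simple Relay",
    "ingesteruuid": "3f1c9a88-1234-4abc-9def-112233445566",
}}}

contents = payload["event"]["Contents"]
if throttle.allow(contents["ingester"], contents["ingesteruuid"]):
    print("notify Mattermost about", contents["ingester"])  # downstream nodes run
else:
    print("throttled: this ingester already alerted recently")
```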

The [Text Template](template) node generates a simple message using those same two variables:

```
Ingester {{ .event.Contents.ingester }} ( {{ .event.Contents.ingesteruuid }} ) went down!
```

And the [Mattermost Message](mattermost) node sends the results to a Mattermost channel:

![](throttle-keyed-output.png)

Note that there are two separate messages there, one for each ingester that went down. If another ingester goes down -- either a different type of ingester, like File Follower, or another Network Capture or Simple Relay ingester with a different UUID -- an alert will be sent to that effect immediately:

![](throttle-keyed-output2.png)
2 changes: 1 addition & 1 deletion ingesters/win_file_follow.md
@@ -14,7 +14,7 @@ Download the Gravwell Windows File Follower installer:

| Ingester Name | Installer | More Info |
| :------------ | :----------- | :-------- |
| Windows File Follower | <a data-bs-custom-class="hash-popover" href="https://update.gravwell.io/archive/5.4.6/installers/gravwell_file_follow_5.4.6.1.msi">Download <i class="fa-solid fa-download"></i></a>&nbsp;&nbsp;&nbsp;<a data-bs-custom-class="hash-popover" href="javascript:void(0);" data-bs-toggle="popover" data-bs-placement="bottom" data-bs-html="true" data-bs-content='<code class="docutils literal notranslate"><span class="pre">91557fd236f3ed84c138ad3b7d2c1efc2897c080ac37a395de9fdcabd78d41d5</span></code>'>(SHA256)</a> | [Documentation](/ingesters/win_file_follow) |
| Windows File Follower | <a data-bs-custom-class="hash-popover" href="https://update.gravwell.io/archive/5.4.7/installers/gravwell_file_follow_5.4.7.1.msi">Download <i class="fa-solid fa-download"></i></a>&nbsp;&nbsp;&nbsp;<a data-bs-custom-class="hash-popover" href="javascript:void(0);" data-bs-toggle="popover" data-bs-placement="bottom" data-bs-html="true" data-bs-content='<code class="docutils literal notranslate"><span class="pre">07073f53dd83774f5af410102ef95687e35ac934bb3544e13e23f5b2aa212d61</span></code>'>(SHA256)</a> | [Documentation](/ingesters/win_file_follow) |

The Gravwell Windows file follower is installed using a signed MSI package. Gravwell signs both the Windows executable and MSI installer with our private key pairs, but depending on download volumes, you may see a warning about the MSI being untrusted. This is due to the way Microsoft "weighs" files. Basically, as they see more people download and install a given package, it becomes more trustworthy. Don't worry though, we have a well-audited build pipeline and we sign every package.

2 changes: 1 addition & 1 deletion ingesters/winevent.md
@@ -49,7 +49,7 @@ Download the Gravwell Windows Events installer:

| Ingester Name | Installer | More Info |
| :------------ | :----------- | :-------- |
| Windows Events | <a data-bs-custom-class="hash-popover" href="https://update.gravwell.io/archive/5.4.6/installers/gravwell_win_events_5.4.6.1.msi">Download <i class="fa-solid fa-download"></i></a>&nbsp;&nbsp;&nbsp;<a data-bs-custom-class="hash-popover" href="javascript:void(0);" data-bs-toggle="popover" data-bs-placement="bottom" data-bs-html="true" data-bs-content='<code class="docutils literal notranslate"><span class="pre">b7608c2d1e4d7eea35e844fbf1d0507e8a7489e56d9b7c2caa24dcda445881bd</span></code>'>(SHA256)</a> | [Documentation](/ingesters/winevent) |
| Windows Events | <a data-bs-custom-class="hash-popover" href="https://update.gravwell.io/archive/5.4.7/installers/gravwell_win_events_5.4.7.1.msi">Download <i class="fa-solid fa-download"></i></a>&nbsp;&nbsp;&nbsp;<a data-bs-custom-class="hash-popover" href="javascript:void(0);" data-bs-toggle="popover" data-bs-placement="bottom" data-bs-html="true" data-bs-content='<code class="docutils literal notranslate"><span class="pre">aa6e58b7527a79f5d5d0346d2edc767aa876e00499fbcc44f2536beb7b1a2897</span></code>'>(SHA256)</a> | [Documentation](/ingesters/winevent) |

Run the .msi installation wizard to install the Gravwell events service. On first installation, the wizard will prompt you to configure the indexer endpoint and ingest secret. Subsequent installations and upgrades will detect an existing configuration file and will not prompt.

