output documentation
mosajjal committed Apr 16, 2022
1 parent 65d0172 commit bd4c105
Showing 10 changed files with 261 additions and 11 deletions.
2 changes: 1 addition & 1 deletion docs/content/en/docs/Outputs/_index.md
@@ -3,7 +3,7 @@ title: "Outputs"
linkTitle: "Outputs"
weight: 4
description: >
Set up output(s) and metric gathering
Set up output(s) and gather metrics
---

`dnsmonster` follows a pipeline architecture for each individual packet. After capture and filtering, each processed packet arrives at the output dispatcher. The dispatcher sends a copy of the result to every output module that has been configured to produce output. For instance, if you specify `--stdoutOutputType=1` and `--fileOutputType=1 --fileOutputPath=/dev/stdout`, you'll see each processed record twice on your stdout: once from the stdout output type, and once from the file output type, which happens to point to the same destination (`/dev/stdout`).
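
The same duplicate-output behaviour can be reproduced from a configuration file; here is a minimal sketch, using the `[stdout_output]` and `[file_output]` sections documented later in this section:

```ini
[stdout_output]
; enable stdout output without any filters
StdoutOutputType = 1

[file_output]
; enable file output and point it at stdout as well,
; so every record appears twice on the terminal
FileOutputType = 1
FileOutputPath = /dev/stdout
```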
29 changes: 29 additions & 0 deletions docs/content/en/docs/Outputs/elastic.md
@@ -3,3 +3,32 @@ title: "Elasticsearch/OpenSearch"
linkTitle: "Elasticsearch/OpenSearch"
weight: 4
---

Elasticsearch is a full-text search engine used widely across many security tools. `dnsmonster` supports Elasticsearch 7.x out of the box; support for 6.x and 8.x has not been tested.

There is also a fork of Elasticsearch called OpenDistro, later renamed to OpenSearch. Both are compatible with Elasticsearch 7.10.x, so they should be supported as well.

## Configuration parameters

```ini
[elastic_output]
; What should be written to elastic. options:
; 0: Disable Output
; 1: Enable Output without any filters
; 2: Enable Output and apply skipdomains logic
; 3: Enable Output and apply allowdomains logic
; 4: Enable Output and apply both skip and allow domains logic
ElasticOutputType = 0

; elastic endpoint address, example: http://127.0.0.1:9200. Used if elasticOutputType is not none
ElasticOutputEndpoint =

; elastic index
ElasticOutputIndex = default

; Send data to Elastic in batch sizes
ElasticBatchSize = 1000

; Interval between sending results to Elastic if Batch size is not filled
ElasticBatchDelay = 1s
```
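
For example, a minimal sketch that ships every processed packet to a local Elasticsearch node under a dedicated index (the index name is just an illustration):

```ini
[elastic_output]
; send everything, without skip/allow domain filtering
ElasticOutputType = 1
ElasticOutputEndpoint = http://127.0.0.1:9200
ElasticOutputIndex = dnsmonster-dns
```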
33 changes: 33 additions & 0 deletions docs/content/en/docs/Outputs/influx.md
@@ -3,3 +3,36 @@ title: "InfluxDB"
linkTitle: "InfluxDB"
weight: 4
---

InfluxDB is a time-series database used to store logs and metrics at a high ingestion rate.


## Configuration options
```ini
[influx_output]
; What should be written to influx. options:
; 0: Disable Output
; 1: Enable Output without any filters
; 2: Enable Output and apply skipdomains logic
; 3: Enable Output and apply allowdomains logic
; 4: Enable Output and apply both skip and allow domains logic
InfluxOutputType = 0

; influx Server address, example: http://localhost:8086. Used if influxOutputType is not none
InfluxOutputServer =

; Influx Server Auth Token
InfluxOutputToken = dnsmonster

; Influx Server Bucket
InfluxOutputBucket = dnsmonster

; Influx Server Org
InfluxOutputOrg = dnsmonster

; Number of workers used to send data to Influx
InfluxOutputWorkers = 8

; Minimum capacity of the cache array used to send data to Influx
InfluxBatchSize = 1000
```
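
A minimal sketch pointing the Influx output at a local instance that uses token/bucket/org authentication; the token, bucket, and org values below are placeholders:

```ini
[influx_output]
; send everything, without skip/allow domain filtering
InfluxOutputType = 1
InfluxOutputServer = http://localhost:8086
InfluxOutputToken = my-secret-token
InfluxOutputBucket = dnsmonster
InfluxOutputOrg = my-org
```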
44 changes: 44 additions & 0 deletions docs/content/en/docs/Outputs/kafka.md
@@ -3,3 +3,47 @@ title: "Apache Kafka"
linkTitle: "Apache Kafka"
weight: 4
---

Kafka is possibly the most versatile output supported by `dnsmonster`. The Kafka output lets you connect to an endless list of supported sinks, and it is the recommended output module for enterprise designs since it offers fault tolerance and can sustain outages of the sink. `dnsmonster`'s Kafka output supports compression, TLS, and multiple brokers. To provide multiple brokers, specify the broker option multiple times, as shown in the sketch after the configuration block.

## Configuration Parameters
```ini
[kafka_output]
; What should be written to kafka. options:
; 0: Disable Output
; 1: Enable Output without any filters
; 2: Enable Output and apply skipdomains logic
; 3: Enable Output and apply allowdomains logic
; 4: Enable Output and apply both skip and allow domains logic
KafkaOutputType = 0

; kafka broker address(es), example: 127.0.0.1:9092. Used if kafkaOutputType is not none
KafkaOutputBroker =

; Kafka topic for logging
KafkaOutputTopic = dnsmonster

; Minimum capacity of the cache array used to send data to Kafka
KafkaBatchSize = 1000

; Kafka connection timeout in seconds
KafkaTimeout = 3

; Interval between sending results to Kafka if Batch size is not filled
KafkaBatchDelay = 1s

; Compress Kafka connection
KafkaCompress = false

; Use TLS for kafka connection
KafkaSecure = false

; Path of CA certificate that signs Kafka broker certificate
KafkaCACertificatePath =

; Path of TLS certificate to present to broker
KafkaTLSCertificatePath =

; Path of TLS certificate key
KafkaTLSKeyPath =
```
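
As an example of the multi-broker, TLS-enabled setup mentioned above, here is a minimal sketch; the broker addresses and certificate paths are placeholders, and it assumes that repeating `KafkaOutputBroker` appends each broker to the list, as the note above implies:

```ini
[kafka_output]
; send everything, without skip/allow domain filtering
KafkaOutputType = 1

; repeat the key once per broker
KafkaOutputBroker = kafka-1.example.internal:9093
KafkaOutputBroker = kafka-2.example.internal:9093

KafkaOutputTopic = dnsmonster

; enable compression and TLS towards the brokers
KafkaCompress = true
KafkaSecure = true
KafkaCACertificatePath = /etc/dnsmonster/ca.pem
KafkaTLSCertificatePath = /etc/dnsmonster/client.pem
KafkaTLSKeyPath = /etc/dnsmonster/client-key.pem
```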
24 changes: 24 additions & 0 deletions docs/content/en/docs/Outputs/metrics.md
@@ -3,3 +3,27 @@ title: "Metrics"
linkTitle: "Dnsmonster Metrics"
weight: 400
---

Each enabled input and output comes with a set of metrics in order to monitor performance and troubleshoot your running instance. `dnsmonster` uses the [go-metrics](https://github.com/rcrowley/go-metrics) library which makes it easy to register metrics on the fly and in a modular way.

Currently, three metric outputs are supported:
- `stderr`
- `statsd`
- `prometheus`

## Configuration parameters

```ini
[metric]
; Metric Endpoint Service. Choices: stderr, statsd, prometheus
MetricEndpointType = stderr

; Statsd endpoint. Example: 127.0.0.1:8125
MetricStatsdAgent =

; Prometheus Registry endpoint. Example: http://0.0.0.0:2112/metric
MetricPrometheusEndpoint =

; Interval between sending results to Metric Endpoint
MetricFlushInterval = 10s
```
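
For instance, a minimal sketch that exposes the metrics registry to Prometheus, using the example endpoint from the comments above:

```ini
[metric]
MetricEndpointType = prometheus
MetricPrometheusEndpoint = http://0.0.0.0:2112/metric
MetricFlushInterval = 10s
```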
35 changes: 35 additions & 0 deletions docs/content/en/docs/Outputs/sentinel.md
@@ -3,3 +3,38 @@ title: "Microsoft Sentinel"
linkTitle: "Microsoft Sentinel"
weight: 4
---

The Microsoft Sentinel output module is designed to send `dnsmonster` logs to Sentinel. In addition, the module can send logs to any Log Analytics workspace, whether or not it is connected to Sentinel.

Please take a look at Microsoft's official documentation to see how the Customer ID and Shared Key are obtained.


## Configuration Parameters
```ini
[sentinel_output]
; What should be written to Microsoft Sentinel. options:
; 0: Disable Output
; 1: Enable Output without any filters
; 2: Enable Output and apply skipdomains logic
; 3: Enable Output and apply allowdomains logic
; 4: Enable Output and apply both skip and allow domains logic
SentinelOutputType = 0

; Sentinel Shared Key, either the primary or secondary, can be found in Agents Management page under Log Analytics workspace
SentinelOutputSharedKey =

; Sentinel Customer Id. can be found in Agents Management page under Log Analytics workspace
SentinelOutputCustomerId =

; Sentinel Output LogType
SentinelOutputLogType = dnsmonster

; Sentinel Output Proxy in URI format
SentinelOutputProxy =

; Sentinel Batch Size
SentinelBatchSize = 100

; Interval between sending results to Sentinel if Batch size is not filled
SentinelBatchDelay = 1s
```
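
A minimal sketch sending unfiltered logs to a Log Analytics workspace; the Customer ID and Shared Key below are placeholders for the values from the Agents Management page:

```ini
[sentinel_output]
SentinelOutputType = 1
; placeholder Customer Id (workspace ID)
SentinelOutputCustomerId = 00000000-0000-0000-0000-000000000000
; placeholder for the primary or secondary Shared Key
SentinelOutputSharedKey = replace-with-base64-shared-key
SentinelOutputLogType = dnsmonster
```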
39 changes: 39 additions & 0 deletions docs/content/en/docs/Outputs/splunk.md
@@ -3,3 +3,42 @@ title: "Splunk HEC"
linkTitle: "Splunk HEC"
weight: 4
---

Splunk HTTP Event Collector (HEC) is a widely used Splunk component for ingesting raw and JSON data. `dnsmonster` uses JSON output to push logs into a Splunk index, and various configuration options are supported. You can also use multiple HEC endpoints for load balancing and fault tolerance across multiple indexers. Note that the token and other settings are shared across all endpoints.

## Configuration Parameters

```ini
[splunk_output]
; What should be written to HEC. options:
; 0: Disable Output
; 1: Enable Output without any filters
; 2: Enable Output and apply skipdomains logic
; 3: Enable Output and apply allowdomains logic
; 4: Enable Output and apply both skip and allow domains logic
SplunkOutputType = 0

; splunk endpoint address, example: http://127.0.0.1:8088. Used if splunkOutputType is not none, can be specified multiple times for load balancing and HA
SplunkOutputEndpoint =

; Splunk HEC Token
SplunkOutputToken = 00000000-0000-0000-0000-000000000000

; Splunk Output Index
SplunkOutputIndex = temp

; Splunk Output Proxy in URI format
SplunkOutputProxy =

; Splunk Output Source
SplunkOutputSource = dnsmonster

; Splunk Output Sourcetype
SplunkOutputSourceType = json

; Send data to HEC in batch sizes
SplunkBatchSize = 1000

; Interval between sending results to HEC if Batch size is not filled
SplunkBatchDelay = 1s
```
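
As an example of the multi-endpoint setup mentioned above, here is a minimal sketch; the endpoints and token are placeholders, and it assumes that repeating `SplunkOutputEndpoint` appends each endpoint to the list, as the comment above implies:

```ini
[splunk_output]
SplunkOutputType = 1

; repeat the key once per HEC endpoint
SplunkOutputEndpoint = https://splunk-hec-1.example.internal:8088
SplunkOutputEndpoint = https://splunk-hec-2.example.internal:8088

; the token and the rest of the settings are shared by all endpoints
SplunkOutputToken = 11111111-2222-3333-4444-555555555555
SplunkOutputIndex = dns
```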
56 changes: 56 additions & 0 deletions docs/content/en/docs/Outputs/stdout-file-syslog.md
@@ -0,0 +1,56 @@
---
title: "Stdout, syslog or Log File"
linkTitle: "Stdout, syslog, or Log File"
weight: 4
---

Stdout, syslog, and file are supported outputs for `dnsmonster` out of the box. They are especially useful if you have a SIEM agent reading the files as they come in. Note that `dnsmonster` does not handle log rotation or monitor disk capacity while writing to a file; you can use a tool like `logrotate` to clean up the log files. The signalling used for log rotation (SIGHUP) has not been tested with `dnsmonster`.

Currently, Syslog output is only supported on Linux.

## Configuration parameters

```ini
[file_output]
; What should be written to file. options:
; 0: Disable Output
; 1: Enable Output without any filters
; 2: Enable Output and apply skipdomains logic
; 3: Enable Output and apply allowdomains logic
; 4: Enable Output and apply both skip and allow domains logic
FileOutputType = 0

; Path to output file. Used if fileOutputType is not none
FileOutputPath =

; Output format for file. options:json,csv. note that the csv splits the datetime format into multiple fields
FileOutputFormat = json


[stdout_output]
; What should be written to stdout. options:
; 0: Disable Output
; 1: Enable Output without any filters
; 2: Enable Output and apply skipdomains logic
; 3: Enable Output and apply allowdomains logic
; 4: Enable Output and apply both skip and allow domains logic
StdoutOutputType = 0

; Output format for stdout. options:json,csv. note that the csv splits the datetime format into multiple fields
StdoutOutputFormat = json

; Number of workers
StdoutOutputWorkerCount = 8

[syslog_output]
; What should be written to Syslog server. options:
; 0: Disable Output
; 1: Enable Output without any filters
; 2: Enable Output and apply skipdomains logic
; 3: Enable Output and apply allowdomains logic
; 4: Enable Output and apply both skip and allow domains logic
SyslogOutputType = 0

; Syslog endpoint address, example: udp://127.0.0.1:514, tcp://127.0.0.1:514. Used if syslogOutputType is not none
SyslogOutputEndpoint = udp://127.0.0.1:514
```
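
For example, a minimal sketch that writes CSV logs to a file for a SIEM agent to pick up and also forwards records to a remote syslog collector over TCP; the file path and collector address are placeholders:

```ini
[file_output]
FileOutputType = 1
FileOutputPath = /var/log/dnsmonster/dns.csv
; csv splits the datetime into multiple fields, as noted above
FileOutputFormat = csv

[syslog_output]
SyslogOutputType = 1
SyslogOutputEndpoint = tcp://192.0.2.10:514
```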
5 changes: 0 additions & 5 deletions docs/content/en/docs/Outputs/stdout-file.md

This file was deleted.

5 changes: 0 additions & 5 deletions docs/content/en/docs/Outputs/syslog.md

This file was deleted.
