
Commit

Fix a bunch of code tags
Jeffail committed Jan 14, 2021
1 parent cbe7a36 commit 01d9abb
Showing 29 changed files with 1,287 additions and 1,318 deletions.
2 changes: 1 addition & 1 deletion lib/input/broker.go
@@ -39,7 +39,7 @@ field to specify how many copies of the list of inputs should be created.
Adding more input types allows you to merge streams from multiple sources into
one. For example, reading from both RabbitMQ and Kafka:
` + "``` yaml" + `
` + "```yaml" + `
input:
broker:
copies: 1
6 changes: 3 additions & 3 deletions lib/output/aws_dynamodb.go
@@ -26,7 +26,7 @@ where the values are
batch. This allows you to populate string columns of an item by extracting
fields within the document payload or metadata like follows:
` + "``` yaml" + `
` + "```yaml" + `
string_columns:
id: ${!json("id")}
title: ${!json("body.title")}
@@ -40,15 +40,15 @@ converted into a map value. Both an empty path and the path ` + "`.`" + ` are
interpreted as the root of the document. This allows you to populate map columns
of an item like follows:
` + "``` yaml" + `
` + "```yaml" + `
json_map_columns:
user: path.to.user
whole_document: .
` + "```" + `
A column name can be empty:
` + "``` yaml" + `
` + "```yaml" + `
json_map_columns:
"": .
` + "```" + `
4 changes: 2 additions & 2 deletions lib/output/azure_table_storage.go
@@ -31,7 +31,7 @@ are marshaled and stored in the table, which will be created if it does not exis
The ` + "`object`" + ` and ` + "`array`" + ` fields are marshaled as strings. e.g.:
The JSON message:
` + "``` yaml" + `
` + "```json" + `
{
"foo": 55,
"bar": {
@@ -43,7 +43,7 @@ The JSON message:
` + "```" + `
Will store in the table the following properties:
` + "``` yaml" + `
` + "```yaml" + `
foo: '55'
bar: '{ "baz": "a", "bez": "b" }'
diz: '["a", "b"]'
2 changes: 1 addition & 1 deletion lib/output/cache.go
@@ -35,7 +35,7 @@ and can target any of the following types:
` + cachesList + `
The ` + "`target`" + ` field must point to a configured cache like follows:
` + "``` yaml" + `
` + "```yaml" + `
output:
cache:
target: foo
2 changes: 1 addition & 1 deletion lib/output/drop_on_error.go
@@ -36,7 +36,7 @@ For example, the following configuration attempts to send to a hypothetical
output type ` + "`foo`" + ` three times, but if all three attempts fail the
message is dropped entirely:
` + "``` yaml" + `
` + "```yaml" + `
output:
drop_on_error:
retry:
2 changes: 1 addition & 1 deletion lib/output/resource.go
@@ -23,7 +23,7 @@ allows you to run the same configured output resource in multiple places.`,
Resource outputs also have the advantage of name based metrics and logging. For
example, the config:
` + "``` yaml" + `
` + "```yaml" + `
output:
broker:
pattern: fan_out
2 changes: 1 addition & 1 deletion lib/output/sync_response.go
@@ -27,7 +27,7 @@ It is safe to combine this output with others using broker types. For example,
with the ` + "`http_server`" + ` input we could send the payload to a Kafka
topic and also send a modified payload back with:
` + "``` yaml" + `
` + "```yaml" + `
input:
http_server:
path: /post
2 changes: 1 addition & 1 deletion lib/output/try.go
@@ -24,7 +24,7 @@ targets have broken. For example, if you had an output type ` + "`http_client`"
but wished to reroute messages whenever the endpoint becomes unreachable you
could use this pattern:
` + "``` yaml" + `
` + "```yaml" + `
output:
try:
- http_client:
10 changes: 6 additions & 4 deletions lib/processor/dedupe.go
@@ -44,10 +44,12 @@ For example, the following config would deduplicate based on the concatenated
values of the metadata field ` + "`kafka_key`" + ` and the value of the JSON
path ` + "`id`" + ` within the message contents:
` + "``` yaml" + `
dedupe:
cache: foocache
key: ${! meta("kafka_key") }-${! json("id") }
` + "```yaml" + `
pipeline:
processors:
- dedupe:
cache: foocache
key: ${! meta("kafka_key") }-${! json("id") }
` + "```" + `
Caches should be configured as a resource, for more information check out the
2 changes: 1 addition & 1 deletion lib/processor/group_by_value.go
@@ -40,7 +40,7 @@ If we were consuming Kafka messages and needed to group them by their key,
archive the groups, and send them to S3 with the key as part of the path we
could achieve that with the following:
` + "``` yaml" + `
` + "```yaml" + `
pipeline:
processors:
- group_by_value:
28 changes: 16 additions & 12 deletions lib/processor/log.go
@@ -34,11 +34,13 @@ pipeline in order to expose the JSON field ` + "`foo.bar`" + ` as well as the
metadata field ` + "`kafka_partition`" + ` we can achieve that with the
following config:
` + "``` yaml" + `
for_each:
- log:
level: DEBUG
message: 'field: ${! json("foo.bar") }, part: ${! meta("kafka_partition") }'
` + "```yaml" + `
pipeline:
processors:
- for_each:
- log:
level: DEBUG
message: 'field: ${! json("foo.bar") }, part: ${! meta("kafka_partition") }'
` + "```" + `
The ` + "`level`" + ` field determines the log level of the printed events and
@@ -51,13 +53,15 @@ the service log is set to output as JSON. The field values are function
interpolated, meaning it's possible to output structured fields containing
message contents and metadata, e.g.:
` + "``` yaml" + `
log:
level: DEBUG
message: "foo"
fields:
id: '${! json("id") }'
kafka_topic: '${! meta("kafka_topic") }'
` + "```yaml" + `
pipeline:
processors:
- log:
level: DEBUG
message: "foo"
fields:
id: '${! json("id") }'
kafka_topic: '${! meta("kafka_topic") }'
` + "```" + ``,
FieldSpecs: docs.FieldSpecs{
docs.FieldCommon("level", "The log level to use.").HasOptions("FATAL", "ERROR", "WARN", "INFO", "DEBUG", "TRACE", "ALL"),
2 changes: 1 addition & 1 deletion lib/processor/resource.go
@@ -27,7 +27,7 @@ places.`,
Resource processors also have the advantage of name based metrics and logging.
For example, the config:
` + "``` yaml" + `
` + "```yaml" + `
pipeline:
processors:
- bloblang: |
10 changes: 6 additions & 4 deletions lib/processor/sleep.go
@@ -31,10 +31,12 @@ This processor executes once per message batch. In order to execute once for
each message of a batch place it within a
` + "[`for_each`](/docs/components/processors/for_each)" + ` processor:
` + "``` yaml" + `
for_each:
- sleep:
duration: ${! meta("sleep_for") }
` + "```yaml" + `
pipeline:
processors:
- for_each:
- sleep:
duration: ${! meta("sleep_for") }
` + "```" + ``,
FieldSpecs: docs.FieldSpecs{
docs.FieldCommon("duration", "The duration of time to sleep for each execution."),
2 changes: 1 addition & 1 deletion website/docs/components/inputs/broker.md
@@ -66,7 +66,7 @@ field to specify how many copies of the list of inputs should be created.
Adding more input types allows you to merge streams from multiple sources into
one. For example, reading from both RabbitMQ and Kafka:

``` yaml
```yaml
input:
broker:
copies: 1
6 changes: 3 additions & 3 deletions website/docs/components/outputs/aws_dynamodb.md
@@ -88,7 +88,7 @@ where the values are
batch. This allows you to populate string columns of an item by extracting
fields within the document payload or metadata like follows:

``` yaml
```yaml
string_columns:
id: ${!json("id")}
title: ${!json("body.title")}
@@ -102,15 +102,15 @@ converted into a map value. Both an empty path and the path `.` are
interpreted as the root of the document. This allows you to populate map columns
of an item like follows:

``` yaml
```yaml
json_map_columns:
user: path.to.user
whole_document: .
```

A column name can be empty:

``` yaml
```yaml
json_map_columns:
"": .
```
4 changes: 2 additions & 2 deletions website/docs/components/outputs/azure_table_storage.md
@@ -88,7 +88,7 @@ are marshaled and stored in the table, which will be created if it does not exis
The `object` and `array` fields are marshaled as strings. e.g.:

The JSON message:
``` yaml
```json
{
"foo": 55,
"bar": {
@@ -100,7 +100,7 @@ The JSON message:
```

Will store in the table the following properties:
``` yaml
```yaml
foo: '55'
bar: '{ "baz": "a", "bez": "b" }'
diz: '["a", "b"]'
2 changes: 1 addition & 1 deletion website/docs/components/outputs/cache.md
@@ -67,7 +67,7 @@ and can target any of the following types:

The `target` field must point to a configured cache like follows:

``` yaml
```yaml
output:
cache:
target: foo
2 changes: 1 addition & 1 deletion website/docs/components/outputs/drop_on_error.md
@@ -39,7 +39,7 @@ For example, the following configuration attempts to send to a hypothetical
output type `foo` three times, but if all three attempts fail the
message is dropped entirely:

``` yaml
```yaml
output:
drop_on_error:
retry:
2 changes: 1 addition & 1 deletion website/docs/components/outputs/resource.md
@@ -28,7 +28,7 @@ output:
Resource outputs also have the advantage of name based metrics and logging. For
example, the config:

``` yaml
```yaml
output:
broker:
pattern: fan_out
2 changes: 1 addition & 1 deletion website/docs/components/outputs/sync_response.md
@@ -34,7 +34,7 @@ It is safe to combine this output with others using broker types. For example,
with the `http_server` input we could send the payload to a Kafka
topic and also send a modified payload back with:

``` yaml
```yaml
input:
http_server:
path: /post
2 changes: 1 addition & 1 deletion website/docs/components/outputs/try.md
@@ -31,7 +31,7 @@ targets have broken. For example, if you had an output type `http_client`
but wished to reroute messages whenever the endpoint becomes unreachable you
could use this pattern:

``` yaml
```yaml
output:
try:
- http_client:
10 changes: 6 additions & 4 deletions website/docs/components/processors/dedupe.md
@@ -67,10 +67,12 @@ For example, the following config would deduplicate based on the concatenated
values of the metadata field `kafka_key` and the value of the JSON
path `id` within the message contents:

``` yaml
dedupe:
cache: foocache
key: ${! meta("kafka_key") }-${! json("id") }
```yaml
pipeline:
processors:
- dedupe:
cache: foocache
key: ${! meta("kafka_key") }-${! json("id") }
```

Caches should be configured as a resource, for more information check out the
2 changes: 1 addition & 1 deletion website/docs/components/processors/group_by_value.md
@@ -59,7 +59,7 @@ If we were consuming Kafka messages and needed to group them by their key,
archive the groups, and send them to S3 with the key as part of the path we
could achieve that with the following:

``` yaml
```yaml
pipeline:
processors:
- group_by_value:
28 changes: 16 additions & 12 deletions website/docs/components/processors/log.md
@@ -37,11 +37,13 @@ pipeline in order to expose the JSON field `foo.bar` as well as the
metadata field `kafka_partition` we can achieve that with the
following config:

``` yaml
for_each:
- log:
level: DEBUG
message: 'field: ${! json("foo.bar") }, part: ${! meta("kafka_partition") }'
```yaml
pipeline:
processors:
- for_each:
- log:
level: DEBUG
message: 'field: ${! json("foo.bar") }, part: ${! meta("kafka_partition") }'
```

The `level` field determines the log level of the printed events and
@@ -54,13 +56,15 @@ the service log is set to output as JSON. The field values are function
interpolated, meaning it's possible to output structured fields containing
message contents and metadata, e.g.:

``` yaml
log:
level: DEBUG
message: "foo"
fields:
id: '${! json("id") }'
kafka_topic: '${! meta("kafka_topic") }'
```yaml
pipeline:
processors:
- log:
level: DEBUG
message: "foo"
fields:
id: '${! json("id") }'
kafka_topic: '${! meta("kafka_topic") }'
```

## Fields
2 changes: 1 addition & 1 deletion website/docs/components/processors/resource.md
@@ -28,7 +28,7 @@ resource: ""
Resource processors also have the advantage of name based metrics and logging.
For example, the config:

``` yaml
```yaml
pipeline:
processors:
- bloblang: |
10 changes: 6 additions & 4 deletions website/docs/components/processors/sleep.md
@@ -30,10 +30,12 @@ This processor executes once per message batch. In order to execute once for
each message of a batch place it within a
[`for_each`](/docs/components/processors/for_each) processor:

``` yaml
for_each:
- sleep:
duration: ${! meta("sleep_for") }
```yaml
pipeline:
processors:
- for_each:
- sleep:
duration: ${! meta("sleep_for") }
```

## Fields
2 changes: 1 addition & 1 deletion website/docs/components/tracers/about.md
@@ -12,7 +12,7 @@ a work in progress and should eventually expand so that all inputs have a way of

A tracer config section looks like this:

``` yaml
```yaml
tracer:
jaeger:
agent_address: localhost:6831
