Merge pull request #1526 from mihaitodor/rename-bloblang-to-mapping
Update docs to reference the mapping processor
Jeffail committed Oct 21, 2022
2 parents 52a24dc + 9a34510 commit d34e2a7
Showing 57 changed files with 149 additions and 146 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -19,7 +19,7 @@ input:
 
 pipeline:
   processors:
-    - bloblang: |
+    - mapping: |
         root.message = this
         root.meta.link_count = this.links.length()
         root.user.age = this.user.age.number()
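For context, the renamed processor drops into a complete config unchanged apart from the key itself. A minimal sketch (the `stdin`/`stdout` components here are illustrative and not part of this commit; the mapping body is taken from the README hunk above):

```yaml
input:
  stdin: {}

pipeline:
  processors:
    # Previously configured under the `bloblang` key; the Bloblang
    # mapping itself is unchanged, only the processor name moved.
    - mapping: |
        root.message = this
        root.meta.link_count = this.links.length()
        root.user.age = this.user.age.number()

output:
  stdout: {}
```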
2 changes: 1 addition & 1 deletion cmd/tools/benthos_docs_gen/cue_test/expected.yml
@@ -7,7 +7,7 @@ testCases:
     pipeline:
       processors:
         - label: sample_transform
-          bloblang: root = this.uppercase()
+          mapping: root = this.uppercase()
     output:
       switch:
         cases:
2 changes: 1 addition & 1 deletion cmd/tools/benthos_docs_gen/cue_test/test.cue
@@ -7,7 +7,7 @@ testCases:
 
   pipeline: processors: [{
     label: "sample_transform"
-    bloblang: "root = this.uppercase()"
+    mapping: "root = this.uppercase()"
   }]
 
   output: #Guarded & {
2 changes: 1 addition & 1 deletion config/test/deduplicate_by_batch.yaml
@@ -1,6 +1,6 @@
 pipeline:
   processors:
-    - bloblang: |
+    - mapping: |
         meta batch_tag = if batch_index() == 0 {
           nanoid(10)
         }
2 changes: 1 addition & 1 deletion config/test/unit_test_example.yaml
@@ -6,7 +6,7 @@ input:
 
 pipeline:
   processors:
-    - bloblang: 'root = "%vend".format(content().uppercase().string())'
+    - mapping: 'root = "%vend".format(content().uppercase().string())'
 
 output:
   aws_s3:
6 changes: 3 additions & 3 deletions internal/cli/test/docs/docs.go
@@ -53,15 +53,15 @@ It is also possible to target processors in a separate file by prefixing the tar
 		).HasDefault(""),
 		docs.FieldAnything(
 			"mocks",
-			"An optional map of processors to mock. Keys should contain either a label or a JSON pointer of a processor that should be mocked. Values should contain a processor definition, which will replace the mocked processor. Most of the time you'll want to use a `bloblang` processor here, and use it to create a result that emulates the target processor.",
+			"An optional map of processors to mock. Keys should contain either a label or a JSON pointer of a processor that should be mocked. Values should contain a processor definition, which will replace the mocked processor. Most of the time you'll want to use a [`mapping` processor][processors.mapping] here, and use it to create a result that emulates the target processor.",
 			map[string]any{
 				"get_foobar_api": map[string]any{
-					"bloblang": "root = content().string() + \" this is some mock content\"",
+					"mapping": "root = content().string() + \" this is some mock content\"",
 				},
 			},
 			map[string]any{
 				"/pipeline/processors/1": map[string]any{
-					"bloblang": "root = content().string() + \" this is some mock content\"",
+					"mapping": "root = content().string() + \" this is some mock content\"",
 				},
 			},
 		).Map().Optional(),
15 changes: 8 additions & 7 deletions internal/cli/test/docs/docs.md
@@ -32,7 +32,7 @@ input:
 
 pipeline:
   processors:
-    - bloblang: '"%vend".format(content().uppercase().string())'
+    - mapping: '"%vend".format(content().uppercase().string())'
 
 output:
   aws_s3:
@@ -255,30 +255,30 @@ Sometimes you'll want to write tests for a series of processors, where one or mo
 ```yaml
 pipeline:
   processors:
-    - bloblang: 'root = "simon says: " + content()'
+    - mapping: 'root = "simon says: " + content()'
     - label: get_foobar_api
       http:
         url: http://example.com/foobar
         verb: GET
-    - bloblang: 'root = content().uppercase()'
+    - mapping: 'root = content().uppercase()'
 ```
 
-Rather than create a fake service for the `http` processor to interact with we can define a mock in our test definition that replaces it with a `bloblang` processor. Mocks are configured as a map of labels that identify a processor to replace and the config to replace it with:
+Rather than create a fake service for the `http` processor to interact with we can define a mock in our test definition that replaces it with a [`mapping` processor][processors.mapping]. Mocks are configured as a map of labels that identify a processor to replace and the config to replace it with:
 
 ```yaml
 tests:
   - name: mocks the http proc
     target_processors: '/pipeline/processors'
     mocks:
       get_foobar_api:
-        bloblang: 'root = content().string() + " this is some mock content"'
+        mapping: 'root = content().string() + " this is some mock content"'
     input_batch:
       - content: "hello world"
     output_batches:
       - - content_equals: "SIMON SAYS: HELLO WORLD THIS IS SOME MOCK CONTENT"
 ```
 
-With the above test definition the `http` processor will be swapped out for `bloblang: 'root = content().string() + " this is some mock content"'`. For the purposes of mocking it is recommended that you use a `bloblang` processor that simply mutates the message in a way that you would expect the mocked processor to.
+With the above test definition the `http` processor will be swapped out for `mapping: 'root = content().string() + " this is some mock content"'`. For the purposes of mocking it is recommended that you use a [`mapping` processor][processors.mapping] that simply mutates the message in a way that you would expect the mocked processor to.
 
 > Note: It's not currently possible to mock components that are imported as separate resource files (using `--resource`/`-r`). It is recommended that you mock these by maintaining separate definitions for test purposes (`-r "./test/*.yaml"`).
@@ -292,7 +292,7 @@ tests:
     target_processors: '/pipeline/processors'
     mocks:
       /pipeline/processors/1:
-        bloblang: 'root = content().string() + " this is some mock content"'
+        mapping: 'root = content().string() + " this is some mock content"'
     input_batch:
       - content: "hello world"
     output_batches:
@@ -308,3 +308,4 @@ The schema of a template file is as follows:
 [json-pointer]: https://tools.ietf.org/html/rfc6901
 [bloblang]: /docs/guides/bloblang/about
 [logger]: /docs/components/logger/about
+[processors.mapping]: /docs/components/processors/mapping
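Applying the mock substitution described in the unit-test docs above, the pipeline under test effectively behaves as the following config during the test run (an expanded sketch for illustration; this form appears nowhere in the commit):

```yaml
pipeline:
  processors:
    - mapping: 'root = "simon says: " + content()'
    # The labelled `http` processor is replaced by the mock for the
    # duration of the test:
    - mapping: 'root = content().string() + " this is some mock content"'
    - mapping: 'root = content().uppercase()'
```

Tracing the test input `hello world` through this substituted pipeline produces `SIMON SAYS: HELLO WORLD THIS IS SOME MOCK CONTENT`, which is exactly the `content_equals` assertion in the test definition.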
4 changes: 2 additions & 2 deletions internal/impl/gcp/output_pubsub.go
@@ -39,7 +39,7 @@ If you are blocked by this issue then a work around is to delete either the spec
 `+"```yaml"+`
 pipeline:
   processors:
-    - bloblang: |
+    - mapping: |
        meta kafka_key = deleted()
 `+"```"+`
@@ -48,7 +48,7 @@ Or delete all keys with:
 `+"```yaml"+`
 pipeline:
   processors:
-    - bloblang: meta = deleted()
+    - mapping: meta = deleted()
 `+"```"+``),
 		Config: docs.FieldComponent().WithChildren(
 			docs.FieldString("project", "The project ID of the topic to publish to."),
2 changes: 1 addition & 1 deletion internal/impl/pure/buffer_system_window.go
@@ -106,7 +106,7 @@ pipeline:
     # Reduce each batch to a single message by deleting indexes > 0, and
     # aggregate the car and passenger counts.
-    - bloblang: |
+    - mapping: |
         root = if batch_index() == 0 {
           {
             "traffic_light": this.traffic_light,
2 changes: 1 addition & 1 deletion internal/impl/pure/cache_multilevel.go
@@ -26,7 +26,7 @@ pipeline:
             operator: get
             key: ${! json("key") }
         - catch:
-            - bloblang: 'root = {"err":error()}'
+            - mapping: 'root = {"err":error()}'
       result_map: 'root.result = this'
 
 cache_resources:
2 changes: 1 addition & 1 deletion internal/impl/pure/input_broker.go
@@ -39,7 +39,7 @@ input:
       # Optional list of input specific processing steps
       processors:
-        - bloblang: |
+        - mapping: |
            root.message = this
            root.meta.link_count = this.links.length()
            root.user.age = this.user.age.number()
2 changes: 1 addition & 1 deletion internal/impl/pure/output_fallback.go
@@ -35,7 +35,7 @@ output:
         retries: 3
         retry_period: 1s
       processors:
-        - bloblang: 'root = "failed to send this message to foo: " + content()'
+        - mapping: 'root = "failed to send this message to foo: " + content()'
     - file:
         path: /usr/local/benthos/everything_failed.jsonl
 ` + "```" + `
2 changes: 1 addition & 1 deletion internal/impl/pure/output_sync_response.go
@@ -42,7 +42,7 @@ output:
         topic: foo_topic
     - sync_response: {}
       processors:
-        - bloblang: 'root = content().uppercase()'
+        - mapping: 'root = content().uppercase()'
 ` + "```" + `
 
 Using the above example and posting the message 'hello world' to the endpoint
2 changes: 1 addition & 1 deletion internal/impl/pure/processor_jmespath.go
@@ -31,7 +31,7 @@ Executes a [JMESPath query](http://jmespath.org/) on JSON documents and replaces
 the message with the resulting document.`,
 		Description: `
 :::note Try out Bloblang
-For better performance and improved capabilities try out native Benthos mapping with the [bloblang processor](/docs/components/processors/bloblang).
+For better performance and improved capabilities try out native Benthos mapping with the [` + "`mapping`" + ` processor](/docs/components/processors/mapping).
 :::
 `,
 		Examples: []docs.AnnotatedExample{
2 changes: 1 addition & 1 deletion internal/impl/pure/processor_jq.go
@@ -32,7 +32,7 @@ func init() {
 Transforms and filters messages using jq queries.`,
 		Description: `
 :::note Try out Bloblang
-For better performance and improved capabilities try out native Benthos mapping with the [bloblang processor](/docs/components/processors/bloblang).
+For better performance and improved capabilities try out native Benthos mapping with the [` + "`mapping`" + ` processor](/docs/components/processors/mapping).
 :::
 
 The provided query is executed on each message, targeting either the contents
2 changes: 1 addition & 1 deletion internal/impl/pure/processor_jsonschema.go
@@ -74,7 +74,7 @@ pipeline:
       - log:
           level: ERROR
           message: "Schema validation failed due to: ${!error()}"
-      - bloblang: 'root = deleted()' # Drop messages that fail
+      - mapping: 'root = deleted()' # Drop messages that fail
 ` + "```" + `
 
 If a payload being processed looked like:
4 changes: 2 additions & 2 deletions internal/impl/pure/processor_resource.go
@@ -27,7 +27,7 @@ This processor allows you to reference the same configured processor resource in
 ` + "```yaml" + `
 pipeline:
   processors:
-    - bloblang: |
+    - mapping: |
         root.message = this
         root.meta.link_count = this.links.length()
         root.user.age = this.user.age.number()
@@ -42,7 +42,7 @@ pipeline:
 processor_resources:
   - label: foo_proc
-    bloblang: |
+    mapping: |
        root.message = this
        root.meta.link_count = this.links.length()
        root.user.age = this.user.age.number()
4 changes: 2 additions & 2 deletions internal/template/docs.md
@@ -67,7 +67,7 @@ input:
 
 pipeline:
   processors:
-    - bloblang: |
+    - mapping: |
         root.id = uuid_v4()
         root.foo = this.inner.foo
         root.body = this.outter
@@ -89,7 +89,7 @@ input:
 
 pipeline:
   processors:
-    - bloblang: |
+    - mapping: |
         root.id = uuid_v4()
         root.foo = this.inner.foo
         root.body = this.outter
30 changes: 15 additions & 15 deletions website/cookbooks/custom_metrics.md
@@ -94,7 +94,7 @@ pipeline:
           source: homebrew
           value: ${! json("analytics.install.30d.benthos") }
 
-    - bloblang: root = deleted()
+    - mapping: root = deleted()

metrics:
mapping: if this != "downloads" { deleted() }
@@ -105,7 +105,7 @@ With the above config we have selected the [`prometheus` metrics type][metrics.p

We have also specified a [`path_mapping`][metrics.prometheus.path_mapping] that deletes any internal metrics usually emitted by Benthos by filtering on our custom metric name.

-Finally, there's also a [`bloblang` processor][processors.bloblang] added to the end of our pipeline that deletes all messages since we're not interested in sending the raw data anywhere after this point anyway.
+Finally, there's also a [`mapping` processor][processors.mapping] added to the end of our pipeline that deletes all messages since we're not interested in sending the raw data anywhere after this point anyway.

While running this config you can verify that our custom metric is emitted with `curl`:

@@ -133,8 +133,8 @@ Easy! The Dockerhub API is also pretty simple, and adding it to our pipeline is
```diff
       source: homebrew
       value: ${! json("analytics.install.30d.benthos") }
-+  - bloblang: root = ""
-
++  - mapping: root = ""
++
+  - http:
+      url: https://hub.docker.com/v2/repositories/jeffail/benthos/
@@ -147,7 +147,7 @@ Easy! The Dockerhub API is also pretty simple, and adding it to our pipeline is
+      source: dockerhub
+      value: ${! json("pull_count") }
+
-   - bloblang: root = deleted()
+   - mapping: root = deleted()
```
</TabItem>

@@ -175,7 +175,7 @@ pipeline:
source: homebrew
value: ${! json("analytics.install.30d.benthos") }

-    - bloblang: root = ""
+    - mapping: root = ""

- http:
url: https://hub.docker.com/v2/repositories/jeffail/benthos/
Expand All @@ -188,7 +188,7 @@ pipeline:
source: dockerhub
value: ${! json("pull_count") }

-    - bloblang: root = deleted()
+    - mapping: root = deleted()

metrics:
mapping: if this != "downloads" { deleted() }
@@ -217,7 +217,7 @@ So that's the basics covered. Next, we're going to target the Github releases AP
]
```

-It's an array of objects, one for each tagged release, with a field `assets` which is an array of objects representing each release asset, of which we want to emit a separate download gauge. In order to do this we're going to use a [`bloblang` processor][processors.bloblang] to remap the payload from Github into an array of objects of the following form:
+It's an array of objects, one for each tagged release, with a field `assets` which is an array of objects representing each release asset, of which we want to emit a separate download gauge. In order to do this we're going to use a [`mapping` processor][processors.mapping] to remap the payload from Github into an array of objects of the following form:

```json
[
@@ -247,7 +247,7 @@ pipeline:
url: https://api.github.com/repos/benthosdev/benthos/releases
verb: GET

-    - bloblang: |
+    - mapping: |
root = this.map_each(release -> release.assets.map_each(asset -> {
"source": "github",
"dist": asset.name.re_replace_all("^benthos-?((lambda_)|_)[0-9\\.]+(-rc[0-9]+)?_([^\\.]+).*", "$2$4"),
@@ -266,7 +266,7 @@ pipeline:
source: ${! json("source") }
value: ${! json("download_count") }

-    - bloblang: root = deleted()
+    - mapping: root = deleted()

metrics:
mapping: if this != "downloads" { deleted() }
@@ -305,7 +305,7 @@ processor_resources:
- http:
url: https://hub.docker.com/v2/repositories/jeffail/benthos/
verb: GET
-        - bloblang: |
+        - mapping: |
root.source = "docker"
root.dist = "docker"
root.download_count = this.pull_count
@@ -320,7 +320,7 @@ processor_resources:
- http:
url: https://api.github.com/repos/benthosdev/benthos/releases
verb: GET
-        - bloblang: |
+        - mapping: |
root = this.map_each(release -> release.assets.map_each(asset -> {
"source": "github",
"dist": asset.name.re_replace_all("^benthos-?((lambda_)|_)[0-9\\.]+(-rc[0-9]+)?_([^\\.]+).*", "$2$4"),
@@ -330,7 +330,7 @@ processor_resources:
- unarchive:
format: json_array
- resource: metric_gauge
-        - bloblang: 'root = if batch_index() != 0 { deleted() }'
+        - mapping: 'root = if batch_index() != 0 { deleted() }'

- label: homebrew
branch:
@@ -340,7 +340,7 @@ processor_resources:
- http:
url: https://formulae.brew.sh/api/formula/benthos.json
verb: GET
-        - bloblang: |
+        - mapping: |
root.source = "homebrew"
root.dist = "homebrew"
root.download_count = this.analytics.install.30d.benthos
@@ -369,7 +369,7 @@ metrics:
[processors.workflow]: /docs/components/processors/workflow
[processors.branch]: /docs/components/processors/branch
[processors.unarchive]: /docs/components/processors/unarchive
-[processors.bloblang]: /docs/components/processors/bloblang
+[processors.mapping]: /docs/components/processors/mapping
[processors.http]: /docs/components/processors/http
[processors.metric]: /docs/components/processors/metric
[rate_limits]: /docs/components/rate_limits/about
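As a recap of the pattern this cookbook section renames throughout, a stripped-down config that emits a single custom gauge and then drops the message might look like the following sketch (the `http_client` input and the `metrics` layout are assumptions based on the examples above, not text from the commit):

```yaml
input:
  http_client:
    url: https://formulae.brew.sh/api/formula/benthos.json
    verb: GET

pipeline:
  processors:
    # Emit the download count as a gauge labelled with its source.
    - metric:
        type: gauge
        name: downloads
        labels:
          source: homebrew
        value: ${! json("analytics.install.30d.benthos") }

    # Drop the message afterwards; we only wanted the metric.
    - mapping: root = deleted()

metrics:
  mapping: if this != "downloads" { deleted() }
  prometheus: {}
```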
