Routing Connector

Status
Distributions: contrib, k8s
Code Owners: @jpkrohling, @mwear

Supported Pipeline Types

Exporter Pipeline Type | Receiver Pipeline Type | Stability Level
traces                 | traces                 | alpha
metrics                | metrics                | alpha
logs                   | logs                   | alpha

Routes logs, metrics, or traces to specific pipelines using OpenTelemetry Transformation Language (OTTL) statements as routing conditions.

Configuration

If you are not already familiar with connectors, you may find it helpful to first visit the Connectors README.

The following settings are available:

  • table (required): the routing table for this connector.
  • table.context (optional, default: resource): the OTTL Context in which the statement will be evaluated. Currently, only resource, span, metric, datapoint, log, and request are supported.
  • table.statement: the routing condition provided as the OTTL statement. Required if table.condition is not provided. May not be used for request context.
  • table.condition: the routing condition provided as the OTTL condition. Required if table.statement is not provided. Required for request context.
  • table.pipelines (required): the list of pipelines to use when the routing condition is met.
  • default_pipelines (optional): the list of pipelines to use when a record does not meet any of the specified conditions.
  • error_mode (optional): determines how errors returned from OTTL statements are handled. Valid values are propagate, ignore, and silent. If ignore or silent is used and a statement's condition produces an error, the payload is routed to the default pipelines (see the sketch below); with silent, the error is additionally not logged. If not supplied, propagate is used.
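
The following sketch illustrates how these settings fit together. It is only a sketch: the pipeline names, the attribute keys, and the use of the delete_key OTTL editor are illustrative assumptions, not required values.

connectors:
  routing:
    # With error_mode: ignore, a record whose condition produces an error is sent to
    # default_pipelines and the error is logged (use silent to also suppress the log).
    error_mode: ignore
    default_pipelines: [traces/other]
    table:
      # Condition-based route, evaluated in the (default) resource context.
      - context: resource
        condition: attributes["deployment.environment"] == "prod"
        pipelines: [traces/prod]
      # Statement-based route: matches when the where clause is true and also
      # modifies the matched data (here, dropping a resource attribute).
      - context: resource
        statement: delete_key(attributes, "tenant") where attributes["tenant"] == "acme"
        pipelines: [traces/acme]

Statement-based routes are useful when the matched data should also be modified as it is routed; condition-based routes leave the data untouched.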

Limitations

  • The request context requires use of the condition setting and relies on a very limited grammar. Conditions must be in the form of request["key"] == "value" or request["key"] != "value", as sketched below. (In the future, this grammar may be expanded to support more complex conditions.)
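
As a sketch of what this grammar does and does not allow (the X-Tenant header is only an example):

table:
  # Allowed: equality or inequality against a string literal.
  - context: request
    condition: request["X-Tenant"] == "acme"
    pipelines: [logs/acme]
  - context: request
    condition: request["X-Tenant"] != "acme"
    pipelines: [logs/other]
  # Not allowed for the request context: OTTL functions, boolean operators,
  # or statement-based routes.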

Supported OTTL functions

Additional Settings

The full list of settings exposed for this connector is documented in config.go, along with detailed sample configuration files.

Examples

Route logs based on tenant:

receivers:
  otlp:

exporters:
  file/other:
    path: ./other.log
  file/acme:
    path: ./acme.log
  file/ecorp:
    path: ./ecorp.log

connectors:
  routing:
    default_pipelines: [logs/other]
    table:
      - context: request
        condition: request["X-Tenant"] == "acme"
        pipelines: [logs/acme]
      - context: request
        condition: request["X-Tenant"] == "ecorp"
        pipelines: [logs/ecorp]

service:
  pipelines:
    logs/in:
      receivers: [otlp]
      exporters: [routing]
    logs/acme:
      receivers: [routing]
      exporters: [file/acme]
    logs/ecorp:
      receivers: [routing]
      exporters: [file/ecorp]
    logs/other:
      receivers: [routing]
      exporters: [file/other]

Route logs based on region:

receivers:
  otlp:

exporters:
  file/other:
    path: ./other.log
  file/east:
    path: ./east.log
  file/west:
    path: ./west.log

connectors:
  routing:
    default_pipelines: [logs/other]
    table:
      - context: log
        condition: attributes["region"] == "east"
        pipelines: [logs/east]
      - context: log
        condition: attributes["region"] == "west"
        pipelines: [logs/west]

service:
  pipelines:
    logs/in:
      receivers: [otlp]
      exporters: [routing]
    logs/east:
      receivers: [routing]
      exporters: [file/east]
    logs/west:
      receivers: [routing]
      exporters: [file/west]
    logs/other:
      receivers: [routing]
      exporters: [file/other]

Route all low-severity logs to cheap storage. Route the remainder based on service name:

receivers:
  otlp:

exporters:
  file/cheap:
    path: ./cheap.log
  file/service1:
    path: ./service1-important.log
  file/service2:
    path: ./service2-important.log

connectors:
  routing:
    table:
      - context: log
        condition: severity_number < SEVERITY_NUMBER_ERROR
        pipelines: [logs/cheap]
      - context: resource
        condition: attributes["service.name"] == "service1"
        pipelines: [logs/service1]
      - context: resource
        condition: attributes["service.name"] == "service2"
        pipelines: [logs/service2]

service:
  pipelines:
    logs/in:
      receivers: [otlp]
      exporters: [routing]
    logs/cheap:
      receivers: [routing]
      exporters: [file/cheap]
    logs/service1:
      receivers: [routing]
      exporters: [file/service1]
    logs/service2:
      receivers: [routing]
      exporters: [file/service2]

Route all low-severity logs to cheap storage. Route the remainder based on tenant:

receivers:
  otlp:

exporters:
  file/cheap:
    path: ./cheap.log
  file/acme:
    path: ./acme.log
  file/ecorp:
    path: ./ecorp.log

connectors:
  routing:
    table:
      - context: log
        condition: severity_number < SEVERITY_NUMBER_ERROR
        pipelines: [logs/cheap]
      - context: request
        condition: request["X-Tenant"] == "acme"
        pipelines: [logs/acme]
      - context: request
        condition: request["X-Tenant"] == "ecorp"
        pipelines: [logs/ecorp]

service:
  pipelines:
    logs/in:
      receivers: [otlp]
      exporters: [routing]
    logs/cheap:
      receivers: [routing]
      exporters: [file/cheap]
    logs/acme:
      receivers: [routing]
      exporters: [file/acme]
    logs/ecorp:
      receivers: [routing]
      exporters: [file/ecorp]

match_once

The match_once field was deprecated as of v0.116.0 and removed in v0.120.0.

The following examples demonstrate some strategies for migrating a configuration from match_once.

Example without default_pipelines

If you are not using default_pipelines, you may be able to split the router into multiple parallel routers. In the following example, the "env" and "region" attributes are not directly related, so each can be routed independently.

routing:
  match_once: false
  table:
    - condition: attributes["env"] == "prod"
      pipelines: [ logs/prod ]
    - condition: attributes["env"] == "dev"
      pipelines: [ logs/dev ]
    - condition: attributes["region"] == "east"
      pipelines: [ logs/east ]
    - condition: attributes["region"] == "west"
      pipelines: [ logs/west ]

service:
  pipelines:
    logs/in::exporters: [routing]
    logs/prod::receivers: [routing]
    logs/dev::receivers: [routing]
    logs/east::receivers: [routing]
    logs/west::receivers: [routing]

The same behavior can therefore be achieved using separate routers. Listing both routers as exporters of the input pipeline gives each an independent handle to the data, so the same data can match routes in both routers.

routing/env:
  table:
    - condition: attributes["env"] == "prod"
      pipelines: [ logs/prod ]
    - condition: attributes["env"] == "dev"
      pipelines: [ logs/dev ]
routing/region:
  table:
    - condition: attributes["region"] == "east"
      pipelines: [ logs/east ]
    - condition: attributes["region"] == "west"
      pipelines: [ logs/west ]

service:
  pipelines:
    logs/in::exporters: [routing/env, routing/region]
    logs/prod::receivers: [routing/env]
    logs/dev::receivers: [routing/env]
    logs/east::receivers: [routing/region]
    logs/west::receivers: [routing/region]

Example with default_pipelines

The following examples demonstrate strategies for migrating a configuration that used match_once: false together with default_pipelines.

routing:
  match_once: false
  default_pipelines: [ logs/default ]
  table:
    - condition: attributes["env"] == "prod"
      pipelines: [ logs/prod ]
    - condition: attributes["env"] == "dev"
      pipelines: [ logs/dev ]
    - condition: attributes["region"] == "east"
      pipelines: [ logs/east ]
    - condition: attributes["region"] == "west"
      pipelines: [ logs/west ]

service:
  pipelines:
    logs/in::exporters: [routing]
    logs/default::receivers: [routing]
    logs/prod::receivers: [routing]
    logs/dev::receivers: [routing]
    logs/east::receivers: [routing]
    logs/west::receivers: [routing]

If the number of routes is limited, you may be able to define a route for each combination of conditions. This avoids the need to change any pipelines.

routing:
  default_pipelines: [ logs/default ]
  table:
    - condition: attributes["env"] == "prod" and attributes["region"] == "east"
      pipelines: [ logs/prod, logs/east ]
    - condition: attributes["env"] == "prod" and attributes["region"] == "west"
      pipelines: [ logs/prod, logs/west ]
    - condition: attributes["env"] == "dev" and attributes["region"] == "east"
      pipelines: [ logs/dev, logs/east ]
    - condition: attributes["env"] == "dev" and attributes["region"] == "west"
      pipelines: [ logs/dev, logs/west ]

service:
  pipelines:
    logs/in::exporters: [routing]
    logs/default::receivers: [routing]
    logs/prod::receivers: [routing]
    logs/dev::receivers: [routing]
    logs/east::receivers: [routing]
    logs/west::receivers: [routing]

A more general solution is to use a layered approach. In this design, the first layer is a single router that sorts data according to whether it matches any route or no route. This allows the second layer to work without default_pipelines. The downside to this approach is that the set of conditions in the first and second layers must be kept in sync.

# First layer separates logs that match no routes
routing:
  default_pipelines: [ logs/default ]
  table: # all routes forward to second layer
    - condition: attributes["env"] == "prod"
      pipelines: [ logs/env, logs/region ]
    - condition: attributes["env"] == "dev"
      pipelines: [ logs/env, logs/region ]
    - condition: attributes["region"] == "east"
      pipelines: [ logs/env, logs/region ]
    - condition: attributes["region"] == "west"
      pipelines: [ logs/env, logs/region ]

# Second layer routes logs based on environment and region
routing/env:
  table:
    - condition: attributes["env"] == "prod"
      pipelines: [ logs/prod ]
    - condition: attributes["env"] == "dev"
      pipelines: [ logs/dev ]
routing/region:
  table:
    - condition: attributes["region"] == "east"
      pipelines: [ logs/east ]
    - condition: attributes["region"] == "west"
      pipelines: [ logs/west ]

service:
  pipelines:
    logs/in::exporters: [routing]
    logs/default::receivers: [routing]
    # Intermediate pipelines connect the first-layer router to the second layer.
    logs/env::receivers: [routing]
    logs/env::exporters: [routing/env]
    logs/region::receivers: [routing]
    logs/region::exporters: [routing/region]
    logs/prod::receivers: [routing/env]
    logs/dev::receivers: [routing/env]
    logs/east::receivers: [routing/region]
    logs/west::receivers: [routing/region]