
OpenTelemetry Collector


We hold regular meetings. See details at the community page.

Table of contents:

  • Getting Started
  • Configuration
  • Monitoring
  • Other Information

The OpenTelemetry Collector can receive traces and metrics from processes instrumented by OpenTelemetry or by other monitoring/tracing libraries (e.g. Jaeger, Prometheus), pre-process the received data (for example, adding or removing attributes), handle aggregation and smart sampling, and export traces and metrics to one or more open-source or commercial monitoring/tracing backends.

Some frameworks and ecosystems now provide out-of-the-box instrumentation via OpenTelemetry, but the user is still expected to register an exporter to send data. This is a problem during an incident: even though users could benefit from additional diagnostics data coming out of services already instrumented with OpenTelemetry, they would have to modify their code to register an exporter, then recompile and redeploy, which is far from ideal mid-incident. In addition, users must decide which backend(s) to export to before distributing binaries instrumented with OpenTelemetry.

The OpenTelemetry Collector eliminates these requirements. With the OpenTelemetry Collector, users do not need to redeploy or restart their applications as long as the applications use the OpenTelemetry exporter. All they need to do is configure and deploy the OpenTelemetry Collector separately. The Collector then automatically receives traces and metrics and exports them to any backend of the user's choice.

The OpenTelemetry Collector can be deployed in a variety of ways depending on requirements. Currently, the OpenTelemetry Collector consists of a single binary with two deployment methods:

  1. An agent running with the application or on the same host as the application (e.g. binary, sidecar, or daemonset)
  2. A collector running as a standalone service (e.g. container or deployment)

While the same binary is used for either deployment method, the configuration between the two may differ depending on requirements (e.g. queue size and enabled feature set).


Getting Started


Instructions for setting up an end-to-end demo environment can be found here.


Apply the sample YAML file:

$ kubectl apply -f examples/k8s.yaml


Create an Agent configuration file based on the example below. Build the Agent and start it with the example configuration:

$ make otelcol
$ ./bin/$(go env GOOS)/otelcol --config ./examples/demo/otel-agent-config.yaml
2018/10/08 21:38:00 Running OpenTelemetry receiver as a gRPC service at "localhost:55678"

Create a Collector configuration file based on the example below. Build the Collector and start it with the example configuration:

$ make otelcol
$ ./bin/$(go env GOOS)/otelcol --config ./examples/demo/otel-collector-config.yaml

Run the demo application:

$ go run "$(go env GOPATH)/src/"

You should be able to see the traces in your exporter(s) of choice. If you stop otelcol, the example application will stop exporting. If you run it again, exporting will resume.


Configuration

The OpenTelemetry Collector is configured via a YAML file. In general, at least one enabled receiver and one enabled exporter need to be configured.

The configuration consists of the following sections:

  • receivers
  • processors
  • exporters
  • extensions
  • service

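Putting the pieces together, a minimal sketch of a working configuration wires one receiver to one exporter through a pipeline (the component names and values below are taken from the examples in the following sections):

```yaml
receivers:
  opencensus:
    address: "localhost:55678"

exporters:
  logging:
    loglevel: debug

service:
  pipelines:
    traces:
      receivers: [opencensus]
      exporters: [logging]
```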

Receivers

A receiver is how data gets into the OpenTelemetry Collector. One or more receivers must be configured. By default, no receivers are configured.

A basic example of available receivers is provided below. For detailed receiver configuration, please see the receiver documentation.

    receivers:
      opencensus:
        address: "localhost:55678"

      zipkin:
        address: "localhost:9411"

      prometheus:
        config:
          scrape_configs:
            - job_name: "caching_cluster"
              scrape_interval: 5s
              static_configs:
                - targets: ["localhost:8889"]


Processors

Processors run on data between reception and export. Processors are optional, though some are recommended.

A basic example of available processors is provided below. For detailed processor configuration, please see the processor documentation.

    processors:
      attributes:
        actions:
          - key: db.statement
            action: delete
      batch:
        timeout: 5s
        send_batch_size: 1024
      probabilistic_sampler:
        disabled: true
      span:
        name:
          from_attributes: ["db.svc", "operation"]
          separator: "::"
      queued_retry: {}
      tail_sampling:
        policies:
          - name: policy1
            type: rate_limiting
            rate_limiting:
              spans_per_second: 100


Exporters

An exporter is how you send data to one or more backends/destinations. One or more exporters must be configured. By default, no exporters are configured.

A basic example of available exporters is provided below. For detailed exporter configuration, please see the exporter documentation.

    exporters:
      opencensus:
        headers: {"X-test-header": "test-header"}
        compression: "gzip"
        cert_pem_file: "server-ca-public.pem" # optional to enable TLS
        endpoint: "localhost:55678"
        reconnection_delay: 2s

      logging:
        loglevel: debug

      jaeger_grpc:
        endpoint: "http://localhost:14250"

      jaeger_thrift_http:
        headers: {"X-test-header": "test-header"}
        timeout: 5
        url: "http://localhost:14268/api/traces"

      zipkin:
        url: "http://localhost:9411/api/v2/spans"

      prometheus:
        endpoint: "localhost:8889"
        namespace: "default"


Extensions

Extensions are provided to monitor the health of the OpenTelemetry Collector. Extensions are optional. By default, no extensions are configured.

A basic example of available extensions is provided below. For detailed extension configuration, please see the extension documentation.

    extensions:
      health_check: {}
      pprof: {}
      zpages: {}


Service

The service section is used to configure which features are enabled in the OpenTelemetry Collector based on the configuration found in the receivers, processors, exporters, and extensions sections. The service section consists of two sub-sections:

  • extensions
  • pipelines

Extensions consist of a list of all extensions to enable. For example:

    service:
      extensions: [health_check, pprof, zpages]

Pipelines can be of two types:

  • metrics: collects and processes metrics data.
  • traces: collects and processes trace data.

A pipeline consists of a set of receivers, processors, and exporters. To be included in a pipeline, a receiver, processor, or exporter must be defined in the configuration; each one can be used in more than one pipeline.

Note: a processor referenced in multiple pipelines gets a separate instance per pipeline. In contrast, a receiver or exporter referenced in multiple pipelines is shared: only one instance serves all of them.
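For example, if the same batch processor is referenced from both a traces pipeline and a metrics pipeline, the collector creates two independent batch instances. A sketch, reusing component names from the examples above:

```yaml
service:
  pipelines:
    traces:
      receivers: [opencensus]
      processors: [batch]      # batch instance #1
      exporters: [zipkin]
    metrics:
      receivers: [opencensus]
      processors: [batch]      # batch instance #2
      exporters: [prometheus]
```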

The following is an example pipeline configuration. For more information, refer to the pipeline documentation.

    service:
      pipelines:
        traces:
          receivers: [opencensus, jaeger]
          processors: [batch, queued_retry]
          exporters: [opencensus, zipkin]


Monitoring

By default, the OpenTelemetry Collector exposes Prometheus metrics and logs for monitoring and troubleshooting. When troubleshooting live issues, the zpages extension is recommended.
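As a sketch of how those metrics could be collected, a Prometheus server could scrape the collector's own metrics endpoint; the port below assumes the default self-metrics address, which may be configured differently in your deployment:

```yaml
scrape_configs:
  - job_name: "otel-collector"
    scrape_interval: 10s
    static_configs:
      - targets: ["localhost:8888"]  # assumed default self-metrics port
```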

The zpages extension provides live information about receivers and exporters. By default, zpages is available at http://localhost:55679/debug/tracez. Click the link for a displayed operation to see information about that operation. Operations that encountered errors are reported in the rightmost column.


Other Information

Extending the Collector

The OpenTelemetry Collector can be extended or embedded into other applications.



Approvers (@open-telemetry/collector-approvers):

Find more about the approver role in the community repository.

Maintainers (@open-telemetry/collector-maintainers):

Find more about the maintainer role in the community repository.
