A Foxx microservice that acts as a collector agent for OpenTracing spans emitted by foxx-tracer.

Why another collector agent?

With so many collectors already available (Zipkin, the OpenCensus Collector, Jaeger, etc.), why do we need yet another collector service?

Listed below are some reasons why foxx-tracer-collector was created:

  1. Traces (a trace is a collection of interrelated spans) generated inside a Foxx Microservice have to be shipped out to an external system where they can be stored and analyzed. Unfortunately, the Foxx runtime is extremely restrictive about how its mounted services can communicate with the outside world.
  2. The only allowed networking option is to use Foxx's built-in request module, but that too is entirely synchronous - meaning we cannot insert trace-posting requests in the main execution path without severely degrading latency and throughput metrics.
  3. All existing CommonJS OpenTracing client libraries are naturally asynchronous in order to avoid this penalty, but this also means they cannot run within a Foxx V8 context. Therefore, a 100% synchronous client library is also required. This requirement is fulfilled by foxx-tracer - a companion client library for foxx-tracer-collector.
  4. One way to achieve asynchronous execution in Foxx is to use the Task API, but that would mean burdening the service with continually generating asynchronous tasks just to push out trace data, which is not its primary function.
  5. There are other, more nuanced limitations imposed by the 100% synchronous runtime (which distributes a service deployment across multiple, isolated V8 contexts, in single instances as well as in clusters), making an in-memory trace-buffering mechanism (with a periodic, task-based flush) infeasible.
  6. Due to the reasons stated above, every trace that is initiated by an incoming request has to be flushed immediately once the request has been served. Naturally, this must be done with as little lag as possible.
  7. This is where foxx-tracer-collector comes to the rescue. It runs beside your traced service in the same DB instance, and works in tandem with the foxx-tracer client (installed in your service) to record spans by means of an exported function that is directly invoked (in-process) and returns instantly, with no blocking I/O performed.
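The in-process handoff described in the last point can be modelled with a minimal sketch. This is plain JavaScript, not the actual foxx-tracer / foxx-tracer-collector API; the `recordSpans` name and the internal queue are illustrative assumptions:

```javascript
// Simplified model of the in-process handoff between a traced service
// and the collector. In a real Foxx deployment the collector's exported
// function is invoked through the dependency mechanism; here a plain
// object stands in for it. The name `recordSpans` is hypothetical.
const collector = {
  queue: [],
  // Synchronous and returns immediately: no network I/O on the
  // request path of the traced service.
  recordSpans(spans) {
    for (const span of spans) {
      this.queue.push(span);
    }
  }
};

// The traced service hands off a finished trace and carries on.
collector.recordSpans([
  { operation: 'GET /users', startTime: 1000, duration: 12 },
  { operation: 'db.query', startTime: 1002, duration: 5 }
]);

console.log(collector.queue.length); // 2
```

The key property being modelled is that the call costs no more than an in-memory push, so the traced service's latency is essentially unaffected.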

How it works

Once a trace has been offloaded to the collector, your service is free to go back to its primary business, which is to serve user requests.

The collector, in turn, asynchronously persists the trace to DB (via the Tasks API), and then periodically pushes all pending traces to designated endpoints (via pluggable reporters). Recorded traces are assigned a configurable TTL, after which they are expunged from the database, thereby keeping things light and nimble.
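The TTL-based cleanup can be sketched as follows. This is illustrative only: the field name `recordedAt`, the TTL value, and the function name are assumptions, and the real collector performs expiry inside the database rather than in application code:

```javascript
// Illustrative TTL sweep: drop recorded traces older than the
// configured TTL. All names here are assumptions, not the
// foxx-tracer-collector API.
const TTL_MS = 60 * 60 * 1000; // e.g. keep traces for one hour

function expireTraces(traces, now = Date.now()) {
  return traces.filter(trace => now - trace.recordedAt <= TTL_MS);
}

const now = Date.now();
const traces = [
  { id: 't1', recordedAt: now - 2 * 60 * 60 * 1000 }, // 2h old: expunged
  { id: 't2', recordedAt: now - 5 * 60 * 1000 }       // 5m old: kept
];

console.log(expireTraces(traces, now).map(t => t.id)); // [ 't2' ]
```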


foxx-tracer-collector uses a system of pluggable reporters to push its received traces to various destinations. More than one reporter can be active at a time, in which case all of them are invoked to push the traces to their respective endpoints.

The collector comes with two reporters pre-installed:

  1. A noop reporter that does nothing. This reporter is baked in and cannot be removed, but it can be disabled.
  2. A console reporter that prints traces to the ArangoDB log. This reporter is pluggable and can be removed or disabled if required.
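The fan-out to multiple active reporters can be pictured with a simplified model. The single `report(traces)` method shown here is an assumption for illustration; the actual reporter contract of foxx-tracer-collector may differ:

```javascript
// Simplified model of reporter fan-out. Each reporter exposes a
// `report(traces)` method (an assumed interface, for illustration).
const noopReporter = {
  report(traces) { /* intentionally does nothing */ }
};

const consoleReporter = {
  report(traces) {
    for (const trace of traces) {
      console.log(`trace ${trace.id}: ${trace.spans.length} span(s)`);
    }
  }
};

// Every active reporter receives the same batch of pending traces.
function flush(traces, reporters) {
  for (const reporter of reporters) {
    reporter.report(traces);
  }
}

flush(
  [{ id: 'abc123', spans: [{ operation: 'GET /users' }] }],
  [noopReporter, consoleReporter]
);
```

Because each reporter is invoked independently, adding a destination is just a matter of adding another reporter to the active set.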

Neither of these reporters is particularly useful in production, but both come in handy for debugging.

Reporters better suited to real trace capture and analysis can be found by searching the NPM registry for the keyword "foxx-tracer-reporter". At the time of this writing, a production-ready reporter is available for the Datadog Cloud Monitoring Service, named foxx-tracer-reporter-datadog.

Custom Reporters

If you don't find a reporter for your specific endpoint, you can easily write one yourself!
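As a starting point, a custom reporter might look like the skeleton below. The class name, the `report` method, and the constructor options are all hypothetical; consult the project wiki for the actual reporter contract before writing one:

```javascript
// Skeleton for a hypothetical custom reporter. Every name here is an
// assumption for illustration, not the foxx-tracer-collector contract.
class HttpReporter {
  constructor({ endpoint }) {
    this.endpoint = endpoint; // hypothetical destination URL
  }

  report(traces) {
    // A real Foxx reporter could use ArangoDB's synchronous request
    // module to POST the batch; here we only build the payload.
    const payload = JSON.stringify({ traces });
    return { endpoint: this.endpoint, bytes: payload.length };
  }
}

const reporter = new HttpReporter({ endpoint: 'https://traces.example.com/api' });
const result = reporter.report([{ id: 't1', spans: [] }]);
console.log(result.endpoint); // https://traces.example.com/api
```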

Installation and Configuration

See the wiki for instructions.