The project provides a server called "gostatsd", which works much like Etsy's version, as well as a library for developing customized servers.
Backends are pluggable and only need to implement the backend interface.
Being written in Go, it is able to use all cores, which makes it easy to scale up the server based on load. The server can also be run HA and scaled out; see Load balancing and scaling out.
Building the server
Gostatsd currently targets Go 1.10.2. There are no known hard dependencies in the code between 1.9 and 1.10.2, but some may be introduced in future.
From the `gostatsd` directory run `make build`. The binary will be built under the `build` directory.

You will need to install the Golang build dependencies by running `make setup` in the `gostatsd` directory. This must be done before the first build, and again if the dependencies change. A protobuf installation is also expected to be present; managing this in a platform agnostic way is difficult, but PRs are welcome. Hopefully it will be sufficient to use the generated protobuf files in the majority of cases.

If you are unable to build `gostatsd` please try running `make setup` again before reporting a bug.
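Putting those steps together, a first build from a fresh checkout might look like the following sketch (it assumes `make` and a suitable Go toolchain are already installed):

```
cd gostatsd
make setup   # install the Golang build dependencies (rerun if dependencies change)
make build   # compile the server; the binary lands under the build directory
```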
Running the server
`gostatsd --help` gives a complete description of available options and their defaults. You can use `make run` to run the server with just the `stdout` backend to display info on screen.
You can also run through docker by running `make run-docker`, which will run `gostatsd` with a graphite backend and a grafana dashboard.
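For example, the run targets described above (run from the `gostatsd` directory; a sketch, not an exhaustive list):

```
gostatsd --help   # full list of options and defaults
make run          # run locally with just the stdout backend
make run-docker   # run in docker with graphite and grafana
```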
While not generally tested on Windows, it should work. Maximum throughput is likely to be better on a Linux system, however.
Configuring the server mode
The server can currently run in two modes: `standalone` and `forwarder`. It is configured through the top level `server-mode` configuration setting. The default is `standalone`.

In `standalone` mode, raw metrics are processed and aggregated as normal, and aggregated data is submitted to configured backends (see below).

In `forwarder` mode, raw metrics are collected from a frontend, and instead of being aggregated they are sent via http to another gostatsd server after passing through the processing pipeline (cloud provider, static tags, filtering, etc).

A `forwarder` server is intended to run on-host and collect metrics, forwarding them on to a central aggregation service. At present the central aggregation service can only scale vertically, but horizontal scaling through clustering is planned.
`forwarder` mode requires a configuration file, with a section named `http-transport`. The raw version spoken is not configurable per server (see HTTP.md for version guarantees). The configuration section allows the following configuration options (an example configuration is shown after the list):

- `client-timeout`: duration for the http client timeout. Defaults to
- `compress`: boolean indicating if the payload should be compressed. Defaults to
- `enable-http2`: boolean to enable the usage of http2 on the request. There seems to be some incompatibility between the golang http2 implementation and AWS ELB/ALBs; if you experience strange timeouts and hangs, this should be the first thing to disable. Defaults to
- `api-endpoint`: configures the endpoint to submit raw metrics to. This setting should be just a base URL, for example `https://statsd-aggregator.private`, with no path. Required, no default
- `max-requests`: maximum number of requests in flight. Defaults to `1000` (which is probably too high)
- `max-request-elapsed-time`: duration for the maximum amount of time to try submitting data before giving up. This includes retries. Defaults to `30s` (which is probably too high)
- `network`: the network type to use, probably `tcp6`. Defaults to
- `consolidator-slots`: number of slots in the metric consolidator. Memory usage is a function of this. Lower values may cause blocking in the pipeline (back pressure). A UDP only receiver will never use more than the number of configured parsers (`--max-parsers` option). Defaults to the value of `--max-parsers`, but may require tuning for HTTP based servers
- `flush-interval`: duration for how long to batch metrics before flushing. Should be an order of magnitude less than the upstream flush interval. Defaults to
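For illustration, a minimal forwarder configuration file might look like the following sketch. Only `api-endpoint` is required; the endpoint URL is the placeholder from the list above, and the other values are illustrative assumptions, not recommended settings:

```
server-mode='forwarder'

[http-transport]
api-endpoint='https://statsd-aggregator.private'
compress=true
max-requests=100
```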
Configuring HTTP servers
The service supports multiple HTTP servers, with different configurations for different requirements. All http servers are named in the top level `http-servers` setting. It should be a space separated list of names. Each server is then configured by creating a section in the configuration file named `http.<servername>`. An http server section has the following configuration options:
- `address`: the address to bind to
- `enable-prof`: boolean indicating if profiler endpoints should be enabled. Default
- `enable-expvar`: boolean indicating if expvar endpoints should be enabled. Default
- `enable-ingestion`: boolean indicating if ingestion should be enabled. Default
- `enable-healthcheck`: boolean indicating if healthchecks should be enabled. Default
For example, to configure a server with a localhost only diagnostics endpoint, and a regular ingestion endpoint that can sit behind an ELB, the following configuration could be used:
```
backends='stdout'
http-servers='receiver profiler'

[http.receiver]
address='0.0.0.0:8080'
enable-ingestion=true

[http.profiler]
address='127.0.0.1:6060'
enable-expvar=true
enable-prof=true
```
There is no capability to run an https server at this point in time, and no auth (which is why you might want different addresses). You could also put a reverse proxy in front of the service. Documentation for the endpoints can be found under HTTP.md
Configuring backends and cloud providers
Backends and cloud providers are configured using a `yaml` configuration file passed via the `--config-path` flag. For all configuration options, see the source code of the backends you are interested in. A cloud provider should not be used on the aggregation server when forwarding data to it, as the source IP address is not propagated; a cloud provider can be used on the forwarder host, however. A configuration file might look like this:
```
[graphite]
address = "192.168.99.100:2003"

[datadog]
api_key = "my-secret-key" # Datadog API key required.

[statsdaemon]
address = "docker.local:8125"
disable_tags = false

[aws]
max_retries = 4

[newrelic]
address = "http://localhost:8001/v1/data"
event-type = "GoStatsD"
# see full configuration options further below
```
New Relic Backend
The New Relic backend supports two routes for flushing metrics to New Relic:
- directly to the Insights Collector (Insights Event API)
- via the Infrastructure Agent's inbuilt HTTP server

Sending directly to the Event API removes the requirement to run the New Relic Infrastructure Agent, so gostatsd can run from nearly anywhere for maximum flexibility. It is also a shorter data path with fewer resource requirements, making for a simpler setup.
To use this method, create an Insert API Key from here: https://insights.newrelic.com/accounts/YOUR_ACCOUNT_ID/manage/api_keys
```
# Example configuration
[newrelic]
address = "https://insights-collector.newrelic.com/v1/accounts/YOUR_ACCOUNT_ID/events"
api-key = "yourEventAPIInsertKey"
```
Sending via the Infrastructure Agent's inbuilt HTTP server provides additional features, such as automatically applying metadata the host may have to the event: AWS tags, instance type, host information, labels, etc. The payload structure required to be accepted by the agent can be viewed here.
To enable the HTTP server, modify /etc/newrelic.yml to include the below, and restart the agent (Step 1.2).
```
http_server_enabled: true
http_server_host: 127.0.0.1 # (default host)
http_server_port: 8001 # (default port)
```
Additional options are available to rename attributes if required.
```
[newrelic]
tag-prefix = ""
metric-name = "name"
metric-type = "type"
per-second = "per_second"
value = "value"
timer-min = "min"
timer-max = "max"
timer-count = "samples_count"
timer-mean = "samples_mean"
timer-median = "samples_median"
timer-stddev = "samples_std_dev"
timer-sum = "samples_sum"
timer-sumsquare = "samples_sum_squares"
```
Configuring timer sub-metrics
By default, timer metrics will result in aggregated metrics of the form (exact name varies by backend):
```
<base>.Count
<base>.CountPerSecond
<base>.Mean
<base>.Median
<base>.Lower
<base>.Upper
<base>.StdDev
<base>.Sum
<base>.SumSquares
```
In addition, the following aggregated metrics will be emitted for each configured percentile:
```
<base>.Count_XX
<base>.Mean_XX
<base>.Sum_XX
<base>.SumSquares_XX
<base>.Upper_XX    - for positive only
<base>.Lower_-XX   - for negative only
```
These can be controlled through the `disabled-sub-metrics` configuration section:

```
[disabled-sub-metrics]
# Regular metrics
count=false
count-per-second=false
mean=false
median=false
lower=false
upper=false
stddev=false
sum=false
sum-squares=false

# Percentile metrics
count-pct=false
mean-pct=false
sum-pct=false
sum-squares-pct=false
lower-pct=false
upper-pct=false
```
By default (for compatibility), they are all false and the metrics will be emitted.
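For example, to stop emitting just the median and standard deviation sub-metrics while keeping the rest (the keys come from the section above; which ones to disable is purely illustrative):

```
[disabled-sub-metrics]
median=true
stddev=true
```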
Sending metrics
The server listens for UDP packets on the configured metrics address, aggregates them, then sends them to the backend servers given by the `--backends` flag (space separated list of backend names).

Currently supported backends include graphite, datadog, statsdaemon, newrelic and stdout.
The format of each metric is `<bucket name>:<value>|<type>\n`, where:

- `<bucket name>` is a string like `abc.def.g`, just like a graphite bucket name
- `<value>` is a string representation of a floating point number
- `<type>` is one of `c`, `g` or `ms` for "counter", "gauge", and "timer" respectively
A single packet can contain multiple metrics, each ending with a newline.
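For example, a single packet carrying a counter, a gauge, and a timer (the metric names and values are illustrative) could contain:

```
abc.def.g:10|c
users.online:42|g
request.duration:320|ms
```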
gostatsd supports sample rates (for simple counters, and for timer counters) and tags:

- `<bucket name>:<value>|c|@<sample rate>\n` where `sample rate` is a float between 0 and 1
- `<bucket name>:<value>|c|@<sample rate>|#<tags>\n` where `tags` is a comma separated list of tags
- `<bucket name>:<value>|<type>|#<tags>\n` where `tags` is a comma separated list of tags
Tags format is: `simple` or `key:value`.
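For example, a counter sampled at 50% and carrying one simple tag and one `key:value` tag (all values illustrative):

```
abc.def.g:10|c|@0.5|#production,region:east
```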
A simple way to test your installation or send metrics from a script is to use `echo` and the netcat utility `nc`:

```
echo 'abc.def.g:10|c' | nc -w1 -u localhost 8125
```
Many metrics for the internal processes are emitted. See METRICS.md for details. Go expvar is also
exposed if the
--profile flag is used.
Memory allocation for read buffers
gostatsd will batch read multiple packets to optimise read performance. The amount of memory allocated for these read buffers is determined by the config options:

`max-readers * receive-batch-size * 64KB (max packet size)`

The `avg_packets_in_batch` metric can be used to track the average number of datagrams received per batch, and the `--receive-batch-size` flag used to tune it. There may be some benefit to tuning the `--max-readers` flag as well.
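As a worked example, under assumed (not default) values of 8 readers and a batch size of 50, the upper bound on read buffer memory would be:

```
max-readers * receive-batch-size * 64KB
  = 8 * 50 * 64KB
  = 25600KB
  = 25MB
```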
Using the library
In your source code:
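For example, to pull in the statsd package (the path matches the `go doc` reference below):

```go
import "github.com/atlassian/gostatsd/pkg/statsd"
```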
Documentation can be found via `go doc github.com/atlassian/gostatsd/pkg/statsd` or at https://godoc.org/github.com/atlassian/gostatsd/pkg/statsd.
Pull requests, issues and comments welcome. For pull requests:
- Add tests for new features and bug fixes
- Follow the existing style
- Separate unrelated changes into multiple pull requests
See the existing issues for things to start contributing.
For bigger changes, make sure you start a discussion first by creating an issue and explaining the intended change.
Atlassian requires contributors to sign a Contributor License Agreement, known as a CLA. This serves as a record stating that the contributor is entitled to contribute the code/documentation/translation to the project and is willing to have it used in distributions and derivative works (or is willing to transfer ownership).
Prior to accepting your contributions we ask that you please follow the appropriate link below to digitally sign the CLA. The Corporate CLA is for those who are contributing as a member of an organization and the individual CLA is for those contributing as an individual.
Copyright (c) 2012 Kamil Kisiel. Copyright (c) 2016-2017 Atlassian Pty Ltd and others.
Licensed under the MIT license. See LICENSE file.