
Commit

Merge pull request #485 from grafana/fl/optional-aggregation-file
Make aggregationFile an optional setting
Dieterbe committed Feb 25, 2022
2 parents 7dc3752 + e4294f0 commit 6970b8c
Showing 15 changed files with 116 additions and 116 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -5,7 +5,7 @@
* adopt the term "blocklist" for blocking metrics. The existing config options "blacklist" and commands "addBlack" are replaced
with "blocklist" and "addBlock" respectively. The old option and command will keep working for the time being, but users are recommended
to update their configuration because the old options will be removed at the next major release. #473

* grafanaNet route: make publishing storage-aggregation.conf optional. This is a breaking change for the `addRoute grafanaNet ...` command, as `aggregationFile=` now needs to be prepended to the aggregation file value. (Note that this command is only used by the experimental tcp admin interface, and the deprecated config init commands.) #485
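The practical difference for the `addRoute grafanaNet ...` command can be sketched as follows (route key, URL, and file paths are placeholders):

```
# before: the aggregation file was a required positional argument
addRoute grafanaNet myRoute http://host/metrics apiKey storage-schemas.conf storage-aggregation.conf

# after: it is an optional aggregationFile= setting, and may be omitted entirely
addRoute grafanaNet myRoute http://host/metrics apiKey storage-schemas.conf aggregationFile=storage-aggregation.conf
```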
# v1.1: July 7, 2021

* improve error messages, especially for storage-aggregation.conf #470
6 changes: 2 additions & 4 deletions cfg/table_test.go
@@ -29,7 +29,7 @@ func TestTomlToGrafanaNetRoute(t *testing.T) {

var testCases []testCase

cfg, err := route.NewGrafanaNetConfig("http://foo/metrics", "apiKey", schemasFile.Name(), aggregationFile.Name())
cfg, err := route.NewGrafanaNetConfig("http://foo/metrics", "apiKey", schemasFile.Name(), "")
if err != nil {
t.Fatal(err) // should never happen
}
@@ -41,9 +41,7 @@ key = 'routeKey'
type = 'grafanaNet'
addr = 'http://foo/metrics'
apikey = 'apiKey'
schemasFile = '` + schemasFile.Name() + `'
aggregationFile = '` + aggregationFile.Name() + `'
`,
schemasFile = '` + schemasFile.Name() + `'`,
expCfg: cfg,
expErr: false,
})
2 changes: 1 addition & 1 deletion docs/config.md
@@ -200,7 +200,7 @@ key | Y | string | N/A | string to identify this rou
addr | Y | string | N/A | http url to connect to
apiKey | Y | string | N/A | API key to use (taken from grafana cloud portal)
schemasFile | Y | string | N/A | storage-schemas.conf file that describes your metrics (the [storage-schemas.conf from Graphite](http://graphite.readthedocs.io/en/latest/config-carbon.html#storage-schemas-conf)
aggregationFile| Y | string | N/A | storage-aggregation.conf file that describes your metrics (the [storage-aggregation.conf from Graphite](https://graphite.readthedocs.io/en/latest/config-carbon.html#storage-aggregation-conf)
aggregationFile| N | string | "" | storage-aggregation.conf file that describes your metrics (the [storage-aggregation.conf from Graphite](https://graphite.readthedocs.io/en/latest/config-carbon.html#storage-aggregation-conf). if the value is empty, the most recent aggregation file uploaded to grafana cloud will be used instead. if no aggregation files have ever been uploaded, the default grafana cloud aggregations will be applied.
prefix | N | string | "" | only route metrics that start with this
notPrefix | N | string | "" | only route metrics that do not start with this
sub | N | string | "" | only route metrics that contain this in their name
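For reference, a minimal sketch of the corresponding route section in carbon-relay-ng.ini after this change (the addr and apikey values are placeholders; use the ones from your Grafana Cloud portal):

```ini
type = 'grafanaNet'
addr = 'https://<your-instance>.grafana.net/metrics'
apikey = '<instance-id>:<api-key>'
schemasFile = '/conf/storage-schemas.conf'
# aggregationFile is now optional; leave it out to fall back to the most
# recently uploaded aggregation file, or the Grafana Cloud defaults.
# aggregationFile = '/conf/storage-aggregation.conf'
```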
2 changes: 1 addition & 1 deletion docs/deploying-on-k8s.md
@@ -48,7 +48,7 @@ local crng = import 'carbon-relay-ng/crng.libsonnet';

Tanka can generate the resource definitions, the output should yield 4 resources:
* a secret with the base64 encoded api key
* a configmap with the carbon-relay-ng.ini, storage-aggregation.conf and storage-schemas.conf
* a configmap with the carbon-relay-ng.ini and storage-schemas.conf, and, optionally, storage-aggregation.conf
* a service which forwards to the carbon-relay-ng pod
* a deployment creating the carbon-relay-ng pod

19 changes: 11 additions & 8 deletions docs/grafana-net.md
@@ -1,12 +1,15 @@
The GrafanaNet route is a special route within carbon-relay-ng.

It takes graphite (carbon) input and submits it to a grafanaCloud metrics store in encrypted form.
It requires you to provide a storage-schemas.conf and storage-aggregation.conf file,
which on Grafana Cloud Graphite v5 are used to render data and generate your rollups. Typically,
you will have these files already set up if you use Graphite.
(See Graphite docs for [storage-schemas.conf](http://graphite.readthedocs.io/en/latest/config-carbon.html#storage-schemas-conf) and [storage-aggregation.conf from Graphite](https://graphite.readthedocs.io/en/latest/config-carbon.html#storage-aggregation-conf))
(Note that unlike Graphite, you may change these files as necessary to describe your data and desired rollups,
after a carbon-relay-ng restart they will take effect immediately, even on historical data without having to run any data conversion)
It takes graphite (carbon) input and submits it to a Grafana Cloud metrics store in encrypted form.
There are two files you can provide to this route: storage-schemas.conf (required) and storage-aggregation.conf (optional).
On Grafana Cloud Graphite v5, these files are used to render data and generate your rollups.
Typically, you will have these files already set up if you use Graphite.
If the storage-aggregation.conf file is not provided, the default Grafana Cloud aggregations will be used when rendering data.

See Graphite docs for [storage-schemas.conf](http://graphite.readthedocs.io/en/latest/config-carbon.html#storage-schemas-conf) and [storage-aggregation.conf from Graphite](https://graphite.readthedocs.io/en/latest/config-carbon.html#storage-aggregation-conf).

(Note that unlike Graphite, you may change these files as necessary to describe your data and desired rollups.
After a carbon-relay-ng restart they will take effect immediately, even on historical data without having to run any data conversion.)


**Note: For more details on GrafanaCloud, head over to https://grafana.com/cloud/metrics**
@@ -23,7 +26,7 @@ The config there is the best starting point.
note:
* it requires a grafana.com api key and a url for ingestion, which will be shown on the instance details page in your Grafana Cloud portal.
* api key should have editor or admin role.
* it needs to read your graphite storage-schemas.conf and storage-aggregation.conf as described above.
* it needs to read your graphite storage-schemas.conf (and optionally, storage-aggregation.conf) as described above.
* any metric messages that don't validate are filtered out. see the admin ui to troubleshoot if needed.
* by specifying a prefix, sub or regex you can only send a subset of your metrics to Grafana Cloud Graphite.

2 changes: 1 addition & 1 deletion docs/tcp-admin-interface.md
@@ -75,7 +75,7 @@ commands:
spoolsleep=<int> sleep this many microseconds(!) in between ingests from bulkdata/redo buffers into spool. default 500
unspoolsleep=<int> sleep this many microseconds(!) in between reads from the spool, when replaying spooled data. default 10

addRoute grafanaNet key [prefix/notPrefix/sub/notSub/regex/notRegex] addr apiKey schemasFile aggregationFile [spool=true/false sslverify=true/false blocking=true/false concurrency=int bufSize=int flushMaxNum=int flushMaxWait=int timeout=int orgId=int errBackoffMin=int errBackoffFactor=float]")
addRoute grafanaNet key [prefix/notPrefix/sub/notSub/regex/notRegex] addr apiKey schemasFile [aggregationFile=string spool=true/false sslverify=true/false blocking=true/false concurrency=int bufSize=int flushMaxNum=int flushMaxWait=int timeout=int orgId=int errBackoffMin=int errBackoffFactor=float]")

addDest <routeKey> <dest> not implemented yet

4 changes: 2 additions & 2 deletions examples/k8s/configmap.yaml
@@ -64,8 +64,8 @@ data:
storage-aggregation.conf: |
[default]
pattern = .*
xFilesFactor = 0.5
aggregationMethod = average
xFilesFactor = 0.1
aggregationMethod = avg
kind: ConfigMap
metadata:
name: carbon-relay-ng-config
34 changes: 6 additions & 28 deletions examples/storage-aggregation.conf
@@ -1,9 +1,12 @@
# You only need this file if you want to use the grafanaNet route
# (https://github.com/grafana/carbon-relay-ng/blob/master/docs/grafana-net.md)
# and you want to configure how points are aggregated when rendering data.
# In all other cases you can ignore this file.
# This file describes what aggregation (rollup) methods to use for long term storage.
# This is an example file. To find your actual file, check your existing Graphite installation
# if you have one.
# Note that this file is optional - if you don't supply one to the grafanaNet route, the default
# Grafana Cloud aggregations will be used when rendering data.
# Format is documented at https://graphite.readthedocs.io/en/latest/config-carbon.html#storage-aggregation-conf
# Entries are scanned in order, and first match wins.
#
@@ -17,32 +20,7 @@
# xFilesFactor: Ratio of valid data points required for aggregation to the next retention to occur
# aggregationMethod: function to apply to data points for aggregation
#
[min]
pattern = \.lower$
xFilesFactor = 0.1
aggregationMethod = min

[max]
pattern = \.upper(_\d+)?$
xFilesFactor = 0.1
aggregationMethod = max

[sum]
pattern = \.sum$
xFilesFactor = 0
aggregationMethod = sum

[count]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum

[count_legacy]
pattern = ^stats_counts.*
xFilesFactor = 0
aggregationMethod = sum

[default_average]
[default]
aggregationMethod = avg
pattern = .*
xFilesFactor = 0.3
aggregationMethod = average
xFilesFactor = 0.1
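The "entries are scanned in order, and first match wins" rule from the comments above can be illustrated with a short sketch. This is illustrative only, not carbon-relay-ng's actual parser; the rule list mimics entries that used to be in this example file:

```python
import re

# Each entry: (section name, pattern, aggregation method), in file order.
RULES = [
    ("min", r"\.lower$", "min"),
    ("count", r"\.count$", "sum"),
    ("default", r".*", "avg"),  # catch-all, like the [default] section above
]

def aggregation_method(metric):
    # Scan entries in order; the first matching pattern wins.
    for name, pattern, method in RULES:
        if re.search(pattern, metric):
            return name, method

print(aggregation_method("servers.web1.cpu.lower"))  # -> ('min', 'min')
print(aggregation_method("some.other.metric"))       # -> ('default', 'avg')
```

Because of first-match-wins, a metric like `requests.count` never reaches the catch-all: the `count` entry claims it first.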
30 changes: 21 additions & 9 deletions imperatives/imperatives.go
@@ -10,6 +10,7 @@ import (
"github.com/grafana/carbon-relay-ng/aggregator"
"github.com/grafana/carbon-relay-ng/destination"
"github.com/grafana/carbon-relay-ng/matcher"
conf "github.com/grafana/carbon-relay-ng/pkg/mt-conf"
"github.com/grafana/carbon-relay-ng/rewriter"
"github.com/grafana/carbon-relay-ng/route"
"github.com/grafana/carbon-relay-ng/table"
@@ -92,6 +93,7 @@ const (
optPubSubFormat
optPubSubCodec
optPubSubFlushMaxSize
optAggregationFile
)

// we should make sure we apply changes atomatically. e.g. when changing dest between address A and pickle=false and B with pickle=true,
@@ -159,6 +161,7 @@ var tokens = []toki.Def{
{Token: optPubSubFormat, Pattern: "format="},
{Token: optPubSubCodec, Pattern: "codec="},
{Token: optPubSubFlushMaxSize, Pattern: "flushMaxSize="},
{Token: optAggregationFile, Pattern: "aggregationFile="},
{Token: str, Pattern: "\".*\""},
{Token: sep, Pattern: "##"},
{Token: avgFn, Pattern: "avg "},
@@ -179,7 +182,7 @@ var tokens = []toki.Def{
var errFmtAddBlock = errors.New("addBlock <prefix|sub|regex> <pattern>")
var errFmtAddAgg = errors.New("addAgg <avg|count|delta|derive|last|max|min|stdev|sum> [prefix/sub/regex=,..] <fmt> <interval> <wait> [cache=true/false] [dropRaw=true/false]")
var errFmtAddRoute = errors.New("addRoute <type> <key> [prefix/sub/regex=,..] <dest> [<dest>[...]] where <dest> is <addr> [prefix/sub,regex,flush,reconn,pickle,spool=...]") // note flush and reconn are ints, pickle and spool are true/false. other options are strings
var errFmtAddRouteGrafanaNet = errors.New("addRoute grafanaNet key [prefix/notPrefix/sub/notSub/regex/notRegex] addr apiKey schemasFile aggregationFile [spool=true/false sslverify=true/false blocking=true/false concurrency=int bufSize=int flushMaxNum=int flushMaxWait=int timeout=int orgId=int errBackoffMin=int errBackoffFactor=float]")
var errFmtAddRouteGrafanaNet = errors.New("addRoute grafanaNet key [prefix/notPrefix/sub/notSub/regex/notRegex] addr apiKey schemasFile [aggregationFile=string spool=true/false sslverify=true/false blocking=true/false concurrency=int bufSize=int flushMaxNum=int flushMaxWait=int timeout=int orgId=int errBackoffMin=int errBackoffFactor=float]")
var errFmtAddRouteKafkaMdm = errors.New("addRoute kafkaMdm key [prefix/sub/regex=,...] broker topic codec schemasFile partitionBy orgId [blocking=true/false bufSize=int flushMaxNum=int flushMaxWait=int timeout=int tlsEnabled=bool tlsSkipVerify=bool tlsClientKey='<key>' tlsClientCert='<file>' saslEnabled=bool saslMechanism='mechanism' saslUsername='username' saslPassword='password']")
var errFmtAddRoutePubSub = errors.New("addRoute pubsub key [prefix/sub/regex=,...] project topic [codec=gzip/none format=plain/pickle blocking=true/false bufSize=int flushMaxSize=int flushMaxWait=int]")
var errFmtAddDest = errors.New("addDest <routeKey> <dest>") // not implemented yet
@@ -507,20 +510,29 @@ func readAddRouteGrafanaNet(s *toki.Scanner, table table.Interface) error {
}
schemasFile := string(t.Value)

t = s.Next()
if t.Token != word {
return errFmtAddRouteGrafanaNet
}
aggregationFile := string(t.Value)
t = s.Next()

cfg, err := route.NewGrafanaNetConfig(addr, apiKey, schemasFile, aggregationFile)
// The aggregationFile argument is blank - it will be set later if it's found
// in the list of optional arguments
cfg, err := route.NewGrafanaNetConfig(addr, apiKey, schemasFile, "")
if err != nil {
return errFmtAddRouteGrafanaNet
}

t = s.Next()

for ; t.Token != toki.EOF; t = s.Next() {
switch t.Token {
case optAggregationFile:
t = s.Next()
if t.Token == word {
aggregationFile := string(t.Value)
_, err := conf.ReadAggregations(aggregationFile)
if err != nil {
return err
}
cfg.AggregationFile = aggregationFile
} else {
return errFmtAddRouteGrafanaNet
}
case optBlocking:
t = s.Next()
if t.Token == optTrue || t.Token == optFalse {
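The control flow of the Go change above — start with a blank aggregation file, then override it only if the optional `aggregationFile=` token appears — can be sketched outside of Go. This is a hypothetical simplification, not the actual toki-based scanner:

```python
def parse_grafananet(tokens):
    """Sketch of the new argument handling: schemasFile stays positional,
    aggregationFile becomes an optional key=value setting."""
    if len(tokens) < 4:
        raise ValueError("addRoute grafanaNet key addr apiKey schemasFile [options]")
    route = {
        "key": tokens[0],
        "addr": tokens[1],
        "apiKey": tokens[2],
        "schemasFile": tokens[3],
        "aggregationFile": "",  # blank = use Grafana Cloud's stored/default aggregations
    }
    for opt in tokens[4:]:
        name, _, value = opt.partition("=")
        if name == "aggregationFile":
            # the real code also validates the file here (conf.ReadAggregations)
            route["aggregationFile"] = value
        # ... other key=value options (spool=, blocking=, ...) elided
    return route
```

Note the real implementation additionally rejects an `aggregationFile=` value that does not exist or does not parse, which is exactly what the new test cases below exercise.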
27 changes: 22 additions & 5 deletions imperatives/imperatives_test.go
@@ -93,19 +93,19 @@ func TestApplyAddRouteGrafanaNet(t *testing.T) {
var testCases []testCase

// trivial case. mostly defaults, so let's rely on the helper that generates the (mostly default) config
cfg, err := route.NewGrafanaNetConfig("http://foo/metrics", "apiKey", schemasFile.Name(), aggregationFile.Name())
cfg, err := route.NewGrafanaNetConfig("http://foo/metrics", "apiKey", schemasFile.Name(), "")
if err != nil {
t.Fatal(err) // should never happen
}
testCases = append(testCases, testCase{
cmd: "addRoute grafanaNet key http://foo/metrics apiKey " + schemasFile.Name() + " " + aggregationFile.Name(),
cmd: "addRoute grafanaNet key http://foo/metrics apiKey " + schemasFile.Name(),
expCfg: cfg,
expErr: false,
})

// advanced case full of all possible settings.
testCases = append(testCases, testCase{
cmd: "addRoute grafanaNet key prefix=prefix notPrefix=notPrefix sub=sub notSub=notSub regex=regex notRegex=notRegex http://foo.bar/metrics apiKey " + schemasFile.Name() + " " + aggregationFile.Name() + " spool=true sslverify=false blocking=true concurrency=42 bufSize=123 flushMaxNum=456 flushMaxWait=5 timeout=123 orgId=10010 errBackoffMin=14 errBackoffFactor=1.8",
cmd: "addRoute grafanaNet key prefix=prefix notPrefix=notPrefix sub=sub notSub=notSub regex=regex notRegex=notRegex http://foo.bar/metrics apiKey " + schemasFile.Name() + " aggregationFile=" + aggregationFile.Name() + " spool=true sslverify=false blocking=true concurrency=42 bufSize=123 flushMaxNum=456 flushMaxWait=5 timeout=123 orgId=10010 errBackoffMin=14 errBackoffFactor=1.8",
expCfg: route.GrafanaNetConfig{
Addr: "http://foo.bar/metrics",
ApiKey: "apiKey",
@@ -136,14 +136,31 @@ func TestApplyAddRouteGrafanaNet(t *testing.T) {
expErr: false,
})

otherFile := test.TempFdOrFatal("carbon-relay-ng-TestNewGrafanaNetConfig-otherFile", "this is not an aggregation file", t)
defer os.Remove(otherFile.Name())

for _, aggFile := range []string{
"some-path-that-definitely-will-not-exist-for-carbon-relay-ng",
otherFile.Name(),
} {
testCases = append(testCases, testCase{
cmd: "addRoute grafanaNet key http://foo/metrics apiKey " + schemasFile.Name() + " aggregationFile=" + aggFile,
expErr: true,
})
}

for _, testCase := range testCases {
m := &table.MockTable{}
err := Apply(m, testCase.cmd)
if !testCase.expErr && err != nil {
t.Fatalf("testcase with cmd %q expected no error but got error %s", testCase.cmd, err.Error())
}
if testCase.expErr && err == nil {
t.Fatalf("testcase with cmd %q expected error but got no error", testCase.cmd)
if testCase.expErr {
if err == nil {
t.Fatalf("testcase with cmd %q expected error but got no error", testCase.cmd)
}
// don't check other conditions if we are in an error state
continue
}
if len(m.Routes) != 1 {
t.Fatalf("testcase with cmd %q resulted in %d routes, not 1", testCase.cmd, len(m.Routes))
6 changes: 5 additions & 1 deletion jsonnet/lib/carbon-relay-ng/crng.libsonnet
@@ -12,6 +12,10 @@ local k = import 'ksonnet-util/kausal.libsonnet';
crng_replicas: 1,
crng_config: importstr 'files/carbon-relay-ng.ini',
storage_schemas: importstr 'files/storage-schemas.conf',
// You can set storage_aggregation to null if you don't have an
// aggregation file and want to use Grafana Cloud's default aggregations
// instead. (And in that case, don't forget to remove the aggregationFile
// setting in carbon-relay-ng.ini.)
storage_aggregation: importstr 'files/storage-aggregation.conf',
},

@@ -23,7 +27,7 @@ local k = import 'ksonnet-util/kausal.libsonnet';
{
'carbon-relay-ng.ini': $._config.crng_config,
'storage-schemas.conf': $._config.storage_schemas,
'storage-aggregation.conf': $._config.storage_aggregation,
[if $._config.storage_aggregation != null then 'storage-aggregation.conf']: $._config.storage_aggregation,
}
),

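The jsonnet change above uses a computed field name, `[if cond then key]: value`, which omits the key from the object entirely when the condition is false. A rough Python analogue of the resulting ConfigMap data (file contents are placeholders):

```python
storage_aggregation = None  # set to the file's contents to ship storage-aggregation.conf

config_files = {
    "carbon-relay-ng.ini": "<ini contents>",
    "storage-schemas.conf": "<schemas contents>",
    # include the key only when an aggregation file was provided
    **({"storage-aggregation.conf": storage_aggregation}
       if storage_aggregation is not None else {}),
}
```

With `storage_aggregation = None`, the ConfigMap simply carries two files instead of three, matching the "optionally, storage-aggregation.conf" wording in the k8s docs.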
2 changes: 2 additions & 0 deletions jsonnet/lib/carbon-relay-ng/files/carbon-relay-ng.ini
@@ -43,6 +43,8 @@ type = 'grafanaNet'
addr = "${GRAFANA_NET_ADDR}"
apikey = "${GRAFANA_NET_USER_ID}:${GRAFANA_NET_API_KEY}"
schemasFile = '/conf/storage-schemas.conf'
# aggregationFile is optional. If you remove this setting, Grafana Cloud's
# default aggregations will be used when rendering data.
aggregationFile = '/conf/storage-aggregation.conf'

## Instrumentation ##
34 changes: 6 additions & 28 deletions jsonnet/lib/carbon-relay-ng/files/storage-aggregation.conf
@@ -1,9 +1,12 @@
# You only need this file if you want to use the grafanaNet route
# (https://github.com/grafana/carbon-relay-ng/blob/master/docs/grafana-net.md)
# and you want to configure how points are aggregated when rendering data.
# In all other cases you can ignore this file.
# This file describes what aggregation (rollup) methods to use for long term storage.
# This is an example file. To find your actual file, check your existing Graphite installation
# if you have one.
# Note that this file is optional - if you don't supply one to the grafanaNet route, the default
# Grafana Cloud aggregations will be used when rendering data.
# Format is documented at https://graphite.readthedocs.io/en/latest/config-carbon.html#storage-aggregation-conf
# Entries are scanned in order, and first match wins.
#
@@ -17,32 +20,7 @@
# xFilesFactor: Ratio of valid data points required for aggregation to the next retention to occur
# aggregationMethod: function to apply to data points for aggregation
#
[min]
pattern = \.lower$
xFilesFactor = 0.1
aggregationMethod = min

[max]
pattern = \.upper(_\d+)?$
xFilesFactor = 0.1
aggregationMethod = max

[sum]
pattern = \.sum$
xFilesFactor = 0
aggregationMethod = sum

[count]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum

[count_legacy]
pattern = ^stats_counts.*
xFilesFactor = 0
aggregationMethod = sum

[default_average]
[default]
aggregationMethod = avg
pattern = .*
xFilesFactor = 0.3
aggregationMethod = average
xFilesFactor = 0.1
