Remove feature flags and default to on #2004

Merged · 12 commits · Jan 19, 2023
Changes from 9 commits
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -129,6 +129,7 @@ Old config will still work but will be removed in a future release. [#1735](http
encoding: // renamed to v2_encoding
row_group_size_bytes: // renamed to parquet_row_group_size_bytes
```
* [CHANGE] **BREAKING CHANGE** Remove `search_enabled` and `metrics_generator_enabled`. Both default to true. [#2004](https://github.com/grafana/tempo/pull/2004) (@joe-elliott)
* [FEATURE] Add capability to configure the used S3 Storage Class [#1697](https://github.com/grafana/tempo/pull/1714) (@amitsetty)
* [ENHANCEMENT] cache: expose username and sentinel_username redis configuration options for ACL-based Redis Auth support [#1708](https://github.com/grafana/tempo/pull/1708) (@jsievenpiper)
* [ENHANCEMENT] metrics-generator: expose span size as a metric [#1662](https://github.com/grafana/tempo/pull/1662) (@ie-pham)
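For readers picking up the BREAKING CHANGE entry above, here is a minimal before/after sketch of the configuration delta, assuming a plain single-binary config; the values are illustrative and only the two removed keys come from the entry itself:

```yaml
# Before this change: both features had to be switched on explicitly.
#   search_enabled: true
#   metrics_generator_enabled: true
#
# After this change: the two keys are removed and both features default to on,
# so an existing config simply drops those lines.
target: all
multitenancy_enabled: false
```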
18 changes: 3 additions & 15 deletions cmd/tempo/app/app.go
@@ -51,10 +51,8 @@ var (
nil,
)

statFeatureEnabledMetricsGenerator = usagestats.NewInt("feature_enabled_metrics_generator")
statFeatureEnabledAuth = usagestats.NewInt("feature_enabled_auth_stats")
statFeatureEnabledMultitenancy = usagestats.NewInt("feature_enabled_multitenancy")
statFeatureEnabledSearch = usagestats.NewInt("feature_enabled_search")
statFeatureEnabledAuth = usagestats.NewInt("feature_enabled_auth_stats")
statFeatureEnabledMultitenancy = usagestats.NewInt("feature_enabled_multitenancy")
)

// App is the root datastructure.
@@ -96,21 +94,11 @@ func New(cfg Config) (*App, error) {
statFeatureEnabledAuth.Set(1)
}

statFeatureEnabledMetricsGenerator.Set(0)
if cfg.MetricsGeneratorEnabled {
statFeatureEnabledMetricsGenerator.Set(1)
}

statFeatureEnabledMultitenancy.Set(0)
if cfg.SearchEnabled {
if cfg.MultitenancyEnabled {
statFeatureEnabledMultitenancy.Set(1)
}

statFeatureEnabledSearch.Set(0)
if cfg.SearchEnabled {
statFeatureEnabledSearch.Set(1)
}

app.setupAuthMiddleware()

if err := app.setupModuleManager(); err != nil {
31 changes: 5 additions & 26 deletions cmd/tempo/app/config.go
@@ -26,13 +26,11 @@ import (

// Config is the root config for App.
type Config struct {
Target string `yaml:"target,omitempty"`
AuthEnabled bool `yaml:"auth_enabled,omitempty"`
MultitenancyEnabled bool `yaml:"multitenancy_enabled,omitempty"`
SearchEnabled bool `yaml:"search_enabled,omitempty"`
MetricsGeneratorEnabled bool `yaml:"metrics_generator_enabled"`
HTTPAPIPrefix string `yaml:"http_api_prefix"`
UseOTelTracer bool `yaml:"use_otel_tracer,omitempty"`
Target string `yaml:"target,omitempty"`
AuthEnabled bool `yaml:"auth_enabled,omitempty"`
MultitenancyEnabled bool `yaml:"multitenancy_enabled,omitempty"`
HTTPAPIPrefix string `yaml:"http_api_prefix"`
UseOTelTracer bool `yaml:"use_otel_tracer,omitempty"`

Server server.Config `yaml:"server,omitempty"`
Distributor distributor.Config `yaml:"distributor,omitempty"`
@@ -63,7 +61,6 @@ func (c *Config) RegisterFlagsAndApplyDefaults(prefix string, f *flag.FlagSet) {
f.StringVar(&c.Target, "target", SingleBinary, "target module")
f.BoolVar(&c.AuthEnabled, "auth.enabled", false, "Set to true to enable auth (deprecated: use multitenancy.enabled)")
f.BoolVar(&c.MultitenancyEnabled, "multitenancy.enabled", false, "Set to true to enable multitenancy.")
f.BoolVar(&c.SearchEnabled, "search.enabled", false, "Set to true to enable search (unstable).")
f.StringVar(&c.HTTPAPIPrefix, "http-api-prefix", "", "String prefix for all http api endpoints.")
f.BoolVar(&c.UseOTelTracer, "use-otel-tracer", false, "Set to true to replace the OpenTracing tracer with the OpenTelemetry tracer")

@@ -123,10 +120,6 @@ func (c *Config) MultitenancyIsEnabled() bool {
// CheckConfig checks if config values are suspect and returns a bundled list of warnings and explanation.
func (c *Config) CheckConfig() []ConfigWarning {
var warnings []ConfigWarning
if c.Target == MetricsGenerator && !c.MetricsGeneratorEnabled {
warnings = append(warnings, warnMetricsGenerator)
}

if c.Ingester.CompleteBlockTimeout < c.StorageConfig.Trace.BlocklistPoll {
warnings = append(warnings, warnCompleteBlockTimeout)
}
@@ -187,22 +180,12 @@ func (c *Config) Collect(ch chan<- prometheus.Metric) {

features := map[string]int{
"search_external_endpoints": 0,
"search": 0,
"metrics_generator": 0,
}

if len(c.Querier.Search.ExternalEndpoints) > 0 {
features["search_external_endpoints"] = 1
}

if c.SearchEnabled {
features["search"] = 1
}

if c.MetricsGeneratorEnabled {
features["metrics_generator"] = 1
}

for label, value := range features {
ch <- prometheus.MustNewConstMetric(metricConfigFeatDesc, prometheus.GaugeValue, float64(value), label)
}
@@ -215,10 +198,6 @@ type ConfigWarning struct {
}

var (
warnMetricsGenerator = ConfigWarning{
Message: "target == metrics-generator but metrics_generator_enabled != true",
Explain: "The metrics-generator will only receive data if metrics_generator_enabled is set to true globally",
}
warnCompleteBlockTimeout = ConfigWarning{
Message: "ingester.complete_block_timeout < storage.trace.blocklist_poll",
Explain: "You may receive 404s between the time the ingesters have flushed a trace and the querier is aware of the new block",
1 change: 0 additions & 1 deletion cmd/tempo/app/config_test.go
@@ -42,7 +42,6 @@ func TestConfig_CheckConfig(t *testing.T) {
},
},
expect: []ConfigWarning{
warnMetricsGenerator,
warnCompleteBlockTimeout,
warnBlockRetention,
warnRetentionConcurrency,
55 changes: 26 additions & 29 deletions cmd/tempo/app/modules.go
@@ -132,7 +132,7 @@ func (t *App) initOverrides() (services.Service, error) {

func (t *App) initDistributor() (services.Service, error) {
// todo: make ingester client a module instead of passing the config everywhere
distributor, err := distributor.New(t.cfg.Distributor, t.cfg.IngesterClient, t.ring, t.cfg.GeneratorClient, t.generatorRing, t.overrides, t.TracesConsumerMiddleware, log.Logger, t.cfg.Server.LogLevel, t.cfg.SearchEnabled, t.cfg.MetricsGeneratorEnabled, prometheus.DefaultRegisterer)
distributor, err := distributor.New(t.cfg.Distributor, t.cfg.IngesterClient, t.ring, t.cfg.GeneratorClient, t.generatorRing, t.overrides, t.TracesConsumerMiddleware, log.Logger, t.cfg.Server.LogLevel, prometheus.DefaultRegisterer)
if err != nil {
return nil, fmt.Errorf("failed to create distributor %w", err)
}
@@ -171,8 +171,15 @@ func (t *App) initGenerator() (services.Service, error) {
}
t.generator = generator

tempopb.RegisterMetricsGeneratorServer(t.Server.GRPC, t.generator)
return t.generator, nil
var service services.Service
if generator != nil {
tempopb.RegisterMetricsGeneratorServer(t.Server.GRPC, t.generator)
service = t.generator
} else {
service = services.NewIdleService(nil, nil)
}

return service, nil
}

func (t *App) initQuerier() (services.Service, error) {
@@ -207,19 +214,17 @@ func (t *App) initQuerier() (services.Service, error) {
tracesHandler := middleware.Wrap(http.HandlerFunc(t.querier.TraceByIDHandler))
t.Server.HTTP.Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathTraces)), tracesHandler)

if t.cfg.SearchEnabled {
searchHandler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.querier.SearchHandler))
t.Server.HTTP.Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathSearch)), searchHandler)
searchHandler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.querier.SearchHandler))
t.Server.HTTP.Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathSearch)), searchHandler)

searchTagsHandler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.querier.SearchTagsHandler))
t.Server.HTTP.Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathSearchTags)), searchTagsHandler)
searchTagsHandler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.querier.SearchTagsHandler))
t.Server.HTTP.Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathSearchTags)), searchTagsHandler)

searchTagValuesHandler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.querier.SearchTagValuesHandler))
t.Server.HTTP.Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathSearchTagValues)), searchTagValuesHandler)
searchTagValuesHandler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.querier.SearchTagValuesHandler))
t.Server.HTTP.Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathSearchTagValues)), searchTagValuesHandler)

searchTagValuesV2Handler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.querier.SearchTagValuesV2Handler))
t.Server.HTTP.Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathSearchTagValuesV2)), searchTagValuesV2Handler)
}
searchTagValuesV2Handler := t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.querier.SearchTagValuesV2Handler))
t.Server.HTTP.Handle(path.Join(api.PathPrefixQuerier, addHTTPAPIPrefix(&t.cfg, api.PathSearchTagValuesV2)), searchTagValuesV2Handler)

return t.querier, t.querier.CreateAndRegisterWorker(t.Server.HTTPServer.Handler)
}
@@ -255,14 +260,13 @@ func (t *App) initQueryFrontend() (services.Service, error) {
t.Server.HTTP.Handle(addHTTPAPIPrefix(&t.cfg, api.PathTraces), traceByIDHandler)

// http search endpoints
if t.cfg.SearchEnabled {
t.Server.HTTP.Handle(addHTTPAPIPrefix(&t.cfg, api.PathSearch), searchHandler)
t.Server.HTTP.Handle(addHTTPAPIPrefix(&t.cfg, api.PathSearchTags), searchHandler)
t.Server.HTTP.Handle(addHTTPAPIPrefix(&t.cfg, api.PathSearchTagValues), searchHandler)
t.Server.HTTP.Handle(addHTTPAPIPrefix(&t.cfg, api.PathSearchTagValuesV2), searchHandler)
t.Server.HTTP.Handle(addHTTPAPIPrefix(&t.cfg, api.PathSearch), searchHandler)
t.Server.HTTP.Handle(addHTTPAPIPrefix(&t.cfg, api.PathSearchTags), searchHandler)
t.Server.HTTP.Handle(addHTTPAPIPrefix(&t.cfg, api.PathSearchTagValues), searchHandler)
t.Server.HTTP.Handle(addHTTPAPIPrefix(&t.cfg, api.PathSearchTagValuesV2), searchHandler)

t.store.EnablePolling(nil) // the query frontend does not need to have knowledge of the backend unless it is building jobs for backend search
}
// the query frontend needs to have knowledge of the blocks so it can shard search jobs
t.store.EnablePolling(nil)

// http query echo endpoint
t.Server.HTTP.Handle(addHTTPAPIPrefix(&t.cfg, api.PathEcho), echoHandler())
@@ -401,23 +405,16 @@ func (t *App) setupModuleManager() error {
QueryFrontend: {Store, Server, Overrides, UsageReport},
Ring: {Server, MemberlistKV},
MetricsGeneratorRing: {Server, MemberlistKV},
Distributor: {Ring, Server, Overrides, UsageReport},
Distributor: {Ring, Server, Overrides, UsageReport, MetricsGeneratorRing},
Ingester: {Store, Server, Overrides, MemberlistKV, UsageReport},
MetricsGenerator: {Server, Overrides, MemberlistKV, UsageReport},
Querier: {Store, Ring, Overrides, UsageReport},
Compactor: {Store, Server, Overrides, MemberlistKV, UsageReport},
SingleBinary: {Compactor, QueryFrontend, Querier, Ingester, Distributor},
SingleBinary: {Compactor, QueryFrontend, Querier, Ingester, Distributor, MetricsGenerator},
ScalableSingleBinary: {SingleBinary},
UsageReport: {MemberlistKV},
}

if t.cfg.MetricsGeneratorEnabled {
// If metrics-generator is enabled, the distributor needs the metrics-generator ring
deps[Distributor] = append(deps[Distributor], MetricsGeneratorRing)
// Add the metrics generator as dependency for when target is {,scalable-}single-binary
deps[SingleBinary] = append(deps[SingleBinary], MetricsGenerator)
}

for mod, targets := range deps {
if err := mm.AddDependency(mod, targets...); err != nil {
return err
17 changes: 0 additions & 17 deletions docs/sources/configuration/_index.md
@@ -230,13 +230,6 @@ For more information on configuration options, see [here](https://github.com/gra

The metrics-generator processes spans and writes metrics using the Prometheus remote write protocol.

The metrics-generator is an optional component, it can be enabled by setting the following top-level setting.
In microservices mode, it must be set for the distributors and the metrics-generators.

```yaml
metrics_generator_enabled: true
```

Metrics-generator processors are disabled by default. To enable them for a specific tenant, set `metrics_generator_processors` in the [overrides](#overrides) section.

```yaml
@@ -1265,16 +1258,6 @@ overrides:
- ingestion_rate_limit_bytes: 15000000
```

## Search

Tempo search can be enabled by the following top-level setting. In microservices mode, it must be set for the distributors and queriers.

```yaml
search_enabled: true
```

Additional search-related settings are available in the [distributor](#distributor) and [ingester](#ingester) sections.

## Usage-report

By default, Tempo will report anonymous usage data about the shape of a deployment to Grafana Labs.
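Tying back to the metrics-generator processors note earlier in this file's diff (its example block is cut off by the hunk), a minimal sketch of the override; only the `metrics_generator_processors` key is taken from the docs text, the processor names are assumptions:

```yaml
overrides:
  # Enable metrics-generator processors for the matching tenants.
  # The two names below are illustrative assumptions, not taken from this diff.
  metrics_generator_processors:
    - service-graphs
    - span-metrics
```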
1 change: 0 additions & 1 deletion docs/sources/configuration/manifest.md
@@ -20,7 +20,6 @@ go run ./cmd/tempo --storage.trace.backend=local --storage.trace.local.path=/tmp

```yaml
target: all
metrics_generator_enabled: false
http_api_prefix: ""
server:
http_listen_network: tcp
10 changes: 2 additions & 8 deletions docs/sources/operations/backend_search.md
@@ -3,6 +3,8 @@ title: Backend search
weight: 9
---

jpe cleanup

# Backend search

Backend search is not yet mature. It can therefore be operationally more complex.
@@ -17,13 +19,6 @@ Queriers and query frontends have additional configuration related
to search of the backend datastore.
Some defaults are currently tuned for a search by trace ID.

### All components

```
# Enable search functionality
search_enabled: true
```

### Querier

Without serverless technologies:
@@ -37,7 +32,6 @@ querier:
With serverless technologies:

```
search_enabled: true
querier:
# The querier is only a proxy to the serverless endpoint.
# Increase this greatly to permit needed throughput.
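The serverless example above is truncated by the diff. As a hedged sketch, a querier block that proxies search to external endpoints could look like the following, assuming the YAML key mirrors the `Querier.Search.ExternalEndpoints` field referenced in `config.go` earlier in this PR; the endpoint URL is a placeholder:

```yaml
querier:
  search:
    # The querier only proxies search requests to the serverless endpoint,
    # so throughput-related settings can be raised well above the defaults.
    external_endpoints:
      - https://example-region-project.cloudfunctions.net/tempo-serverless
```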