Deprecate -ingester.ring.join-after #1965

Merged
2 commits, merged on Jun 1, 2022
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -1,6 +1,7 @@
# Changelog

## Grafana Mimir - main / unreleased

### Grafana Mimir

* [CHANGE] Increased default configuration for `-server.grpc-max-recv-msg-size-bytes` and `-server.grpc-max-send-msg-size-bytes` from 4MB to 100MB. #1883
@@ -12,6 +13,7 @@
- `-querier.query-ingesters-within`
- `-querier.query-store-after`
* [CHANGE] Config flag category overrides can be set dynamically at runtime. #1934
* [CHANGE] Ingester: deprecated `-ingester.ring.join-after`. Mimir now behaves as this setting is always set to 0s. This configuration option will be removed in Mimir 2.4.0. #1965
* [ENHANCEMENT] Distributor: Added limit to prevent tenants from sending excessive number of requests: #1843
* The following CLI flags (and their respective YAML config options) have been added:
* `-distributor.request-rate-limit`
11 changes: 0 additions & 11 deletions cmd/mimir/config-descriptor.json
@@ -2191,17 +2191,6 @@
"fieldType": "duration",
"fieldCategory": "advanced"
},
{
"kind": "field",
"name": "join_after",
"required": false,
"desc": "Period to wait for a claim from another member; will join automatically after this.",
"fieldValue": null,
"fieldDefaultValue": 0,
"fieldFlag": "ingester.ring.join-after",
"fieldType": "duration",
"fieldCategory": "advanced"
},
{
"kind": "field",
"name": "min_ready_duration",
4 changes: 0 additions & 4 deletions cmd/mimir/help-all.txt.tmpl
@@ -639,8 +639,6 @@ Usage of ./cmd/mimir/mimir:
How frequently to clean up clients for ingesters that have gone away. (default 15s)
-distributor.drop-label value
This flag can be used to specify label names that to drop during sample ingestion within the distributor and can be repeated in order to drop multiple labels.
-distributor.extend-writes value
Deprecated: this setting was used to try writing to an additional ingester in the presence of an ingester not in the ACTIVE state. Mimir now behaves as this setting is always disabled.
-distributor.forwarding.enabled
[experimental] Enables the feature to forward certain metrics in remote_write requests, depending on defined rules.
-distributor.forwarding.request-timeout duration
@@ -913,8 +911,6 @@ Usage of ./cmd/mimir/mimir:
List of network interface names to look up when finding the instance IP address. (default [<private network interfaces>])
-ingester.ring.instance-port int
Port to advertise in the ring (defaults to -server.grpc-listen-port).
-ingester.ring.join-after duration
Period to wait for a claim from another member; will join automatically after this.
-ingester.ring.min-ready-duration duration
Minimum duration to wait after the internal readiness checks have passed but before succeeding the readiness endpoint. This is used to slowdown deployment controllers (eg. Kubernetes) after an instance is ready and before they proceed with a rolling update, to give the rest of the cluster instances enough time to receive ring updates. (default 15s)
-ingester.ring.multi.mirror-enabled
2 changes: 0 additions & 2 deletions cmd/mimir/help.txt.tmpl
@@ -225,8 +225,6 @@ Usage of ./cmd/mimir/mimir:
Expands ${var} or $var in config according to the values of the environment variables.
-config.file value
Configuration file to load.
-distributor.extend-writes value
Deprecated: this setting was used to try writing to an additional ingester in the presence of an ingester not in the ACTIVE state. Mimir now behaves as this setting is always disabled.
-distributor.ha-tracker.cluster string
Prometheus label to look for in samples to identify a Prometheus HA cluster. (default "cluster")
-distributor.ha-tracker.consul.hostname string
@@ -14,7 +14,6 @@ ingester_client:
ingester:
ring:
# We want to start immediately.
join_after: 0
final_sleep: 0s
num_tokens: 512
kvstore:
1 change: 0 additions & 1 deletion development/tsdb-blocks-storage-s3/config/mimir.yaml
@@ -20,7 +20,6 @@ ingester_client:
ingester:
ring:
# We want to start immediately.
join_after: 0
final_sleep: 0s
num_tokens: 512
kvstore:
@@ -14,7 +14,6 @@ ingester_client:
ingester:
ring:
# We want to start immediately.
join_after: 0
final_sleep: 0s
num_tokens: 512
kvstore:
@@ -28,10 +28,6 @@ ingester:
# Defaults to hostname, but we run both ingesters in this demonstration on the same machine.
instance_id: "Ingester 1"

# We don't want to join immediately, but wait a bit to see other ingesters and their tokens first.
# It can take a while to have the full picture when using gossip
join_after: 10s

# To avoid generating same tokens by multiple ingesters, they can "observe" the ring for a while,
# after putting their own tokens into it. This is only useful when using gossip, since multiple
# ingesters joining at the same time can have conflicting tokens if they don't see each other yet.
@@ -28,10 +28,6 @@ ingester:
# Defaults to hostname, but we run both ingesters in this demonstration on the same machine.
instance_id: "Ingester 2"

# We don't want to join immediately, but wait a bit to see other ingesters and their tokens first.
# It can take a while to have the full picture when using gossip
join_after: 10s

# To avoid generating same tokens by multiple ingesters, they can "observe" the ring for a while,
# after putting their own tokens into it. This is only useful when using gossip, since multiple
# ingesters joining at the same time can have conflicting tokens if they don't see each other yet.
@@ -38,7 +38,6 @@ ingester:
# address: 127.0.0.1

# We want to start immediately and flush on shutdown.
join_after: 0
min_ready_duration: 0s
final_sleep: 0s
num_tokens: 512
1 change: 0 additions & 1 deletion docs/configurations/single-process-config-blocks.yaml
@@ -29,7 +29,6 @@ ingester:
# address: 127.0.0.1

# We want to start immediately and flush on shutdown.
join_after: 0
min_ready_duration: 0s
final_sleep: 0s
num_tokens: 512
@@ -679,11 +679,6 @@ ring:
# CLI flag: -ingester.ring.observe-period
[observe_period: <duration> | default = 0s]

# (advanced) Period to wait for a claim from another member; will join
# automatically after this.
# CLI flag: -ingester.ring.join-after
[join_after: <duration> | default = 0s]

# (advanced) Minimum duration to wait after the internal readiness checks have
# passed but before succeeding the readiness endpoint. This is used to
# slowdown deployment controllers (eg. Kubernetes) after an instance is ready
7 changes: 5 additions & 2 deletions pkg/ingester/ingester_ring.go
@@ -44,13 +44,16 @@ type RingConfig struct {

// Config for the ingester lifecycle control
ObservePeriod time.Duration `yaml:"observe_period" category:"advanced"`
JoinAfter time.Duration `yaml:"join_after" category:"advanced"`
DeprecatedJoinAfter time.Duration `yaml:"join_after" category:"advanced" doc:"hidden"` // TODO Deprecated: remove in Mimir 2.4.0.
MinReadyDuration time.Duration `yaml:"min_ready_duration" category:"advanced"`
FinalSleep time.Duration `yaml:"final_sleep" category:"advanced"`
ReadinessCheckRingHealth bool `yaml:"readiness_check_ring_health" category:"advanced"`

// Injected internally
ListenPort int `yaml:"-"`

// Used only for testing.
JoinAfter time.Duration `yaml:"-"`
}

// RegisterFlags adds the flags required to config this to the given FlagSet
@@ -88,7 +91,7 @@ func (cfg *RingConfig) RegisterFlags(f *flag.FlagSet, logger log.Logger) {

/// Lifecycler.
f.DurationVar(&cfg.ObservePeriod, prefix+"observe-period", 0*time.Second, "Observe tokens after generating to resolve collisions. Useful when using gossiping ring.")
f.DurationVar(&cfg.JoinAfter, prefix+"join-after", 0*time.Second, "Period to wait for a claim from another member; will join automatically after this.")
flagext.DeprecatedFlag(f, prefix+"join-after", "Deprecated: this setting was used to set a period of time to wait before joining the hash ring. Mimir now behaves as this setting is always set to 0s.", logger)
f.DurationVar(&cfg.MinReadyDuration, prefix+"min-ready-duration", 15*time.Second, "Minimum duration to wait after the internal readiness checks have passed but before succeeding the readiness endpoint. This is used to slowdown deployment controllers (eg. Kubernetes) after an instance is ready and before they proceed with a rolling update, to give the rest of the cluster instances enough time to receive ring updates.")
f.DurationVar(&cfg.FinalSleep, prefix+"final-sleep", 0, "Duration to sleep for before exiting, to ensure metrics are scraped.")
f.BoolVar(&cfg.ReadinessCheckRingHealth, prefix+"readiness-check-ring-health", true, "When enabled the readiness probe succeeds only after all instances are ACTIVE and healthy in the ring, otherwise only the instance itself is checked. This option should be disabled if in your cluster multiple instances can be rolled out simultaneously, otherwise rolling updates may be slowed down.")
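The replacement call in this hunk registers the flag through dskit's `flagext.DeprecatedFlag`, which keeps `-ingester.ring.join-after` parseable but only logs a warning when it is set. A minimal stand-alone sketch of that pattern using only the standard library; the `deprecatedValue` type and the warning text are illustrative assumptions, not dskit's actual implementation:

```go
package main

import (
	"flag"
	"fmt"
	"log"
)

// deprecatedValue is an illustrative flag.Value: it accepts any input,
// logs a warning when set, and reports itself as "deprecated" so that
// help output can recognize and hide it.
type deprecatedValue struct {
	name string
}

func (d deprecatedValue) String() string { return "deprecated" }

func (d deprecatedValue) Set(string) error {
	log.Printf("flag -%s is deprecated and has no effect; it will be removed in a future release", d.name)
	return nil
}

func main() {
	fs := flag.NewFlagSet("mimir", flag.ContinueOnError)

	// Register the deprecated flag so existing command lines keep parsing.
	fs.Var(deprecatedValue{name: "ingester.ring.join-after"}, "ingester.ring.join-after",
		"Deprecated: Mimir now behaves as if this setting is always set to 0s.")

	// Passing the flag still parses successfully; only the warning is emitted.
	if err := fs.Parse([]string{"-ingester.ring.join-after=10s"}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("parsed without error")
}
```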
3 changes: 1 addition & 2 deletions pkg/ingester/ingester_ring_test.go
@@ -49,7 +49,6 @@ func TestRingConfig_CustomConfigToLifecyclerConfig(t *testing.T) {
cfg.InstanceZone = "zone-X"
cfg.UnregisterOnShutdown = true
cfg.ObservePeriod = 10 * time.Minute
cfg.JoinAfter = 5 * time.Minute
cfg.MinReadyDuration = 3 * time.Minute
cfg.FinalSleep = 2 * time.Minute
cfg.ReadinessCheckRingHealth = false
@@ -67,7 +66,7 @@
expected.HeartbeatPeriod = cfg.HeartbeatPeriod
expected.HeartbeatTimeout = cfg.HeartbeatTimeout
expected.ObservePeriod = cfg.ObservePeriod
expected.JoinAfter = cfg.JoinAfter
expected.JoinAfter = 0
expected.MinReadyDuration = cfg.MinReadyDuration
expected.InfNames = cfg.InstanceInterfaceNames
expected.FinalSleep = cfg.FinalSleep
21 changes: 0 additions & 21 deletions pkg/ingester/ingester_test.go
@@ -619,7 +619,6 @@ func TestIngester_Push(t *testing.T) {

// Create a mocked ingester
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.ActiveSeriesMetricsEnabled = !testData.disableActiveSeries
limits := defaultLimitsTestConfig()
limits.MaxGlobalExemplarsPerUser = testData.maxExemplars
@@ -720,7 +719,6 @@ func TestIngester_Push_ShouldCorrectlyTrackMetricsInMultiTenantScenario(t *testi

// Create a mocked ingester
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0

i, err := prepareIngesterWithBlocksStorage(t, cfg, registry)
require.NoError(t, err)
@@ -808,7 +806,6 @@ func TestIngester_Push_DecreaseInactiveSeries(t *testing.T) {
// Create a mocked ingester
cfg := defaultIngesterTestConfig(t)
cfg.ActiveSeriesMetricsIdleTimeout = 100 * time.Millisecond
cfg.IngesterRing.JoinAfter = 0

i, err := prepareIngesterWithBlocksStorage(t, cfg, registry)
currentTime := time.Now()
@@ -878,7 +875,6 @@ func benchmarkIngesterPush(b *testing.B, limits validation.Limits, errorsExpecte

// Create a mocked ingester
cfg := defaultIngesterTestConfig(b)
cfg.IngesterRing.JoinAfter = 0

ingester, err := prepareIngesterWithBlocksStorage(b, cfg, registry)
require.NoError(b, err)
@@ -1176,7 +1172,6 @@ func Benchmark_Ingester_PushOnError(b *testing.B) {

// Create a mocked ingester
cfg := defaultIngesterTestConfig(b)
cfg.IngesterRing.JoinAfter = 0

limits := defaultLimitsTestConfig()
if !testData.prepareConfig(&limits, instanceLimits) {
@@ -3028,7 +3023,6 @@ func TestIngester_OpenExistingTSDBOnStartup(t *testing.T) {

func TestIngester_shipBlocks(t *testing.T) {
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.BlocksStorageConfig.TSDB.ShipConcurrency = 2

// Create ingester
@@ -3066,7 +3060,6 @@ func TestIngester_shipBlocks(t *testing.T) {

func TestIngester_dontShipBlocksWhenTenantDeletionMarkerIsPresent(t *testing.T) {
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.BlocksStorageConfig.TSDB.ShipConcurrency = 2

// Create ingester
@@ -3115,7 +3108,6 @@ func TestIngester_dontShipBlocksWhenTenantDeletionMarkerIsPresent(t *testing.T)

func TestIngester_seriesCountIsCorrectAfterClosingTSDBForDeletedTenant(t *testing.T) {
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.BlocksStorageConfig.TSDB.ShipConcurrency = 2

// Create ingester
@@ -3159,7 +3151,6 @@ func TestIngester_seriesCountIsCorrectAfterClosingTSDBForDeletedTenant(t *testin
func TestIngester_closeAndDeleteUserTSDBIfIdle_shouldNotCloseTSDBIfShippingIsInProgress(t *testing.T) {
ctx := context.Background()
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.BlocksStorageConfig.TSDB.ShipConcurrency = 2

// We want it to be idle immediately (setting to 1ns because 0 means disabled).
@@ -3305,7 +3296,6 @@ func (m *shipperMock) Sync(ctx context.Context) (uploaded int, err error) {

func TestIngester_invalidSamplesDontChangeLastUpdateTime(t *testing.T) {
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0

// Create ingester
i, err := prepareIngesterWithBlocksStorage(t, cfg, nil)
@@ -3533,7 +3523,6 @@ func TestIngester_flushing(t *testing.T) {
} {
t.Run(name, func(t *testing.T) {
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.BlocksStorageConfig.TSDB.ShipConcurrency = 1
cfg.BlocksStorageConfig.TSDB.ShipInterval = 1 * time.Minute // Long enough to not be reached during the test.

@@ -3564,7 +3553,6 @@ func TestIngester_ForFlush(t *testing.T) {

func TestIngester_ForFlush(t *testing.T) {
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.BlocksStorageConfig.TSDB.ShipConcurrency = 1
cfg.BlocksStorageConfig.TSDB.ShipInterval = 10 * time.Minute // Long enough to not be reached during the test.

@@ -3740,7 +3728,6 @@ func Test_Ingester_AllUserStats(t *testing.T) {

func TestIngesterCompactIdleBlock(t *testing.T) {
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.BlocksStorageConfig.TSDB.ShipConcurrency = 1
cfg.BlocksStorageConfig.TSDB.HeadCompactionInterval = 1 * time.Hour // Long enough to not be reached during the test.
cfg.BlocksStorageConfig.TSDB.HeadCompactionIdleTimeout = 1 * time.Second // Testing this.
@@ -3819,7 +3806,6 @@ func TestIngesterCompactIdleBlock(t *testing.T) {

func TestIngesterCompactAndCloseIdleTSDB(t *testing.T) {
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.BlocksStorageConfig.TSDB.ShipInterval = 1 * time.Second // Required to enable shipping.
cfg.BlocksStorageConfig.TSDB.ShipConcurrency = 1
cfg.BlocksStorageConfig.TSDB.HeadCompactionIdleTimeout = 1 * time.Second
@@ -4033,7 +4019,6 @@ func TestHeadCompactionOnStartup(t *testing.T) {

func TestIngester_CloseTSDBsOnShutdown(t *testing.T) {
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0

// Create ingester
i, err := prepareIngesterWithBlocksStorage(t, cfg, nil)
@@ -4066,7 +4051,6 @@ func TestIngesterNotDeleteUnshippedBlocks(t *testing.T) {
cfg := defaultIngesterTestConfig(t)
cfg.BlocksStorageConfig.TSDB.BlockRanges = []time.Duration{chunkRange}
cfg.BlocksStorageConfig.TSDB.Retention = time.Millisecond // Which means delete all but first block.
cfg.IngesterRing.JoinAfter = 0

// Create ingester
reg := prometheus.NewPedanticRegistry()
@@ -4370,7 +4354,6 @@ func TestIngester_PushInstanceLimits(t *testing.T) {
t.Run(testName, func(t *testing.T) {
// Create a mocked ingester
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.InstanceLimitsFn = func() *InstanceLimits {
return &testData.limits
}
@@ -4439,7 +4422,6 @@ func TestIngester_instanceLimitsMetrics(t *testing.T) {
cfg.InstanceLimitsFn = func() *InstanceLimits {
return &l
}
cfg.IngesterRing.JoinAfter = 0

_, err := prepareIngesterWithBlocksStorage(t, cfg, reg)
require.NoError(t, err)
@@ -4472,7 +4454,6 @@ func TestIngester_inflightPushRequests(t *testing.T) {
// Create a mocked ingester
cfg := defaultIngesterTestConfig(t)
cfg.InstanceLimitsFn = func() *InstanceLimits { return &limits }
cfg.IngesterRing.JoinAfter = 0

i, err := prepareIngesterWithBlocksStorage(t, cfg, nil)
require.NoError(t, err)
@@ -5292,7 +5273,6 @@ func TestIngesterActiveSeries(t *testing.T) {

// Create a mocked ingester
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.ActiveSeriesMetricsEnabled = !testData.disableActiveSeries

limits := defaultLimitsTestConfig()
@@ -5624,7 +5604,6 @@ func TestIngesterActiveSeriesConfigChanges(t *testing.T) {

// Create a mocked ingester
cfg := defaultIngesterTestConfig(t)
cfg.IngesterRing.JoinAfter = 0
cfg.ActiveSeriesMetricsEnabled = true

limits := defaultLimitsTestConfig()
2 changes: 1 addition & 1 deletion pkg/mimirtool/config/cortex.go
@@ -116,7 +116,6 @@ var cortexRenameMappings = map[string]Mapping{
"ingester.lifecycler.availability_zone": RenameMapping("ingester.ring.instance_availability_zone"),
"ingester.lifecycler.final_sleep": RenameMapping("ingester.ring.final_sleep"),
"ingester.lifecycler.heartbeat_period": RenameMapping("ingester.ring.heartbeat_period"),
"ingester.lifecycler.join_after": RenameMapping("ingester.ring.join_after"),
"ingester.lifecycler.min_ready_duration": RenameMapping("ingester.ring.min_ready_duration"),
"ingester.lifecycler.num_tokens": RenameMapping("ingester.ring.num_tokens"),
"ingester.lifecycler.observe_period": RenameMapping("ingester.ring.observe_period"),
@@ -584,6 +583,7 @@ var removedConfigPaths = append(gemRemovedConfigPath, []string{
"ingester.concurrent_flushes", // -ingester.concurrent-flushes
"ingester.flush_op_timeout", // -ingester.flush-op-timeout
"ingester.flush_period", // -ingester.flush-period
"ingester.lifecycler.join_after", // -ingester.ring.join-after
"ingester.max_chunk_age", // -ingester.max-chunk-age
"ingester.max_chunk_idle_time", // -ingester.max-chunk-idle
"ingester.max_stale_chunk_idle_time", // -ingester.max-stale-chunk-idle
1 change: 0 additions & 1 deletion pkg/mimirtool/config/testdata/ingester-ring-new.yaml
@@ -3,7 +3,6 @@ ingester:
final_sleep: 1m0s
heartbeat_period: 15s
instance_availability_zone: test-zone
join_after: 30s
kvstore:
store: consul
consul:
5 changes: 5 additions & 0 deletions pkg/util/usage/usage.go
@@ -33,6 +33,11 @@ func Usage(printAll bool, configs ...interface{}) error {
fieldCat := fieldcategory.Basic
var field reflect.StructField

// Do not print usage for deprecated flags.
if fl.Value.String() == "deprecated" {
return
}

if override, ok := fieldcategory.GetOverride(fl.Name); ok {
fieldCat = override
} else if v.Kind() == reflect.Ptr {
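The early return added in this hunk hides deprecated flags from the generated usage text. A hedged, stand-alone sketch of the same idea, assuming the convention from the previous file that a deprecated flag's Value reports the string "deprecated"; the helper and flag names below are illustrative, not Mimir's code:

```go
package main

import (
	"flag"
	"fmt"
)

// hiddenDeprecated mimics the convention checked above: a deprecated flag's
// Value reports the fixed string "deprecated".
type hiddenDeprecated struct{}

func (hiddenDeprecated) String() string   { return "deprecated" }
func (hiddenDeprecated) Set(string) error { return nil }

// printUsage lists every registered flag except deprecated ones,
// mirroring the early return added to Usage().
func printUsage(fs *flag.FlagSet) {
	fs.VisitAll(func(fl *flag.Flag) {
		if fl.Value.String() == "deprecated" {
			return // deprecated flags are omitted from help output
		}
		fmt.Printf("  -%s\n    \t%s\n", fl.Name, fl.Usage)
	})
}

func main() {
	fs := flag.NewFlagSet("example", flag.ContinueOnError)
	fs.Duration("ingester.ring.observe-period", 0, "Observe tokens after generating to resolve collisions.")
	fs.Var(hiddenDeprecated{}, "ingester.ring.join-after", "Deprecated: has no effect.")

	// Only -ingester.ring.observe-period is printed.
	printUsage(fs)
}
```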