Removed chunks storage backend logic and storage clients #755
Conversation
@@ -44,7 +44,6 @@ type Flusher struct {
 	cfg            Config
 	ingesterConfig ingester.Config
-	chunkStore     ingester.ChunkStore
Was unused.
@@ -537,8 +536,29 @@ func TestIngesterMetricLimitExceeded(t *testing.T) {
+// Construct a set of realistic-looking samples, all with slightly different label sets
+func benchmarkData(nSeries int) (allLabels []labels.Labels, allSamples []mimirpb.Sample) {
+	// Real example from Kubernetes' embedded cAdvisor metrics, lightly obfuscated.
+	var benchmarkLabels = labels.Labels{
Moved here from pkg/chunk.
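As a sketch of the pattern benchmarkData uses (with simplified stand-in types; the real function builds labels.Labels and mimirpb.Sample values):

```go
package main

import "fmt"

// Label/Labels are simplified stand-ins for Prometheus' labels.Labels.
type Label struct{ Name, Value string }
type Labels []Label

// benchmarkData sketches the approach from the test above: start from one
// realistic base label set and vary a single label per series. The base
// labels here are hypothetical examples, not the PR's actual data.
func benchmarkData(nSeries int) []Labels {
	base := Labels{
		{Name: "__name__", Value: "container_cpu_usage_seconds_total"},
		{Name: "namespace", Value: "kube-system"},
	}
	all := make([]Labels, 0, nSeries)
	for i := 0; i < nSeries; i++ {
		// Copy the base set, then add one label that differs per series.
		ls := append(Labels{}, base...)
		ls = append(ls, Label{Name: "cpu", Value: fmt.Sprintf("cpu%02d", i)})
		all = append(all, ls)
	}
	return all
}

func main() {
	fmt.Println(len(benchmarkData(3))) // 3 distinct label sets
}
```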
@@ -170,9 +164,6 @@ func (c *Config) RegisterFlags(f *flag.FlagSet, logger log.Logger) {
 	c.MemberlistKV.RegisterFlags(f)
 	c.ActivityTracker.RegisterFlags(f)
 	c.QueryScheduler.RegisterFlags(f)
-
-	// These don't seem to have a home.
-	f.IntVar(&chunk_util.QueryParallelism, "querier.query-parallelism", 100, "Max subqueries run in parallel per higher-level query.")
Was chunks storage specific.
@@ -158,7 +157,6 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
 	f.IntVar(&l.LabelNamesAndValuesResultsMaxSizeBytes, "querier.label-names-and-values-results-max-size-bytes", 400*1024*1024, "Maximum size in bytes of distinct label names and values. When querier receives response from ingester, it merges the response with responses from other ingesters. This maximum size limit is applied to the merged(distinct) results. If the limit is reached, an error is returned.")
 	f.BoolVar(&l.CardinalityAnalysisEnabled, "querier.cardinality-analysis-enabled", false, "Enables endpoints used for cardinality analysis.")
 	f.IntVar(&l.LabelValuesMaxCardinalityLabelNamesPerRequest, "querier.label-values-max-cardinality-label-names-per-request", 100, "Maximum number of label names allowed to be queried in a single /api/v1/cardinality/label_values API call.")
-	f.IntVar(&l.CardinalityLimit, "store.cardinality-limit", 1e5, "Cardinality limit for index queries. This limit is ignored when using blocks storage. 0 to disable.")
Was chunks storage specific.
Changes LGTM. Nice work!
 		},
 	},
 	{
 		name: "url-config",
I removed test cases that don't apply anymore to the new S3 client.
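To illustrate the table-driven pattern those cases belong to (the endpoint field and validation rule below are hypothetical, not the real S3 client's config):

```go
package main

import (
	"fmt"
	"strings"
)

// s3Case is a minimal table-driven test case in the style of the diff above.
type s3Case struct {
	name     string
	endpoint string
	valid    bool
}

// validEndpoint illustrates a rule under which a "url-config" case would no
// longer apply: the client accepts a bare host, not a full URL. This is an
// assumed rule for the sketch only.
func validEndpoint(e string) bool {
	return e != "" && !strings.Contains(e, "://")
}

func main() {
	cases := []s3Case{
		{name: "host-config", endpoint: "s3.example.com", valid: true},
		{name: "url-config", endpoint: "https://s3.example.com", valid: false},
	}
	for _, c := range cases {
		fmt.Printf("%s ok=%v\n", c.name, validEndpoint(c.endpoint) == c.valid)
	}
}
```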
@@ -61,9 +35,6 @@ type Chunk struct {
 	// missing, we default to DoubleDelta.
 	Encoding prom_chunk.Encoding `json:"encoding"`
 	Data     prom_chunk.Chunk    `json:"-"`
-
-	// The encoded version of the chunk, held so we don't need to re-encode it
-	encoded []byte
Unused.
 	return a.UserID == b.UserID && a.Fingerprint == b.Fingerprint &&
 		a.From == b.From && a.Through == b.Through && a.Checksum == b.Checksum
 }

 // Samples returns all SamplePairs for the chunk.
 func (c *Chunk) Samples(from, through model.Time) ([]model.SamplePair, error) {
Still in use.
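The equality check kept above compares chunk identity fields only, not the chunk data. A self-contained sketch of that comparison (int64 stands in for model.Time, which is milliseconds since epoch):

```go
package main

import "fmt"

// Chunk holds only the identity fields used by the comparison in the diff
// above; the real struct also carries encoding and data.
type Chunk struct {
	UserID        string
	Fingerprint   uint64
	From, Through int64 // model.Time is milliseconds since epoch
	Checksum      uint32
}

// equal mirrors the kept comparison: two chunks are the same if tenant,
// series fingerprint, time range, and checksum all match.
func equal(a, b Chunk) bool {
	return a.UserID == b.UserID && a.Fingerprint == b.Fingerprint &&
		a.From == b.From && a.Through == b.Through && a.Checksum == b.Checksum
}

func main() {
	a := Chunk{UserID: "tenant-1", Fingerprint: 42, From: 0, Through: 1000, Checksum: 7}
	b := a
	fmt.Println(equal(a, b)) // identical identity fields: true
	b.Through = 2000
	fmt.Println(equal(a, b)) // time range differs: false
}
```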
Signed-off-by: Marco Pracucci <marco@pracucci.com>
Force-pushed from 401fe61 to ddfcba6.
Great job, love to see legacy chunk code removed.
pkg/util/validation/limits.go (outdated)
@@ -151,14 +148,13 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
 	f.IntVar(&l.MaxChunksPerQuery, "querier.max-fetched-chunks-per-query", 2e6, "Maximum number of chunks that can be fetched in a single query from ingesters and long-term storage. This limit is enforced in the querier, ruler and store-gateway. 0 to disable.")
 	f.IntVar(&l.MaxFetchedSeriesPerQuery, "querier.max-fetched-series-per-query", 0, "The maximum number of unique series for which a query can fetch samples from each ingesters and blocks storage. This limit is enforced in the querier only when running with blocks storage. 0 to disable")
 	f.IntVar(&l.MaxFetchedChunkBytesPerQuery, "querier.max-fetched-chunk-bytes-per-query", 0, "The maximum size of all chunks in bytes that a query can fetch from each ingester and storage. This limit is enforced in the querier and ruler only when running with blocks storage. 0 to disable.")
-	f.Var(&l.MaxQueryLength, "store.max-query-length", "Limit the query time range (end - start time). This limit is enforced in the query-frontend (on the received query), in the querier (on the query possibly split by the query-frontend) and in the chunks storage. 0 to disable.")
+	f.Var(&l.MaxQueryLength, "store.max-query-length", "Limit the query time range (end - start time). This limit is enforced in the query-frontend (on the received query) and in the querier (on the query possibly split by the query-frontend). 0 to disable.")
And ruler too by extension.
Right. I went through the limits CLI flag descriptions and updated them in a few other places too.
@@ -21,7 +16,6 @@ require (
 	github.com/go-openapi/strfmt v0.21.1
 	github.com/go-openapi/swag v0.19.15
 	github.com/go-redis/redis/v8 v8.11.4
-	github.com/gocql/gocql v0.0.0-20200526081602-cd04bd7f22a7
We can remove the replace github.com/gocql/gocql directive too.
I even looked at go.mod before opening the PR and I missed that 🤦
Force-pushed from afffd39 to 3f9c531.
What this PR does:
In this PR I'm removing most of the chunks storage backend logic and storage clients. To keep the PR easier to review, I haven't addressed the packages pkg/chunk/encoding and pkg/chunk/cache yet, nor the 2nd storage engine support and the storage engine config (it's still configurable). These will be addressed in follow-up PRs.

Which issue(s) this PR fixes:
N/A
Checklist
- CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]