fix flaky retention tests #3976

Merged
16 changes: 7 additions & 9 deletions pkg/storage/stores/shipper/compactor/retention/util_test.go

@@ -66,7 +66,7 @@ var (
	RowShards: 16,
},
{
-	From: dayFromTime(start.Add(49 * time.Hour)),
+	From: dayFromTime(start.Add(73 * time.Hour)),
Contributor Author:
The schema start dates for this config and the previous one are too close together, with just a day's gap. Depending on when the tests run, adding a duration between 24h and 48h to start can jump ahead by two calendar days and land in this schema instead of the previous one. For example, if start is 9th Jun at 23:00 and we add 28h like the test here, we fall into Config[2] instead of Config[1], which is what the test expects.

Contributor:
Is this just some arbitrary value?

Contributor Author (@sandeepsukhani, Jul 9, 2021):
We want to set the date to start + 3 days, with an extra hour as a buffer. The extra hour gets ignored because we drop the time and just use the day.
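
A minimal sketch of the day-boundary behaviour discussed in this thread, using a hypothetical dayOf helper in place of the test's dayFromTime (assumed here to keep only the calendar day and discard the time of day):

package main

import (
	"fmt"
	"time"
)

// dayOf is a stand-in (an assumption) for the test's dayFromTime helper:
// it keeps only the calendar day of a timestamp and drops the time of day.
func dayOf(t time.Time) time.Time {
	y, m, d := t.UTC().Date()
	return time.Date(y, m, d, 0, 0, 0, 0, time.UTC)
}

func main() {
	early := time.Date(2021, time.June, 9, 1, 0, 0, 0, time.UTC) // test run starting at 01:00
	late := time.Date(2021, time.June, 9, 23, 0, 0, 0, time.UTC) // test run starting at 23:00

	// The same +28h offset lands on different calendar days depending on when
	// the test happens to run, which is what made the old one-day gap flaky.
	fmt.Println(dayOf(early.Add(28 * time.Hour))) // 2021-06-10 (one day after start)
	fmt.Println(dayOf(late.Add(28 * time.Hour)))  // 2021-06-11 (two days after start)

	// With the new 73h offset (3 days plus an hour of buffer), the config's
	// start day lands well past the +28h chunk in either case.
	fmt.Println(dayOf(early.Add(73 * time.Hour))) // 2021-06-12
	fmt.Println(dayOf(late.Add(73 * time.Hour)))  // 2021-06-13
}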

	IndexType:  "boltdb",
	ObjectType: "filesystem",
	Schema:     "v11",

@@ -158,15 +158,13 @@ func (t *testStore) indexTables() []table {
}

func (t *testStore) HasChunk(c chunk.Chunk) bool {
	t.t.Helper()
-	var matchers []*labels.Matcher
-	for _, l := range c.Metric {
-		matchers = append(matchers, labels.MustNewMatcher(labels.MatchEqual, l.Name, l.Value))
-	}
-	chunks, err := t.Store.Get(user.InjectOrgID(context.Background(), c.UserID),
-		c.UserID, c.From, c.Through, matchers...)
-	require.NoError(t.t, err)
-	return len(chunks) == 1 && c.ExternalKey() == chunks[0].ExternalKey()
+	chunks := t.GetChunks(c.UserID, c.From, c.Through, c.Metric)
+
+	chunkIDs := make(map[string]struct{})
+	for _, chk := range chunks {
+		chunkIDs[chk.ExternalKey()] = struct{}{}
+	}
+	return len(chunkIDs) == 1 && c.ExternalKey() == chunks[0].ExternalKey()
Contributor Author:
We could get multiple chunk IDs if the chunk overlaps multiple schema configs, because the Cortex code dedupes chunk IDs within a store, not across stores. This change dedupes the chunk IDs before checking that the response contains the single expected chunk ID.
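
A toy sketch of the situation described above, not the actual Cortex composite-store code; all types and names here are made up for illustration. A chunk whose time range overlaps two schema configs is returned by both per-period stores, and only deduplicating by external key brings the count back to one:

package main

import "fmt"

// chunkRef is a made-up stand-in for a chunk reference; the external key is
// assumed to uniquely identify a chunk regardless of which store returned it.
type chunkRef struct{ externalKey string }

// perPeriodStore is a toy store covering one schema period. Each store
// dedupes its own results but knows nothing about the other stores.
type perPeriodStore struct{ chunks []chunkRef }

func (s perPeriodStore) get() []chunkRef { return s.chunks }

func main() {
	// A chunk overlapping two schema configs is indexed in both per-period
	// stores, so a query spanning both periods returns it twice.
	overlapping := chunkRef{externalKey: "fake/chunk-1"}
	stores := []perPeriodStore{
		{chunks: []chunkRef{overlapping}}, // store for schema config 1
		{chunks: []chunkRef{overlapping}}, // store for schema config 2
	}

	var results []chunkRef
	for _, s := range stores {
		results = append(results, s.get()...) // no dedupe across stores
	}
	fmt.Println("raw chunks returned:", len(results)) // 2

	// The test fix: collapse duplicates by external key before asserting that
	// exactly one chunk came back.
	uniq := map[string]struct{}{}
	for _, c := range results {
		uniq[c.externalKey] = struct{}{}
	}
	fmt.Println("unique chunk IDs:", len(uniq)) // 1
}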

}

func (t *testStore) GetChunks(userID string, from, through model.Time, metric labels.Labels) []chunk.Chunk {