
Prevent excessive memory usage when dropping series #8630

Merged: 1 commit merged into master on Jul 27, 2017
Conversation

jwilder
Contributor

@jwilder jwilder commented Jul 25, 2017

There was a change, made around the time 1.0 was released (#7015), to speed up deleting and dropping measurements by executing the deletes in parallel for all shards at once.

When TSI was merged in #7618, the series keys passed into Shard.DeleteMeasurement were removed and are now expanded lower down. This causes memory to blow up when a delete spans many shards, because we now expand the set of series keys N times (once per shard) instead of just once as before. In fact, the old version never expanded them at all, since the in-memory index slice was passed in directly; that isn't possible with TSI, though.

While running the deletes in parallel would be ideal, there have been a number of optimizations in the delete path that make running them serially pretty good. This change just limits the concurrency of the deletes, which keeps memory usage more stable. I think we can still get back to running them concurrently, but that would involve a much larger interface change.

In my local tests, dropping 1M series across 75 shards went from 1m33s to 3.9s using drop measurement, but degraded from 2m37s to 3m25s when using delete from cpu. Memory usage now stays at a few hundred MB instead of ballooning to several GB.

Before

[screenshot: memory usage, 2017-07-25 4:57:56 PM]

After

[screenshot: memory usage, 2017-07-25 5:00:15 PM]

Required for all non-trivial PRs
  • Rebased/mergable
  • Tests pass
  • CHANGELOG.md updated
  • Sign CLA (if not already signed)

@jwilder jwilder added this to the 1.4.0 milestone Jul 25, 2017
@jwilder jwilder requested a review from benbjohnson July 25, 2017 23:09
@jwilder jwilder added the review label Jul 25, 2017
@sebito91
Contributor

💣 very cool, excited to test this out

Contributor

@e-dard e-dard left a comment


LGTM 👍

@jwilder jwilder modified the milestones: 1.3.2, 1.4.0 Jul 27, 2017
@jwilder jwilder changed the title Limit delete to run one shard at a time Prevent excessive memory usage when dropping series Jul 27, 2017
@jwilder jwilder merged commit 85642e2 into master Jul 27, 2017
@jwilder jwilder deleted the jw-drop-oom branch July 27, 2017 22:24
4 participants