
Expression evaluation time is proportional to chunk descs #1035

Closed
brian-brazil opened this Issue Aug 27, 2015 · 7 comments


brian-brazil commented Aug 27, 2015

I noticed that demo.robustperception.io is pegged on CPU. It's scraping a handful of targets and running the Conway's Game of Life demo.

If you run

go tool pprof http://demo.robustperception.io:9090/debug/pprof/profile

you'll see that the problem is in the analyze phase, in particular in github.com/prometheus/prometheus/storage/local.(*memorySeries).newIterator, which copies all the chunkDescs for each series referenced. Evaluation time should be proportional to the amount of a time series that is actually accessed, not to the total amount of data stored. Accordingly I think we need to rework this, as 9k time series being referenced per second is enough to eat an entire core.
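As a rough sanity check on that last claim (illustrative numbers only, not measured on the demo server): assuming on the order of 500 in-memory chunkDescs per series and ~200ns per locked pointer copy, 9k series/s × 500 chunkDescs × 200ns ≈ 0.9 CPU-seconds per second, i.e. roughly a full core spent just building iterators.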

fabxc commented Aug 27, 2015

Could you provide the data rather than your conclusions from it, so we don't all have to replicate the whole process?

juliusv commented Aug 27, 2015

The SVG graph would be nice :)

brian-brazil commented Aug 27, 2015

The command to retrieve it is above, and it's monitoring itself too if you need anything else. It's running at HEAD.
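For reference, a variant of the same command should emit the SVG graph directly (assuming Graphviz is available on the machine running pprof):

go tool pprof -svg http://demo.robustperception.io:9090/debug/pprof/profile > profile.svg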

beorn7 self-assigned this Aug 28, 2015

brian-brazil added the bug label Dec 16, 2015

beorn7 commented Feb 15, 2016

OK, finally found time to look into this.

The code in question does not copy all the chunkDescs. It creates a slice of chunk pointers and lets them point to the chunks currently loaded. The series iterator then acts directly on chunks, not on chunkDescs at all.

I'm trying to remember why we did it that way. It kind of makes the pinning unnecessary...

I'll continue to think about it.

beorn7 commented Feb 16, 2016

OK, wrapped my head around the multiple layers of indirection here. It makes sense after all.

I see a number of possible optimizations, but I cannot really believe that copying pointers to chunks would be a bottleneck. I'm looking at the profiling data right now, assuming the server still runs with the same parameters.

beorn7 commented Feb 16, 2016

The issue is mostly the mutex that has to be acquired to get to the chunk reference. So while the problem is not copying chunkDescs, the fundamental cause is similar: even for accessing a single sample, we go through all chunks. I think fixing that will help a lot, even in less specialized use cases than the game-of-life setup. Working on it...
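To make the pattern concrete, the old iterator-creation path looks roughly like this (a heavily simplified sketch with stand-in types, not the actual storage/local code):

package local // simplified sketch, not the real storage/local package

import "sync"

type chunk interface{} // stand-in for the real chunk type

// A chunkDesc guards its chunk with a mutex because the chunk may be
// evicted concurrently unless it is pinned.
type chunkDesc struct {
	mtx sync.Mutex
	c   chunk
}

func (cd *chunkDesc) chunkRef() chunk {
	cd.mtx.Lock()
	defer cd.mtx.Unlock()
	return cd.c
}

type memorySeries struct {
	chunkDescs []*chunkDesc // all in-memory chunks of the series
}

type memorySeriesIterator struct {
	chunks []chunk
}

// newIterator copies a pointer to every in-memory chunk of the series,
// taking the chunkDesc mutex once per chunk -- even if the query only
// needs a single sample from a single chunk.
func (s *memorySeries) newIterator() *memorySeriesIterator {
	chunks := make([]chunk, 0, len(s.chunkDescs))
	for _, cd := range s.chunkDescs {
		chunks = append(chunks, cd.chunkRef())
	}
	return &memorySeriesIterator{chunks: chunks}
}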

beorn7 added a commit that referenced this issue Feb 18, 2016

Streamline series iterator creation
This will fix issue #1035 and will also help to make issue #1264 less
bad.

The fundamental problem in the current code:

In the preload phase, we quite accurately determine which chunks will
be used for the query being executed. However, in the subsequent step
of creating series iterators, the created iterators are referencing
_all_ in-memory chunks in their series, even the un-pinned ones. In
iterator creation, we copy a pointer to each in-memory chunk of a
series into the iterator. While this creates a certain amount of
allocation churn, the worst thing about it is that copying the chunk
pointer out of the chunkDesc requires a mutex acquisition. (Remember
that the iterator will also reference un-pinned chunks, so we need to
acquire the mutex to protect against concurrent eviction.) The worst
case happens if a series doesn't even contain any relevant samples for
the query time range. We notice that during preloading but then we
will still create a series iterator for it. But even for series that
do contain relevant samples, the overhead is quite bad for instant
queries that retrieve a single sample from each series, but still go
through all the effort of series iterator creation. All of that is
particularly bad if a series has many in-memory chunks.

This commit addresses the problem from two sides:

First, it merges preloading and iterator creation into one step,
i.e. the preload call returns an iterator for exactly the preloaded
chunks.

Second, the required mutex acquisition in chunkDesc has been greatly
reduced. That was enabled by a side effect of the first step, which is
that the iterator is only referencing pinned chunks, so there is no
risk of concurrent eviction anymore, and chunks can be accessed
without mutex acquisition.
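In code terms, the reshaped flow might look roughly like this, continuing the simplified sketch from the comment above (hypothetical names and signatures, not the actual code in this commit):

// Hypothetical stubs: overlaps compares the chunk's time range against
// the query range; pin bumps a ref count that protects the chunk from
// eviction until the query is done.
func (cd *chunkDesc) overlaps(from, through int64) bool { return true }
func (cd *chunkDesc) pin()                              {}

// Preloading pins exactly the chunks overlapping the query range and
// returns an iterator over only those pinned chunks. Because everything
// the iterator references is pinned, reads need no chunkDesc mutex to
// guard against concurrent eviction.
func (s *memorySeries) preloadChunksForRange(from, through int64) *memorySeriesIterator {
	var pinned []chunk
	for _, cd := range s.chunkDescs {
		if !cd.overlaps(from, through) {
			continue // chunks outside the range are never touched
		}
		cd.pin()
		pinned = append(pinned, cd.c) // safe without locking once pinned
	}
	return &memorySeriesIterator{chunks: pinned}
}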

To simplify the code changes for the above, the long-planned change of
ValueAtTime to ValueAtOrBeforeTime was performed at the same
time. (It should have been done first, but it kind of accidentally
happened while I was in the middle of writing the series iterator
changes. Sorry for that.) So far, we actively filtered the up to two
values that were returned by ValueAtTime, i.e. we invested work to
retrieve up to two values, and then we invested more work to throw one
of them away.
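In interface terms, the change is roughly the following (a sketch with simplified stand-ins for model.Time, model.SamplePair, and metric.Interval):

type Time int64
type SamplePair struct {
	Timestamp Time
	Value     float64
}
type Interval struct{ OldestInclusive, NewestInclusive Time }

type SeriesIterator interface {
	// Old: ValueAtTime(t Time) []SamplePair returned up to two samples
	// around t, and callers immediately threw one of them away.
	// New: return only the single sample at or before t.
	ValueAtOrBeforeTime(t Time) SamplePair
	RangeValues(in Interval) []SamplePair
	// BoundaryValues is omitted in this sketch; see the note below.
}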

The SeriesIterator.BoundaryValues method can be removed once #1401 is
fixed. But I really didn't want to load even more changes into this
PR.

Benchmarks:

The BenchmarkFuzz.* benchmarks run 83% faster (i.e. about six times
faster) and allocate 95% fewer bytes. The reason for that is that the
benchmark reads one sample after another from the time series and
creates a new series iterator for each sample read.

To find out how much these improvements matter in practice, I have
mirrored a beefy Prometheus server at SoundCloud that suffers from
both issues #1035 and #1264. To reach steady state that would be
comparable, the server needs to run for 15d. So far, it has run for
1d. The test server currently has only half as many memory time series
and 60% of the memory chunks the main server has. The 90th percentile
rule evaluation cycle time is ~11s on the main server and only ~3s on
the test server. However, these numbers might get much closer over
time.

In addition to performance improvements, this commit removes about 150
LOC.

beorn7 added a commit that referenced this issue Feb 19, 2016

Streamline series iterator creation

beorn7 closed this Mar 8, 2016

lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 24, 2019
