Fix query cache with sparse columns #48500

Merged
3 changes: 2 additions & 1 deletion src/Interpreters/Cache/QueryCache.cpp
@@ -242,8 +242,9 @@ void QueryCache::Writer::finalizeWrite()
     Chunks squashed_chunks;
     size_t rows_remaining_in_squashed = 0; /// how many further rows can the last squashed chunk consume until it reaches max_block_size

-    for (const auto & chunk : *query_result)
+    for (auto & chunk : *query_result)
Member
I don't understand: why not allow sparse columns in the cache?

Member Author
Columns are squashed, and we cannot squash sparse and non-sparse columns together.
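A minimal sketch of the constraint (illustrative only; appendChunkToSquashed is a hypothetical name, not the actual code, while insertRangeFrom is the real IColumn method used during squashing):

/// insertRangeFrom() casts the source column to the destination's concrete
/// type, so appending a sparse source column to a full destination column
/// (or vice versa) is invalid - hence convertToFullIfSparse() must run first.
void appendChunkToSquashed(MutableColumns & squashed_columns, const Chunk & chunk)
{
    const Columns & src_columns = chunk.getColumns();
    for (size_t i = 0; i < squashed_columns.size(); ++i)
        squashed_columns[i]->insertRangeFrom(*src_columns[i], 0, chunk.getNumRows());
}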

Member

Ok.

@rschu1ze, why does the query cache need squashing at all? Why don't we store the sequence of blocks as is?

Member

Sorry for the late reply.

Squashing is configurable but on by default. The motivation is:

  1. to make entries more compressible when the original query produced many small chunks (because of filtering, aggregation, ...), and
  2. to make reads from the cache more "natural", because chunks already have max_block_size rows.

The disadvantage is that writing to the cache becomes more expensive. The question is: should the cache optimize for insertion or for lookups? With the current approach, the user chooses specific queries which write to the cache (by adding SETTINGS use_query_cache = 1). This is different from a model where all or most queries get cached. With selected queries, entries in the cache will likely have a high hit rate, so we should make lookups faster at the cost of more expensive inserts. At least that's the theory 😄 - but in the end it's a heuristic.
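For concreteness, a simplified sketch of the squashing loop (an assumed shape based on the hunk above; appendRows is a hypothetical helper, and the real logic lives in QueryCache::Writer::finalizeWrite()):

/// Glue small result chunks together until each squashed chunk holds
/// max_block_size rows; an oversized chunk is split across several outputs.
Chunks squashed_chunks;
size_t rows_remaining_in_squashed = 0;

for (auto & chunk : chunks)
{
    convertToFullIfSparse(chunk); /// sparse and full columns must not be mixed while appending

    size_t rows_to_process = chunk.getNumRows();
    size_t offset = 0;
    while (rows_to_process > 0)
    {
        if (rows_remaining_in_squashed == 0)
        {
            /// Open a fresh squashed chunk with empty columns of the same structure.
            squashed_chunks.emplace_back(chunk.cloneEmptyColumns(), 0);
            rows_remaining_in_squashed = max_block_size;
        }

        const size_t rows_to_append = std::min(rows_to_process, rows_remaining_in_squashed);
        appendRows(squashed_chunks.back(), chunk, offset, rows_to_append); /// hypothetical helper
        offset += rows_to_append;
        rows_to_process -= rows_to_append;
        rows_remaining_in_squashed -= rows_to_append;
    }
}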

Member

Good. Let's keep squashing, but also allow cases with many blocks, and even with multiple ports, such as totals and extremes.

Member

Okay. I can add logic to avoid squashing/splitting if the input blocks are already reasonably sized. And yes, multiple ports for totals and extremes should also be supported ... I just did not think about this use case initially.
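One possible shape of that follow-up, purely as an illustration (the function name and the threshold are assumptions, not merged code):

/// Squash only when it pays off: if every chunk already holds roughly
/// max_block_size rows, the chunks can be written to the cache as they are.
bool worthSquashing(const Chunks & chunks, size_t max_block_size)
{
    for (const auto & chunk : chunks)
        if (chunk.getNumRows() < max_block_size / 2) /// threshold is an assumption
            return true;
    return false;
}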

     {
+        convertToFullIfSparse(chunk);
         const size_t rows_chunk = chunk.getNumRows();
         size_t rows_chunk_processed = 0;

1 change: 1 addition & 0 deletions tests/queries/0_stateless/02708_query_cache_sparse_columns.reference
@@ -0,0 +1 @@
1
23 changes: 23 additions & 0 deletions tests/queries/0_stateless/02708_query_cache_sparse_columns.sql
@@ -0,0 +1,23 @@
-- Tags: no-parallel

DROP TABLE IF EXISTS t_cache_sparse;
SYSTEM DROP QUERY CACHE;

CREATE TABLE t_cache_sparse (id UInt64, v UInt64)
ENGINE = MergeTree ORDER BY id
SETTINGS ratio_of_defaults_for_sparse_serialization = 0.9;

SYSTEM STOP MERGES t_cache_sparse; -- keep the two parts (with different serializations of v) from being merged

INSERT INTO t_cache_sparse SELECT number, number FROM numbers(10000); -- v is mostly non-default: full serialization
INSERT INTO t_cache_sparse SELECT number, 0 FROM numbers(10000);      -- v is all defaults: sparse serialization

SET allow_experimental_query_cache = 1;
SET use_query_cache = 1;
SET max_threads = 1;

SELECT v FROM t_cache_sparse FORMAT Null;
SELECT v FROM t_cache_sparse FORMAT Null;
SELECT count() FROM system.query_cache WHERE query LIKE 'SELECT v FROM t_cache_sparse%';

DROP TABLE t_cache_sparse;