Assorted ideas to slightly improve JOINs #21047

Open

alexey-milovidov opened this issue Feb 21, 2021 · 6 comments

alexey-milovidov commented Feb 21, 2021

Only ideas that are rather easy to implement.

  1. Add an IColumn::shrinkToFit method. It will remove the overallocation of columns and can save up to 2x memory for hash join. Use this method in HashJoin unconditionally.

  2. For CROSS JOIN (nested loops): compress blocks in memory if there is a large amount of data. Decompress while joining (repeatedly, on every iteration of the outer loop). The implementation is very easy after Compression for Memory tables #20168.

  3. For CROSS JOIN (nested loops): write blocks to the tmp directory in Native format (similar to external sorting and external aggregation) if there is a large amount of data. Read them back repeatedly while joining.

  4. Compress blocks for hash join in memory. While joining, maintain an LRU cache of uncompressed blocks. Can work well if the JOIN is skewed; otherwise questionable.

  5. If the amount of data is large, serialize all records to disk in RowBinary format and keep only offsets in the hash table (we will have 8 bytes per record + key size + hash table overhead instead of keeping all data). While joining, do batch reads with AIO and also maintain a small LRU hash table in memory. The performance can be decent (1 million IOPS on a modern SSD).

  6. If the amount of data is large, replace HashJoin with SSDCacheDictionary (whose performance is assumed to be decent).

  7. Represent the data structure for the right-hand side of JOIN or IN as a table for key-value requests. When doing a distributed JOIN, instead of the usual (broadcast or shuffle) algorithms, do lookup requests over the network. The right-hand side of a distributed JOIN can be represented as a special kind of distributed table with a local cache of lookup results. Applicability is limited, but it can be good for some scenarios: a large rhs table where only a small subset of keys is joined.

  8. For INNER and RIGHT JOIN, try to use the set of keys of the rhs table as an index for the lhs table, similar to IN.

@UnamedRus

Does it make sense to use a HashedArray-like layout for JOIN hash tables? (#30236)

It uses less memory but gives pretty similar performance to the hashed dictionary layout, especially for multi-attribute lookups (because the index offset needs to be fetched only once for all values).

It would also speed up the initial hash table build, which is important for joins.

@alexey-milovidov

@UnamedRus It already uses one hash table for all columns. The hash table contains a reference to the row.

@awakeljw

awakeljw commented Apr 27, 2022

Is RowRef worth optimizing? Currently RowRef has two members (Block * block, SizeT row_num). If we could use (SizeT block_number, SizeT row_num) instead, we would save hash table memory and make the cache more compact.

@alexey-milovidov

@awakeljw
size_t has the same size as a pointer.

@awakeljw

awakeljw commented Apr 30, 2022

> @awakeljw size_t has the same size as a pointer.

struct RowRef
{
    using SizeT = uint32_t; /// Do not use size_t cause of memory economy

    const Block * block = nullptr;
    SizeT row_num = 0;

    RowRef() = default;
    RowRef(const Block * block_, size_t row_num_) : block(block_), row_num(row_num_) {}
};

Currently, we use uint32_t as SizeT in code.

@Yur3k

Yur3k commented Oct 24, 2022

@watemus and I have started working on this
