
Use 8byte offsets in chunk based raw index creator #5285

Merged
merged 5 commits into from Apr 23, 2020

Conversation

siddharthteotia (Contributor) commented Apr 22, 2020

This is a follow-up to #5256

In the previous PR, support was added for computing the number of rows per chunk based on metadata (the max length of the variable-width column values).

In this PR, the writer tracks the chunk offsets in the file header using long instead of int. The writer version has been bumped to protect compatibility, and a backward-compatibility test has been added.

The need for this and #5256 was seen during our internal testing of the text search feature. There can be cases where a text column value is several hundred thousand characters. In our particular case, around 1% of the values (of the total rows in a segment) were around 1.5 million characters in the worst case.

Limits of the current implementation (without this PR): say we have 10 million rows in a segment. The 4-byte chunk offsets require each text column value to average at most 75 characters in order to keep the overall index size (across rows) <= 2GB, the maximum addressable with int offsets for each chunk in the file header. This is for uncompressed chunks and an average 1:3 ratio when encoding String as UTF-8. If the index is compressed, the maximum could be around 150 characters, assuming a decent compression ratio.

If we go beyond these limits, the chunk offsets overflow, since the index size becomes > 2GB. These limits were gathered through unit tests.
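As a rough back-of-the-envelope sketch of that overflow math (using the illustrative numbers from above, not exact thresholds):

```java
public class OffsetOverflowCheck {
  public static void main(String[] args) {
    // Illustrative assumptions from the discussion above: 10M rows, ~75 chars per value,
    // ~3 bytes per character when encoding String as UTF-8, uncompressed chunks.
    long indexSize = 10_000_000L * 75 * 3; // ~2.25 GB
    // Integer.MAX_VALUE is ~2.147 GB, so a 4-byte (int) chunk offset overflows.
    System.out.println(indexSize > Integer.MAX_VALUE); // prints: true
  }
}
```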

In our case, there are actually 10 million rows per segment for the table using the text search feature.

The above change isn't really aimed at supporting blobs; the PR fixes a limitation in our raw index. Users should still be aware of the potential memory overhead that comes with unreasonably large variable-width column values: during indexing they live on the heap (String, byte[]) and will increase GC pressure.

@siddharthteotia siddharthteotia changed the title Use 8byte offsets in chunk based raw index creator Use 8-byte offsets in chunk based raw index creator Apr 22, 2020
@siddharthteotia siddharthteotia changed the title Use 8-byte offsets in chunk based raw index creator Use 8byte offsets in chunk based raw index creator Apr 22, 2020
```java
public void testBackwardCompatibilityV2()
    throws Exception {
  String[] data = {"abcdefghijk", "12456887", "pqrstuv", "500"};
  testBackwardCompatibilityHelper("data/varByteStringsCompressed.v2", data, 1000);
}
```
Contributor:
Nice. Do we also want to add v1 raw data in the tests?

```diff
@@ -139,7 +141,8 @@ public void close()
   private int writeHeader(ChunkCompressorFactory.CompressionType compressionType, int totalDocs, int numDocsPerChunk,
       int sizeOfEntry, int version) {
     int numChunks = (totalDocs + numDocsPerChunk - 1) / numDocsPerChunk;
-    int headerSize = (numChunks + 7) * Integer.BYTES; // 7 items written before chunk indexing.
+    // 7 items written before chunk indexing.
+    int headerSize = (7 * Integer.BYTES) + (numChunks * VarByteChunkSingleValueWriter.FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE);
```
Contributor:
This should be based on the version passed in to the writer. Yes, we use only version 2 now, but let us keep the versioning clean. It is there in the constructor; use it. It will help if we want to select a different version in the writer for whatever reason.

Contributor (Author):
I agree. Done.
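For illustration, a minimal sketch of the version-aware computation being asked for, inside writeHeader (names mirror the diff above; the actual merged code may differ):

```java
// V1/V2 headers store 4-byte (int) chunk offsets; V3 stores 8-byte (long) offsets.
int offsetEntrySize = (version <= 2) ? Integer.BYTES : Long.BYTES;
// 7 fixed int fields are written before the per-chunk offsets.
int headerSize = (7 * Integer.BYTES) + (numChunks * offsetEntrySize);
```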

```diff
@@ -37,6 +37,8 @@
  */
 public abstract class BaseChunkSingleValueWriter implements SingleColumnSingleValueWriter {
   private static final Logger LOGGER = LoggerFactory.getLogger(BaseChunkSingleValueWriter.class);
+  public static final int FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE_V1V2 = Integer.BYTES;
+  public static final int FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE = Long.BYTES;
```
Contributor:
Suggested change:

```diff
-  public static final int FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE = Long.BYTES;
+  public static final int FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE_V3 = Long.BYTES;
```

Contributor (Author):
Done.

```diff
@@ -45,7 +47,7 @@
   protected final ChunkCompressor _chunkCompressor;

   protected int _chunkSize;
-  protected int _dataOffset;
+  protected long _dataOffset;
```
Contributor:
Add another final int _headerEntryChunkOffsetSize here, determined based on the version.

Contributor (Author):
Done.

```diff
@@ -47,6 +48,7 @@
   protected final int _numDocsPerChunk;
   protected final int _numChunks;
   protected final int _lengthOfLongestEntry;
+  private final int _version;
```
Contributor:
Nice. I would also introduce a private final _headerEntryChunkOffsetSize here, and initialize it by calling a method getHeaderEntryChunkOffsetSize(version) in the writer.

Contributor (Author):
Done.
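A sketch of what that could look like (names taken from the review comment; not necessarily the exact merged code):

```java
// Width of each per-chunk offset entry in the file header, derived from the writer version.
private final int _headerEntryChunkOffsetSize;

// In the constructor:
//   _headerEntryChunkOffsetSize = getHeaderEntryChunkOffsetSize(version);

private static int getHeaderEntryChunkOffsetSize(int version) {
  // V1/V2 wrote 4-byte (int) chunk offsets; V3 writes 8-byte (long) offsets.
  return version <= 2 ? Integer.BYTES : Long.BYTES;
}
```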

@mcvsubbu, this actually comes in handy right away, since I haven't bumped the version of the fixed-byte chunk writer. It is still on version 2 and uses 4-byte chunk offset entries in the file header. So the current changes preserve v1/v2 compatibility for var-byte, read/write new var-byte indexes in v3, and still continue to read/write fixed-byte indexes in v1/v2.

I have mixed opinions on bumping the version of the fixed-byte chunk writer to use 8-byte offsets as well. If we don't bump it now and the fixed-byte file format changes tomorrow (for some reason), we will bump it to 3 then, and at that point it will automatically get 8-byte offsets by virtue of being at version >= 3. So maybe do it now and keep the versions the same.

The flip side is that ideally you would want to evolve the fixed-byte and var-byte formats independently (which is what this PR does by keeping the fixed-byte writer at version 2). Obviously, if we separate out the base class and duplicate code, things are simplified, but that's not the best option. Thoughts?

Contributor:
Fixed-byte and var-byte formats cannot evolve independently unless we split the base class, like you said. Some duplication can be avoided, but in the end, the version number at the top should decide what the underlying format is.

I guess the downside of doing this for fixed-byte is that storage will (almost) double for the fixed-byte no-dictionary columns?

Contributor:
No, my bad. It will only double the offset per chunk, so it should be OK. Let us just make it 8 bytes for all, like we discussed.

Contributor (Author):
Discussed offline. It is better to keep the version/format the same, so we will use 8-byte chunk offsets for fixed-byte indexes as well.

Storage overhead: consider a segment with 10 million rows. Since we currently pack 1000 rows into a fixed-byte chunk, there will be 10,000 chunks. If the file header has 8-byte chunk offsets instead of 4-byte, the storage overhead for that column's raw forward index goes up by 40KB (10,000 chunks * 4 bytes). Extrapolating to 1000 segments on a server with roughly 5 fixed-width no-dictionary columns per segment, we are looking at 40KB * 1000 * 5 = 200MB.
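The same overhead arithmetic as a quick sketch (all numbers are the illustrative ones from this comment):

```java
long chunksPerSegment = 10_000_000L / 1_000;                            // 10,000 chunks
long extraPerColumn = chunksPerSegment * (Long.BYTES - Integer.BYTES);  // 40,000 bytes ~= 40KB per column
long extraPerServer = extraPerColumn * 1_000 * 5;                       // 1000 segments x 5 columns ~= 200MB
```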

Will make the changes

Contributor (Author):
Made the changes as discussed

@siddharthteotia siddharthteotia merged commit 410fd70 into apache:master Apr 23, 2020
siddharthteotia added a commit to siddharthteotia/incubator-pinot that referenced this pull request May 29, 2020
* Use 8byte offsets in chunk based raw index creator

* cleanup

* fixed tests

* Fix tests and address review comments

* Use 8-byte offset for fixed-byte chunk writer.
Add backward compatibility test

Co-authored-by: Siddharth Teotia <steotia@steotia-mn1.linkedin.biz>
siddharthteotia added a commit that referenced this pull request May 29, 2020
@mcvsubbu added the backward-incompat label (Referenced by PRs that introduce or fix backward compat issues) Jun 1, 2020
siddharthteotia pushed a commit to siddharthteotia/incubator-pinot that referenced this pull request Jun 5, 2020
siddharthteotia added a commit that referenced this pull request Jun 8, 2020
* In PR #5285, the raw index writer format was changed to use an 8-byte offset for each chunk in the file header, and the writer version was bumped to 3. This was done to support > 2GB indexes. The change was backward compatible, so existing/old segments using 4-byte offsets can still be read.

While there is no problem with the change itself, it prevents rollback: if there is any orthogonal issue while rolling out a release, we can't roll back to an older Pinot release, since segments already generated with 8-byte offsets can't be read by old code.

This config option is temporary, to help with internal roll-out by keeping the 8-byte format disabled by default, thus allowing rollback in case of any issues. We plan to remove this option in the couple of weeks after the internal rollout.

* address review comments

Co-authored-by: Siddharth Teotia <steotia@steotia-mn1.linkedin.biz>