Use 8-byte offsets in chunk-based raw index creator #5285
Changes from 2 commits
```diff
@@ -37,6 +37,8 @@
  */
 public abstract class BaseChunkSingleValueWriter implements SingleColumnSingleValueWriter {
   private static final Logger LOGGER = LoggerFactory.getLogger(BaseChunkSingleValueWriter.class);
+  public static final int FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE_V1V2 = Integer.BYTES;
+  public static final int FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE = Long.BYTES;
```

Reviewer: Suggested change
Author: done

```diff
 
   protected final FileChannel _dataFile;
   protected ByteBuffer _header;
```
```diff
@@ -45,7 +47,7 @@ public abstract class BaseChunkSingleValueWriter implements SingleColumnSingleValueWriter {
   protected final ChunkCompressor _chunkCompressor;
 
   protected int _chunkSize;
-  protected int _dataOffset;
+  protected long _dataOffset;
```

Reviewer: Add another
Author: done

```diff
 
   /**
    * Constructor for the class.
```
```diff
@@ -139,7 +141,8 @@ public void close()
   private int writeHeader(ChunkCompressorFactory.CompressionType compressionType, int totalDocs, int numDocsPerChunk,
       int sizeOfEntry, int version) {
     int numChunks = (totalDocs + numDocsPerChunk - 1) / numDocsPerChunk;
-    int headerSize = (numChunks + 7) * Integer.BYTES; // 7 items written before chunk indexing.
+    // 7 items written before chunk indexing.
+    int headerSize = (7 * Integer.BYTES) + (numChunks * VarByteChunkSingleValueWriter.FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE);
```

Reviewer: This should be based on the version passed in to the writer. Yes, we use only version 2 now, but let us keep the versioning clean. It is there in the constructor, use it. It will help if we want to select a different version in the writer for whatever reason.
Author: I agree. done

```diff
 
     _header = ByteBuffer.allocateDirect(headerSize);
```
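For illustration, here is a minimal self-contained sketch of the version-aware sizing the reviewer is asking for. The helper name follows the suggestion made later in this thread; the class and method shapes are assumptions, not the merged Pinot code:

```java
public class ChunkHeaderSizing {
  // 4-byte chunk offsets for v1/v2 headers, 8-byte offsets for newer versions.
  public static final int FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE_V1V2 = Integer.BYTES;
  public static final int FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE = Long.BYTES;

  static int getHeaderEntryChunkOffsetSize(int version) {
    return version <= 2 ? FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE_V1V2 : FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE;
  }

  // 7 fixed int fields are written before the per-chunk offset entries.
  static int headerSize(int numChunks, int version) {
    return 7 * Integer.BYTES + numChunks * getHeaderEntryChunkOffsetSize(version);
  }

  public static void main(String[] args) {
    System.out.println(headerSize(10, 2)); // 28 + 10 * 4 = 68
    System.out.println(headerSize(10, 3)); // 28 + 10 * 8 = 108
  }
}
```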
```diff
@@ -196,7 +199,7 @@ protected void writeChunk() {
       throw new RuntimeException(e);
     }
 
-    _header.putInt(_dataOffset);
+    _header.putLong(_dataOffset);
     _dataOffset += sizeToWrite;
 
     _chunkBuffer.clear();
```
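This is also why both constants need to stay around: a reader must pick the matching offset width when decoding older files. A hedged reader-side sketch follows (the header layout and all names here are assumptions for illustration, not the actual VarByteChunkSingleValueReader, which reads through PinotDataBuffer):

```java
import java.nio.ByteBuffer;

final class ChunkOffsetDecoding {
  // Assumes 7 int fields precede the chunk offset entries, per the writer above.
  private static final int CHUNK_OFFSETS_START = 7 * Integer.BYTES;

  static long chunkStartOffset(ByteBuffer header, int chunkId, int version) {
    if (version <= 2) {
      // v1/v2: 4-byte offsets, widened to long so callers handle both formats uniformly.
      return header.getInt(CHUNK_OFFSETS_START + chunkId * Integer.BYTES);
    }
    // v3: 8-byte offsets let the data section grow past Integer.MAX_VALUE bytes.
    return header.getLong(CHUNK_OFFSETS_START + chunkId * Long.BYTES);
  }

  public static void main(String[] args) {
    ByteBuffer header = ByteBuffer.allocate(CHUNK_OFFSETS_START + 2 * Long.BYTES);
    header.putLong(CHUNK_OFFSETS_START + Long.BYTES, 123_456_789_012L); // offset beyond 2GB
    System.out.println(chunkStartOffset(header, 1, 3)); // 123456789012
  }
}
```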
```diff
@@ -21,16 +21,20 @@
 import java.io.File;
 import java.io.IOException;
 import java.net.URL;
+import java.nio.ByteOrder;
 import java.nio.charset.Charset;
 import java.util.Random;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.lang.RandomStringUtils;
+import org.apache.pinot.common.utils.StringUtil;
 import org.apache.pinot.core.io.compression.ChunkCompressorFactory;
 import org.apache.pinot.core.io.reader.impl.ChunkReaderContext;
 import org.apache.pinot.core.io.reader.impl.v1.VarByteChunkSingleValueReader;
 import org.apache.pinot.core.io.writer.impl.v1.VarByteChunkSingleValueWriter;
 import org.apache.pinot.core.segment.creator.impl.fwd.SingleValueVarByteRawIndexCreator;
 import org.apache.pinot.core.segment.memory.PinotDataBuffer;
+import org.apache.pinot.core.segment.memory.PinotNativeOrderLBuffer;
+import org.apache.pinot.core.segment.memory.PinotNonNativeOrderLBuffer;
 import org.testng.Assert;
 import org.testng.annotations.Test;
```
```diff
@@ -113,27 +117,37 @@ public void test(ChunkCompressorFactory.CompressionType compressionType)
    * @throws IOException
    */
   @Test
-  public void testBackwardCompatibility()
-      throws IOException {
+  public void testBackwardCompatibilityV1()
+      throws Exception {
     String[] expected = new String[]{"abcde", "fgh", "ijklmn", "12345"};
-    // Get v1 from resources folder
+    testBackwardCompatibilityHelper("data/varByteStrings.v1", expected, 1009);
+  }
+
+  /**
+   * This test ensures that the reader can read in a data file from version 2.
+   */
+  @Test
+  public void testBackwardCompatibilityV2()
+      throws Exception {
+    String[] data = {"abcdefghijk", "12456887", "pqrstuv", "500"};
+    testBackwardCompatibilityHelper("data/varByteStringsCompressed.v2", data, 1000);
```
Reviewer: Nice. Do we also want to add v1 raw data in the tests?
```diff
+    testBackwardCompatibilityHelper("data/varByteStringsRaw.v2", data, 1000);
+  }
+
+  private void testBackwardCompatibilityHelper(String fileName, String[] data, int numDocs)
+      throws Exception {
     ClassLoader classLoader = getClass().getClassLoader();
-    String fileName = "data/varByteStrings.v1";
     URL resource = classLoader.getResource(fileName);
     if (resource == null) {
       throw new RuntimeException("Input file not found: " + fileName);
     }
 
     File file = new File(resource.getFile());
     try (VarByteChunkSingleValueReader reader = new VarByteChunkSingleValueReader(
         PinotDataBuffer.mapReadOnlyBigEndianFile(file))) {
       ChunkReaderContext context = reader.createContext();
 
-      int numEntries = 1009; // Number of entries in the input file.
-      for (int i = 0; i < numEntries; i++) {
+      for (int i = 0; i < numDocs; i++) {
         String actual = reader.getString(i, context);
-        Assert.assertEquals(actual, expected[i % expected.length]);
+        Assert.assertEquals(actual, data[i % data.length]);
       }
     }
   }
```
|
@@ -173,7 +187,7 @@ private void testLargeVarcharHelper(ChunkCompressorFactory.CompressionType compr | |
int maxStringLengthInBytes = 0; | ||
for (int i = 0; i < numDocs; i++) { | ||
expected[i] = RandomStringUtils.random(random.nextInt(numChars)); | ||
maxStringLengthInBytes = Math.max(maxStringLengthInBytes, expected[i].getBytes(UTF_8).length); | ||
maxStringLengthInBytes = Math.max(maxStringLengthInBytes, StringUtil.encodeUtf8(expected[i]).length); | ||
} | ||
|
||
int numDocsPerChunk = SingleValueVarByteRawIndexCreator.getNumDocsPerChunk(maxStringLengthInBytes); | ||
|
```diff
@@ -183,20 +197,44 @@ private void testLargeVarcharHelper(ChunkCompressorFactory.CompressionType compr
 
     for (int i = 0; i < numDocs; i += 2) {
       writer.setString(i, expected[i]);
-      writer.setBytes(i + 1, expected[i].getBytes(UTF_8));
+      writer.setBytes(i + 1, StringUtil.encodeUtf8(expected[i]));
     }
 
     writer.close();
 
-    try (VarByteChunkSingleValueReader reader = new VarByteChunkSingleValueReader(
-        PinotDataBuffer.mapReadOnlyBigEndianFile(outFile))) {
+    PinotDataBuffer buffer = PinotDataBuffer.mapReadOnlyBigEndianFile(outFile);
+    try (VarByteChunkSingleValueReader reader = new VarByteChunkSingleValueReader(buffer)) {
       ChunkReaderContext context = reader.createContext();
       for (int i = 0; i < numDocs; i += 2) {
         String actual = reader.getString(i, context);
         Assert.assertEquals(actual, expected[i]);
-        Assert.assertEquals(actual.getBytes(UTF_8), expected[i].getBytes(UTF_8));
-        Assert.assertEquals(reader.getBytes(i + 1), expected[i].getBytes(UTF_8));
+        byte[] expectedBytes = StringUtil.encodeUtf8(expected[i]);
+        Assert.assertEquals(StringUtil.encodeUtf8(actual), expectedBytes);
+        Assert.assertEquals(reader.getBytes(i + 1, context), expectedBytes);
       }
     }
 
+    // For large variable width column values (where total size of data
+    // across all rows in the segment is > 2GB), LBuffer will be used for
+    // reading the fwd index. However, to test this scenario the unit test
+    // will take a long time to execute due to comparison
+    // (75000 characters in each row and 10000 rows will hit this scenario).
+    // So we specifically test for mapping the index file into a LBuffer
+    // to exercise the LBuffer code
+    if (ByteOrder.nativeOrder() == ByteOrder.BIG_ENDIAN) {
+      buffer = PinotNativeOrderLBuffer.mapFile(outFile, true, 0, outFile.length());
+    } else {
+      buffer = PinotNonNativeOrderLBuffer.mapFile(outFile, true, 0, outFile.length());
+    }
+
+    try (VarByteChunkSingleValueReader reader = new VarByteChunkSingleValueReader(buffer)) {
+      ChunkReaderContext context = reader.createContext();
+      for (int i = 0; i < numDocs; i += 2) {
+        String actual = reader.getString(i, context);
+        Assert.assertEquals(actual, expected[i]);
+        byte[] expectedBytes = StringUtil.encodeUtf8(expected[i]);
+        Assert.assertEquals(StringUtil.encodeUtf8(actual), expectedBytes);
+        Assert.assertEquals(reader.getBytes(i + 1, context), expectedBytes);
+      }
+    }
```
Reviewer: Nice. I would also introduce a `private final _headerEntryChunkOffsetSize` here, and initialize it by calling a method `getHeaderEntryChunkOffsetSize(version)` in the writer.
Author: Done.

@mcvsubbu, this actually comes in handy right now, since I haven't bumped the version of the fixed-byte chunk writer. It is still on version 2 and uses 4-byte chunk offset entries in the file header. So the current changes protect v1/v2 compatibility for var-byte, read/write the new var-byte format in v3, and still read/write fixed-byte indexes in v1/v2.

I have mixed opinions on bumping the fixed-byte chunk writer's version to use 8-byte offsets as well. If we don't bump it now and the fixed-byte file format changes tomorrow (for some reason), we will bump it to 3 then; at that point it will automatically get 8-byte offsets by virtue of being at version >= 3. So maybe do it now and keep the versions the same.

The flip side is that you would ideally want to evolve the fixed-byte and var-byte formats independently (which is what this PR does by keeping the fixed-byte writer at version 2). Obviously, if we split the base class and duplicate code, things would be simpler, but that's not the best option. Thoughts?
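A rough sketch of that suggestion, combined with the versioning split described above (a hypothetical shape abbreviated to the relevant members; all names here are assumptions):

```java
// Resolve the offset width once per writer instance instead of
// recomputing it on every writeHeader() call.
abstract class ChunkWriterSketch {
  protected final int _version;
  protected final int _headerEntryChunkOffsetSize;

  protected ChunkWriterSketch(int version) {
    _version = version;
    // v1/v2 headers carry 4-byte chunk offsets; v3 onwards carries 8-byte offsets.
    _headerEntryChunkOffsetSize = version <= 2 ? Integer.BYTES : Long.BYTES;
  }

  protected int headerSize(int numChunks) {
    // 7 fixed int fields precede the per-chunk offsets.
    return 7 * Integer.BYTES + numChunks * _headerEntryChunkOffsetSize;
  }
}
```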
Reviewer: Fixed and var-byte formats cannot evolve independently unless we split the base class like you said. Some duplication can be avoided, but in the end, the version number at the top should decide what the format is underneath.

I guess the downside of moving this to fixed byte is that storage will (almost) double for the fixed-byte no-dictionary columns?
Reviewer: No, my bad. It is only the offset per chunk that doubles, so it should be ok. Let us just make it 8 bytes for all, like we discussed.
Author: Discussed offline. It is better to keep the version/format the same, so we will use 8-byte chunk offsets for fixed-byte indexes as well.

Storage overhead: consider a segment with 10 million rows. Since we currently pack 1000 rows into a fixed-byte chunk, there will be 10k chunks. If the file header has 8-byte chunk offsets instead of 4-byte ones, the storage overhead for that column's raw forward index goes up by 40KB (10000 chunks * 4 bytes). Extrapolating to 1000 segments on a server with roughly 5 fixed-width no-dictionary columns per segment, we are looking at 40KB * 1000 * 5 = 200MB.

Will make the changes.
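The arithmetic in that estimate, as a quick sketch one can run (all figures are the example values from the comment above, not measurements):

```java
public class OffsetOverheadCheck {
  public static void main(String[] args) {
    long rowsPerSegment = 10_000_000L;
    long rowsPerChunk = 1_000L;                            // current fixed-byte chunk packing
    long numChunks = rowsPerSegment / rowsPerChunk;        // 10,000 chunks
    long extraBytesPerChunk = Long.BYTES - Integer.BYTES;  // 8 - 4 = 4 extra bytes per offset
    long perColumnBytes = numChunks * extraBytesPerChunk;  // 40,000 bytes ~ 40KB per column
    long perServerBytes = perColumnBytes * 1_000 * 5;      // 1000 segments x 5 columns ~ 200MB
    System.out.println(perColumnBytes + " bytes/column, " + perServerBytes + " bytes/server");
  }
}
```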
Author: Made the changes as discussed.