Use 8byte offsets in chunk based raw index creator #5285

Merged: 5 commits, Apr 23, 2020
@@ -18,12 +18,15 @@
*/
package org.apache.pinot.core.io.reader.impl.v1;

import com.google.common.base.Preconditions;
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.pinot.core.io.compression.ChunkCompressorFactory;
import org.apache.pinot.core.io.compression.ChunkDecompressor;
import org.apache.pinot.core.io.reader.BaseSingleColumnSingleValueReader;
import org.apache.pinot.core.io.reader.impl.ChunkReaderContext;
import org.apache.pinot.core.io.writer.impl.v1.BaseChunkSingleValueWriter;
import org.apache.pinot.core.io.writer.impl.v1.VarByteChunkSingleValueWriter;
import org.apache.pinot.core.segment.memory.PinotDataBuffer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -47,6 +50,8 @@ public abstract class BaseChunkSingleValueReader extends BaseSingleColumnSingleV
protected final int _numDocsPerChunk;
protected final int _numChunks;
protected final int _lengthOfLongestEntry;
private final int _version;

Contributor:
Nice.

I would also introduce a private final _headerEntryChunkOffsetSize here, and initialize it by calling a method getHeaderEntryChunkOffsetSize(version) in the writer.

Contributor Author:
Done.

@mcvsubbu, this actually comes in handy right now since I haven't bumped the version of the fixed-byte chunk writer. It is still on version 2 and uses 4-byte chunk offset entries in the file header. So the current changes preserve v1/v2 compatibility for var-byte, read/write the new var-byte format in v3, and continue to read/write fixed-byte indexes in v1/v2.

I have mixed opinions on bumping the version of the fixed-byte chunk writer to use 8-byte offsets as well. If we don't bump it now and the fixed-byte file format changes tomorrow (for some reason), we will bump it to 3 then, and it will automatically get 8-byte offsets by virtue of being at version >= 3. So maybe do it now and keep the versions the same.

The flip side is that you would ideally want to evolve the fixed-byte and var-byte formats independently (which is what this PR does by keeping the fixed-byte writer at version 2). Obviously, if we split out the base class and duplicate the code, things would be simpler, but that's not the best option. Thoughts?

Contributor:
Fixed-byte and var-byte formats cannot evolve independently unless we split the base class, like you said. Some duplication can be avoided, but in the end the version number at the top should decide what the underlying format is.

I guess the downside of doing this for fixed-byte is that storage will (almost) double for the fixed-byte no-dictionary columns?

Contributor:
No, my bad. It only doubles the offset entry per chunk, so it should be OK. Let us just make it 8 bytes for all, like we discussed.

Contributor Author:
Discussed offline. It is better to keep the version/format the same, so we will use 8-byte chunk offsets for fixed-byte indexes as well.

Storage overhead: consider a segment with 10 million rows. Since we currently pack 1000 rows into a fixed-byte chunk, there will be 10k chunks. If the file header has 8-byte chunk offsets instead of 4-byte ones, the storage overhead for that column's raw forward index goes up by 40KB (10,000 chunks * 4 extra bytes). Extrapolating to 1000 segments on a server with roughly 5 fixed-width no-dictionary columns per segment, we are looking at 40KB * 1000 * 5 = 200MB.

Will make the changes.
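
As a rough sanity check of the estimate above, here is a small sketch; the row count, rows-per-chunk, segment count, and column count are the figures assumed in this discussion, not values read from the code:

public class OffsetOverheadEstimate {
  public static void main(String[] args) {
    long numDocs = 10_000_000L;          // rows in one segment (assumed)
    int numDocsPerChunk = 1_000;         // rows packed into one fixed-byte chunk (assumed)
    long numChunks = (numDocs + numDocsPerChunk - 1) / numDocsPerChunk;  // 10,000 chunks
    long extraBytesPerChunk = Long.BYTES - Integer.BYTES;                // 4 extra bytes per offset entry
    long perColumnOverhead = numChunks * extraBytesPerChunk;             // 40,000 bytes (~40KB) per column
    long perServerOverhead = perColumnOverhead * 1_000 * 5;              // 1000 segments * 5 columns ~= 200MB
    System.out.println(perColumnOverhead + " bytes per column, " + perServerOverhead + " bytes per server");
  }
}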

Contributor Author:
Made the changes as discussed

private final int _headerEntryChunkOffsetSize;

/**
* Constructor for the class.
@@ -57,7 +62,7 @@ public BaseChunkSingleValueReader(PinotDataBuffer pinotDataBuffer) {
_dataBuffer = pinotDataBuffer;

int headerOffset = 0;
int version = _dataBuffer.getInt(headerOffset);
_version = _dataBuffer.getInt(headerOffset);
headerOffset += Integer.BYTES;

_numChunks = _dataBuffer.getInt(headerOffset);
@@ -70,7 +75,7 @@ public BaseChunkSingleValueReader(PinotDataBuffer pinotDataBuffer) {
headerOffset += Integer.BYTES;

int dataHeaderStart = headerOffset;
if (version > 1) {
if (_version > 1) {
_dataBuffer.getInt(headerOffset); // Total docs
headerOffset += Integer.BYTES;

@@ -87,9 +92,10 @@ public BaseChunkSingleValueReader(PinotDataBuffer pinotDataBuffer) {
}

_chunkSize = (_lengthOfLongestEntry * _numDocsPerChunk);
_headerEntryChunkOffsetSize = BaseChunkSingleValueWriter.getHeaderEntryChunkOffsetSize(_version);

// Slice out the header from the data buffer.
int dataHeaderLength = _numChunks * Integer.BYTES;
int dataHeaderLength = _numChunks * _headerEntryChunkOffsetSize;
int rawDataStart = dataHeaderStart + dataHeaderLength;
_dataHeader = _dataBuffer.view(dataHeaderStart, rawDataStart);

@@ -120,14 +126,14 @@ protected ByteBuffer getChunkForRow(int row, ChunkReaderContext context) {
}

int chunkSize;
int chunkPosition = getChunkPosition(chunkId);
long chunkPosition = getChunkPosition(chunkId);

// Size of chunk can be determined using next chunks offset, or end of data buffer for last chunk.
if (chunkId == (_numChunks - 1)) { // Last chunk.
chunkSize = (int) (_dataBuffer.size() - chunkPosition);
} else {
int nextChunkOffset = getChunkPosition(chunkId + 1);
chunkSize = nextChunkOffset - chunkPosition;
long nextChunkOffset = getChunkPosition(chunkId + 1);
chunkSize = (int) (nextChunkOffset - chunkPosition);
}

ByteBuffer decompressedBuffer = context.getChunkBuffer();
@@ -145,12 +151,15 @@ protected ByteBuffer getChunkForRow(int row, ChunkReaderContext context) {

/**
* Helper method to get the offset of the chunk in the data.
*
* @param chunkId Id of the chunk for which to return the position.
* @return Position (offset) of the chunk in the data.
*/
protected int getChunkPosition(int chunkId) {
return _dataHeader.getInt(chunkId * Integer.BYTES);
protected long getChunkPosition(int chunkId) {
if (_headerEntryChunkOffsetSize == Integer.BYTES) {
return _dataHeader.getInt(chunkId * _headerEntryChunkOffsetSize);
} else {
return _dataHeader.getLong(chunkId * _headerEntryChunkOffsetSize);
}
}

/**
@@ -55,7 +55,7 @@ public String getString(int row, ChunkReaderContext context) {
int chunkRowId = row % _numDocsPerChunk;
ByteBuffer chunkBuffer = getChunkForRow(row, context);

int rowOffset = chunkBuffer.getInt(chunkRowId * Integer.BYTES);
int rowOffset = chunkBuffer.getInt(chunkRowId * VarByteChunkSingleValueWriter.CHUNK_HEADER_ENTRY_ROW_OFFSET_SIZE);
int nextRowOffset = getNextRowOffset(chunkRowId, chunkBuffer);

int length = nextRowOffset - rowOffset;
@@ -77,7 +77,7 @@ public byte[] getBytes(int row, ChunkReaderContext context) {
int chunkRowId = row % _numDocsPerChunk;
ByteBuffer chunkBuffer = getChunkForRow(row, context);

int rowOffset = chunkBuffer.getInt(chunkRowId * Integer.BYTES);
int rowOffset = chunkBuffer.getInt(chunkRowId * VarByteChunkSingleValueWriter.CHUNK_HEADER_ENTRY_ROW_OFFSET_SIZE);
int nextRowOffset = getNextRowOffset(chunkRowId, chunkBuffer);

int length = nextRowOffset - rowOffset;
@@ -109,7 +109,7 @@ private int getNextRowOffset(int currentRowId, ByteBuffer chunkBuffer) {
// Last row in this chunk.
nextRowOffset = chunkBuffer.limit();
} else {
nextRowOffset = chunkBuffer.getInt((currentRowId + 1) * Integer.BYTES);
nextRowOffset = chunkBuffer.getInt((currentRowId + 1) * VarByteChunkSingleValueWriter.CHUNK_HEADER_ENTRY_ROW_OFFSET_SIZE);
// For incomplete chunks, the next string's offset will be 0 as row offset for absent rows are 0.
if (nextRowOffset == 0) {
nextRowOffset = chunkBuffer.limit();
@@ -18,6 +18,7 @@
*/
package org.apache.pinot.core.io.writer.impl.v1;

import com.google.common.base.Preconditions;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
@@ -37,6 +38,8 @@
*/
public abstract class BaseChunkSingleValueWriter implements SingleColumnSingleValueWriter {
private static final Logger LOGGER = LoggerFactory.getLogger(BaseChunkSingleValueWriter.class);
private static final int FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE_V1V2 = Integer.BYTES;
private static final int FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE_V3 = Long.BYTES;

protected final FileChannel _dataFile;
protected ByteBuffer _header;
@@ -45,7 +48,9 @@ public abstract class BaseChunkSingleValueWriter implements SingleColumnSingleVa
protected final ChunkCompressor _chunkCompressor;

protected int _chunkSize;
protected int _dataOffset;
protected long _dataOffset;

Contributor:
Add another final int _headerEntryChunkOffsetSize here, determined based on version

Contributor Author:
Done.


private final int _headerEntryChunkOffsetSize;

/**
* Constructor for the class.
@@ -64,13 +69,25 @@ protected BaseChunkSingleValueWriter(File file, ChunkCompressorFactory.Compressi
throws FileNotFoundException {
_chunkSize = chunkSize;
_chunkCompressor = ChunkCompressorFactory.getCompressor(compressionType);

_headerEntryChunkOffsetSize = getHeaderEntryChunkOffsetSize(version);
_dataOffset = writeHeader(compressionType, totalDocs, numDocsPerChunk, sizeOfEntry, version);
_chunkBuffer = ByteBuffer.allocateDirect(chunkSize);
_compressedBuffer = ByteBuffer.allocateDirect(chunkSize * 2);
_dataFile = new RandomAccessFile(file, "rw").getChannel();
}

public static int getHeaderEntryChunkOffsetSize(int version) {
switch (version) {
case 1:
case 2:
return FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE_V1V2;
case 3:
return FILE_HEADER_ENTRY_CHUNK_OFFSET_SIZE_V3;
default:
throw new IllegalStateException("Invalid version: " + version);
}
}

@Override
public void setChar(int row, char ch) {
throw new UnsupportedOperationException();
@@ -139,7 +156,7 @@ public void close()
private int writeHeader(ChunkCompressorFactory.CompressionType compressionType, int totalDocs, int numDocsPerChunk,
int sizeOfEntry, int version) {
int numChunks = (totalDocs + numDocsPerChunk - 1) / numDocsPerChunk;
int headerSize = (numChunks + 7) * Integer.BYTES; // 7 items written before chunk indexing.
int headerSize = (7 * Integer.BYTES) + (numChunks * _headerEntryChunkOffsetSize);

_header = ByteBuffer.allocateDirect(headerSize);

@@ -196,7 +213,12 @@ protected void writeChunk() {
throw new RuntimeException(e);
}

_header.putInt(_dataOffset);
if (_headerEntryChunkOffsetSize == Integer.BYTES) {
_header.putInt((int) _dataOffset);
} else if (_headerEntryChunkOffsetSize == Long.BYTES) {
_header.putLong(_dataOffset);
}

_dataOffset += sizeToWrite;

_chunkBuffer.clear();
@@ -40,7 +40,8 @@
* <li> Integer: Total number of docs (version 2 onwards). </li>
* <li> Integer: Compression type enum value (version 2 onwards). </li>
* <li> Integer: Start offset of data header (version 2 onwards). </li>
* <li> Integer array: Integer offsets for all chunks in the data .</li>
* <li> Integer array: Integer offsets for all chunks in the data (up to version 2),
* Long array: Long offsets for all chunks in the data (version 3 onwards) </li>
* </ul>
*
* <p> Individual Chunks: </p>
Expand All @@ -53,7 +54,7 @@
@NotThreadSafe
public class FixedByteChunkSingleValueWriter extends BaseChunkSingleValueWriter {

private static final int CURRENT_VERSION = 2;
private static final int CURRENT_VERSION = 3;
private int _chunkDataOffset;

/**
@@ -36,7 +36,11 @@
* <li> Integer: Total number of chunks. </li>
* <li> Integer: Number of docs per chunk. </li>
* <li> Integer: Length of longest entry (in bytes). </li>
* <li> Integer array: Integer offsets for all chunks in the data .</li>
* <li> Integer: Total number of docs (version 2 onwards). </li>
* <li> Integer: Compression type enum value (version 2 onwards). </li>
* <li> Integer: Start offset of data header (version 2 onwards). </li>
* <li> Integer array: Integer offsets for all chunks in the data (up to version 2),
* Long array: Long offsets for all chunks in the data (version 3 onwards) </li>
* </ul>
*
* <p> Individual Chunks: </p>
@@ -49,7 +53,7 @@
*/
@NotThreadSafe
public class VarByteChunkSingleValueWriter extends BaseChunkSingleValueWriter {
private static final int CURRENT_VERSION = 2;
private static final int CURRENT_VERSION = 3;
public static final int CHUNK_HEADER_ENTRY_ROW_OFFSET_SIZE = Integer.BYTES;

private final int _chunkHeaderSize;
@@ -43,7 +43,7 @@ static PinotNativeOrderLBuffer loadFile(File file, long offset, long size)
return buffer;
}

static PinotNativeOrderLBuffer mapFile(File file, boolean readOnly, long offset, long size)
public static PinotNativeOrderLBuffer mapFile(File file, boolean readOnly, long offset, long size)
throws IOException {
if (readOnly) {
return new PinotNativeOrderLBuffer(new MMapBuffer(file, offset, size, MMapMode.READ_ONLY), true, false);
@@ -43,7 +43,7 @@ static PinotNonNativeOrderLBuffer loadFile(File file, long offset, long size)
return buffer;
}

static PinotNonNativeOrderLBuffer mapFile(File file, boolean readOnly, long offset, long size)
public static PinotNonNativeOrderLBuffer mapFile(File file, boolean readOnly, long offset, long size)
throws IOException {
if (readOnly) {
return new PinotNonNativeOrderLBuffer(new MMapBuffer(file, offset, size, MMapMode.READ_ONLY), true, false);
@@ -260,25 +260,32 @@ public void testBytes(ChunkCompressorFactory.CompressionType compressionType)
* @throws IOException
*/
@Test
public void testBackwardCompatibility()
throws IOException {
// Get v1 from resources folder
public void testBackwardCompatibilityV1()
throws Exception {
testBackwardCompatibilityHelper("data/fixedByteSVRDoubles.v1", 10009, 0);
}

@Test
public void testBackwardCompatibilityV2()
throws Exception {
testBackwardCompatibilityHelper("data/fixedByteCompressed.v2", 2000, 100.2356);
testBackwardCompatibilityHelper("data/fixedByteRaw.v2", 2000, 100.2356);
}

private void testBackwardCompatibilityHelper(String fileName, int numDocs, double startValue)
throws Exception {
ClassLoader classLoader = getClass().getClassLoader();
String fileName = "data/fixedByteSVRDoubles.v1";
URL resource = classLoader.getResource(fileName);
if (resource == null) {
throw new RuntimeException("Input file not found: " + fileName);
}

File file = new File(resource.getFile());
try (FixedByteChunkSingleValueReader reader = new FixedByteChunkSingleValueReader(
PinotDataBuffer.mapReadOnlyBigEndianFile(file))) {
ChunkReaderContext context = reader.createContext();

int numEntries = 10009; // Number of entries in the input file.
for (int i = 0; i < numEntries; i++) {
for (int i = 0; i < numDocs; i++) {
double actual = reader.getDouble(i, context);
Assert.assertEquals(actual, (double) i);
Assert.assertEquals(actual, i + startValue);
}
}
}