Redo bounds checking for decoding and encoding of double-records
* We are not allowed to throw exceptions when decoding records until after we have verified that our read was consistent with `PageCursor.shouldRetry()`.
* Double-records need two concurrently open page cursors to decode…
* … but the outer retry loop in CommonAbstractStore only looks at the cursor used to decode the first record.
* To solve this, the concept of linked page cursors is introduced:
  * A page cursor can open a linked page cursor, and calls to `shouldRetry` and `checkAndClearOutOfBoundsFlag` on the parent cursor also automatically delegate to the linked cursor.
  * This way, the retry loop in CommonAbstractStore ends up covering both cursors with its calls to `shouldRetry` and its bounds check. (See the first sketch after this list.)
* To simplify decoding of records that span multiple record units, the CompositePageCursor is introduced:
  * A CompositePageCursor presents a seamless view of parts of two distinct page cursors. (See the second sketch after this list.)
  * Using a composite page cursor, double-records can be decoded as if they were a single record.
  * This simplifies the record decoding logic.
  * This simplification has made the DataAdapter concept, and its SecondaryRead and WriteCursorAdapters, redundant.
* As part of this work, the `PageCursor.getUnsignedInt()` method has been removed and inlined into all of its call sites (see the last sketch after this list). This simplifies testing of the cursor interface, which already has a large number of IO methods.
* The `RecordFormatTest` was making assertions about the number of `PagedFile.io()` and `PageCursor.next()` calls the decoding logic makes during its work. These assertions have been removed because those measurements cannot be trusted when we are also using an adversarial page cache implementation that causes random retries of reads, etc.
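The delegation behind linked cursors can be pictured roughly as below. This is a minimal sketch assuming a hypothetical cursor type: the real `PageCursor` carries the full IO surface, and the abstract method names here merely mirror the description above.

```java
import java.io.IOException;

// Sketch of the linked-cursor delegation idea (illustrative, not the actual
// Neo4j sources): the parent cursor remembers the cursor it opened, and its
// consistency checks fan out to that linked cursor as well.
abstract class LinkedCursorSketch
{
    private LinkedCursorSketch linked;

    // Opening a linked cursor registers it with the parent.
    LinkedCursorSketch openLinkedCursor( long pageId ) throws IOException
    {
        linked = openCursor( pageId );
        return linked;
    }

    // The parent's shouldRetry() also asks the linked cursor, so a single
    // outer retry loop covers reads made through both cursors.
    final boolean shouldRetry() throws IOException
    {
        boolean retry = shouldRetryThisCursor();
        if ( linked != null )
        {
            retry |= linked.shouldRetry();
        }
        return retry;
    }

    // Likewise, an out-of-bounds read on either cursor is reported, and
    // cleared, through the parent.
    final boolean checkAndClearOutOfBoundsFlag()
    {
        boolean outOfBounds = checkAndClearThisBoundsFlag();
        if ( linked != null )
        {
            outOfBounds |= linked.checkAndClearOutOfBoundsFlag();
        }
        return outOfBounds;
    }

    protected abstract LinkedCursorSketch openCursor( long pageId ) throws IOException;

    protected abstract boolean shouldRetryThisCursor() throws IOException;

    protected abstract boolean checkAndClearThisBoundsFlag();
}
```

With this shape, the existing `do { … } while ( cursor.shouldRetry() )` loop and bounds check in CommonAbstractStore transparently cover any linked cursor opened while decoding the second record unit.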
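The seamless view a CompositePageCursor provides can be illustrated with a ByteBuffer analogy. This is not the real class, which composes actual page cursors and supports the full read/write surface, but it shows how a value straddling two record units decodes like any other value:

```java
import java.nio.ByteBuffer;

// ByteBuffer analogy of a composite cursor: reads drain the first region,
// then continue seamlessly in the second.
final class CompositeViewSketch
{
    private final ByteBuffer first;
    private final ByteBuffer second;

    CompositeViewSketch( ByteBuffer first, ByteBuffer second )
    {
        this.first = first;
        this.second = second;
    }

    byte getByte()
    {
        return first.hasRemaining() ? first.get() : second.get();
    }

    long getLong()
    {
        long value = 0;
        for ( int i = 0; i < Long.BYTES; i++ )
        {
            // Each step may silently cross from the first unit into the second.
            value = (value << 8) | (getByte() & 0xFF);
        }
        return value;
    }
}
```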
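The `getUnsignedInt()` inlining is mechanical: assuming the removed convenience method simply widened a signed four-byte read, each call site now does the widening itself. The variable name below is illustrative:

```java
// Before: long reference = cursor.getUnsignedInt();
// After, inlined at each call site (cursor assumed in scope; the mask widens
// the signed int read into an unsigned long value):
long reference = cursor.getInt() & 0xFFFFFFFFL;
```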