Remove invalid math in ByteBuffer slice slow path #2075
Conversation
Motivation:

ByteBuffer uses a synthetic UInt24 as the lower bound of the slice of memory that it can access. This necessitates a slow path when we create slices of ByteBuffers that would force the lower bound to be larger than a UInt24.

This slow path has a bunch of math within it, and it turns out that some of it is invalid. This means that in rare cases slicing a ByteBuffer can cause a crash. That's not great: the code should only crash if we broke an internal invariant, not because we did some invalid math.

In particular, the relevant check calculates the upper bound of the storage to be copied. This was:

```swift
min(storageRebaseAmount + _toCapacity(self._slice.count), self._slice.upperBound, storageRebaseAmount + capacity)
```

Here, `storageRebaseAmount` is either `self._slice.lowerBound` or `self._slice.lowerBound + self._readerIndex`: that is, `storageRebaseAmount >= self._slice.lowerBound`. Given that `self._slice.count == self._slice.upperBound - self._slice.lowerBound`, we know that `storageRebaseAmount + self._slice.count` is always greater than or equal to `self._slice.upperBound`, so calculating this term is unnecessary. Unfortunately, it can also overflow: if `self._slice.upperBound + self._readerIndex` exceeds `UInt32.max`, this will crash.

Note that the other term, `storageRebaseAmount + capacity`, has been kept: `capacity` is always computed by us, and it should only ever overflow if we are increasing the size of the buffer past `UInt32.max`, which we cannot handle anyway.

Modifications:

- Restructured and heavily commented `_copyStorageAndRebase`.
- Removed the redundant crashing check.

Result:

Less trapping!
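The redundancy argument above can be checked numerically. A minimal standalone sketch, using made-up values in place of ByteBuffer's internal `_slice` fields:

```swift
// Hypothetical values standing in for ByteBuffer's internal state.
let lowerBound: UInt32 = 100
let upperBound: UInt32 = 500
let readerIndex: UInt32 = 40
let count = upperBound - lowerBound  // self._slice.count == 400

// storageRebaseAmount is either lowerBound or lowerBound + readerIndex,
// so it is always >= lowerBound ...
let storageRebaseAmount = lowerBound + readerIndex

// ... which means storageRebaseAmount + count >= upperBound: the
// upperBound argument always wins the min(), so the first term is redundant.
precondition(storageRebaseAmount + count >= upperBound)

// The redundant term is also the one that can overflow: for a slice whose
// upperBound is close to UInt32.max, the checked addition below would trap.
let hugeUpperBound = UInt32.max - 10
// let trapping = hugeUpperBound + readerIndex  // runtime crash: arithmetic overflow
```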
```swift
// Step 5: Fixup the indices. These can never underflow: `indexRebaseAmount` can only ever be 0 or `self._readerIndex`, so we either do nothing
// or we move the indices backwards by a valid range, as `self._writerIndex` is always >= `self._readerIndex`.
self._moveReaderIndex(to: self._readerIndex &- indexRebaseAmount)
```
Thanks for rewriting, commenting & fixing this! The only question I have is why we use unchecked math here. This function is the slow path and is never inlined, so I don't think there's any harm in always doing checked math (even if it's clearly safe to use the unchecked math in this version of the code).
The reason I'm mildly worried is potential future changes: someone might cargo-cult the unchecked math even into a place where it isn't safe, and so a new vulnerability or something similar may slip in.
Feel free to ignore :)
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yeah, I'm definitely on the fence about that. Let's see what the others think.
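For context on the trade-off being discussed, a minimal sketch of Swift's checked vs. wrapping operators (values are illustrative, not from the PR):

```swift
let readerIndex: UInt32 = 5
let indexRebaseAmount: UInt32 = 5

// Checked subtraction: traps (crashes the program) if the result would
// underflow, i.e. if an internal invariant was broken.
let checked = readerIndex - indexRebaseAmount

// Wrapping ("unchecked") subtraction: silently wraps around instead of
// trapping, e.g. 0 &- 1 == UInt32.max. It avoids the overflow check but
// hides broken invariants, which is the reviewer's concern.
let unchecked = readerIndex &- indexRebaseAmount

precondition(checked == 0 && unchecked == 0)
precondition((0 as UInt32) &- 1 == UInt32.max)
```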
```swift
// We only want to run this test on 64-bit systems: 32-bit systems can't allocate buffers
// large enough to run this test safely.
guard MemoryLayout<Int>.size >= 8 else {
    return
```
Nit: Can we use `XCTSkip` here instead of just returning?

Suggested change:
```diff
-            return
+            throw XCTSkip("This test is only supported on 64-bit systems")
```
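For illustration, the suggested pattern in context; a sketch only, with a hypothetical test-class and test-method name:

```swift
import XCTest

final class LargeSliceTests: XCTestCase {  // hypothetical class name
    func testSlicingHugeBuffer() throws {
        // XCTSkip records the test as "skipped" with a visible reason in the
        // test report, whereas an early `return` silently counts as a pass.
        guard MemoryLayout<Int>.size >= 8 else {
            throw XCTSkip("This test is only supported on 64-bit systems")
        }
        // ... allocate a buffer large enough to exercise the slice slow path ...
    }
}
```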
LGTM! Thanks for the changes and the thorough comments.