
Remove invalid math in ByteBuffer slice slow path #2075

Merged
Lukasa merged 3 commits into apple:main from cb-slice-from-big-end-buffer on Apr 13, 2022

Conversation

@Lukasa Lukasa (Contributor) commented Apr 11, 2022


Motivation:

ByteBuffer uses a synthetic UInt24 as the lower bound of the slice of
memory that it can access. This necessitates a slow path when we create
slices of ByteBuffers that would force the lower bound to be larger than
a UInt24.

This slow path has a bunch of math within it, and it turns out that some
of it is invalid. This means in rare cases slicing a ByteBuffer can
cause a crash. That's not great: the code should only crash if we broke
an internal invariant, not because we did some invalid math.

In particular, the relevant check calculates the upper bound of the
storage to be copied. This was:

```
min(storageRebaseAmount + _toCapacity(self._slice.count), self._slice.upperBound, storageRebaseAmount + capacity)
```

In this case, `storageRebaseAmount` is either `self._slice.lowerBound` or
`self._slice.lowerBound + self._readerIndex`: that is,
`storageRebaseAmount >= self._slice.lowerBound`. Given that
`self._slice.count == self._slice.upperBound - self._slice.lowerBound`,
we know that `storageRebaseAmount + self._slice.count` is always
greater than or equal to `self._slice.upperBound`, so calculating
this term is unnecessary. Unfortunately, it can also overflow: if
`self._slice.upperBound + self._readerIndex` is greater than `UInt32.max`
then this addition will crash.

Note that the other math, `storageRebaseAmount + capacity`, has been
kept. `capacity` is always computed by us and this should only ever
overflow if we are increasing the size of the buffer past UInt32.max,
which we cannot handle anyway.
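To make the argument above concrete, here is a small standalone sketch (not NIO's actual code; the names simply mirror the discussion) showing that under the slice invariant the removed term can never be the minimum, while the addition it performs can still trap:

```swift
// Standalone sketch: names mirror the PR discussion, not the real ByteBuffer internals.
let lowerBound: UInt32 = 100
let upperBound: UInt32 = 500
let count = upperBound - lowerBound                  // 400, by definition of a slice
let readerIndex: UInt32 = 50
let storageRebaseAmount = lowerBound + readerIndex   // >= lowerBound by construction

// storageRebaseAmount + count == upperBound + readerIndex >= upperBound,
// so this term can never be smaller than upperBound and computing it is pointless.
assert(storageRebaseAmount + count >= upperBound)

// Worse: with checked arithmetic the pointless addition can trap. If the
// rebase amount sits near UInt32.max, adding the slice count overflows:
let bigRebase = UInt32.max - 10
let (_, overflowed) = bigRebase.addingReportingOverflow(400)
assert(overflowed)  // the removed expression would have crashed here
```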

Modifications:

- Restructured and heavily commented `_copyStorageAndRebase`.
- Removed the redundant crashing check.

Result:

Less trapping!
@Lukasa Lukasa added the patch-version-bump-only label (For PRs that when merged will only cause a bump of the patch version, i.e. 1.0.x -> 1.0.(x+1)) Apr 11, 2022

```
// Step 5: Fixup the indices. These can never underflow: `indexRebaseAmount` can only ever be 0 or `self._readerIndex`, so we either do nothing
// or we move the indices backwards by a valid range, as `self._writerIndex` is always >= `self._readerIndex`.
self._moveReaderIndex(to: self._readerIndex &- indexRebaseAmount)
```
Member


Thanks for rewriting, commenting & fixing this! The only question I have is why we use unchecked math here. This function is the slow path and is never inlined, so I don't think there's any harm in always doing checked math (even if it's clearly safe to use the unchecked math in this version of the code).

The reason I'm mildly worried is potential future changes: someone might cargo-cult the unchecked math into a place where it isn't safe, and so a new vulnerability or something similar may slip in.

Feel free to ignore :)

Contributor Author


Yeah, I'm definitely on the fence about that. Let's see what the others think.
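For context on the trade-off being debated here, a minimal standalone sketch (not NIO code) of Swift's checked vs. wrapping subtraction:

```swift
// Swift's ordinary arithmetic operators trap on overflow; the &-family wraps silently.
let a: UInt32 = 5
let b: UInt32 = 10

// Wrapping subtraction: 5 &- 10 wraps around modulo 2^32.
let wrapped = a &- b
assert(wrapped == UInt32.max - 4)

// The reporting API makes the overflow visible instead of trapping or wrapping silently:
let (value, overflow) = a.subtractingReportingOverflow(b)
assert(overflow)
assert(value == wrapped)

// By contrast, `a - b` (checked) would trap at runtime here, which is why the
// review comment argues checked math is a safer default on a non-inlined slow path:
// a trap is loud, whereas a silent wrap can become a correctness bug.
```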

```
// We only want to run this test on 64-bit systems: 32-bit systems can't allocate buffers
// large enough to run this test safely.
guard MemoryLayout<Int>.size >= 8 else {
    return
}
```
Member


Nit: can we use `XCTSkip` here instead of just returning?

Suggested change:

```
- return
+ throw XCTSkip("This test is only supported on 64-bit systems")
```
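As a standalone illustration of the suggestion (hypothetical test name; the real test lives in swift-nio), the early return becomes a throwing skip so the test run records it as skipped with a reason rather than silently passing:

```swift
import XCTest

final class ByteBufferSliceTests: XCTestCase {
    // Hypothetical test name for illustration purposes only.
    func testSliceOfVeryLargeBuffer() throws {
        // We only want to run this test on 64-bit systems: 32-bit systems
        // can't allocate buffers large enough to run this test safely.
        guard MemoryLayout<Int>.size >= 8 else {
            throw XCTSkip("This test is only supported on 64-bit systems")
        }
        // ... 64-bit-only test body ...
    }
}
```

Note the test method must be declared `throws` for `throw XCTSkip(...)` to compile.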

Member

@FranzBusch FranzBusch left a comment


LGTM! Thanks for the changes and the thorough comments.

@Lukasa Lukasa merged commit 8f60b52 into apple:main Apr 13, 2022
@Lukasa Lukasa deleted the cb-slice-from-big-end-buffer branch April 13, 2022 12:23