
runtime: fix pointer calculations to avoid overflows. Fixes #5713. #5716

Merged
2 commits merged into halide:master on Feb 10, 2021

Conversation

cimes-isi (Contributor)

These changes are based primarily on visual code inspection in areas identified with some simple grep'ing as potentially problematic, with a subset of them verified to fix #5713 (e.g., I have not tested the data copy functions). As I am not that familiar with the codebase, I ask that the reviewer please check the runtime sources to see if I might have missed other similarly affected lines, and I'll be happy to add such additional fixes to the PR.

I will also note that there are various data types being used for offset computations, including at least int64_t, uint64_t, and ptrdiff_t. It might be better to be more consistent and use the data types actually intended for pointer calculations, such as ptrdiff_t. The only such changes included here are to avoid overflow, not to apply consistency across the codebase.

@steven-johnson (Contributor)

LGTM

@@ -371,7 +371,7 @@ class Buffer {
     void crop_host(int d, int min, int extent) {
         assert(dim(d).min() <= min);
         assert(dim(d).max() >= min + extent - 1);
-        int shift = min - dim(d).min();
+        ptrdiff_t shift = min - dim(d).min();
         if (buf.host != nullptr) {
             buf.host += shift * dim(d).stride() * type().bytes();
Contributor

I think there should be parentheses/casts here to avoid the possibility of dim(d).stride * type().bytes() overflowing.

Contributor Author

I don't think this is necessary since shift (now a ptrdiff_t) should force promotion of the other field in the expression. Is that not correct?

Contributor

I was thinking it would depend on the order in which the compiler associates this arithmetic. Even if it's well defined behavior for the compiler to implement it as (shift * dim(d).stride()) * type().bytes(), we should at least add parentheses to make that explicit.

Contributor Author

I believe the associativity is guaranteed to be left-to-right: https://en.cppreference.com/w/c/language/operator_precedence

Contributor

It may be, but I still think we should not rely on this, by adding explicit parentheses. It's easy for things like this to unintentionally break later because it is so subtle.

Contributor Author

OK. And BTW, thanks for reviewing this.
I can get back to modifying this PR tomorrow afternoon and add explicit parentheses, though frankly I think it actually makes things more confusing. I would look at such code and do a double take, wondering why those parentheses are there at all, thinking I was actually missing something subtle. The same applies to unnecessary casts. IMO, if your goal is to inform developers who run across this code later, it's better to just add appropriate comments. If for some reason you don't trust the compiler to follow the C++ spec, then the correct approach is to add unit tests to actually verify behavior, which can then also detect regressions.
These are also the sorts of problems that can often be detected by static analysis.

Contributor

I'm not worried about a non-conforming compiler, I'm more thinking that someone might come along and refactor the code somehow (e.g. pulling something into a temporary, reordering the expression) without realizing the order is important.

Contributor Author

Added parentheses.

@@ -409,7 +409,7 @@ class Buffer {
         assert(d >= 0 && d < dimensions());
         assert(pos >= dim(d).min() && pos <= dim(d).max());
         buf.dimensions--;
-        int shift = pos - buf.dim[d].min;
+        ptrdiff_t shift = pos - buf.dim[d].min;
         if (buf.host != nullptr) {
             buf.host += shift * buf.dim[d].stride * type().bytes();
Contributor

I think there should be parentheses/casts here to avoid the possibility of dim(d).stride * type().bytes() overflowing.

Contributor Author

Same reply as above.

src/runtime/HalideBuffer.h (thread resolved)
Not needed for correctness, but @dsharletg feels it adds clarity.
@dsharletg (Contributor)

Thanks for the fix, and the changes.

@@ -89,7 +89,7 @@ WEAK device_copy make_buffer_copy(const halide_buffer_t *src, bool src_host,
     // Offset the src base pointer to the right point in its buffer.
     c.src_begin = 0;
    for (int i = 0; i < src->dimensions; i++) {
-        c.src_begin += src->dim[i].stride * (dst->dim[i].min - src->dim[i].min);
+        c.src_begin += (uint64_t)src->dim[i].stride * (dst->dim[i].min - src->dim[i].min);
Contributor

Just noticed this on the last skim: should this be int64_t ?

Contributor Author

The fields in struct device_copy are uint64_t type, including src_begin. Like I said - there's a lack of consistency in the codebase. This assumes that stride >= 0 and dst->dim[i].min >= src->dim[i].min. If these assumptions don't hold, then there is something more seriously wrong here I think.

Contributor

I was thinking that the RHS here could be signed (as opposed to the two examples below, where it seems harmless to cast to uint64). But I agree that there are other issues here.

Member

This should indeed be int64_t. The operations required to compute an offset given some strides, etc., are multiplications, subtractions, and additions. None of those differ between signed and unsigned (yay modulo arithmetic), so none of that matters. But there are also upcasts, and those are the only thing that differs between signed and unsigned. The source field is an int32_t, so it must be sign-extended. This PR casts an int32_t to a uint64_t. I'm honestly not sure whether that sign or zero extends. Using an int64_t would make it clear that it's sign-extending. The rest of the arithmetic is invariant to signed vs unsigned.

There may be bugs elsewhere when the strides are negative, but let's not add a new one.

Contributor Author (@cimes-isi, Feb 10, 2021)

Is stride ever allowed to be a negative value? If not, then the PR should be fine as-is I think, otherwise we can do something like:

c.src_begin += (uint64_t)((int64_t)src->dim[i].stride * (dst->dim[i].min - src->dim[i].min));

The uint64_t cast is not strictly needed.
Edit: actually, could you ever want src_begin to decrease in value?

Member

Yes, we might want src_begin to decrease in value, and we may want to support negative strides in future. For now we should just write new code assuming they could be negative.

Contributor Author

OK, but to be clear - stride is currently never negative? I want to make sure this change didn't break anything.

Member

Some parts of Halide support negative strides and some don't. device buffer copies currently do not. So in this code, stride is currently not going to be negative (or other things would break too).

Contributor

In theory, negative strides should be fully supported. As you've noticed, though, this is undertested (probably for both normal and large_buffers mode). We should add some tests that specifically exercise negative strides to verify that this works (and stays working). (Or declare that we don't support negative strides...)


@dsharletg dsharletg merged commit 935b91e into halide:master Feb 10, 2021
@alexreinking alexreinking added this to the v12.0.0 milestone Feb 16, 2021
Successfully merging this pull request may close these issues.

Extern Func with parallel scheduling directive segfaults with large data sizes
5 participants