runtime: fix pointer calculations to avoid overflows. Fixes #5713. #5716

Merged · 2 commits · Feb 10, 2021
4 changes: 2 additions & 2 deletions src/runtime/HalideBuffer.h
@@ -371,7 +371,7 @@ class Buffer {
void crop_host(int d, int min, int extent) {
assert(dim(d).min() <= min);
assert(dim(d).max() >= min + extent - 1);
- int shift = min - dim(d).min();
+ ptrdiff_t shift = min - dim(d).min();
dsharletg marked this conversation as resolved.
if (buf.host != nullptr) {
buf.host += shift * dim(d).stride() * type().bytes();
Contributor:
I think there should be parentheses/casts here to avoid the possibility of dim(d).stride * type().bytes() overflowing.

Contributor Author:
I don't think this is necessary since shift (now a ptrdiff_t) should force promotion of the other field in the expression. Is that not correct?

Contributor:
I was thinking it would depend on the order in which the compiler associates this arithmetic. Even if it's well defined behavior for the compiler to implement it as (shift * dim(d).stride()) * type().bytes(), we should at least add parentheses to make that explicit.

Contributor Author:
I believe the associativity is guaranteed to be left-to-right: https://en.cppreference.com/w/c/language/operator_precedence

Contributor:
It may be, but I still think we should not rely on this, by adding explicit parentheses. It's easy for things like this to unintentionally break later because it is so subtle.

Contributor Author:
OK. And BTW, thanks for reviewing this.
I can get back to modifying this PR tomorrow afternoon and add explicit parentheses, though frankly I think that actually makes things more confusing. I would look at such code and do a double take, wondering why those parentheses are there at all, thinking I was missing something subtle. The same applies to unnecessary casts. IMO, if your goal is to inform developers who run across this code later, it's better to add appropriate comments. If for some reason you don't trust the compiler to follow the C++ spec, the correct approach is to add unit tests that verify the behavior, which can then also detect regressions.
These are also the sorts of problems that can often be detected by static analysis.

Contributor:
I'm not worried about a non-conforming compiler, I'm more thinking that someone might come along and refactor the code somehow (e.g. pulling something into a temporary, reordering the expression) without realizing the order is important.

Contributor Author:
Added parentheses.

}
@@ -409,7 +409,7 @@ class Buffer {
assert(d >= 0 && d < dimensions());
assert(pos >= dim(d).min() && pos <= dim(d).max());
buf.dimensions--;
- int shift = pos - buf.dim[d].min;
+ ptrdiff_t shift = pos - buf.dim[d].min;
if (buf.host != nullptr) {
buf.host += shift * buf.dim[d].stride * type().bytes();
Contributor:
I think there should be parentheses/casts here to avoid the possibility of dim(d).stride * type().bytes() overflowing.

Contributor Author:
Same reply as above.

}
10 changes: 5 additions & 5 deletions src/runtime/device_buffer_utils.h
@@ -89,7 +89,7 @@ WEAK device_copy make_buffer_copy(const halide_buffer_t *src, bool src_host,
// Offset the src base pointer to the right point in its buffer.
c.src_begin = 0;
for (int i = 0; i < src->dimensions; i++) {
- c.src_begin += src->dim[i].stride * (dst->dim[i].min - src->dim[i].min);
+ c.src_begin += (uint64_t)src->dim[i].stride * (dst->dim[i].min - src->dim[i].min);
Contributor:
Just noticed this on the last skim: should this be int64_t?

Contributor Author:
The fields in struct device_copy are of type uint64_t, including src_begin. Like I said, there's a lack of consistency in the codebase. This assumes that stride >= 0 and dst->dim[i].min >= src->dim[i].min. If these assumptions don't hold, then there is something more seriously wrong here, I think.

Contributor:
I was thinking that the RHS here could be signed (as opposed to the two examples below, where it seems harmless to cast to uint64). But I agree that there are other issues here.

Member:
This should indeed be int64_t. The operations required to compute an offset given some strides, etc. are multiplications, subtractions, and additions. None of those differ between signed and unsigned (yay modulo arithmetic), so none of that matters. But there are also upcasts, and those are the only thing that differs between signed and unsigned. The source field is an int32_t, so it must be sign-extended. This PR casts an int32_t to a uint64_t. I'm honestly not sure whether that sign- or zero-extends. Using an int64_t would make it clear that it's sign-extending. The rest of the arithmetic is invariant to signed vs. unsigned.

There may be bugs elsewhere when the strides are negative, but let's not add a new one.

Contributor Author (@cimes-isi, Feb 10, 2021):
Is stride ever allowed to be a negative value? If not, then the PR should be fine as-is I think, otherwise we can do something like:

c.src_begin += (uint64_t)((int64_t)src->dim[i].stride * (dst->dim[i].min - src->dim[i].min));

The uint64_t cast is not strictly needed.
edit: actually, you could want src_begin to actually decrease in value?

Member:
Yes, we might want src_begin to decrease in value, and we may want to support negative strides in future. For now we should just write new code assuming they could be negative.

Contributor Author:
OK, but to be clear - stride is currently never negative? I want to make sure this change didn't break anything.

Member:
Some parts of Halide support negative strides and some don't. device buffer copies currently do not. So in this code, stride is currently not going to be negative (or other things would break too).

Contributor:
In theory, negative strides should be fully supported. As you've noticed, though, this is undertested (probably for both normal and large_buffers mode). We should add some tests that specifically exercise negative strides to verify that this works (and stays working). (Or declare that we don't support negative strides...)

}
c.src_begin *= c.chunk_size;

@@ -114,8 +114,8 @@ WEAK device_copy make_buffer_copy(const halide_buffer_t *src, bool src_host,
// in ascending order in the dst.
for (int i = 0; i < dst->dimensions; i++) {
// TODO: deal with negative strides.
- uint64_t dst_stride_bytes = dst->dim[i].stride * dst->type.bytes();
- uint64_t src_stride_bytes = src->dim[i].stride * src->type.bytes();
+ uint64_t dst_stride_bytes = (uint64_t)dst->dim[i].stride * dst->type.bytes();
+ uint64_t src_stride_bytes = (uint64_t)src->dim[i].stride * src->type.bytes();
// Insert the dimension sorted into the buffer copy.
int insert;
for (insert = 0; insert < i; insert++) {
@@ -172,7 +172,7 @@ WEAK device_copy make_device_to_host_copy(const halide_buffer_t *buf) {
ALWAYS_INLINE int64_t calc_device_crop_byte_offset(const struct halide_buffer_t *src, struct halide_buffer_t *dst) {
int64_t offset = 0;
for (int i = 0; i < src->dimensions; i++) {
- offset += (dst->dim[i].min - src->dim[i].min) * src->dim[i].stride;
+ offset += (dst->dim[i].min - src->dim[i].min) * (int64_t)src->dim[i].stride;
}
offset *= src->type.bytes();
return offset;
@@ -181,7 +181,7 @@ ALWAYS_INLINE int64_t calc_device_crop_byte_offset(const struct halide_buffer_t
// Caller is expected to verify that src->dimensions == dst->dimensions + 1,
// and that slice_dim and slice_pos are valid within src
ALWAYS_INLINE int64_t calc_device_slice_byte_offset(const struct halide_buffer_t *src, int slice_dim, int slice_pos) {
- int64_t offset = (slice_pos - src->dim[slice_dim].min) * src->dim[slice_dim].stride;
+ int64_t offset = (slice_pos - src->dim[slice_dim].min) * (int64_t)src->dim[slice_dim].stride;
offset *= src->type.bytes();
return offset;
}
2 changes: 1 addition & 1 deletion src/runtime/halide_buffer_t.cpp
@@ -148,7 +148,7 @@ halide_buffer_t *_halide_buffer_crop(void *user_context,
dst->dim[i] = src->dim[i];
dst->dim[i].min = min[i];
dst->dim[i].extent = extent[i];
- offset += (min[i] - src->dim[i].min) * src->dim[i].stride;
+ offset += (min[i] - src->dim[i].min) * (int64_t)src->dim[i].stride;
}
if (dst->host) {
dst->host += offset * src->type.bytes();