[HWASan] Prevent same tag for adjacent heap objects #69337
Conversation
```c
void *c = malloc(size);

// Confirm that no object can access adjacent objects
assert(__hwasan_test_shadow(a, size + 1) != -1);
```
You could have a stricter check, == size + 1, right?
Also, check the left bound, too.
I modified the check to compare the return value with the size, and added checks to ensure the left side is inaccessible.
```c
if (tagging_disabled_)
  return 0;
tag_t tag;
tag_t previous_tag = *(tag_t *)MemToShadow(prev_ptr);
```
Our allocator fast path is getting heavier and heavier: more memory loads and conditional branches here.
I would rather see us go in the opposite direction and clean up conditions, even if that slightly increases the probability of misses.
For sure it is a trade-off. For my work it was more important that the crash behavior stays as consistent as possible, to avoid time spent debugging. For what it's worth, I did not measure a performance impact from these changes (using SPEC CPU). Of course, that might be different for a larger fuzzing campaign with recurring metadata initialization.
There are other cases where HWASAN is non-deterministic, and they are hard or impossible to solve, e.g. stack instrumentation. So instead I would rather recommend adjusting expectations.
In general I have no concerns other than performance, which is not yet confirmed/measured. So I am OK to land this, but undo it later if undoing provides measurable perf benefits.
Still, I don't like that users may start to rely on this behavior.
This feature, if done right, should not add any new branches, though. We already exclude tag value zero; adding two more exclusions is just OR-merging two more conditions into the same branch.
But please do not duplicate the code. Either move the tagging_disabled_ check to the callers as well, or factor out the common part into an inline helper function.
And I'd prefer the prev/next tags to be passed by value; this function does not need to know about MemToShadow.
…multiple conditions
…lifting multiple conditions" (this reverts commit 5fb5ff9)
```c
  tag = t->GenerateRandomTag(/*num_bits=*/8);
} while (UNLIKELY(tag < kShadowAlignment || tag == pointer_tag ||
                  tag == previous_tag || tag == following_tag) &&
         tag != 0);
```
I do not think we need this (tag != 0) check here anymore.
I left the condition in to keep the previous functionality, where the tag after free could still be zero. Otherwise the tag < kShadowAlignment condition would keep looping for tag == 0.
I think it was there because of the TaggingDisabled case, where the tag generator would return zeroes repeatedly. But since you've removed the (!tag) condition from GenerateRandomTag, there can be "spurious" zero tags returned that the caller needs to filter out.
TBH I'm starting to agree with Vitaly - this is introducing too much complexity for a very soft guarantee, without evidence that it actually meaningfully improves things in practice.
Yes, I think I underestimated the additional logic needed to handle the TaggingDisabled case, since it didn't come up in my previous evaluation.
Thank you again for your time and for your feedback!
Previously, random tag generation could lead to undetected heap under- and overflows due to tag collisions.
This modification to the tag generation ensures that different tags are generated for adjacent heap objects.
For stack objects, additional changes in the instrumentation pass are necessary.