hwasan: Improve precision of checks using short granule tags.
A short granule is a granule of size between 1 and `TG-1` bytes. The size
of a short granule is stored at the location in shadow memory where the
granule's tag is normally stored, while the granule's actual tag is stored
in the last byte of the granule. This means that in order to verify that a
pointer tag matches a memory tag, HWASAN must check for two possibilities:

* the pointer tag is equal to the memory tag in shadow memory, or
* the shadow memory tag is actually a short granule size, the value being loaded
  is in bounds of the granule and the pointer tag is equal to the last byte of
  the granule.

Pointer tags between 1 and `TG-1` are possible and are as likely as any other
tag. This means that these tags in memory have two interpretations: the full
tag interpretation (where the pointer tag is between 1 and `TG-1` and the
last byte of the granule is ordinary data) and the short tag interpretation
(where the pointer tag is stored in the granule).

When HWASAN detects an error near a memory tag between 1 and `TG-1`, it
will show both the memory tag and the last byte of the granule. Currently,
it is up to the user to disambiguate the two possibilities.
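
Roughly, the combined check can be sketched in C++ as follows (a simplified
sketch assuming 16-byte granules and top-byte-ignore; the actual helper added
by this change is PossiblyShortTagMatches in hwasan_checks.h, and the name
TagsMatch below is illustrative only):

  #include <cstdint>

  // Returns true if an access of `size` bytes at tagged address `ptr` is
  // consistent with the shadow byte `mem_tag` for that granule.
  bool TagsMatch(uint8_t mem_tag, uintptr_t ptr, uintptr_t size) {
    uint8_t ptr_tag = static_cast<uint8_t>(ptr >> 56);  // tag is the top byte
    if (ptr_tag == mem_tag)
      return true;                        // ordinary full-granule match
    if (mem_tag >= 16)
      return false;                       // not a short granule size
    if ((ptr & 15) + size > mem_tag)
      return false;                       // access leaves the short granule
    // The granule's real tag is stored in its last byte.
    return *reinterpret_cast<uint8_t *>(ptr | 15) == ptr_tag;
  }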

Because this functionality obsoletes the right-aligned heap feature of
the HWASAN memory allocator (and because we can no longer easily test
it), the feature is removed.

Also update the documentation to cover both short granule tags and
outlined checks.

Differential Revision: https://reviews.llvm.org/D63908

llvm-svn: 365551
pcc committed Jul 9, 2019
1 parent a6548d0 commit 1366262
Showing 17 changed files with 457 additions and 232 deletions.
64 changes: 52 additions & 12 deletions clang/docs/HardwareAssistedAddressSanitizerDesign.rst
@@ -38,6 +38,30 @@ Algorithm

For a more detailed discussion of this approach see https://arxiv.org/pdf/1802.09517.pdf

Short granules
--------------

A short granule is a granule of size between 1 and `TG-1` bytes. The size
of a short granule is stored at the location in shadow memory where the
granule's tag is normally stored, while the granule's actual tag is stored
in the last byte of the granule. This means that in order to verify that a
pointer tag matches a memory tag, HWASAN must check for two possibilities:

* the pointer tag is equal to the memory tag in shadow memory, or
* the shadow memory tag is actually a short granule size, the value being loaded
  is in bounds of the granule and the pointer tag is equal to the last byte of
  the granule.

Pointer tags between 1 and `TG-1` are possible and are as likely as any other
tag. This means that these tags in memory have two interpretations: the full
tag interpretation (where the pointer tag is between 1 and `TG-1` and the
last byte of the granule is ordinary data) and the short tag interpretation
(where the pointer tag is stored in the granule).

When HWASAN detects an error near a memory tag between 1 and `TG-1`, it
will show both the memory tag and the last byte of the granule. Currently,
it is up to the user to disambiguate the two possibilities.
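
For example, assuming `TG = 16`, a 20-byte allocation tagged `0xab` spans two
granules and would be laid out as follows (an illustrative sketch):

.. code-block:: none

  granule 0: bytes 0-15 of user data                 shadow byte: 0xab
  granule 1: bytes 16-19 of user data,
             bytes 20-30 not part of the allocation,
             byte 31 holds the tag 0xab              shadow byte: 0x04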

Instrumentation
===============

@@ -46,24 +70,40 @@ Memory Accesses
All memory accesses are prefixed with an inline instruction sequence that
verifies the tags. Currently, the following sequence is used:


.. code-block:: none
// int foo(int *a) { return *a; }
// clang -O2 --target=aarch64-linux -fsanitize=hwaddress -c load.c
// clang -O2 --target=aarch64-linux -fsanitize=hwaddress -fsanitize-recover=hwaddress -c load.c
foo:
0: 08 00 00 90 adrp x8, 0 <__hwasan_shadow>
4: 08 01 40 f9 ldr x8, [x8] // shadow base (to be resolved by the loader)
8: 09 dc 44 d3 ubfx x9, x0, #4, #52 // shadow offset
c: 28 69 68 38 ldrb w8, [x9, x8] // load shadow tag
10: 09 fc 78 d3 lsr x9, x0, #56 // extract address tag
14: 3f 01 08 6b cmp w9, w8 // compare tags
18: 61 00 00 54 b.ne 24 // jump on mismatch
1c: 00 00 40 b9 ldr w0, [x0] // original load
20: c0 03 5f d6 ret
24: 40 20 21 d4 brk #0x902 // trap
0: 90000008 adrp x8, 0 <__hwasan_shadow>
4: f9400108 ldr x8, [x8] // shadow base (to be resolved by the loader)
8: d344dc09 ubfx x9, x0, #4, #52 // shadow offset
c: 38696909 ldrb w9, [x8, x9] // load shadow tag
10: d378fc08 lsr x8, x0, #56 // extract address tag
14: 6b09011f cmp w8, w9 // compare tags
18: 54000061 b.ne 24 <foo+0x24> // jump to short tag handler on mismatch
1c: b9400000 ldr w0, [x0] // original load
20: d65f03c0 ret
24: 7100413f cmp w9, #0x10 // is this a short tag?
28: 54000142 b.cs 50 <foo+0x50> // if not, trap
2c: 12000c0a and w10, w0, #0xf // find the address's position in the short granule
30: 11000d4a add w10, w10, #0x3 // adjust to the position of the last byte loaded
34: 6b09015f cmp w10, w9 // check that position is in bounds
38: 540000c2 b.cs 50 <foo+0x50> // if not, trap
3c: 9240dc09 and x9, x0, #0xffffffffffffff
40: b2400d29 orr x9, x9, #0xf // compute address of last byte of granule
44: 39400129 ldrb w9, [x9] // load tag from it
48: 6b09011f cmp w8, w9 // compare with pointer tag
4c: 54fffe80 b.eq 1c <foo+0x1c> // if so, continue
50: d4212440 brk #0x922 // otherwise trap
54: b9400000 ldr w0, [x0] // tail duplicated original load (to handle recovery)
58: d65f03c0 ret

Alternatively, memory accesses are prefixed with a function call.
On AArch64, a function call is used by default in trapping mode. The code size
and performance overhead of the call is reduced by using a custom calling
convention that preserves most registers and is specialized to the register
containing the address and the type and size of the memory access.

Heap
----
79 changes: 27 additions & 52 deletions compiler-rt/lib/hwasan/hwasan_allocator.cpp
@@ -16,6 +16,7 @@
#include "sanitizer_common/sanitizer_stackdepot.h"
#include "hwasan.h"
#include "hwasan_allocator.h"
#include "hwasan_checks.h"
#include "hwasan_mapping.h"
#include "hwasan_malloc_bisect.h"
#include "hwasan_thread.h"
@@ -42,13 +43,8 @@ enum RightAlignMode {
kRightAlignAlways
};

// These two variables are initialized from flags()->malloc_align_right
// in HwasanAllocatorInit and are never changed afterwards.
static RightAlignMode right_align_mode = kRightAlignNever;
static bool right_align_8 = false;

// Initialized in HwasanAllocatorInit, an never changed.
static ALIGNED(16) u8 tail_magic[kShadowAlignment];
static ALIGNED(16) u8 tail_magic[kShadowAlignment - 1];

bool HwasanChunkView::IsAllocated() const {
return metadata_ && metadata_->alloc_context_id && metadata_->requested_size;
@@ -58,8 +54,6 @@ bool HwasanChunkView::IsAllocated() const {
static uptr AlignRight(uptr addr, uptr requested_size) {
uptr tail_size = requested_size % kShadowAlignment;
if (!tail_size) return addr;
if (right_align_8)
return tail_size > 8 ? addr : addr + 8;
return addr + kShadowAlignment - tail_size;
}

@@ -95,30 +89,7 @@ void HwasanAllocatorInit() {
!flags()->disable_allocator_tagging);
SetAllocatorMayReturnNull(common_flags()->allocator_may_return_null);
allocator.Init(common_flags()->allocator_release_to_os_interval_ms);
switch (flags()->malloc_align_right) {
case 0: break;
case 1:
right_align_mode = kRightAlignSometimes;
right_align_8 = false;
break;
case 2:
right_align_mode = kRightAlignAlways;
right_align_8 = false;
break;
case 8:
right_align_mode = kRightAlignSometimes;
right_align_8 = true;
break;
case 9:
right_align_mode = kRightAlignAlways;
right_align_8 = true;
break;
default:
Report("ERROR: unsupported value of malloc_align_right flag: %d\n",
flags()->malloc_align_right);
Die();
}
for (uptr i = 0; i < kShadowAlignment; i++)
for (uptr i = 0; i < sizeof(tail_magic); i++)
tail_magic[i] = GetCurrentThread()->GenerateRandomTag();
}

@@ -172,29 +143,32 @@ static void *HwasanAllocate(StackTrace *stack, uptr orig_size, uptr alignment,
uptr fill_size = Min(size, (uptr)flags()->max_malloc_fill_size);
internal_memset(allocated, flags()->malloc_fill_byte, fill_size);
}
if (!right_align_mode)
if (size != orig_size) {
internal_memcpy(reinterpret_cast<u8 *>(allocated) + orig_size, tail_magic,
size - orig_size);
size - orig_size - 1);
}

void *user_ptr = allocated;
// Tagging can only be skipped when both tag_in_malloc and tag_in_free are
// false. When tag_in_malloc = false and tag_in_free = true malloc needs to
// retag to 0.
if ((flags()->tag_in_malloc || flags()->tag_in_free) &&
atomic_load_relaxed(&hwasan_allocator_tagging_enabled)) {
tag_t tag = flags()->tag_in_malloc && malloc_bisect(stack, orig_size)
? (t ? t->GenerateRandomTag() : kFallbackAllocTag)
: 0;
user_ptr = (void *)TagMemoryAligned((uptr)user_ptr, size, tag);
}

if ((orig_size % kShadowAlignment) && (alignment <= kShadowAlignment) &&
right_align_mode) {
uptr as_uptr = reinterpret_cast<uptr>(user_ptr);
if (right_align_mode == kRightAlignAlways ||
GetTagFromPointer(as_uptr) & 1) { // use a tag bit as a random bit.
user_ptr = reinterpret_cast<void *>(AlignRight(as_uptr, orig_size));
meta->right_aligned = 1;
if (flags()->tag_in_malloc && malloc_bisect(stack, orig_size)) {
tag_t tag = t ? t->GenerateRandomTag() : kFallbackAllocTag;
uptr tag_size = orig_size ? orig_size : 1;
uptr full_granule_size = RoundDownTo(tag_size, kShadowAlignment);
user_ptr =
(void *)TagMemoryAligned((uptr)user_ptr, full_granule_size, tag);
if (full_granule_size != tag_size) {
u8 *short_granule =
reinterpret_cast<u8 *>(allocated) + full_granule_size;
TagMemoryAligned((uptr)short_granule, kShadowAlignment,
tag_size % kShadowAlignment);
short_granule[kShadowAlignment - 1] = tag;
}
} else {
user_ptr = (void *)TagMemoryAligned((uptr)user_ptr, size, 0);
}
}

@@ -204,10 +178,10 @@ static void *HwasanAllocate(StackTrace *stack, uptr orig_size, uptr alignment,

static bool PointerAndMemoryTagsMatch(void *tagged_ptr) {
CHECK(tagged_ptr);
tag_t ptr_tag = GetTagFromPointer(reinterpret_cast<uptr>(tagged_ptr));
uptr tagged_uptr = reinterpret_cast<uptr>(tagged_ptr);
tag_t mem_tag = *reinterpret_cast<tag_t *>(
MemToShadow(reinterpret_cast<uptr>(UntagPtr(tagged_ptr))));
return ptr_tag == mem_tag;
return PossiblyShortTagMatches(mem_tag, tagged_uptr, 1);
}

static void HwasanDeallocate(StackTrace *stack, void *tagged_ptr) {
@@ -228,14 +202,15 @@ static void HwasanDeallocate(StackTrace *stack, void *tagged_ptr) {

// Check tail magic.
uptr tagged_size = TaggedSize(orig_size);
if (flags()->free_checks_tail_magic && !right_align_mode && orig_size) {
uptr tail_size = tagged_size - orig_size;
if (flags()->free_checks_tail_magic && orig_size &&
tagged_size != orig_size) {
uptr tail_size = tagged_size - orig_size - 1;
CHECK_LT(tail_size, kShadowAlignment);
void *tail_beg = reinterpret_cast<void *>(
reinterpret_cast<uptr>(aligned_ptr) + orig_size);
if (tail_size && internal_memcmp(tail_beg, tail_magic, tail_size))
ReportTailOverwritten(stack, reinterpret_cast<uptr>(tagged_ptr),
orig_size, tail_size, tail_magic);
orig_size, tail_magic);
}

meta->requested_size = 0;
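
The allocator changes above can be summarized with a self-contained C++ sketch
(16-byte granules assumed; TagShadow, TagAllocation, mem and shadow are
illustrative names, not runtime APIs):

#include <cstdint>
#include <cstring>

constexpr uintptr_t kGranule = 16;  // kShadowAlignment in the runtime

// Illustrative stand-in for TagMemoryAligned: one shadow byte per granule.
static void TagShadow(uint8_t *shadow, uintptr_t offset, uintptr_t size,
                      uint8_t value) {
  std::memset(shadow + offset / kGranule, value, size / kGranule);
}

// Tag an allocation of `size` user bytes at granule-aligned `offset`.
static void TagAllocation(uint8_t *mem, uint8_t *shadow, uintptr_t offset,
                          uintptr_t size, uint8_t tag) {
  uintptr_t full = size & ~(kGranule - 1);  // RoundDownTo(size, kGranule)
  TagShadow(shadow, offset, full, tag);     // full granules: shadow byte = tag
  if (uintptr_t rest = size - full) {
    uintptr_t short_granule = offset + full;
    // Short granule: the shadow byte records the number of used bytes and
    // the real tag goes into the granule's last byte. (The runtime also
    // fills the unused bytes with tail magic for free() checking.)
    shadow[short_granule / kGranule] = static_cast<uint8_t>(rest);
    mem[short_granule + kGranule - 1] = tag;
  }
}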
33 changes: 29 additions & 4 deletions compiler-rt/lib/hwasan/hwasan_checks.h
@@ -61,15 +61,29 @@ __attribute__((always_inline)) static void SigTrap(uptr p, uptr size) {
// __builtin_unreachable();
}

__attribute__((always_inline, nodebug)) static bool PossiblyShortTagMatches(
tag_t mem_tag, uptr ptr, uptr sz) {
tag_t ptr_tag = GetTagFromPointer(ptr);
if (ptr_tag == mem_tag)
return true;
if (mem_tag >= kShadowAlignment)
return false;
if ((ptr & (kShadowAlignment - 1)) + sz > mem_tag)
return false;
#ifndef __aarch64__
ptr = UntagAddr(ptr);
#endif
return *(u8 *)(ptr | (kShadowAlignment - 1)) == ptr_tag;
}

enum class ErrorAction { Abort, Recover };
enum class AccessType { Load, Store };

template <ErrorAction EA, AccessType AT, unsigned LogSize>
__attribute__((always_inline, nodebug)) static void CheckAddress(uptr p) {
tag_t ptr_tag = GetTagFromPointer(p);
uptr ptr_raw = p & ~kAddressTagMask;
tag_t mem_tag = *(tag_t *)MemToShadow(ptr_raw);
if (UNLIKELY(ptr_tag != mem_tag)) {
if (UNLIKELY(!PossiblyShortTagMatches(mem_tag, p, 1 << LogSize))) {
SigTrap<0x20 * (EA == ErrorAction::Recover) +
0x10 * (AT == AccessType::Store) + LogSize>(p);
if (EA == ErrorAction::Abort)
@@ -85,15 +99,26 @@ __attribute__((always_inline, nodebug)) static void CheckAddressSized(uptr p,
tag_t ptr_tag = GetTagFromPointer(p);
uptr ptr_raw = p & ~kAddressTagMask;
tag_t *shadow_first = (tag_t *)MemToShadow(ptr_raw);
tag_t *shadow_last = (tag_t *)MemToShadow(ptr_raw + sz - 1);
for (tag_t *t = shadow_first; t <= shadow_last; ++t)
tag_t *shadow_last = (tag_t *)MemToShadow(ptr_raw + sz);
for (tag_t *t = shadow_first; t < shadow_last; ++t)
if (UNLIKELY(ptr_tag != *t)) {
SigTrap<0x20 * (EA == ErrorAction::Recover) +
0x10 * (AT == AccessType::Store) + 0xf>(p, sz);
if (EA == ErrorAction::Abort)
__builtin_unreachable();
}
uptr end = p + sz;
uptr tail_sz = end & 0xf;
if (UNLIKELY(tail_sz != 0 &&
!PossiblyShortTagMatches(
*shadow_last, end & ~(kShadowAlignment - 1), tail_sz))) {
SigTrap<0x20 * (EA == ErrorAction::Recover) +
0x10 * (AT == AccessType::Store) + 0xf>(p, sz);
if (EA == ErrorAction::Abort)
__builtin_unreachable();
}
}

} // end namespace __hwasan

#endif // HWASAN_CHECKS_H
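
The new tail handling in CheckAddressSized can be modeled by a similar
self-contained sketch (16-byte granules assumed; RangeTagsMatch and its
parameters are illustrative, not part of the runtime):

#include <cstdint>

constexpr uintptr_t kGranule = 16;

// `mem` and `shadow` model tagged memory and its shadow; `addr` and `size`
// describe the access, and `ptr_tag` is the tag from the pointer's top byte.
static bool RangeTagsMatch(const uint8_t *mem, const uint8_t *shadow,
                           uintptr_t addr, uintptr_t size, uint8_t ptr_tag) {
  uintptr_t first = addr / kGranule;
  uintptr_t last = (addr + size) / kGranule;  // granule holding the access end
  for (uintptr_t g = first; g < last; ++g)    // every earlier granule must
    if (shadow[g] != ptr_tag)                 // carry the full tag
      return false;
  uintptr_t tail = (addr + size) % kGranule;  // bytes reaching into `last`
  if (tail == 0)
    return true;
  uint8_t t = shadow[last];
  if (t == ptr_tag)                           // full tag interpretation
    return true;
  // Short tag interpretation: `t` is the short granule's size in bytes.
  return t < kGranule && tail <= t &&
         mem[last * kGranule + kGranule - 1] == ptr_tag;
}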
26 changes: 0 additions & 26 deletions compiler-rt/lib/hwasan/hwasan_flags.inc
@@ -37,32 +37,6 @@ HWASAN_FLAG(
"HWASan allocator flag. max_malloc_fill_size is the maximal amount of "
"bytes that will be filled with malloc_fill_byte on malloc.")

// Rules for malloc alignment on aarch64:
// * If the size is 16-aligned, then malloc should return 16-aligned memory.
// * Otherwise, malloc should return 8-alignment memory.
// So,
// * If the size is 16-aligned, we don't need to do anything.
// * Otherwise we don't have to obey 16-alignment, just the 8-alignment.
// * We may want to break the 8-alignment rule to catch more buffer overflows
// but this will break valid code in some rare cases, like this:
// struct Foo {
// // accessed via atomic instructions that require 8-alignment.
// std::atomic<int64_t> atomic_stuff;
// ...
// char vla[1]; // the actual size of vla could be anything.
// }
// Which means that the safe values for malloc_align_right are 0, 8, 9,
// and the values 1 and 2 may require changes in otherwise valid code.

HWASAN_FLAG(
int, malloc_align_right, 0, // off by default
"HWASan allocator flag. "
"0 (default): allocations are always aligned left to 16-byte boundary; "
"1: allocations are sometimes aligned right to 1-byte boundary (risky); "
"2: allocations are always aligned right to 1-byte boundary (risky); "
"8: allocations are sometimes aligned right to 8-byte boundary; "
"9: allocations are always aligned right to 8-byte boundary."
)
HWASAN_FLAG(bool, free_checks_tail_magic, 1,
"If set, free() will check the magic values "
"to the right of the allocated object "
