Add Safe-Linking to Single-Linked-Lists (SLL) #1161
base: master
Conversation
Safe-Linking is a security mechanism that protects singly-linked lists (such as the Free Lists) from being tampered with by attackers. The mechanism makes use of randomness from ASLR (mmap_base), and when combined with object alignment integrity checks, it protects the pointers from being hijacked by an attacker. While Safe-Unlinking protects doubly-linked lists (such as the Small Bins in ptmalloc), there was no similar protection against attacks on singly-linked lists. This solution protects against three common attacks:
* Partial pointer override: modifying the lower bytes (Little Endian)
* Full pointer override: hijacking the pointer to an attacker-controlled location
* Unaligned objects: pointing the list to an unaligned address

The design assumes an attacker doesn't know where the heap is located, and uses the ASLR randomness to "sign" the singly-linked pointers. We denote the pointer as P and the location in which it is stored as L, and the calculation is:
* PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
* *L = PROTECT(P)

This way, the random bits from the address L (which start at the PAGE_SHIFT bit position) are merged with the LSBs of the stored protected pointer. This protection layer prevents an attacker from modifying the pointer into a controlled value. An additional check that the objects are kAligned adds an important layer:
* Attackers can't point to an illegal (unaligned) memory address
* Attackers must correctly guess the alignment bits

With kAlignment being 8, an attacker will directly fail 7 out of 8 times. This could be improved in the future if the alignment were to match the class size, per Free List.

This proposed patch was benchmarked: on the worst test case it has an additional overhead of 1.5%, while the overhead for the average test case was a negligible 0.02%. In addition, in 2013 a similar mitigation was incorporated into Chromium's Free List (FL) implementation in their version of TCMalloc (branched out from gperftools 2.0). According to Chromium's documentation, the added overhead was less than 2%.
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed (or fixed any issues), please reply here with @googlebot I signed it! and we'll verify it.

What to do if you already signed the CLA:
* Individual signers
* Corporate signers
ℹ️ Googlers: Go here for more info.

@googlebot I signed it!

CLAs look good, thanks! ℹ️ Googlers: Go here for more info.
Thanks for the patch, but I cannot take it as is, because as is it ruins performance. I'm happy to accept it with an #ifdef or something like that, so that it can be enabled at build time.
Hi, thanks for the response. I benchmarked the patch using the repository's benchmarking suite, and as I stated above the results are 0.02% for the average test and 1.5% for the worst test case. By the way, these results are better than the cost of the similar implementation that was already integrated into Chrome 8 years ago (standing at 2% overhead). Chrome's development team was actually referred to this pull request so they could borrow some of the optimizations back into their port of TCMalloc. Anyway, as I don't know the coding convention for #ifdefs in this project, it would be great if you could make the necessary changes so that this feature can be integrated into this repo and users can choose to activate it if they want to. Thanks again for your cooperation,
I'll be happy to add all the autoconf plumbing as long as you make changes to #ifdef your new logic.
See also #1035, directly contributed from chromium's fork (but it needed a bit more work). Your work is of course more advanced, but perhaps you can "steal" the define name used there.
Safe-Linking will only be used if compiled with the flag TCMALLOC_USE_SLL_SAFE_LINKING.
Added the #ifdef. Safe-Linking is practically limited to these two SLL functions, as the rest of the changes in the first commit are:
It would be great if you could wire up the autoconf plumbing.
@@ -226,7 +226,7 @@ class ThreadCache {
   }

   void* Next() {
-    return SLL_Next(&list_);
+    return SLL_Next(list_);
Hm, why are you reading list_ here? Looks wrong. I think the original code is right: you want list_'s location to be treated as an object, so that SLL_Next then reads list_.
::tcmalloc::Log(::tcmalloc::kCrash, __FILE__, __LINE__, #cond); \
UNREACHABLE; \
Next time, please consider separating this out into a distinct commit; this change is unrelated to safe linking. In general I'm slightly opposed to this specific change. Not sure about you, but those noreturn/"fatal" calls which crash the program end up causing difficulties when debugging a crash, because modern compilers tend to conflate call sites in those cases. Your change is slightly different and might work better, but in the best world we'd be discussing (and perhaps testing) that separately.
BTW, did you do that because it makes a specific improvement in the generated code, or out of a general desire to improve?
// Special case for the first element
span->objects = ptr;
void* tail = ptr;
ptr += size;
Unfortunately this fragment doesn't apply cleanly anymore, and for a reason: there was a possibility of overflow in the ptr + size computation, which is now fixed. So your code will need to adapt, since you're special-casing the first item and thus duplicating the ptr + size logic (feel free to keep the duplication).
I apologize for delaying this. I am wondering why you special-case the first element. This causes difficulties, confusion, and I think even a bug in the Next() implementation that you've introduced. Why not safe-link everywhere, without special cases?

Another thing worth pointing out is more type "erasure" that happens in your change. I.e. the tail variable was correctly typed void** and is now void*, purely due to SLL_Next and SLL_SetNext hiding the typecasting. I wonder if it would be less confusing if we forced the typecasting out of those functions and had a clear contract that SLL_{Set,}Next take void** as their first argument (or even void* volatile*, but that would be uglier to typecast; perhaps we could separate the typecasting into a separate function).
For more information, feel free to read our full white-paper (attached):
Safe-Linking-White-Paper.txt