
Implement max total allocation size for frida asan #433

Merged

s1341 merged 2 commits into main from frida_asan_max_total_allocation on Dec 27, 2021

Conversation

@s1341 (Collaborator) commented Dec 26, 2021

Also changes the shadow-bit options from 46 and 36 to 44 and 36, since 46 does not work on aarch64-linux.

@@ -205,6 +207,11 @@ impl Allocator {
}
let rounded_up_size = self.round_up_to_page(size) + 2 * self.page_size;

if self.total_allocation_size + rounded_up_size > self.options.asan_max_total_allocation() {
Member commented on the diff:
Shouldn't we increment self.total_allocation_size only when we call mmap? If we successfully find a chunk via find_smallest_fit, this alloc() doesn't consume any additional memory.

@s1341 (Collaborator, Author) replied Dec 26, 2021:

No: we want to limit the total memory allocated per test, not the amount mmapped (subsequent tests will hardly ever mmap). I forgot to reset the total to zero when we reset the allocator.
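The two points above can be sketched together: the running total is incremented at alloc() time (even for chunks recycled from the free list, so the per-test budget is enforced), and it must be zeroed when the allocator is reset between tests. This is a minimal standalone sketch, not LibAFL's actual `Allocator`; the field and method names (`max_total_allocation`, `reset`) are illustrative assumptions modeled on the quoted diff.

```rust
// Hedged sketch of the budget check from the diff above. Real mmap/free-list
// handling is omitted; alloc() returns the accounted size or None when the
// per-test budget would be exceeded.
struct Allocator {
    total_allocation_size: usize, // running total, per test
    max_total_allocation: usize,  // assumed stand-in for options.asan_max_total_allocation()
    page_size: usize,
}

impl Allocator {
    fn round_up_to_page(&self, size: usize) -> usize {
        (size + self.page_size - 1) / self.page_size * self.page_size
    }

    /// Charge every allocation against the budget, whether it comes from a
    /// recycled chunk or a fresh mmap (the point made in the reply above).
    fn alloc(&mut self, size: usize) -> Option<usize> {
        let rounded_up_size = self.round_up_to_page(size) + 2 * self.page_size;
        if self.total_allocation_size + rounded_up_size > self.max_total_allocation {
            return None; // budget exceeded for this test
        }
        self.total_allocation_size += rounded_up_size;
        Some(rounded_up_size)
    }

    /// Called between tests; without this the budget leaks across runs,
    /// which is the bug the author mentions forgetting.
    fn reset(&mut self) {
        self.total_allocation_size = 0;
    }
}
```

With a 4 KiB page size, a 1-byte request rounds up to one page plus two guard pages, so repeated allocations hit the cap quickly until `reset()` clears the total.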

@s1341 s1341 merged commit 6384f1d into main Dec 27, 2021
@s1341 s1341 deleted the frida_asan_max_total_allocation branch December 27, 2021 09:49
khang06 pushed a commit to khang06/LibAFL that referenced this pull request Oct 11, 2022
…tal_allocation

Implement max total allocation size for frida asan