Adds frida script for gathering code coverage #17
This allows frida to filter inside the target by thread id. This is probably only useful if you have other introspection into the process to see which thread you're interested in. However, on larger targets, if you can use this it improves results significantly.
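A minimal sketch of following a single thread with Frida's Stalker, assuming the interesting thread id is already known from some other introspection (the helper name is mine, and `block` events match the pre-feedback version of the script):

```javascript
// Follow only one thread, collecting one event per executed basic block.
// Assumption: `threadId` was discovered elsewhere (e.g. via an Interceptor
// hook on a function the interesting thread is known to call).
function followThread(threadId) {
  Stalker.follow(threadId, {
    events: { block: true },  // emit an event for each basic block executed
    onReceive(events) {
      // Decode the binary event blob into [begin, end] address pairs and
      // stream them to the host side.
      const blocks = Stalker.parse(events, { stringify: true, annotate: false });
      send({ type: 'coverage', blocks: blocks });
    }
  });
}
```

Threads not passed to `Stalker.follow()` run natively, which is where the filtering win comes from.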
This looks like an awesome use-case for Stalker, great job!
Just a few notes on the Frida-specific bits:
After some optimizations a few months back, the code generated by Stalker should only be a little bit slower. I wrote a benchmark that measured it on LZMA compression, and the slowdown is currently somewhere between 1.x and 2.x. This is, however, after it has "warmed up": all the basic blocks are cached, the back-patching optimizations have been applied, and the inline caches are warm. Stalker achieves high performance by back-patching branches and updating inline caches once they're considered stable. It is careful not to re-use blocks in case of self-modifying code, though, so you pay performance overhead every time it context-switches into the runtime to look up the target of a branch and check whether the original code changed since it was compiled. You can configure the trust threshold to trade that safety off against speed.
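The knob being referred to is `Stalker.trustThreshold`. A hedged sketch of the trade-off (the helper function is mine; the threshold semantics are Frida's):

```javascript
// Pick a Stalker trust threshold:
//   -1 -> never trust blocks: always re-check for self-modifying code (safest, slowest)
//    0 -> trust immediately: back-patch right away (fastest; assumes no self-modification)
//    N -> trust a block after it has executed N times unchanged
function trustThresholdFor(targetSelfModifies) {
  return targetSelfModifies ? -1 : 0;
}

// Applied inside the agent as:
//   Stalker.trustThreshold = trustThresholdFor(false);
```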
The recommended way to deal with this is to hook thread creation, so newly spawned threads can be followed as well.
This will require additional hooks, per OS. Each thread will be recompiling the code, though, so the warm-up cost isn't shared between all of them. (Though this won't matter if the threads happen to execute wildly different code – so this really depends on the application.)
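A POSIX-only sketch of such a hook, assuming the target links pthreads (Windows would need its own hook, e.g. on its thread-creation API; the callback name is mine):

```javascript
// Hook pthread_create so a callback runs whenever the target spawns a thread.
// This is the POSIX variant only; each OS needs its own equivalent hook.
function hookThreadCreation(onThreadCreated) {
  const pthreadCreate = Module.findExportByName(null, 'pthread_create');
  if (pthreadCreate === null)
    return false;  // not a pthreads target; need an OS-specific hook instead
  Interceptor.attach(pthreadCreate, {
    onLeave(retval) {
      // The new thread exists by the time pthread_create returns; enumerate
      // live threads so the caller can start following any it hasn't seen.
      Process.enumerateThreads().forEach(t => onThreadCreated(t.id));
    }
  });
  return true;
}
```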
Stalker has been designed with this in mind, but it depends on how you configure it.
Did you run into issues with the ModuleMap API? It supports providing a filter function if you only care about certain modules and want faster lookups.
You can safely use the ModuleMap API for this.
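As a concrete sketch of the ModuleMap pattern (the helper and the single-module filter are illustrative, not the script's actual code):

```javascript
// Build an address filter for one module. new ModuleMap(filter) snapshots
// the loaded modules matching the filter, so subsequent has()/find() calls
// are fast lookups with no per-call module enumeration.
function makeTargetFilter(moduleName) {
  const map = new ModuleMap(m => m.name === moduleName);
  return address => map.has(address);
}

// Usage inside the agent:
//   const insideTarget = makeTargetFilter('target-binary');
//   ... if (insideTarget(blockStart)) { record it } ...
```

Note that the snapshot goes stale if modules are loaded or unloaded afterwards; `map.update()` refreshes it.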
Awesome, thanks @oleavr, that is insanely helpful.
I've implemented the non-OS-specific changes, and they result in a roughly 60x speedup on my little toy benchmark. My guess is that the remaining latency is due to me streaming the events as opposed to caching and flushing them. One of the potential use cases for this might be tracing applications that crash, in which case streaming events should be a bit better, albeit at the cost of speed, and with the same risk of dropping coverage near termination.
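The streaming-versus-buffering trade-off maps onto Stalker's queue settings, `Stalker.queueCapacity` and `Stalker.queueDrainInterval`. A sketch with illustrative values (the helper and the numbers are mine, not Frida defaults):

```javascript
// Choose event-delivery settings. Streaming drains the queue often, so a
// crash loses at most a few milliseconds of events; buffering drains rarely
// for better throughput, at the cost of losing whatever is queued on crash.
function eventDeliverySettings(streaming) {
  return streaming
    ? { queueDrainInterval: 50 }                           // ms; flush often
    : { queueCapacity: 65536, queueDrainInterval: 1000 };  // buffer more, drain rarely
}

// Applied inside the agent as:
//   Object.assign(Stalker, eventDeliverySettings(true));
```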
For the OS-specific things, I'm not really able to test them on all the platforms Frida supports, so I'm inclined to leave the script OS-agnostic and keep this as a general limitation; it's not too onerous, and the script is still quite useful without them.
Thanks again, that's exactly the type of feedback I was hoping for.
Incorporate @oleavr's feedback. Three changes:
* Instead of instrumenting on 'block' events, we now instrument on 'compile' events. This dramatically improves the performance. Since we're only doing block coverage anyway, and since all the blocks get deduplicated, we lose nothing from this change.
* Set the Stalker trust threshold to 0. This means we're completely punting on self-modifying code in favor of speed, but IDA/Lighthouse can't visualize self-modifying code anyway, so again, we lose nothing.
* Use Frida's ModuleMap API instead of making a worse version ourselves. I misunderstood this API on my first read of the docs, but @oleavr helpfully cleared it up!
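A sketch of the three changes combined (the helper and module name are illustrative; `compile: true` emits one event per basic block the first time Stalker compiles it, which is exactly what block coverage needs):

```javascript
// Follow one thread with all three changes applied:
//   1. 'compile' events instead of 'block' events (one event per unique block)
//   2. trustThreshold = 0 (back-patch immediately; punt on self-modifying code)
//   3. ModuleMap for fast "is this block inside the target?" lookups
function startCoverage(threadId, moduleName) {
  Stalker.trustThreshold = 0;
  const map = new ModuleMap(m => m.name === moduleName);
  Stalker.follow(threadId, {
    events: { compile: true },
    onReceive(events) {
      // Only compile events are enabled, so each parsed entry is a
      // [begin, end] pair of addresses for one newly compiled block.
      const blocks = Stalker.parse(events, { stringify: false, annotate: false });
      const hits = blocks.filter(([begin]) => map.has(begin));
      send({ type: 'coverage', blocks: hits });
    }
  });
}
```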