Standardizing memcpy/memset/etc. as pcode operations? #5769
Comments
FWIW, I've been trying to work out what I'd like to do about some of the ... challenges, shall we say, that the MSVC compiler's "intrinsic" pragma poses for understanding decompiled code. The code in question is compiled with [...]. These are NOT "bug report" or "feature request" quality observations yet, but I think they may add some value to your request. I fully expect incremental progress toward all that lot being "solved" too. :)
My expectations and/or desires around this: [...]
Re "the decompiler will sometimes recognise and translate memcpy from the loop, which is great.": I don't think I've ever seen this happen! IDA will often do this, but I wasn't aware that Ghidra had the same functionality. Would you happen to have a sample that demonstrates this? I'd be really curious to see how that works :) |
It does not. The limitation of one data type per stack slot cripples a lot of things. It would be nice to have variables that aren't at function scope (let's just declare 8,000 variables at the start of the function before writing our code, like in the '80s)...
Relates to #4461
Many processors have bulk memory instructions: x86 has "rep movs/stos" and "repe cmps", AArch64 has "CPY*/SET*", WebAssembly has "memory.copy/memory.fill", etc. Some also have string operations, for example "repne scas" (used to implement strlen) on x86.
There are two implementation strategies for such operations. One is to create a small loop, which will be translated into a loop in the decompiler; x86 takes this approach, for example. The downside is that the decompiled output becomes harder to read. The second is to use a custom pcodeop, which is much more readable but then requires custom emulation and is opaque to the decompiler. In particular, when memcpying to the stack, not all effects will be visible to the decompiler.
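To make the trade-off concrete, here is a sketch in C of what decompiler output for a fixed 100-byte copy might look like under each strategy. All function names here are invented for illustration; this is not actual Ghidra output.

```c
/* Stand-in for the opaque pcodeop; defined only so the sketch compiles.
 * In real decompiler output this call would have no visible body. */
void rep_movsb(unsigned char *dst, unsigned char *src, int count) {
    while (count-- > 0) *dst++ = *src++;
}

/* Strategy 1: the instruction is expanded into a small pcode loop.
 * The decompiler models every byte written, but the output is noisy. */
void strategy_loop(unsigned char *dst, unsigned char *src) {
    int i = 100;
    while (i != 0) {
        *dst = *src;
        dst = dst + 1;
        src = src + 1;
        i = i - 1;
    }
}

/* Strategy 2: the instruction maps to a custom pcodeop. The output is
 * compact, but the call is opaque: the decompiler cannot see that 100
 * bytes at dst were written, so e.g. stack stores may be mismodeled. */
void strategy_pcodeop(unsigned char *dst, unsigned char *src) {
    rep_movsb(dst, src, 100);
}
```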
Describe the solution you'd like
I propose to have a dedicated set of core pcodeops to implement bulk memory operations. These would be understood by SLEIGH and by the decompiler, allowing us to have much nicer decompilation for such constructs.
As an additional benefit, a dedicated pcodeop could be the target for a memory operation fusing pass in the decompiler, which recognizes common memory operation idioms (e.g. copying in a loop, or unrolled assignment to consecutive memory addresses) and replaces them with a single bulk memory op. This would enable further simplifications and optimizations. The decompiler could then properly model the effects of bulk memory operations.
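To illustrate the fusing idea, here is a hypothetical before/after in C (invented function names; not something the decompiler emits today): a recognized byte-copy loop collapses into a single bulk op.

```c
#include <string.h>

/* Before fusing: the copy idiom as a plain loop in decompiler output. */
void copy_before(char *dst, char *src, unsigned long n) {
    while (n != 0) {
        *dst = *src;
        dst = dst + 1;
        src = src + 1;
        n = n - 1;
    }
}

/* After fusing: the pass recognizes the idiom and replaces it with the
 * proposed bulk op copy(dst, src, 1, n), rendered here as memcpy. */
void copy_after(char *dst, char *src, unsigned long n) {
    memcpy(dst, src, n);
}
```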
I would suggest, at minimum, to have four bulk operations:
- `copy(dst, src, unit_size, count)`: copy from `src` to `dst`, `unit_size` bytes at a time, a total of `count` times
- `fill(dst, value, count)`: fill `dst` with `count` copies of `value`; the unit size is taken from the size of the `value` varnode
- `read_effect(src, size)`: a no-op that indicates that the memory region [src, src+size) is read from. Used to support decompilation of custom pcodeops.
- `write_effect(dst, size)`: a no-op that indicates to the decompiler that the memory region [dst, dst+size) is written to. Used to support decompilation of custom pcodeops.

The latter two are added so that the decompiler can precisely model the memory effects of custom pcodeops.
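To pin down the intended behavior, here is a minimal C sketch of how an emulator might implement the four ops. The `op_*` names and the flat, byte-addressed view of memory are assumptions for illustration, and the overlap semantics of `copy` are left open by the proposal.

```c
#include <stdint.h>
#include <string.h>

/* copy(dst, src, unit_size, count): copy unit_size bytes at a time,
 * count times. This sketch copies units forward; whether overlapping
 * regions behave like memcpy or memmove is left open by the proposal. */
void op_copy(uint8_t *dst, const uint8_t *src, size_t unit_size, size_t count) {
    for (size_t i = 0; i < count; i++)
        memcpy(dst + i * unit_size, src + i * unit_size, unit_size);
}

/* fill(dst, value, count): store count copies of value. The unit size
 * comes from the value varnode's size; here it is passed explicitly. */
void op_fill(uint8_t *dst, const uint8_t *value, size_t value_size, size_t count) {
    for (size_t i = 0; i < count; i++)
        memcpy(dst + i * value_size, value, value_size);
}

/* read_effect(src, size) and write_effect(dst, size): no-ops at run
 * time; they exist only to tell the decompiler which memory ranges a
 * custom pcodeop reads and writes. */
void op_read_effect(const uint8_t *src, size_t size) { (void)src; (void)size; }
void op_write_effect(uint8_t *dst, size_t size) { (void)dst; (void)size; }
```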
Other operations are possible, e.g. memcmp, strlen, strcpy, but they are more situational and may be harder to implement, and thus are not part of this proposal.
Describe alternatives you've considered
As noted above, alternative implementation approaches include an inline loop or a custom pcodeop, but both have downsides.
Additional context