failure to init Umbra due to segment-spanning 2TB reservation #2184
Comments
Our Appveyor tests are on Windows 8.1 and none of them hit this. I did not hit this in my Windows 8.1 VM either. We will need more information about the address space layout on your machine. I think the logfile named …
From the log above, you can see I'm running Windows 10. I ran with more verbose logging:
The failure is seen in the global.1108.log file.
Right, I had Windows 8.1 on the brain from your filing under issue #1693. Same problem though: I cannot reproduce any Umbra issue in my VM with 1803 build 17134, so this is specific to certain machines somehow.
This just happened in a test on Appveyor: https://ci.appveyor.com/project/DynamoRIO/drmemory/builds/27571330
To help searches: the appveyor failure was in the x64 fuzz_buffer.mutator.o-b-3 test.
We're having problems using Dr.Memory on Windows 10 1903 x64 -- all applications we tried immediately terminate with some error.
Running without the debug switches, the output is something like:
Please let me know if you need any information or if there's something else I can do to help you track this down.
This is almost certainly an address space layout issue, but I have not been able to reproduce it. Could you provide the address space layout of the culprit process? However you want to obtain it: one way would be to attach the windbg debugger to calc.exe and run …
OK...
If you need something else, tell me what I need to do differently :)
Thanks. I'm out for the next week but will try to look at it after that.
@StephenHillAtViaivi pasted in the error message above, which I am duplicating here:
From the vadump we see confirmation that there is indeed a massive (2 terabyte!) reservation:
Umbra's segments here are 1TB.
My proposal is to eliminate the uniformly-sized-segments invariant from 64-bit Umbra. I believe that asymmetric, differently-sized regions should be fine, with the overlap checks identifying all conflicts.
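The proposal above could look something like the following minimal C sketch (the names and types are hypothetical illustrations, not Umbra's actual API): once each segment records its own size, conflict detection no longer relies on uniform sizing and reduces to a plain interval-overlap test.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical segment descriptor: with the uniform-size invariant
 * dropped, each segment carries its own size. */
typedef struct {
    uint64_t base;
    uint64_t size;
} segment_t;

/* Two half-open intervals [base, base+size) conflict iff they overlap. */
static bool
segments_overlap(const segment_t *a, const segment_t *b)
{
    return a->base < b->base + b->size && b->base < a->base + a->size;
}
```

With this test applied pairwise at setup time, a 2TB reservation spanning what used to be two fixed 1TB slots is just another region to check.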
The 2TB reservation is probably caused by Control Flow Guard:
http://blogs.microsoft.co.il/sasha/2016/01/05/windows-process-memory-usage-demystified/
Please let me know if there's anything else I can do to help fix this.
Thanks, I will try to reproduce by building with …
I had Control Flow Guard enabled... I can confirm that disabling this feature fixes the problem.
Just an update: it looks like there is no mapping offset that works for CFG's 0x7df0' or earlier (I'm trying for 0x7c00') for UMBRA_MAP_SCALE_UP_2X. This is what I'm stuck on. Simply bailing on that in combination with CFG is not simple, since the app regions are set up first, and expanding across segments is best done then, before any clients request particular map scalings.
I came up with a solution for SCALE_UP_2X by taking another bit for the mask. Now the problem is that DrM tries to set the shadow values for the entire 2TB region, which, needless to say, takes a long time and a lot of memory...
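A hedged sketch of what a mask-based UP_2X translation might look like (the constants, names, and exact scheme here are illustrative assumptions, not Umbra's real implementation): the mask selects the offset within a segment, that offset is doubled for the 2x scale, and the result is relocated by a per-scheme displacement. Widening the mask by one bit is what doubles the address range a single scheme can cover.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative UP_2X translation, NOT Umbra's actual scheme.
 * seg_mask extracts the in-segment offset; because each app byte gets
 * two shadow bytes at 2x scale, that offset is doubled before being
 * relocated to the shadow region at shadow_disp. */
static uint64_t
app_to_shadow_up2x(uint64_t app, uint64_t seg_mask, uint64_t shadow_disp)
{
    return ((app & seg_mask) << 1) + shadow_disp;
}
```

Under this sketch, taking one more bit into the mask halves the segment slot size while preserving the property that neighboring app bytes land exactly two shadow bytes apart.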
Given that allocation_size for 0x00007ff5`bde4a5e8 returned the whole 2TB region (these are committed pieces inside it), we're going to have to change the strategy here for filling an unknown region.
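The changed strategy could be sketched like this (the types and names are stand-ins for a VirtualQuery-style region walk, not DrM's actual mmap_walk()): instead of shadow-defining the whole allocation returned for an unknown address, walk its sub-regions and touch only the committed pieces.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for MEMORY_BASIC_INFORMATION-style state; illustrative only. */
typedef enum { MEM_FREE_STATE, MEM_RESERVE_STATE, MEM_COMMIT_STATE } mem_state_t;

typedef struct {
    uint64_t base;
    uint64_t size;
    mem_state_t state;
} mem_region_t;

/* Walk the sub-regions of an unknown allocation and total only the
 * committed bytes that would be shadow-defined, skipping the reserved
 * bulk (e.g. the 2TB CFG reservation). */
static uint64_t
define_committed_only(const mem_region_t *regions, size_t n)
{
    uint64_t defined = 0;
    for (size_t i = 0; i < n; i++) {
        if (regions[i].state == MEM_COMMIT_STATE)
            defined += regions[i].size; /* shadow-define would happen here */
    }
    return defined;
}
```

The payoff is that the work done is proportional to the committed footprint (a few pages) rather than the 2TB reservation.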
Adds support to Umbra for the 2TB below-libraries reservation made by Control Flow Guard. This requires a new mapping offset for DOWN_8 and DOWN_4 and a new mask for UP_2X, along with support for a single app allocation spanning multiple Umbra segments. Both the multi-segment support and the extra mask bits should not cause any problems, but neither has been done before.
- Adds additional automated checking for a reserve region overlapping with the shadow region for a mapping.
- Adds a new test of every Umbra shadow scale factor. The test found bugs in the existing scheme: UP_2X didn't handle app 200'-300' properly. That is now fixed.
- Builds the umbra test app with /guard:cf to test Control Flow Guard. Unfortunately, the 2TB reservation only seems to happen with VS2017, so I did manual testing with a VS2017 build of the test app.
- Adds proper reset of Umbra state to allow tearing down and restarting Umbra with a new scheme.
- Changes DrM's -define_unknown_regions behavior to invoke mmap_walk() (and fixes a bug in mmap_walk) to only shadow-define committed memory, avoiding an attempt to shadow-define the whole 2TB region.
Fixes #2184
Using DrMemory-Windows-2.1.0-1.msi, I see this with all my CppUTest exes built with MSVS2017: