
Compiler Hardening Guide should include -ftrivial-auto-var-init #245

Open

thomasnyman opened this issue Oct 5, 2023 · 5 comments

@thomasnyman
Contributor

Brought up during the C/C++ Compiler BP Call 2023-09-27.

The -ftrivial-auto-var-init=[uninitialized|pattern|zero] option makes the compiler initialize automatic variables with either a pattern or with zeroes (depending on the chosen option argument), increasing program security by preventing uninitialized memory disclosure and use.

The flag has been supported since GCC 12 and Clang 8.0.
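
For illustration, a minimal sketch of what the option changes (file name and variable names are hypothetical; exact fill values are chosen by the compiler):

```c
/* example.c -- hypothetical demo.
 *
 *   gcc -O2 example.c                                  # no forced initialization
 *   gcc -O2 -ftrivial-auto-var-init=zero example.c     # zero-filled
 *   gcc -O2 -ftrivial-auto-var-init=pattern example.c  # pattern-filled
 */
#include <stdio.h>

int main(void)
{
    int secret;  /* deliberately never assigned */

    /* Without the option this read is undefined behavior and may leak
     * whatever happens to be on the stack. With =zero the compiler
     * zero-fills 'secret' before the read; with =pattern it fills it with
     * a repeated byte pattern chosen by the compiler. */
    printf("secret = %d\n", secret);
    return 0;
}
```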

Considerations and references

@siddhesh
Contributor

See also: performance regression in systemd due to this option: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111523

@thesamesam
Contributor

There's also nuance as to whether one should use zero or pattern:

zero benefits from dead store elimination and doesn't "look valid", so an uninitialized read is more likely to take some innocuous path rather than one that tries to operate on real data and, say, write it to disk.

But pattern is easier to spot in a backtrace or when it leaks into output.
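
A hedged sketch of that trade-off (all names hypothetical; the exact pattern byte is left to the compiler):

```c
#include <stdio.h>

/* Hypothetical: 'count' is accidentally left uninitialized. */
static void flush_records(const char *records[], size_t available)
{
    size_t count;  /* bug: should have been initialized from 'available' */
    (void)available;

    /* With =zero the loop body never runs: an innocuous, silent path.
     * With =pattern 'count' holds a huge repeated-byte value, so the loop
     * walks off the end of 'records' and the bogus value stands out in a
     * backtrace or core dump. Without the option, behavior depends on
     * whatever was left on the stack. */
    for (size_t i = 0; i < count; i++)
        printf("%s\n", records[i]);
}

int main(void)
{
    const char *records[] = { "a", "b", "c" };
    flush_records(records, 3);
    return 0;
}
```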

@Ext3h commented Oct 30, 2023

And yet both zero and pattern have the major drawback that they prevent external tools such as Valgrind from detecting unsafe, uninitialized stack reads.

It gives a dangerous sense of false safety, which results in less portable code when applied incorrectly during development. It's only a somewhat reasonable choice for a release build. But dead store elimination is the only benefit you should take from it; you should still never rely on this as a "feature".
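
To make the masking concrete, a minimal sketch (hypothetical program; the Valgrind diagnostics are paraphrased in the comments, not quoted verbatim):

```c
/* masked.c -- hypothetical demo of the option hiding a bug from Valgrind.
 *
 *   gcc -g -O0 masked.c && valgrind ./a.out
 *     -> Valgrind complains about a conditional jump that depends on an
 *        uninitialised value.
 *
 *   gcc -g -O0 -ftrivial-auto-var-init=zero masked.c && valgrind ./a.out
 *     -> the compiler now zero-fills 'flag' before the read, so Valgrind
 *        sees a defined value and stays silent, although the source-level
 *        bug is still there.
 */
#include <stdio.h>

int main(void)
{
    int flag;        /* bug: never assigned */

    if (flag)        /* uninitialized read */
        puts("flag was set");
    return 0;
}
```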

In comparison, MSVC has had /GS enabled by default for about a decade now, and in my personal experience outside the OSS sector it has been one of the worst offenders (even ahead of confusing volatile for atomic) when it comes to masking errors that render code unsafe to port to conforming compilers. Not just once, but repeatedly. Senior developers who didn't even know you had to initialize local variables, thanks to this "feature", were a real thing.

The bad part starts when QA relies heavily on tooling like Valgrind or Intel Inspector to check for uninitialized memory accesses (not everyone is a developer, which also makes ASan a non-option), and these "helpful" compiler options just render you blind to issues you could have caught long ago.

Not to mention that these options also have a heavy impact on the performance of those tools. Unlike on the real processor, there is no dead store elimination to save you in emulation.

The impact is even bigger when this masks errors in pre-built third-party libraries, which are surprisingly often only unmasked when the integrating application is validated with real-world data rather than the (unfortunately often insufficient) data it was tested against.

Even when Microsoft talks about the success story of /GS in their own products, they fail to mention the fallout: how long it took them to actually get their own code base safe again, and how much they had to invest in static code analysis passes to find all the bugs that had crept in, hidden among an overwhelming number of false positives.


A simple counter-example of why this is a bad idea (sketched in code at the end of this comment):

You do TDD and allocate your object/struct on the stack. You are rescued by MSVC-style stack zeroing, and your exhaustive test suite passes with flying colors. You followed every best practice you know regarding external tooling (everything short of ASan).

Your library is integrated, and the object is allocated on the heap instead. It works at first because you end up in pristine memory, then it suddenly breaks at an entirely random point in time because you get placed into non-pristine memory.

That also happened. More than once.

Inconsistent behavior between storage classes is a trap. And in the C world, you always have to expect containers/patterns/macros that will not zero memory, even if you reroute every malloc to calloc and force stack zeroing. At least C++ is a little less prone here, as you can force member zeroing to be symmetrical.
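
A minimal sketch of that stack-vs-heap trap (all names hypothetical):

```c
#include <stdlib.h>

struct parser {
    const char *buf;
    size_t len;
    int state;   /* bug: init_parser() forgets to set this field */
};

static void init_parser(struct parser *p, const char *buf, size_t len)
{
    p->buf = buf;
    p->len = len;
    /* p->state is never written */
}

int main(void)
{
    /* Unit-test style: stack allocation. With forced stack zeroing the
     * forgotten 'state' field happens to be 0 and every test passes. */
    struct parser on_stack;
    init_parser(&on_stack, "input", 5);

    /* Integration style: heap allocation. malloc() hands back whatever
     * bytes were there before, so 'state' is garbage once the memory is
     * no longer pristine -- and the failure shows up far from the bug. */
    struct parser *on_heap = malloc(sizeof *on_heap);
    if (on_heap == NULL)
        return 1;
    init_parser(on_heap, "input", 5);

    int looks_broken = (on_heap->state != on_stack.state);
    free(on_heap);
    return looks_broken;
}
```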

@thomasnyman
Contributor Author

The contraindications brought up by @Ext3h and @siddhesh seem substantial enough that I would propose we do not recommend this option in the guide at this point in time, but include it in the List of Considered Compiler Options with a brief summary of Ext3h's comment.

@SecurityCRob
Contributor

Has this been addressed by the C/C++ Compiler Hardening options guide? @gkunz @thomasnyman @david-a-wheeler
