Testing C code that abort()s #31

Closed
chatziko opened this issue Jan 2, 2020 · 5 comments

@chatziko
Contributor

chatziko commented Jan 2, 2020

Thanks a lot for this super neat test library!

One useful feature I feel is missing is the ability to test a function that is meant to abort(), typically due to some failing assert(...) in the code. It can be viewed as the poor cousin of TEST_EXCEPTION for C. It's useful for testing that problematic conditions are properly detected and cause the program to abort. Moreover, such tests are useful for reaching 100% branch coverage; otherwise one branch of each assert is never hit.

This should be easily implementable by trapping SIGABRT and longjmping back into the test. It could be named TEST_ABORT, TEST_SIGNAL (maybe for testing any signal), or something like that.
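A rough, untested sketch of what I have in mind (the TEST_ABORTS name and the helpers are just placeholders; the check at the end uses Acutest's TEST_CHECK):

```c
#include <setjmp.h>
#include <signal.h>

static jmp_buf test_abort__jmp;

static void test_abort__handler(int sig)
{
    (void) sig;
    /* NOTE: a real implementation would probably want sigsetjmp()/siglongjmp()
     * on POSIX so that the signal mask gets restored as well. */
    longjmp(test_abort__jmp, 1);
}

#define TEST_ABORTS(code)                                                     \
    do {                                                                      \
        void (*test_abort__old)(int) = signal(SIGABRT, test_abort__handler);  \
        int test_abort__hit = 0;                                              \
        if(setjmp(test_abort__jmp) == 0) {                                    \
            code;        /* expected to call abort() somewhere */             \
        } else {                                                              \
            test_abort__hit = 1;                                              \
        }                                                                     \
        signal(SIGABRT, test_abort__old);                                     \
        TEST_CHECK(test_abort__hit);                                          \
    } while(0)
```

Usage would then be e.g. `TEST_ABORTS(my_func(NULL));`.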

Your thoughts? If you're interested I can give it a shot.

@mity
Owner

mity commented Jan 2, 2020

When the test is executed in a child process, it should sort of work already, because the parent process should see a strange exit code. (Maybe it leads to a sub-optimal message, because currently the parent process assumes that the abort() came from the new TEST_ASSERT macro. But the message could be reworded to cover assert() as well.)
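(Just to illustrate the child-process case, this is not Acutest's actual code: on POSIX the parent can already distinguish a SIGABRT death from a normal exit, roughly like this.)

```c
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Illustration only: how a parent process can tell that its child died
 * on SIGABRT instead of exiting normally. */
static void report_child_status(pid_t pid)
{
    int status;

    waitpid(pid, &status, 0);
    if(WIFEXITED(status))
        printf("child exited with code %d\n", WEXITSTATUS(status));
    else if(WIFSIGNALED(status) && WTERMSIG(status) == SIGABRT)
        printf("child aborted (SIGABRT)\n");
    else if(WIFSIGNALED(status))
        printf("child was killed by signal %d\n", WTERMSIG(status));
}
```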

When in the same process, it's a completely different story. I can't see any other potential way than a SIGABRT handler and longjmp, if we want to support that.

However, right now I am somewhat reserved, because I see it as quite a crazy thing for a process to do, and I have some doubts about it:

  1. Is it safe to longjmp from a signal handler? AFAIK, at least on some systems/architectures, the kernel installs a different stack into the thread for handling the signal. Is this supported by the POSIX standard, so that we could rely on it on other POSIX systems as well? Or is it guaranteed to work on some important system, so that it would be worth adding such support conditionally just for that system?

  2. What if some people install their own signal handlers as part of their unit tests? I think we should still allow that.

  3. Last but not least, I am afraid of how big an impact it would have on the ease of debugging such a failing test. It should not harm the usability of Acutest with gdb or valgrind, and I am not sure how well those tools can handle processes which do such crazy things, or how big a nuisance it would be for people using any such tool to find out what's actually going wrong in their test.

To conclude, feel free to go ahead and implement it, but be prepared that those questions will have to be answered in some reasonable way before anything like that gets merged.

@mity
Owner

mity commented Jan 2, 2020

Oops. Sorry. Ignore the previous post. I missed the point that you want the test to succeed when the code does abort().

@mity
Owner

mity commented Jan 2, 2020

I think my doubt (2) could be solved by installing/uninstalling the handler just for the duration of the macro.
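Something along these lines (a POSIX-only sketch; the handler itself would jump back into the test as proposed above):

```c
#include <signal.h>
#include <string.h>

static void test_abort__handler(int sig)
{
    (void) sig;
    /* would siglongjmp() back into the test, as in the sketch above */
}

/* Sketch: install our SIGABRT handler only around the checked code and then
 * restore whatever disposition the user had installed before. */
static void run_with_abort_handler(void (*tested_code)(void))
{
    struct sigaction sa, old_sa;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = test_abort__handler;
    sigemptyset(&sa.sa_mask);

    sigaction(SIGABRT, &sa, &old_sa);    /* remember the previous handler */
    tested_code();
    sigaction(SIGABRT, &old_sa, NULL);   /* put it back afterwards */
}
```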

But my doubts (1) and (3) still hold.

Also, if we want a new macro like TEST_THAT_THIS_ABORTS(my_func()), I think we would have to support it everywhere, including Windows. I don't think we can have it as an optional feature, like being able to start a subprocess or colorize the output: it would be part of the Acutest API, and so the macro should have the same semantics everywhere. When people port their code to a new platform, they should not need to rewrite their unit tests to avoid the macro.

To be honest, I'm somewhat skeptical in that regard, but feel free to argue or to show code proving I'm wrong ;-)

@mity
Owner

mity commented Jan 2, 2020

Also see #18. Maybe some of the arguments used there apply here too.

@mity
Owner

mity commented Jan 4, 2020

Thinking about this a little bit more, I now believe it is a bad idea. Let me react to some statements in the original post.

It can be viewed as the poor cousin of TEST_EXCEPTION for C.

I disagree. In C++, exceptions are a more or less normal error-handling mechanism and, for example, part of various library APIs. I.e. it is important that, for a given known and documented error condition (e.g. an invalid parameter passed to a function), an API signals the error to the caller in some expected way (here, a particular exception type), and that this can be tested.

On the other hand, abort() is a last resort when a program does not know any better than to kill itself in order to prevent potentially bigger damage. It is not a normal error-handling mechanism. Calling abort() generally means the program itself is saying "Ohh. I am broken. Come and debug me."

From this perspective, C++ exceptions are rather a cousin of the special return values like -1 or NULL which are used as error indicators in C.
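So the C counterpart of TEST_EXCEPTION is simply checking the documented error indicator; e.g. with a made-up my_parse() that is documented to return NULL on failure:

```c
#include "acutest.h"

/* Made-up API, for illustration only: documented to return NULL on error. */
struct config* my_parse(const char* path);

void test_parse_rejects_missing_file(void)
{
    /* The documented error indicator (NULL) plays the role that an exception
     * type plays in C++, and that is what the test checks for. */
    TEST_CHECK(my_parse("/no/such/file") == NULL);
}
```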

Moreover, such tests are useful for reaching 100% branch coverage, otherwise one branch of the assert is never hit.

I agree that good test coverage is generally a nice thing to have. But it is not the goal of testing per se; it is only a helper metric, and the value of that metric has its limits. This question of abort() is, in my opinion, one such limit.

I argue that in a correct program, if there are code paths leading to abort(), they are actually unreachable (at least for any input in the supported domain), and this property of a correct program prevents you from ever reaching 100% test coverage of it.
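For example (a made-up snippet): the assert below guards an internal contract, not an input error. No input in the supported domain can make it fire, so the abort() branch is unreachable in a correct build of this code:

```c
#include <assert.h>
#include <stddef.h>

/* Made-up example: for any supported input (a non-NULL, NUL-terminated
 * string), the assert cannot fire, so its abort() branch is unreachable. */
static size_t count_spaces(const char* s)
{
    size_t n = 0;

    assert(s != NULL);   /* callers are documented to never pass NULL */

    while(*s != '\0') {
        if(*s == ' ')
            n++;
        s++;
    }
    return n;
}
```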

If you are able to write a test causing an assertion in the tested code, you should likely rather go and fix the code so that it has no reason to raise the assertion in the first place (and hence turn it into a program which, as stated above, cannot have 100% code coverage in principle).

To conclude, the very fact that it asserts means the program has some problem. Providing a mechanism that says "hey, the test passes" in such a case imho makes no real sense.

@mity mity closed this as completed Jan 4, 2020