
program crash as a way to pass the test #18

Closed
ghikio opened this issue Feb 13, 2019 · 3 comments
ghikio commented Feb 13, 2019

In the examples folder you have this piece of code which shows a test crashing:

void
test_crash(void)
{
    int* invalid = ((int*)NULL) + 0xdeadbeef;

    *invalid = 42;
    TEST_CHECK_(1 == 1, "We should never get here, due to the write into "
                        "the invalid address.");
}

However, it makes the test fail. I didn't find a way to make it pass while crashing.
Is this feature available? If not, do you like the idea? I could try to write a patch for it if so. :)

mity (Owner) commented Feb 14, 2019

Is this feature available?

No, it is not. Actually, the idea that someone might need this never even crossed my mind.

If not, do you like the idea?

Right now, I am more at the "no" side. But feel free to argue with me.

My point is that a functioning program is expected to provide correct output within some domain of expected inputs. Yes, it may possibly crash when it is fed input outside of that domain. But why should I care whether it really crashes or not?

Another disadvantage is that running such a crashing test under a debugger, Valgrind, or any other runtime checker will typically trigger interference from that tool, and I am not sure that good test suites should have that property in general.

So, when exactly would be such feature useful?

I could try to write a patch for it if so.

If you can show me it would be a good feature to have, feel free to try. But it may not be as simple as you think. Consider that:

  • Test subprocesses work very differently on Windows than on POSIX systems with fork().
  • Running the tests as a subprocess is optional (although on by default when supported and not running under a debugger).
  • On other systems (neither Windows nor POSIX-compliant), running a test in a subprocess is not supported at all.

mity (Owner) commented Feb 14, 2019

One more problem I can see with it:

If you have a test which is passing, then you expect it to provide the fairly strong guarantees implied by all the checks it performs.

But with a crash, how would you distinguish an "expected crash" from an "unexpected crash" (i.e. when the test crashes elsewhere or differently)?

ghikio (Author) commented Feb 14, 2019

Thanks for the reply!

Actually, you do make some good points. I wrote the issue thinking about how I wanted to test my code, without first considering whether it was a good idea.

Yes, it may possibly crash when it is fed input outside of that domain. But why should I care whether it really crashes or not?

The original intent of the test was to ensure that incompatible command-line flags would make the program exit instead of executing incorrect tasks, but I can probably implement this in a better way.

running such a crashing test under a debugger, Valgrind, or any other runtime checker will typically trigger interference.

I agree with this, and it would make debugging difficult and tedious.

how would you distinguish an "expected crash" from an "unexpected crash"?

I can't think of a way that makes it 100% accurate, but providing an expected exit code to the test may be good enough. So if we are expecting a failure and the exit code is the one provided in the test, it's an expected crash.


Anyway, I think your points are good enough to discard this as a possible feature, since it's not the best way to test a functionality and it would be hard to get it to work the way we expect.

Feel free to close this issue if you want, or share any other thoughts if you have them. :)
