program crash as a way to pass the test #18

In the examples folder you have a piece of code which shows a test crashing. However, it makes the test fail; I didn't find a way to make it pass while crashing.

Is this feature available? If not, do you like the idea? I could try to write a patch for it if so. :)
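(The snippet itself was not carried over into this copy. As a hypothetical stand-in, such a deliberately crashing test might look like the following, assuming an Acutest-style single-header API with TEST_LIST and TEST_CHECK; the project's actual example may differ.)

```c
#include "acutest.h"  /* assumed single-header framework; stand-in only */

/* A test that crashes partway through: the NULL dereference raises
 * SIGSEGV, which the runner reports as a failed test. */
static void test_crash(void)
{
    int *p = NULL;
    TEST_CHECK(1 + 1 == 2);  /* passes */
    *p = 42;                 /* deliberate crash */
}

TEST_LIST = {
    { "crash", test_crash },
    { NULL, NULL }
};
```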
No, it is not. Actually, the idea that someone might need this never even came to my mind.
Right now, I am more on the "no" side, but feel free to argue with me. My point is that a functioning program is expected to provide correct output within some domain of expected input. Yes, it may possibly crash when it is fed input outside of that domain. But why should I care whether it really crashes or not? Another disadvantage: running such a crashing test under a debugger, Valgrind, or any other runtime checker will typically lead to interference from that tool, and I am not sure that good test suites in general should have this property. So, when exactly would such a feature be useful?
If you can show me it would be a good feature to have, feel free to try. But it may not be as simple as you think.
One more problem I can see with it: if you have a test which is passing, you expect it to provide some quite good guarantees, at least as provided by all the checks it performs. But with a crash, how would you distinguish an "expected crash" from an "unexpected crash" (i.e., when the test crashes elsewhere or differently)?
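(To see the difficulty concretely: a runner could isolate each test in a forked child and match how the child terminated against a declared expectation, but it still cannot tell a crash at the intended spot from a crash anywhere else in the test. A minimal POSIX sketch of that idea; run_test and its expected_signal parameter are made up for illustration and are not part of any existing library.)

```c
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

typedef void (*test_fn)(void);

/* Run one test body in a forked child and decide pass/fail by comparing
 * how the child terminated against a declared expectation.
 * expected_signal == 0 means "expect a normal, successful exit". */
static int run_test(test_fn fn, int expected_signal)
{
    pid_t pid = fork();
    if (pid < 0)
        return 0;            /* could not fork: count as a failure */
    if (pid == 0) {          /* child: execute the test body */
        fn();
        _exit(0);            /* reached only if the body did not crash */
    }
    int status;
    waitpid(pid, &status, 0);
    if (expected_signal == 0)
        return WIFEXITED(status) && WEXITSTATUS(status) == 0;
    /* The ambiguity in practice: the right signal raised *anywhere* in
     * the test counts as a pass, even if it fired before the code under
     * test was ever reached. */
    return WIFSIGNALED(status) && WTERMSIG(status) == expected_signal;
}

static void crashing_test(void)
{
    int *p = NULL;
    *p = 1;  /* deliberate SIGSEGV */
}

int main(void)
{
    printf("expected-crash test: %s\n",
           run_test(crashing_test, SIGSEGV) ? "passed" : "FAILED");
    return 0;
}
```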
Thanks for the reply! Actually, you do have some good points. I wrote the issue thinking about how I wanted to test my code, without first considering whether it was a good idea.
The original intent of the test was to ensure that incompatible command-line flags make the program exit instead of executing an incorrect task, but I can probably implement this in a better way (see the sketch below).
I agree with this; it would be difficult and tedious to debug.
I can't think of a way to make it 100% accurate, but providing an expected error code to the test may be good enough. Anyway, I think your points are good enough to discard this as a possible feature, since it's not the best way to test a functionality and it would be hard to get it to work the way we expect. Feel free to close this issue if you want, or share any other thoughts. :)
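(As an illustration of that alternative: instead of letting the test process itself crash, spawn the program under test as a subprocess and assert on its exit status. A minimal sketch assuming POSIX system() and the waitpid status macros; "./myprog", its flags, and the exit code 2 are hypothetical placeholders.)

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/wait.h>

/* Run the program under test as a subprocess and check that the
 * conflicting flags make it exit with the expected error code. */
static void test_incompatible_flags_are_rejected(void)
{
    /* Hypothetical program and flag pair that should conflict. */
    int status = system("./myprog --frobnicate --no-frobnicate");
    assert(status != -1);             /* system() itself succeeded */
    assert(WIFEXITED(status));        /* program exited, did not crash */
    assert(WEXITSTATUS(status) == 2); /* and with the expected error code */
}

int main(void)
{
    test_incompatible_flags_are_rejected();
    return 0;
}
```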