Additional test function for string comparison? #85
Comments
Yeah, in general I agree that it might not be as convenient as a special function, but you might want to create one yourself; in your case you could, for example, define a small helper function and use that in your tests. See https://github.com/git/git/blob/master/t/t5001-archive-attr.sh#L9-L11 for an example of how such a function is implemented in Git. Thanks!
In the case of the Debian bug, I think the test could contain something like:

See also #14, which I would like to work on soon.
Thank you for this hint; indeed, this sounds like a good approach.
Each test script is run in a directory named "trash directory.$script", where `$script` is the test script name (or some part of it). When a test fails, this directory is not removed, so one can `cd` into it and debug what happened. For example, if you modify one test in https://github.com/chriscool/sharness/blob/master/test/simple.t so that it fails, you will find a "trash directory.simple" directory. When tests succeed, the directory is removed, so there is no need to clean up the temporary files.
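The lifecycle described above can be sketched in plain shell (a simplified illustration of the behavior, not sharness's actual implementation):

```shell
#!/bin/sh
# Sketch of the trash-directory behavior: tests run inside a fresh
# "trash directory.$script"; it is removed only if all tests pass.
script=simple
trash="trash directory.$script"
rm -rf "$trash" && mkdir -p "$trash" && cd "$trash" || exit 1

failed=0
echo hello >greeting                      # a test creates temporary files here
test "$(cat greeting)" = hello || failed=1

cd .. || exit 1
if test $failed = 0
then
	rm -rf "$trash"                   # success: clean up
else
	echo "test failed; inspect '$trash'"  # failure: keep for debugging
fi
```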
Great, thank you!
In one of my tests I use the following approach:
The above is simple and works well as long as the tests succeed. But in case of failure it is hard to understand the issue, since the (unexpected) content of `$foo` is not visible in the output (and thus not in build logs, etc.). Instead I would prefer something like the existing `test_expect_code`, which outputs the real exit code of the command when it does not match the expected one. How about the following function?
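One possible shape for such a function (the name `test_str_eq` is an assumption for illustration, not an existing sharness helper); like `test_expect_code`, it reports what it actually received on failure:

```shell
#!/bin/sh
# test_str_eq (hypothetical name): succeed if the two strings are equal;
# on mismatch, print both strings so they show up in test output and logs.
test_str_eq () {
	if test "$1" != "$2"
	then
		echo >&2 "test_str_eq: strings differ"
		echo >&2 "  expected: $1"
		echo >&2 "  actual:   $2"
		return 1
	fi
}
```

It would be used inside a `test_expect_success` body, e.g. `test_str_eq "expected output" "$foo"`.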
In case of failure, sharness would be able to report the expected and the received texts that are supposed to be equal.
What do you think? Should I prepare a pull request for such a function?
Or would you prefer something similar to the existing `test_cmp` instead? Or do you have another idea how we could get more helpful output for failing tests?
(the issue causing this question was https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=910684)