
Additional test function for string comparison? #85

Closed · sumpfralle opened this issue Oct 17, 2018 · 5 comments

@sumpfralle (Contributor)

In one of my tests I use the following approach:

test_expect_success "compare string content" '
    foo=$(do_something)
    expected="foo bar baz"
    [ "$foo" = "$expected" ]
'

The above is simple and works well as long as the test succeeds. In case of failure, however, it is hard to understand the problem, since the (unexpected) content of $foo is not visible in the output (and thus not in build logs, etc.).

Instead I would prefer something like the existing test_expect_code, which outputs the command's actual exit code when it does not match the expected one.
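
For reference, test_expect_code is used as a prefix inside a test body; a minimal example (with do_something standing in for a real command) could look like:

test_expect_success "command fails with exit code 1" '
    test_expect_code 1 do_something
'

If do_something exits with any other code, both the actual and the expected code are printed.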

How about the following function?

test_expect_equal_stdout 'foo bar baz' '
    do_something
'

In case of failure, sharness would then be able to report both the expected and the received text, which are supposed to be equal.

What do you think? Should I prepare a pull request for such a function?

Or would you prefer something similar to the existing test_cmp instead?

Or do you have another idea for how we could get more helpful output from failing tests?
(the issue causing this question was https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=910684)

@chriscool (Collaborator)

Yeah, in general I would say that using test_cmp is probably more flexible, as it works, for example, with multi-line output. It is also more helpful when debugging, as both the expected and actual outputs end up in files and the diff is shown if the comparison fails.
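
For example, a test along these lines (again with do_something as a placeholder) keeps both outputs around for inspection:

test_expect_success "compare multi-line output" '
	printf "%s\n" foo bar baz >expected &&
	do_something >actual &&
	test_cmp expected actual
'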

I agree that it might not be as convenient as a dedicated function, but you could create one based on test_cmp. See for example https://github.com/git/git/blob/master/t/t6101-rev-parse-parents.sh#L10-L14 which contains:

test_cmp_rev_output () {
	git rev-parse --verify "$1" >expect &&
	eval "$2" >actual &&
	test_cmp expect actual
}

In your case you could, for example, create:

test_cmp_output () {
	echo "$1" >expect &&
	eval "$2" >actual &&
	test_cmp expect actual
}

and use that in your tests.
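
For instance (with the placeholder do_something from above):

test_expect_success "compare string content" '
	test_cmp_output "foo bar baz" "do_something"
'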

About the test_expect_equal_stdout you suggest: I wonder what the test description would be when the tests are run. Could you show how it would be implemented (maybe using the above test_cmp_output)?

See https://github.com/git/git/blob/master/t/t5001-archive-attr.sh#L9-L11 for an example of how such a function is implemented in Git.
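
Just to illustrate the general shape (this is only a sketch, not a final implementation; the quoting breaks down if the arguments contain single quotes), such a wrapper could generate its own description from its arguments:

test_expect_equal_stdout () {
	# $1: expected stdout, $2: command to run
	test_expect_success "output of '$2' is '$1'" "
		echo '$1' >expected &&
		( $2 ) >actual &&
		test_cmp expected actual
	"
}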

Thanks!

@chriscool (Collaborator)

In the case of the Debian bug, I think the test could contain something like:

printf "%s\n" list quit | nc localhost munin | grep -v "^#" >all_plugins &&
sed "s/if_\w\+//g" all_plugins >all_plugins_wo_if &&
printf "%s\n" cpu df df_inode entropy forks fw_packets interrupts irqstats load memory netstat open_files open_inodes proc_pri processes swap threads uptime users vmstat >expected &&
test_cmp expected all_plugins_wo_if

See also #14, which I would like to work on soon.

@sumpfralle (Contributor, Author)

Thank you for this hint - indeed, this sounds like a good approach.
Just one detail: this would require creating temporary files in the current directory, which would then need to be removed afterwards, right? How do you usually handle this?

@chriscool (Collaborator)

Each test script is run in a directory named "trash directory.$script", where $script is the test script name (or part of it). When a test fails, this directory is not removed, so one can cd into it and debug what happened.

For example if you modify one test in https://github.com/chriscool/sharness/blob/master/test/simple.t so that it fails, you will find a "trash directory.simple" directory.

When tests succeed, the directory is removed, so there is no need to clean up the temporary files.
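
So a typical debugging session might look like this (using the simple.t example above, assuming the script is executable):

./simple.t                      # the failing run leaves its trash directory behind
cd "trash directory.simple"     # inspect the expected/actual files, re-run commands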

@sumpfralle (Contributor, Author)

Great - thank you!
In this case your proposed solution is indeed the proper one, so I think there is no need for a new function.
