Description
Right now we use a pattern similar to the one below in some of our bats tests:

```bash
SETUP_VENV_RESULTS=$(st2 run packs.setup_virtualenv packs=examples -j)
run eval "echo '$SETUP_VENV_RESULTS' | jq -r '.result.result'"
assert_success

RESULTS=$(st2 rule list -p chatops -j)
run eval "echo '$RESULTS' | jq -r '.[].ref'"
assert_success
```

etc.
This pattern is far from ideal because if an assertion fails (but the original command succeeds), the failure output won't include any output from the original command (`st2 run packs.setup_virtualenv packs=examples -j`), which makes it impossible to know why the tests have failed.
The only way to get more insight is to re-run those tests, which is a very slow and painful process. In a lot of cases (e.g. because the server is no longer available) this can mean wasting at least another 20 minutes of our valuable time.
We should establish the same pattern we had with Robot Framework - always print the command's stdout, stderr and exit code.
Without that, debugging test failures is a nightmare and not an improvement over Robot. In fact, I would even view it as a regression and a big step backward.
It could be as simple as adding a bats utility function which runs a command, returns its stdout, and prints the stdout, stderr and exit code to the console for debugging.
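A minimal sketch of what such a helper could look like (the function name `run_and_log` and the temp-file approach are my own assumptions, not an existing bats API):

```shell
#!/usr/bin/env bash
# run_and_log: hypothetical helper that runs a command, prints its
# stdout, stderr and exit code to stderr for debugging, and passes the
# command's stdout through so callers can still capture it.
run_and_log() {
    local stdout_file stderr_file rc
    stdout_file=$(mktemp)
    stderr_file=$(mktemp)

    # Run the command, capturing both streams and the exit code
    "$@" >"$stdout_file" 2>"$stderr_file"
    rc=$?

    # Always log everything to stderr so it shows up in failed test output
    {
        echo "command:   $*"
        echo "exit code: ${rc}"
        echo "stdout:"
        cat "$stdout_file"
        echo "stderr:"
        cat "$stderr_file"
    } >&2

    # Forward the command's stdout so $(run_and_log ...) still works
    cat "$stdout_file"
    rm -f "$stdout_file" "$stderr_file"
    return "$rc"
}
```

With a helper like this, the example above could become `SETUP_VENV_RESULTS=$(run_and_log st2 run packs.setup_virtualenv packs=examples -j)`, and a failed assertion would still leave the full command output in the test log.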