
test discrepancies #10

Closed
tstearns opened this issue Jun 16, 2016 · 3 comments

Comments

@tstearns
Contributor

The "runTests" script has a few items that I'm curious about:

  1. The MD5 hashes don't reveal what the expected output is, so when they don't match, I can't tell why -- because I don't know what the original output*.txt files were that generated those numbers. (this is meaningful because I haven't been able to get the hashes on master or v1.2.2 to match on any system -- either Python or Perl on OSX 10.11, CentOS 5.11, or OpenBSD 5.9). Does it make sense to instead include the expected output files in the repo, and have the runTests script run MD5 on those files rather than storing the hashes directly in the script, so that it's possible to view discrepancies? I'd be happy to submit a PR for that, but can't include the expected files because I've never been able to generate them such that they match the existing hashes.
  2. The Python and Perl scripts appear to have slightly different formatting output, which means that their output hashes differ, which makes me wonder which script the tests are supposed to be run against. For example, the alignment of the key field is different when running tests with Python 2.7 or Perl 5.18.

Python:

^[[32m /etc/mateconf|^[[34m7780758 ^[[35m(44.60%) ^[[37m••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••^[[0m

Perl:

/etc/mateconf^[[0m |^[[32m7780758 ^[[35m(44.60%) ^[[34m••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••^[[0m
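
For concreteness, here is a minimal sketch of the expected-files approach from item 1 above. The file paths and the stdin-based invocation are illustrative assumptions, not the actual runTests layout:

```python
#!/usr/bin/env python
# Minimal sketch of the proposal in item 1: check expected output files into
# the repo and hash them at test time, instead of hard-coding MD5 strings.
# The paths and the stdin-based invocation below are illustrative guesses,
# not the actual runTests layout.
import hashlib
import subprocess
import sys

CASES = [
    # (script under test, input file, expected output file)
    ("./distribution.py", "tests/input1.txt", "tests/expected-output1.txt"),
    ("./distribution.pl", "tests/input1.txt", "tests/expected-output1.txt"),
]

failures = 0
for script, input_path, expected_path in CASES:
    with open(input_path, "rb") as f:
        actual = subprocess.check_output([script], stdin=f)
    with open(expected_path, "rb") as f:
        expected = f.read()
    if hashlib.md5(actual).hexdigest() == hashlib.md5(expected).hexdigest():
        print("PASS: %s" % script)
    else:
        failures += 1
        print("FAIL: %s" % script)
        # With the expected bytes in the repo, the mismatch is inspectable:
        # show the first differing line instead of two opaque hashes.
        for a, e in zip(actual.splitlines(), expected.splitlines()):
            if a != e:
                print("  actual:   %r" % a)
                print("  expected: %r" % e)
                break

sys.exit(1 if failures else 0)
```

A side benefit: with the expected bytes checked in, the Python/Perl formatting difference in item 2 would show up as a readable diff of the first mismatching line, rather than as two hashes that simply don't agree.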

@time-less-ness
Collaborator

The test framework was hacked together quickly, by someone who doesn't know how to hack together test frameworks.

Any suggestions on either tweaking it a little to make it work or completely revamping it would be welcome.

@tstearns
Contributor Author

Makes sense. I've started to play around with environment portability for the test script; it doesn't totally resolve the above questions, but it does make testing more easily reproducible. I'm going to hack away at the following branch for a bit, and will open a PR referencing this ticket when I'm done:
https://github.com/tstearns/distribution/tree/testing-refactor
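
A minimal sketch of what environment normalization could look like here; the specific variables pinned below are assumptions on my part, not necessarily what the testing-refactor branch does:

```python
# Sketch: pin environment variables that commonly change terminal output,
# so both scripts are exercised under identical conditions across systems.
# Which variables matter for distribution specifically is an assumption.
import os
import subprocess

def run_normalized(script, input_path):
    env = dict(os.environ)
    env.update({
        "LC_ALL": "C",    # locale-independent sorting and number formatting
        "COLUMNS": "80",  # fixed width, so histogram bars are the same length
        "TERM": "xterm",  # consistent terminal-capability detection
    })
    with open(input_path, "rb") as f:
        return subprocess.check_output([script], stdin=f, env=env)
```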

@tstearns
Contributor Author

tstearns commented Nov 4, 2016

I believe that #18 closes this issue, as the Perl and Python versions give matching test results for me on macOS and OpenBSD using Python 2.7 and Perl 5.20.

@tstearns tstearns closed this as completed Nov 4, 2016