Refactor acceptance tests #123
Merged
The first iteration of acceptance tests lumped everything into one big run on a single VM, which caused problems because the cleanup tasks between tests were imperfect. For example, a failed cleanup could leave a process listening on port 8140, causing the subsequent test to fail spuriously.
This moves the tests into separate files and tags them, so that rspec tags can be used to run a more limited set of tests (e.g. just unicorn, just passenger, just the agent tests, etc.); a sketch of the tagging scheme follows below. I've also removed the cleanup tasks because they are no longer necessary, and I don't think we should support both approaches to testing.
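For illustration, a tagged spec might look like this (the file path and tag name are hypothetical, not necessarily what this PR uses):

```ruby
# spec/acceptance/unicorn_spec.rb (hypothetical path)
require 'spec_helper'

# Tagging the top-level example group lets rspec filter on it.
describe 'puppet master running under unicorn', :unicorn do
  it 'completes a successful agent run' do
    # ...provision a fresh VM, apply the manifest, assert on the exit code...
  end
end
```

A subset is then selected from the command line, e.g. `bundle exec rspec --tag unicorn`; omitting `--tag` runs the full suite.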
The advantage here is no chance of bleed-over between test cases, whereas the previous approach relied on imperfect automated cleanup between them. There are two major downsides. First, comprehensive testing of all cases takes considerably longer, because a new VM must be provisioned for each case: running all tests will take ~1.5 hours instead of ~20 minutes. Second, tests can no longer be run via a convenient rake task. The new syntax has been documented in CONTRIBUTING.md, but it's less obvious. I believe the tradeoff is acceptable because this is a step toward running reliable, trustworthy integration tests via Jenkins (or similar). In that case, the tests can be parallelized by test case and OS, and given sufficient parallelization this approach will run faster than the more linear one, because each individual test is faster.
A minor side change: the default test environment switches from Ubuntu 12.04 to Debian Wheezy, because Debian is the only fully supported target environment at this point. (And I don't think anybody but me runs the tests anyway, and I always run them against Debian.)
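The box change itself amounts to a one-line Vagrantfile edit along these lines (the box name below is an assumption; the actual identifier may differ):

```ruby
# Vagrantfile (excerpt); box name is illustrative
Vagrant.configure('2') do |config|
  config.vm.box = 'debian-wheezy-64'  # previously an Ubuntu 12.04 box
end
```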
I've also modestly improved the quality of the tests by adding an agent run to each server test, and by adding sanity checks to confirm that the other services aren't running (e.g. nginx shouldn't be running in the webrick test case).
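As an illustration of those sanity checks, assuming serverspec-style matchers (the file path, tag, and service names here are assumptions, not taken verbatim from the PR):

```ruby
# spec/acceptance/webrick_spec.rb (hypothetical path)
describe 'puppet master running under webrick', :webrick do
  # Sanity checks: servers belonging to the other test cases must be absent.
  describe service('nginx') do
    it { should_not be_running }
  end

  describe service('apache2') do
    it { should_not be_running }
  end

  # The master itself should be listening on the standard puppet port.
  describe port(8140) do
    it { should be_listening }
  end
end
```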