This repository has been archived by the owner on Nov 27, 2018. It is now read-only.

test the configure command (fixes #35) #60

Merged: 12 commits from the test-configure branch into mozilla:master on Oct 17, 2015

Conversation

mykmelez
Contributor

@marco-c, @brendandahl This is a WIP of a test for the configure command. It uses nock to mock HTTP requests/responses, so we can test the parts of the command that call the GitHub and Travis APIs without needing network access.

So far the test just checks that the introductory message is printed, that the user is prompted for a username and password, and that the user is prompted to re-enter the username/password if they're incorrect. (That last check is the only one that makes a network request so far, but it's a proof of concept that we can mock those requests.)

@brendandahl
Contributor

This approach seems good. One thing I've just discovered with my bootstrap test is that testing the module, instead of invoking it from the command line (oghliner bootstrap), can miss problems. I included through2 in the dev dependencies, but it was actually needed in the regular dependencies, so the tests were passing even though the command was broken.

@mykmelez changed the title from "WIP: test the configure command (fixes #35)" to "test the configure command (fixes #35)" on Oct 15, 2015
@mykmelez
Contributor Author

This approach seems good. One thing I've just discovered with my bootstrap test is that testing the module, instead of invoking it from the command line (oghliner bootstrap), can miss problems. I included through2 in the dev dependencies, but it was actually needed in the regular dependencies, so the tests were passing even though the command was broken.

Mmm, good point. I wonder if there's something we can do to test the CLI interface specifically.

@mykmelez
Contributor Author

This should now be ready to review, so I've removed the "WIP" prefix and squashed the changes.

@@ -26,6 +26,10 @@ var readYaml = require('read-yaml');
var travisEncrypt = promisify(require('travis-encrypt'));
var writeYaml = require('write-yaml');

// Uncomment this to record network requests/responses for automated tests.
// var nock = require('nock');
// nock.recorder.rec();
Contributor

If we ever implement live tests with real GitHub repositories, we should enable this by default (when the test is running) to have more info when such a test fails.

Contributor Author

If we ever implement live tests with real GitHub repositories, we should enable this by default (when the test is running) to have more info when such a test fails.

Indeed. On the other hand, we can't enable this when running automated tests against the mocks, since enabling it disables those mocks. I've added some commentary to explain these use cases in c28db8a.

@marco-c
Contributor

marco-c commented Oct 16, 2015

Looks good to me; I have a single doubt. Some tests exercise corner cases and run configure only up to a certain step. In theory the code that handles the corner cases shouldn't have side effects, but I think it'd be better to always test through to the end (.travis.yml check included) to make sure that's actually the case.

You could add a function that checks everything and call it instead of cancel.

@marco-c
Contributor

marco-c commented Oct 16, 2015

Needs conflict resolution.

@mykmelez
Contributor Author

Looks good to me; I have a single doubt. Some tests exercise corner cases and run configure only up to a certain step. In theory the code that handles the corner cases shouldn't have side effects, but I think it'd be better to always test through to the end (.travis.yml check included) to make sure that's actually the case.

Indeed, it's a good point.

You could add a function that checks everything and call it instead of cancel.

Hmm, yes. I've been looking at the code, and it isn't completely obvious how best to do this, but I'll tackle it next.

@mykmelez
Contributor Author

The test failure is apparently because the tests all stall and time out on gulp test, even though they work fine with mocha. I'm going to dig into that too.

@mykmelez
Contributor Author

Hmm, yes. I've been looking at the code, and it isn't completely obvious how best to do this, but I'll tackle it next.

I've done this in 00314a9, so now all tests (except for one) allow the configure command to run to completion and check that it completed successfully and wrote a correct .travis.yml file. This also enabled me to consolidate some tests that became duplicates once they all ran to completion.

The exception is the first test, which checks that the command outputs a welcome message with the repository slug. Because the command immediately prompts the user to enter their credentials after displaying that message, it races the await call that would see that prompt and respond to it. And I'm not sure quite how to restructure the code to eliminate that race condition. So I left that one alone.

In 5dbed58, I also added a test for one more case that wasn't being tested: the repository is listed in Travis, but it isn't active, so the command activates it.

Finally, in d5c4096, I worked around the test failures by running mocha directly in Travis via node node_modules/mocha/bin/mocha test. It's a hack, and we should figure out a real solution, but I'm not sure how to go about it at the moment.

@mykmelez
Contributor Author

The exception is the first test, which checks that the command outputs a welcome message with the repository slug. Because the command immediately prompts the user to enter their credentials after displaying that message, it races the await call that would see that prompt and respond to it. And I'm not sure quite how to restructure the code to eliminate that race condition. So I left that one alone.

Right after I wrote that, I figured it out, so b97f0b9 restructures the code to win the race and let that test run to completion.

@marco-c
Contributor

marco-c commented Oct 17, 2015

It's a hack, and we should figure out a real solution, but I'm not sure how to go about it at the moment.

Looks like gulp test works if you comment out pipe(istanbul.hookRequire()).

marco-c pushed a commit that referenced this pull request on Oct 17, 2015: test the configure command (fixes #35)
@marco-c marco-c merged commit 1fe5421 into mozilla:master Oct 17, 2015
@marco-c
Contributor

marco-c commented Oct 17, 2015

Filed #100 to investigate the gulp test issue.

@mykmelez deleted the test-configure branch on November 3, 2015