Fix some tests and add a workflow configuration to run them on PRs #40
Conversation
The current dependencies and version of Node.js do not support Apple M1 processors. These are the minimum updates needed to work. (But not the maximum updates! For example, Node.js 14 is past its end-of-life, and Chromedriver is many major versions out of date.)
The `test:api` script for the server would previously leave the test server running if any tests failed. This makes sure the server gets stopped even if the tests failed, and still ensures the correct exit code gets surfaced so the whole test script shows up correctly as an error if it failed. This also makes sure that misconfiguration of the test is an actual error, rather than printing an error message but then exiting successfully.
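The fix boils down to a common shell pattern. Here's a minimal sketch with stand-in functions (the real script backgrounds `node src` and runs mocha): capture the tests' exit code before any later command clobbers `$?`, stop the server unconditionally, then surface the captured code.

```shell
# Stand-in for `mocha ...`; pretend the tests failed with exit code 3.
fake_tests() { return 3; }

run_api_tests() {
  sleep 5 &                 # stand-in for `NODE_ENV=test node src > test.log &`
  server_pid=$!
  fake_tests
  exit_code=$?              # capture immediately after the test run
  kill "$server_pid"        # always stop the server, pass or fail
  return "$exit_code"       # the real script ends with `exit $exit_code`
}
```

The key difference from the old `&& pkill ...` version is that `kill` runs regardless of the test result, and the saved `exit_code` (not `kill`'s exit status) is what the script reports.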
It turns out there was a global teardown for the API tests that would cause Mocha to always exit successfully, even if there was a failed test. I've corrected this by doing what I *think* this was trying to handle: shutting down the global database connection(s) created in `utils.js` and that are otherwise left hanging and keep the test process running. This also simplifies `resetDb()` by allowing it to just throw an error, since the only place it's called from just manually throws the error that it caught.
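The intended teardown could look something like this hedged sketch (the names `trackConnection`/`teardownConnections` are hypothetical, and `destroy()` is an assumption modeled on knex's API): register the connections that `utils.js` opens, then close them all from a single root-level `after` hook so the process can exit naturally without forcing the exit code.

```javascript
// Hypothetical sketch: track connections opened by test utilities so one
// root-level teardown can close them, letting the test process exit on its
// own instead of a global teardown masking the real exit code.
const openConnections = new Set();

function trackConnection(conn) {
  openConnections.add(conn);
  return conn;
}

async function teardownConnections() {
  for (const conn of openConnections) {
    await conn.destroy(); // assumption: knex-style destroy(); adapt to the real client
  }
  openConnections.clear();
}

// In Mocha, this would be registered as a root hook (e.g. in fixtures.js):
//   after(() => teardownConnections());
```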
We have a bunch of incorrect e-mail addresses in the tests, but it was really unclear what the problem was. This surfaces a better error in those cases.
The API tests rely on certain users from the seeds, and it looks like we removed or changed a bunch of them in b465e77. That commit probably broke pretty much all the API tests. This adds new, more explicitly obvious test users and changes the tests to use them.
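As a rough illustration of the idea (the emails and field names here are invented, not the actual seed data), explicit test users plus a lookup that fails loudly make a missing or changed seed show up as a clear error rather than a confusing test failure:

```javascript
// Hypothetical sketch: obviously-test-only users, and a lookup that throws a
// descriptive error when a seed the tests depend on has been removed or changed.
const testUsers = [
  { email: 'usdr.admin.test@test.com', role: 'admin' }, // invented example data
  { email: 'usdr.staff.test@test.com', role: 'staff' }, // invented example data
];

function getTestUser(email) {
  const user = testUsers.find((u) => u.email === email);
  if (!user) {
    throw new Error(`No seeded test user with email ${email}; check the seed files`);
  }
  return user;
}
```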
`test-server` and `test-server-db` are failing, but that’s expected. We can either comment them out, or do the work to fix them before merging.
cd packages/server
yarn lint

test-server:
This job is currently failing, which is expected. See the PR description for details about what needs to be done here.
cp .env.example .env
yarn run test:api

test-server-db:
This job is currently failing, which is expected. See the PR description for details about what needs to be done here.
@@ -99,13 +99,13 @@ describe('`/api/users` endpoint', async () => {
    const response = await fetchApi(`/users`, agencies.own, fetchOptions.admin);
    expect(response.statusText).to.equal('OK');
    const json = await response.json();
-   expect(json.length).to.equal(11);
+   expect(json).to.have.lengthOf(7);
I changed the `expect()` style here to give a more verbose and helpful message while I was at it. There’s a lot of room to improve the expectations with things like this all over these test suites. The `chai-as-promised` plugin can also help clean things up and provide better messaging around async code as well.
@@ -12,7 +12,7 @@
    "db:seed": "knex seed:run",
    "test": "mocha __tests__/*.test.js",
    "test:db": "__tests__/db/run-tests.sh",
-   "test:api": "NODE_ENV=test SUPPRESS_EMAIL=TRUE node src > test.log & NODE_ENV=test mocha --require __tests__/api/fixtures.js __tests__/api/*.test.js && pkill -fn 'node src'",
+   "test:api": "NODE_ENV=test SUPPRESS_EMAIL=TRUE node src > test.log & NODE_ENV=test mocha --require __tests__/api/fixtures.js __tests__/api/*.test.js; exit_code=$?; pkill -fn 'node src'; exit $exit_code",
This is pretty messy! A better pattern might be to use before/after each/all hooks in the tests to start and stop the dev server on a random port.
We have some nice tooling in Univaf for this (although ours is designed for Jest instead of Mocha, so it might require some slight tweaks to work here):
Use it like so:
It looks like GitHub is now automatically picking up on output formatted this way and displaying checks! Very cool.
It looks like GitHub has added some extra magic to automatically detect these kinds of warnings and add them, so this actually just works now! Never mind this bullet point. :)
const body = await newSession.json();
if (!newSession.ok || !body.success) {
  throw new Error(body.message || `Error creating session: ${JSON.stringify(body)}`);
}
Total side note, but it seems like this should have gotten a 4xx response (in practice, it was a 200, which is why I added the `|| !body.success` bit).
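The defensive check above generalizes to a small helper; here's a hedged sketch (the function name is invented, and the response shape assumes a `fetch`-style object with `ok` and `json()`): fail if either the HTTP status or the body's own success flag says the session wasn't created.

```javascript
// Hypothetical helper: trust neither the status code nor the body alone,
// since this endpoint returned 200 even for failed session creation.
async function checkSessionResponse(response) {
  const body = await response.json();
  if (!response.ok || !body.success) {
    throw new Error(body.message || `Error creating session: ${JSON.stringify(body)}`);
  }
  return body;
}
```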
Force-pushed from 291fa0a to b1a6a0a
Oops: I fixed the broken tests this addresses in #63, not realizing this PR existed.
@mattbroussard can we close this one? We should be able to salvage the GitHub CI workflow and README changes.
This was originally meant as a bit of quick help for @cmroberts before she moved on. If it’s pretty hard to integrate at this point, definitely feel free to salvage what’s useful and close it!
Closing for now. Will pull out parts of it in separate PRs as we continue to improve the test coverage.
This is a first cut at fixing some of the tests, since none of them ran successfully when I cloned this repo. It’s also a follow-on to #39 and needs the commits from it to work. This is also incomplete; I’m not sure I’ll have time to keep working on it, but hopefully it’s a good start for someone else.
What I did so far:
Still to do:
- `.env` file; I’m not sure what the history or reasoning was here, but it seems bad at first glance. Please LMK if this is for a good reason and shouldn’t change!)

Other bits worth noting:
- The `client` package has two test scripts (`test:unit` and `test:e2e`), but neither has actual tests behind them, so they just fail if you try to run them.
- Where possible, I like to use https://github.com/wearerequired/lint-action for linting, since it surfaces lint warnings and errors as check objects on the actual line they occur on in the files tab on PRs (it’s really nice!). That doesn’t work with all the extra magic `vue-cli-service lint` adds in the client package, though, so I didn’t use it here. Lint errors will still fail the build, but warnings won’t, so they’ll be easier to miss.
- Make sure you click the “details” link for the lint job, since there are already several warnings! https://github.com/usdigitalresponse/usdr-gost/runs/5281030868?check_suite_focus=true (Update: it looks like GitHub added some magic to make this work automatically now, so the special action isn’t needed.)