
Fix some tests and add a workflow configuration to run them on PRs #40

Conversation

@Mr0grog Mr0grog (Contributor) commented Feb 21, 2022

This is a first cut at fixing some of the tests, since none of them ran successfully when I cloned this repo. It’s also a follow-on to #39 and needs the commits from it to work. This is also incomplete; I’m not sure I’ll have time to keep working on it, but hopefully it’s a good start for someone else.

What I did so far:

  • Fixed the API tests in the server so that they run.
    • Changed a few things in the scripts so that errors are handled correctly instead of swallowed.
    • Added some error-checking/logging to the test utilities so I could see what was going wrong.
    • Ensured the database is disconnected correctly so tests don’t hang.
    • Used the correct users. The tests rely on a lot of users listed in the seeds, and several of those users were changed or removed in b465e77. I’ve added them back, but hopefully with more obvious names that mark them as needed for tests. It doesn’t look like the seeds are meant to be used in production, but if they are being used in production, this change is not OK and should not be merged as-is!
  • Fixed a lint warning that I think was also causing some test issues.
  • Added a workflow file to run tests and linters on GitHub actions.

Still to do:

  • Fix DB tests so they can run in a non-interactive environment (it always displays a password prompt right now, but otherwise the tests run successfully).
  • Fix DB tests to use the same database as the API tests. (There is a different database name for two groups of tests; one is hardcoded and does not obey the .env file; I’m not sure what the history or reasoning was here, but it seems bad at first glance. Please LMK if this is for a good reason and shouldn’t change!)
  • Add credentials and configuration so the e-mail tests can run on GitHub Actions. I’m not sure I’m the right person to do this and don’t have whatever credentials are supposed to be used right now. Who does? (These tests pass if you have good credentials! I made my own accounts and verified everything works.)

Other bits worth noting:

  • The client package has two test scripts (test:unit and test:e2e), but neither has actual tests behind it, so both just fail if you try to run them.
  • The server DB tests have their own set of seeds, which is a little complicated.
  • I added test users to the server seeds. I think this is OK since they don’t look like they’re meant to be used in production, but if we are using them in production, that’s not good.
  • Where possible, I like to use https://github.com/wearerequired/lint-action for linting, since it surfaces lint warnings and errors as check objects on the actual line they occur on in the files tab on PRs (it’s really nice!). That doesn’t work with all the extra magic vue-cli-service lint adds in the client package, though, so I didn’t use it here. Lint errors will still fail the build, but warnings won’t, so they’ll be easier to miss. Make sure you click the “details” link for the lint job, since there are already several warnings! https://github.com/usdigitalresponse/usdr-gost/runs/5281030868?check_suite_focus=true (Update: it looks like GitHub added some magic to make this work automatically now, so the special action isn’t needed.)

The current dependencies and version of Node.js do not support Apple M1
processors. These are the minimum updates needed to work. (But not the maximum
updates! For example, Node.js 14 is past its end-of-life, and Chromedriver is
many major versions out of date.)
The `test:api` script for the server would previously leave the test server
running if any tests failed. This makes sure the server gets stopped even if the
tests failed, and still ensures the correct exit code gets surfaced so the whole
test script shows up correctly as an error if it failed.

This also makes sure that misconfiguration of the test is an actual error,
rather than printing an error message but then exiting successfully.
It turns out there was a global teardown for the API tests that would cause
Mocha to always exit successfully, even if there was a failed test. I've
corrected this by doing what I *think* this was trying to handle: shutting
down the global database connection(s) created in `utils.js`, which are
otherwise left open and keep the test process running.

This also simplifies `resetDb()` by allowing it to just throw an error, since
the only place it's called from just manually throws the error that it caught.
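The cleanup described above could be sketched roughly like this (a minimal sketch only; `closeAll` and the connection variable names are my own for illustration, not the repo's actual exports):

```javascript
// Sketch of the fixtures.js cleanup idea: instead of a global teardown that
// forces a successful exit, destroy the connections opened in utils.js in a
// root-level after() hook, so Mocha's process can exit on its own with the
// real exit code. Names here are assumptions, not the repo's actual exports.
async function closeAll(connections) {
  // knex's destroy() returns a promise; close everything in parallel so a
  // lingering pool can't keep the test process alive.
  await Promise.all(connections.map((db) => db.destroy()));
}

// In __tests__/api/fixtures.js, something like:
// after(() => closeAll([testDb, appDb]));

module.exports = { closeAll };
```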
We have a bunch of incorrect e-mail addresses in the tests, but it was really
unclear what the problem was. This surfaces a better error in those cases.
The API tests rely on certain users from the seeds, and it looks like we removed
or changed a bunch of them in b465e77. That
commit probably broke pretty much all the API tests. This adds new, more
explicitly obvious test users and changes the tests to use them.
@Mr0grog Mr0grog (Contributor) left a comment:

test-server and test-server-db are failing, but that’s expected. We can either comment them out, or do the work to fix them before merging.

cd packages/server
yarn lint

test-server:

This job is currently failing, which is expected. See the PR description for details about what needs to be done here.

cp .env.example .env
yarn run test:api

test-server-db:

This job is currently failing, which is expected. See the PR description for details about what needs to be done here.

@@ -99,13 +99,13 @@ describe('`/api/users` endpoint', async () => {
         const response = await fetchApi(`/users`, agencies.own, fetchOptions.admin);
         expect(response.statusText).to.equal('OK');
         const json = await response.json();
-        expect(json.length).to.equal(11);
+        expect(json).to.have.lengthOf(7);

I changed the expect() style here to give a more verbose and helpful message while I was at it. There’s a lot of room to improve the expectations with things like this all over these test suites. The chai-as-promised plugin can also help clean things up and provide better messaging around async code as well.

@@ -12,7 +12,7 @@
     "db:seed": "knex seed:run",
     "test": "mocha __tests__/*.test.js",
     "test:db": "__tests__/db/run-tests.sh",
-    "test:api": "NODE_ENV=test SUPPRESS_EMAIL=TRUE node src > test.log & NODE_ENV=test mocha --require __tests__/api/fixtures.js __tests__/api/*.test.js && pkill -fn 'node src'",
+    "test:api": "NODE_ENV=test SUPPRESS_EMAIL=TRUE node src > test.log & NODE_ENV=test mocha --require __tests__/api/fixtures.js __tests__/api/*.test.js; exit_code=$?; pkill -fn 'node src'; exit $exit_code",

This is pretty messy! A better pattern might be to use before/after each/all hooks in the tests to start and stop the dev server on a random port.

We have some nice tooling in Univaf for this (although ours is designed for Jest instead of Mocha, so it might require some slight tweaks to work here):

https://github.com/usdigitalresponse/univaf/blob/2fca59d558de365d84057bf09892d5e73e891e71/server/test/lib.ts#L18-L47

Use it like so:

https://github.com/usdigitalresponse/univaf/blob/2fca59d558de365d84057bf09892d5e73e891e71/server/test/api.edge.test.ts#L20-L36

It looks like GitHub is now automatically picking up on output formatted this
way and displaying checks! Very cool.

Mr0grog commented Feb 21, 2022

Where possible, I like to use https://github.com/wearerequired/lint-action …that doesn’t work with all the extra magic vue-cli-service lint adds in the client package, though, so I didn’t use it here.

It looks like GitHub has added some extra magic to automatically detect these kind of warnings and add them, so this actually just works now! Nevermind this bullet point. :)

Comment on lines +17 to +20
const body = await newSession.json();
if (!newSession.ok || !body.success) {
throw new Error(body.message || `Error creating session: ${JSON.stringify(body)}`);
}

Total side note, but it seems like this should have gotten a 4xx response (in practice, it was a 200, which is why I added the `|| !body.success` bit).

@cmroberts cmroberts force-pushed the i-tried-to-set-up-this-project-and-i-had-some-problems branch from 291fa0a to b1a6a0a Compare March 23, 2022 13:19
Base automatically changed from i-tried-to-set-up-this-project-and-i-had-some-problems to main March 23, 2022 13:20
@mattbroussard (Contributor) commented:

Oops, I fixed the broken tests this addresses in #63, not realizing this PR existed.


Rapol commented May 15, 2022

@mattbroussard can we close this one? We should be able to salvage the GitHub CI workflow and README changes.


Mr0grog commented May 16, 2022

This was originally meant as a bit of quick help for @cmroberts before she moved on. If it’s pretty hard to integrate at this point, definitely feel free to salvage what’s useful and close it!


as1729 commented Oct 21, 2022

Closing for now. Will pull out parts of it in separate PRs as we continue to improve the test coverage.

@as1729 as1729 closed this Oct 21, 2022
@as1729 as1729 deleted the tests-are-only-useful-if-they-can-succeed-when-things-are-ok branch October 21, 2022 15:11