
Proposal: Test Cleanup #492

Open
9 of 23 tasks
MitMaro opened this issue Jun 14, 2018 · 6 comments

Comments

@MitMaro
Contributor

MitMaro commented Jun 14, 2018

The test/ directory is rather difficult to follow at the moment. When adding new tests, it is not always clear where a test should be added. I had some difficulties when working on #437.

I'm willing to progressively fix up the test/ directory in a series of PRs, but since it would be a fair bit of effort, I do not want to start the process if there is not much of a chance that the PRs would be merged.

This would go nicely with the coverage report that will be added with #468.

Tests

  • algorithm ECDSA
  • algorithm HMAC
  • algorithm invalid
  • algorithm none
  • algorithm RSA
  • claim audience/aud
  • claim expiresIn/exp (ignoreExpiration, clockTolerance, clockTimestamp)
  • claim iat/maxAge (clockTolerance, clockTimestamp)
  • claim invalid
  • claim issuer/iss
  • claim jwtid/jti
  • claim notBefore/nbf (ignoreNotBefore, clockTolerance, clockTimestamp)
  • claim private
  • claim subject/sub
  • header keyId/kid
  • option clockTimestamp
  • option clockTolerance
  • option encoding
  • option header
  • option ignoreExpiration
  • option ignoreNotBefore
  • option mutatePayload
  • option noTimestamp
@ziluvatar
Contributor

I just merged the other PR (coverage report). Yes, indeed, the test folder is a mess; we have been adding tests without thinking about a real structure.

I'd like to hear your proposal before you put the effort into the individual PRs. Do you have in mind the structure you'd like to end up with, or similar thoughts?

@ziluvatar
Contributor

Sorry, never mind. I didn't notice the PR you created. I'll comment there.

@MitMaro
Contributor Author

MitMaro commented Jun 16, 2018

After looking through the tests again, this is what I came up with. There are probably a few use cases that would not be captured, but they could be added later. I would add a PR for each test file created, moving any existing tests into that file as I go.

Claim tests

A set of tests related to the claims. This would cover the sign/verify/decode options that add a claim to the payload (expiresIn, notBefore, etc.), as well as tests that validate those claims. I think these tests would use the none algorithm and verify that the decoded token is as expected. These tests would all be independent of the algorithm used. It might be worth breaking these tests up into several files, depending on how you feel about file length. One option is to have a test file per claim, so when a new claim is added, a new file can be created. If a new claim, or an option that affects a claim, is added, then tests should only need to be added to this file/these files.
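To make this concrete, here is a minimal sketch of what one of these claim tests might look like (a sketch only: the test names are illustrative, and it assumes the Mocha/Chai setup the project already uses):

```js
const jwt = require('jsonwebtoken');
const expect = require('chai').expect;

describe('claim: exp', function () {
  it('should set "exp" based on the "expiresIn" option', function () {
    // the "none" algorithm needs no key, keeping the test algorithm-independent
    const token = jwt.sign({}, undefined, { algorithm: 'none', expiresIn: 60 });
    const decoded = jwt.decode(token);

    // "exp" should land exactly 60 seconds after "iat"
    expect(decoded.exp).to.equal(decoded.iat + 60);
  });
});
```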

Algorithm tests

A set of tests that ensure a token can be created, decoded, and verified for each of the supported algorithms ('RS256', 'RS384', 'RS512', 'ES256', 'ES384', 'ES512', 'HS256', 'HS384', 'HS512', 'none'). This would include adding tests for any options that affect the token signing. I would avoid tests that are already covered by the Claim tests, so as to avoid unneeded duplication. I think for simplicity there should be four test files here, one each to handle the RS*, ES*, HS* and none algorithms.
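As a rough sketch, the round-trip test for the HS* file could look like this (names and values are illustrative):

```js
const jwt = require('jsonwebtoken');
const expect = require('chai').expect;

describe('algorithm: HMAC', function () {
  ['HS256', 'HS384', 'HS512'].forEach(function (algorithm) {
    it('should sign and verify a token using ' + algorithm, function () {
      const token = jwt.sign({ foo: 'bar' }, 'shared-secret', { algorithm: algorithm });
      // pinning "algorithms" ensures verify only accepts this algorithm
      const decoded = jwt.verify(token, 'shared-secret', { algorithms: [algorithm] });

      expect(decoded.foo).to.equal('bar');
    });
  });
});
```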

Other tests

There are other tests related to the payload that wouldn't fit within the other test categories. I'm thinking of options like mutatePayload and encoding in particular, which affect the overall token but are independent of the claims and algorithm. In these cases a file should be added for each of these options.
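For example, a test for mutatePayload might look roughly like this (the option is real; the rest of the test is an illustrative sketch):

```js
const jwt = require('jsonwebtoken');
const expect = require('chai').expect;

describe('option: mutatePayload', function () {
  it('should write the computed claims back into the given payload object', function () {
    const payload = { foo: 'bar' };
    jwt.sign(payload, 'secret', { algorithm: 'HS256', mutatePayload: true, expiresIn: 60 });

    // with mutatePayload, sign() adds "iat" and "exp" to the caller's object
    expect(payload).to.have.property('iat');
    expect(payload).to.have.property('exp');
  });
});
```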

Sample file listing

test/claim-nbf.tests.js
    /claim-exp.tests.js
    /claim-private.tests.js
    /claim-invalid.tests.js
    /...
    /algorithm-invalid.tests.js
    /algorithm-rs.tests.js
    /algorithm-es.tests.js
    /algorithm-hs.tests.js
    /algorithm-none.tests.js
    /option-mutatePayload.tests.js
    /option-encoding.tests.js
    /option-headers.tests.js

@ziluvatar
Contributor

It sounds really good!

@MitMaro
Contributor Author

MitMaro commented Jul 12, 2018

@ziluvatar

Certain options operate on several claims (ex. clockTimestamp). In those cases I have been placing a test for that option with the claim test (ex. 'should verify "nbf" using "clockTimestamp"') in nbf.test.js.
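That kind of test could look roughly like this (a sketch with illustrative values, not actual project code):

```js
const jwt = require('jsonwebtoken');
const expect = require('chai').expect;

describe('claim: nbf', function () {
  it('should verify "nbf" using "clockTimestamp"', function () {
    // the token only becomes valid 60 seconds from now
    const token = jwt.sign({}, 'secret', { algorithm: 'HS256', notBefore: 60 });

    // verify as if it were 120 seconds in the future, safely past "nbf"
    const clockTimestamp = Math.floor(Date.now() / 1000) + 120;
    const decoded = jwt.verify(token, 'secret', { clockTimestamp: clockTimestamp });

    expect(decoded.nbf).to.equal(decoded.iat + 60);
  });
});
```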

When it comes to validating these options themselves, I was going to create another test file for just that option (ex. clockTimestamp.test.js). This does mean that when adding a new option, it may be necessary to modify multiple test files. The other option is to consolidate all the tests for a particular option into a single file. This does mean that the tests for a claim will be spread across multiple files. There are trade-offs for a developer trying to add or update tests with either approach.

Is there a particular pattern you would prefer?

Side note: I'm getting a good feel for what you are looking for with the PRs. Once I'm confident that the PRs can be reviewed without many modifications, I will put up several at a time.

@ziluvatar
Contributor

> Certain options operate on several claims (ex. clockTimestamp). In those cases I have been placing a test for that option with the claim test (ex. 'should verify "nbf" using "clockTimestamp"') in nbf.test.js.

I like this pattern ^
If an option modifies how a claim is handled, I think that falls under the responsibility of that feature / claim support.
One thing you can do is prepare a group of utility functions for testing that option that can be reused in every claim.

Even though I like that option more, it's not like I hate the other one; it's a matter of deciding what we think is best.
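To illustrate the utility-function idea, a shared helper module along these lines could be reused by every claim file (a sketch only; the path and helper names are hypothetical):

```js
// test/util.js (hypothetical path) -- shared helpers for claim/option tests
const jwt = require('jsonwebtoken');

// Sign with a fixed algorithm and secret so every claim test builds
// tokens the same way.
function signPayload(payload, options) {
  return jwt.sign(payload, 'secret', Object.assign({ algorithm: 'HS256' }, options));
}

// Verify a token "at" a given moment via the clockTimestamp option,
// keeping time-based claim tests deterministic.
function verifyAtTime(token, clockTimestamp, options) {
  return jwt.verify(token, 'secret', Object.assign({ clockTimestamp: clockTimestamp }, options));
}

module.exports = { signPayload: signPayload, verifyAtTime: verifyAtTime };
```

Each claim file could then call verifyAtTime with whatever clockTimestamp/clockTolerance combination it needs, instead of re-deriving timestamps locally.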
