Run subset of tests and address performance issues #11

Closed
nickevansuk opened this issue Jan 22, 2020 · 9 comments
Labels
  • integration-tests: This issue is related to the integration-tests project
  • proj-2-continued: Work continuing on from Project 2 Contract

Comments

@nickevansuk
Collaborator

nickevansuk commented Jan 22, 2020

Can we specify parameters somehow to run only some of the tests, or otherwise pipe more of the output into the log files?

For example, could we add a category (e.g. "C1only", "endtoend") to each of the tests, and use parameters to run only tests in a certain category?

The results are fairly overwhelming on a first run.

@nickevansuk added the integration-tests label Jan 27, 2020
@nickevansuk changed the title from "Run subset of tests" to "Run subset of tests and address performance issues" Feb 18, 2020
@nickevansuk
Collaborator Author

nickevansuk commented Feb 18, 2020

Additionally, the test suite currently takes around 60 seconds to start up before it runs any tests.

It outputs jest, then appears to wait for 30 seconds before outputting Found 7 test suites, and then waits for a further 30 seconds.

Can we improve performance here? What are we doing that slows things down?

@nickevansuk
Collaborator Author

Also, we should make parallel execution of tests configurable, as running tests in parallel can slow down testing when a larger number of tests are run against a database.

@ylt
Member

ylt commented Feb 20, 2020

This is already configurable; it just needs documenting. Defaults are in the package.json, and can also be changed via command-line args:

npm run test -- --maxWorkers 1

@ylt
Member

ylt commented Feb 20, 2020

Running a subset of tests can be done just by specifying the test path. This can be a directory (everything within it will run):

npm run test -- test/flows/book-and-cancel

(runs all book-and-cancel flows)

All the way down to specifying an individual test file:

npm run test -- test/flows/book-and-cancel/book-and-customer-cancel-success/book-free-test.js

@nickevansuk
Collaborator Author

nickevansuk commented Feb 21, 2020

Ok great! Can we also use wildcards?

The use case I'm thinking of is where we have (following discussions this morning and notes in #34):

  • test/flows/book-and-cancel/book-and-customer-cancel-success/book-free-test-scheduledsession.js (booking a SessionSeries)
  • test/flows/book-and-cancel/book-and-customer-cancel-success/book-free-test-slot.js (booking a Slot)
  • test/flows/book-and-cancel/book-and-customer-cancel-success/book-free-test-slot-and-scheduledsession.js (booking multiple OrderItems - a SessionSeries and a Slot together - as per Multiple improvements to test suite #34 (comment))

and we perhaps want to run only those that involve "ScheduledSession" (because they haven't implemented Slots in their system).

Unless there's another approach we can take here?
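
For reference, Jest treats positional command-line arguments as regular-expression patterns matched against test file paths, so assuming the suite passes its arguments straight through to Jest (as the commands above suggest), a pattern such as

npm run test -- scheduledsession

should run only the test files whose paths contain "scheduledsession". Whether the suite's npm script preserves this behaviour is an assumption, not something documented in this thread.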

@thill-odi added the proj-2-continued label Mar 2, 2020
@nickevansuk
Collaborator Author

We need to allow a subset of the tests to be run for only SessionSeries, or only FacilityUse, or both (or more), depending on which feeds have been implemented by the booking system.

@ylt
Member

ylt commented Mar 6, 2020

Trying to figure out whether we can implement some sort of filtering system; being able to define filters much like RSpec's would be nice: https://relishapp.com/rspec/rspec-core/docs/hooks/filters
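
As a rough illustration only (the describeTagged helper and TEST_TAGS environment variable below are hypothetical, not part of the suite), RSpec-style filters could be approximated in Jest by wrapping describe so that tagged groups are skipped unless their tags match a configured filter:

// Hypothetical sketch: tag-based filtering on top of Jest's describe / describe.skip.
const activeTags = (process.env.TEST_TAGS || '').split(',').filter(Boolean);

function describeTagged(tags, name, fn) {
  // Run the group when no filter is set, or when any of its tags is active.
  const shouldRun = activeTags.length === 0 || tags.some(tag => activeTags.includes(tag));
  (shouldRun ? describe : describe.skip)(name, fn);
}

// Usage in a test file:
describeTagged(['scheduledsession', 'book-free'], 'Book free ScheduledSession', () => {
  it('completes the booking flow', () => {
    // ...test assertions...
  });
});

Running with, for example, TEST_TAGS=scheduledsession npm run test would then execute only the matching groups.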

@nickevansuk
Collaborator Author

nickevansuk commented Mar 25, 2020

@ylt to scope this issue down to something achievable within a reasonable timebox, could we just cover the following two settings in a config file:

Configuration spec

The test suite can be configured to test "optional" features by indicating whether each is:

  • implemented - the tests will run to confirm proper implementation of the feature
  • not-implemented - the tests will run to confirm the feature is correctly advertised as "not implemented"
  • disable-tests - disable all tests for this feature (only recommended during development)

Additionally, bookable opportunity types can be configured to indicate which types the implementation is expected to support:

  • sessions (dual feeds of SessionSeries and ScheduledSession)
  • facilities (dual feeds of FacilityUse and Slot)
  • events (feeds of Event)
  • headline-events (feeds of HeadlineEvent with embedded Event)
  • courses (feeds of CourseInstance)

Configuration example

An example config file could look like this:

{
  "features": {
    "opportunity-feed": "implemented",
    "dataset-site": "implemented",
    "availability-check": "not-implemented",
    "simple-book-free-opportunities": "not-implemented",
    "simple-book-with-payment": "not-implemented",
    "payment-reconciliation-detail-validation": "not-implemented",
    "booking-window": "not-implemented",
    "customer-requested-cancellation": "not-implemented",
    "seller-requested-cancellation": "not-implemented",
    "seller-requested-cancellation-message": "disable-tests",
    "cancellation-window": "implemented",
    "seller-requested-replacement": "implemented",
    "named-leasing": "implemented",
    "anonymous-leasing": "implemented"
  },
  "opportunity-types": {
    "sessions": true
    "facilities": true
    "events": false
    "headline-events": false
    "courses": false
  }
}

Logic example

An example of how this should work:

If "simple-book-with-payment": "implemented", the following test should be run:

  • within openactive-integration-tests/test/features/payment/simple-book-with-payment/implemented

If "simple-book-with-payment": "not-implemented", the following test should be run:

  • within openactive-integration-tests/test/features/payment/simple-book-with-payment/not-implemented

If "simple-book-with-payment": "disable-tests"

  • neither tests should be run

Depending on the boolean values of opportunity-types, the array in the following should be populated:

  • openactive-integration-tests/test/features/payment/simple-book-with-payment/implemented/book-random-test.js
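
A minimal sketch of how this selection logic could consume the config above (the file path and function names here are illustrative assumptions, not the suite's actual implementation):

// Hypothetical sketch: deriving what to run from the example configuration above.
const config = require('./config.json');

// Map a feature's status to the test directory (or directories) that should run.
function directoriesForFeature(featureRoot, status) {
  if (status === 'implemented') return [`${featureRoot}/implemented`];
  if (status === 'not-implemented') return [`${featureRoot}/not-implemented`];
  return []; // 'disable-tests': run nothing for this feature
}

// Build the list of opportunity types each test should be populated with,
// e.g. ['sessions', 'facilities'] for the example config above.
const opportunityTypes = Object.entries(config['opportunity-types'])
  .filter(([, enabled]) => enabled)
  .map(([type]) => type);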

@nickevansuk
Collaborator Author

All the issues mentioned here have been resolved / implemented

JoshuaLevett pushed a commit to JoshuaLevett/openactive-test-suite that referenced this issue Oct 12, 2020